
The case for the uncertain AI: Why chatbots should say “I’m not sure”

UX Collective

Article Overview

The article, "The case for the uncertain AI: Why chatbots should say 'I’m not sure'," highlights a critical issue in human-AI interaction: the detrimental impact of AI chatbots' overconfidence. Author Alexandre Tempel observes that AI frequently presents false or unverified information with an unwavering certainty, leading users to mistakenly believe it as fact. This 'confidently wrong' behavior stems from two primary technical factors: Reinforcement Learning from Human Feedback (RLHF) and tokenization costs.

RLHF (Reinforcement Learning from Human Feedback, a machine learning technique in which human preferences are used to fine-tune AI models) often trains AI to prioritize 'helpfulness' and 'fluency' over strict 'honesty' or 'accuracy.' The AI is rewarded for producing a coherent, seemingly complete answer, even when it lacks strong evidence. In addition, the computational cost of 'deep reasoning,' measured in 'tokens' (the small units of text a model processes, each of which carries a processing cost), limits how thoroughly the AI can verify information before answering. Instead, it defaults to a confident, albeit potentially superficial, response.

The article uses a practical scenario to illustrate this: a user attempting to set up a web server. When faced with complex or non-straightforward configuration processes, an overconfident AI might provide instructions that are confidently stated but ultimately incorrect or incomplete, leaving the user frustrated and misinformed. This lack of transparency about the AI's internal state—its level of certainty or the basis of its knowledge—erodes trust over time.

Tempel concludes that to foster true trust, AI systems must emulate human intellectual humility. This involves designing AI to explicitly communicate its limitations, admit when it's unsure, and provide 'Attribution' (citing its sources) for the information it presents. The core solution lies in better UI/UX design that integrates these transparency mechanisms, allowing users to understand the reliability and origin of the AI's responses.

Impact on Design Practice

This article profoundly impacts how UX/UI designers should approach the creation of AI-powered interfaces. Designers are no longer just crafting aesthetically pleasing or functional layouts; they are now responsible for designing for trust and transparency in AI interactions. This means moving beyond simply displaying AI-generated text to actively designing cues that communicate the AI's confidence level, potential uncertainties, or the sources of its information.

For instance, instead of a chatbot just stating a fact, a designer might implement UI elements like a subtle 'confidence score' indicator, a 'verify this information' prompt, or a clickable link to the source material. This is akin to a human expert saying, 'Based on X, I believe Y, but you might want to cross-reference with Z.' Designers must consider the 'persona' of the AI—should it always sound omniscient, or is there value in designing a more human-like, intellectually humble persona that admits when it's unsure? This shift requires designers to integrate ethical considerations and AI limitations directly into their design process, ensuring that the interface doesn't inadvertently mislead users through false confidence.
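To make this concrete, below is a minimal sketch of how such a message could be modeled and rendered. The data shape (a confidence estimate plus a list of sources), the field names, the thresholds, and the cue wording are illustrative assumptions for this example, not an API described in the article or taken from any particular product.

```typescript
// Minimal sketch: an assistant message that carries an uncertainty estimate
// and source attribution, plus a renderer that surfaces both to the user.
// Field names, thresholds, and the example URL are hypothetical.

interface Source {
  title: string;
  url: string;
}

interface AssistantMessage {
  text: string;
  confidence: number; // 0..1, however the underlying system estimates it
  sources: Source[];  // attribution for the claims made in `text`
}

// Turn the raw confidence value into a cue the UI can show next to the
// answer, instead of presenting every reply as equally certain.
function confidenceCue(confidence: number): string {
  if (confidence >= 0.85) return "High confidence";
  if (confidence >= 0.5) return "Moderate confidence: worth verifying";
  return "I'm not sure: please double-check this";
}

// Render the message as plain text with its cue and attribution links;
// a real interface might use a badge, tooltip, or expandable source list.
function renderMessage(msg: AssistantMessage): string {
  const cue = confidenceCue(msg.confidence);
  const attribution =
    msg.sources.length > 0
      ? "Sources:\n" + msg.sources.map((s) => `- ${s.title}: ${s.url}`).join("\n")
      : "No sources available; treat this as an educated guess.";
  return `${msg.text}\n[${cue}]\n${attribution}`;
}

// Example usage with a placeholder source URL.
const reply: AssistantMessage = {
  text: "Port 443 is the conventional default port for HTTPS traffic.",
  confidence: 0.92,
  sources: [{ title: "HTTPS port documentation", url: "https://example.com/docs/https-ports" }],
};

console.log(renderMessage(reply));
```

The same shape could just as easily feed a 'verify this information' button or a tooltip rather than inline text; the design point is that confidence and attribution become first-class parts of the message rather than an afterthought.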

Practically, this means incorporating new design patterns for uncertainty, attribution, and clarification. Designers will need to prototype and user-test these patterns to ensure they are intuitive and effectively build trust, rather than creating confusion or diminishing the AI's perceived utility. It's about empowering users to critically evaluate AI responses, fostering a healthier, more transparent human-AI collaboration.

Design AI interactions to explicitly communicate uncertainty and source attribution, fostering genuine user trust over false confidence by mimicking human intellectual humility.

How to Apply This

To build more trustworthy AI experiences, designers can implement the following actionable steps:

1. Design explicit uncertainty cues: Integrate UI copy such as 'I'm not sure,' 'This is an educated guess,' or 'Information may be incomplete' when the AI's confidence is low.

2. Integrate source attribution: Provide clear, clickable links or references to the data sources the AI used, allowing users to verify information independently.

3. Prioritize honesty in the AI persona: Develop an AI persona that values transparency and humility rather than always projecting an all-knowing, infallible image.

4. Conduct user testing for trust: Test how users perceive and react to AI uncertainty and attribution features to ensure they build trust effectively, not confusion (a sketch tying these steps together follows this list).
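As a rough illustration of how these four steps might connect in one place, the sketch below classifies a draft answer into an uncertainty level, selects matching cue copy, appends attribution, and records how the user reacted so the pattern can be compared in testing. The types, thresholds, and event names are assumptions made for the example, not part of the article.

```typescript
// Hypothetical sketch tying the four steps together. Thresholds, type names,
// and event labels are illustrative assumptions.

type UncertaintyLevel = "confident" | "hedged" | "unsure";

interface DraftAnswer {
  text: string;
  confidence: number;   // 0..1, from whatever estimator the system exposes
  sourceUrls: string[]; // attribution gathered while producing the answer
}

// Step 1 (and 3): choose a cue that matches an honest, humble persona
// rather than defaulting to an omniscient tone.
function classify(confidence: number): UncertaintyLevel {
  if (confidence >= 0.8) return "confident";
  if (confidence >= 0.5) return "hedged";
  return "unsure";
}

function cueText(level: UncertaintyLevel): string {
  switch (level) {
    case "confident":
      return "";
    case "hedged":
      return "This is an educated guess; information may be incomplete.";
    case "unsure":
      return "I'm not sure about this. Please verify it before relying on it.";
  }
}

// Step 2: attach attribution so users can check the sources themselves.
function attribution(urls: string[]): string {
  return urls.length > 0 ? "Verify with: " + urls.join(", ") : "";
}

function present(answer: DraftAnswer): string {
  const level = classify(answer.confidence);
  return [answer.text, cueText(level), attribution(answer.sourceUrls)]
    .filter((part) => part.length > 0)
    .join("\n");
}

// Step 4: record how users respond to the cues (clicked a source, asked to
// verify, accepted, or dismissed) so prototypes can be compared in testing.
interface TrustEvent {
  level: UncertaintyLevel;
  action: "clicked_source" | "asked_to_verify" | "accepted" | "dismissed";
  timestamp: number;
}

const trustLog: TrustEvent[] = [];

function recordTrustEvent(level: UncertaintyLevel, action: TrustEvent["action"]): void {
  trustLog.push({ level, action, timestamp: Date.now() });
}

// Example usage: present a hedged answer and log that the user clicked a source.
console.log(present({ text: "Try port 8080 for the dev server.", confidence: 0.55, sourceUrls: ["https://example.com/server-setup"] }));
recordTrustEvent("hedged", "clicked_source");
```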

Industry Context

This article fits squarely within the broader industry trend of ethical AI and responsible AI design. As AI becomes more pervasive, there's a growing recognition that its utility must be balanced with trustworthiness and transparency. This discussion moves beyond simply optimizing for AI performance to considering its societal and psychological impact on users. It highlights the evolving role of UX in shaping not just the usability, but also the ethical footprint of AI systems, pushing designers to be advocates for user understanding and informed decision-making in an AI-driven world.