AI has a trust problem.

One way to help users feel more comfortable is to be upfront and clear when they are interacting with AI. When users can opt into the experience feeling fully informed, they may be more likely to suspend their reservations long enough for you to deliver value that keeps them engaged and builds trust.

Disclosure patterns label AI interactions and content so users can distinguish them from content created by other humans or interaction patterns that don't include AI. Depending on your situation, there are a few options to choose from:

  • For strictly AI products, disclosure might not be needed. Tools like Perplexity and ChatGPT are built entirely around AI features, so users already expect its presence. You can still help users by separating content they have uploaded or referenced from sources the bot retrieves on its own.
  • For blended products where AI-generated or edited content is interspersed with human-created content, consider how you might label the content created by the computer. Labeling prevents users from inadvertently presenting AI writing as their own and gives them the agency to manage the content within their system.
  • For AI agents, label content delivered by a bot within the chat. People have mixed feelings about discovering they were interacting with a bot when they thought they were speaking with another human. Avoid sticky situations and damage to your brand by being up front. This will become even more important as agentic AI grows in adoption.
  • In all cases, proactively inform users when they are interacting with a product powered by AI, particularly if their data can be collected and they don't have the ability to opt out.

Different approaches

Intercom's Fin explicitly labels the messages sent by the AI. When the conversation is passed to a human, the inline badge on the individual message persists. A user can work backwards in their conversation to see exactly when they started talking to a real person.

Intercom's Fin uses a clear "AI" label to make computer-generated responses stand out

IA Writer 7 explicitly differentiates between AI-authored text and human-authored text. Copy appears grey if it was brought in from an AI source, and is only set in the standard high-contrast text color once a human has revised it.

IA Writer differentiates content written by AI through the use of text color

Limitless.ai has also taken an opinionated approach to consent when it comes to AI recording tools. Their new pendant will only capture the words spoken by others if their consent has been registered by the device. Otherwise, only the wearer's words will be recorded in their Limitless account.

The Limitless pendant will only capture text in a transcript after a speaker has given consent
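The consent gate described above can be sketched as a simple filter over a transcript. This is a minimal illustration of the pattern, not Limitless's actual API; the type and function names here are hypothetical:

```typescript
// A single captured utterance in a conversation (hypothetical shape).
interface Utterance {
  speakerId: string;
  text: string;
}

// Keep only utterances from the wearer or from speakers who have
// registered consent; everything else is dropped before storage.
function filterByConsent(
  utterances: Utterance[],
  wearerId: string,
  consented: Set<string>
): Utterance[] {
  return utterances.filter(
    (u) => u.speakerId === wearerId || consented.has(u.speakerId)
  );
}
```

The key design choice is that the filter runs before anything is persisted, so non-consenting speakers never enter the stored record at all.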

Dark patterns

On the other extreme are tools that allow no opt-out functionality, such as Meta's suite of tools. They gain access to more training data, but at the cost of further degrading public trust in their integrity. For a company as large as theirs, perhaps the ends justify the means (yuck). It's up to you to determine whether you are willing to take that risk at the cost of user autonomy. I guess I've made my opinion known.

Details and variations

  • Disclosures tend to include some text informing users that AI is present. An icon or symbol is optional.
  • Users should know when they are being transferred to a human, and vice versa.
  • Give users the option to opt out of interacting with AI by having it announce its presence or by requiring consent before someone is recorded.
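The labeling and handoff behaviors above can be sketched as a tiny transcript renderer. The sender types, badge text, and handoff message are illustrative assumptions, not any vendor's implementation:

```typescript
// Who authored a given chat message (hypothetical taxonomy).
type Sender = "user" | "ai" | "human_agent";

interface Message {
  sender: Sender;
  text: string;
}

// Prefix each AI message with an explicit "AI" badge, and insert a
// system line at the exact point where the conversation is handed
// from the bot to a person, so users can trace the transition later.
function renderTranscript(messages: Message[]): string[] {
  const lines: string[] = [];
  let previous: Sender | null = null;
  for (const msg of messages) {
    if (msg.sender === "human_agent" && previous === "ai") {
      lines.push("--- You are now talking to a human agent ---");
    }
    const badge = msg.sender === "ai" ? "[AI] " : "";
    lines.push(`${badge}${msg.text}`);
    previous = msg.sender;
  }
  return lines;
}
```

Because the badge lives on each individual message rather than on the conversation, the disclosure persists even after a human takes over, matching the behavior described for Intercom's Fin.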

Considerations

Positives

Transparency as a feature

The companies listed above that are taking innovative approaches to transparency and privacy are also the ones breaking out from the pack in their markets. User agency is not mutually exclusive with business success. Ask yourself: how could this constraint fuel creativity? How might we make transparency and privacy a differentiator?

Trust is currency

It's uncertain how the general lack of trust in AI will shape market outcomes and our perception of user experience. But there is a business benefit to investing in features and experiences that build trust, beyond the innovation gains noted above. Companies that build positive relationships with users now will have more resilience when adapting to changing market forces, and to a shifting policy and legal landscape as well.

Potential risks

Don't ship the org chart

The inverse of the points raised above is that some companies will incentivize business outcomes over the needs and experiences of their users. Designers have limited agency in these decisions, but we can push back to show how we can protect both at the same time. Be mindful of product requirements and business decisions that can harm individual users for the benefit of business scale. If nothing else, get caught trying!

Use when:
Content or interactions generated by AI cannot be clearly distinguished from materials generated by humans. Disclosures give users agency over how they interact with content online.

Examples

Intercom's Fin and similar chat bots will label messages and conversations where the user is interacting with AI
IA Writer sets all AI-generated text in a lower-contrast grey. The text is only set in its default high-contrast state when the user has modified it themselves
Productboard labels any insights that AI derives from notes to distinguish them from insights contributed by humans
Productboard places a comparable label on its auto-generated summaries, similar to labels seen in AI-summarization tools like Grain or Gong
When AI recording software like Tactiq is added to a conversation, messages in the chat alert any other participants so they are aware of its presence
Zoom notifies all participants directly via a dialogue that disrupts their flow so it cannot be missed
For conversations without a chat or screen interface, Limitless demonstrates what consent can look like so all participants in a conversation know AI is involved