How do we encourage responsible use of an emerging technology?

Users span technical backgrounds. Their familiarity with the limits of generative AI, its ethical concerns, and how to responsibly interpret its results will vary widely. Meanwhile, companies are looking for simple ways to shield themselves from liability and push responsibility onto users.

This conflict between user capabilities and companies' legal exposure has led to near-universal adoption of the caveat pattern.

Caveats can be helpful. For less technical users, they are a simple way to signal the product's limitations. In that regard, a caveat is not much different from the warning label on your hair dryer telling you not to use it in the bath.

At the same time, these models are complex and rapidly evolving. It may be sufficient to smack a caveat on as a first step, but companies will eventually need to find better ways to help users navigate the nuances of these models if they want to expand adoption.

Furthermore, we should question whether users are already blind to this pattern given its ubiquity. Similar to the standard UX pattern of requiring users to accept terms of service before signing up for a product, should we believe that users critically understand why a caveat exists and what it means?

While your lawyers will likely insist that you smack a caveat onto your AI interface, consider going a step further to help users get better outcomes from the model. Use wayfinders to guide them to create better prompts, use sources and citations to help them understand how the AI derived its response, and make your AI more transparent so users can understand what is happening behind the scenes.

Details and variations

  • The standard convention is to include text conveying, in some form, that AI is prone to mistakes
  • Consider linking to other documentation, such as a knowledge base, to give users better context
  • Give users advice on how to improve their results, such as citing references or primary sources, or suggesting advanced prompting techniques (a sketch of these conventions follows this list)
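For teams building this into a product, the conventions above reduce to a small UI element: a short disclaimer, an optional link to deeper documentation, and optional tips for getting better results. Below is a minimal sketch in React/TypeScript under those assumptions; the AICaveat component, its props, and the example URL are illustrative, not taken from any particular product.

```tsx
// Minimal sketch of the caveat pattern as a React/TypeScript component.
// The component name, props, and URLs are illustrative assumptions.
import React from "react";

interface AICaveatProps {
  /** Short disclaimer shown alongside the AI response. */
  message?: string;
  /** Optional link to deeper documentation, e.g. a knowledge base. */
  learnMoreUrl?: string;
  /** Optional tips for getting better results, e.g. prompting advice. */
  tips?: string[];
}

export function AICaveat({
  message = "AI responses may be inaccurate. Verify important information.",
  learnMoreUrl,
  tips = [],
}: AICaveatProps) {
  return (
    <aside role="note" aria-label="AI disclaimer">
      <p>
        {message}
        {learnMoreUrl && (
          <>
            {" "}
            <a href={learnMoreUrl}>Learn more</a>
          </>
        )}
      </p>
      {tips.length > 0 && (
        <ul>
          {tips.map((tip) => (
            <li key={tip}>{tip}</li>
          ))}
        </ul>
      )}
    </aside>
  );
}

// Example usage (hypothetical URL and tips):
// <AICaveat
//   learnMoreUrl="/help/how-our-ai-works"
//   tips={[
//     "Cite primary sources in your prompt",
//     "Ask the model to show its reasoning",
//   ]}
// />
```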

Considerations

Positives

Encourages responsible usage
Reduces the likelihood that users will mistake the model's response for absolute truth.

Potential risks

Doesn't protect companies from hallucinations or weird behavior
Even as models are tuned, there remain many unknown unknowns that can affect a user's experience with the technology. Air Canada recently had to refund a customer, despite the refund being against its policy, after its chatbot provided inaccurate information. A caveat is not sufficient to remove that responsibility.

Companies could use these to skirt ethical responsibility
In the race to get AI products to market, companies are not prioritizing well-constructed experiences. Chatbots that quickly turn racist, image generators that ignore entire races, aggregated sources with inaccurate data: these are recent examples of this technology gone awry. Caveats are a band-aid, not a suitable substitute for thoughtful experiences. There is an ethical imperative for companies to think about how to ensure responsible use and interpretation of the content they provide.

Use when:
You don't want users to automatically trust information returned by the AI, since the strength of the results depends on the strength of the training data and the model itself.

Examples

GitHub Copilot
FigJam
Adobe Acrobat
Google Search
ChatGPT
Notion