How do we encourage responsible use of an emerging technology?
Users span a wide range of technical backgrounds. Their familiarity with the limits of generative AI, its ethical concerns, or how to responsibly interpret its results varies widely. Meanwhile, companies are looking for simple ways to shield themselves from liability and push responsibility onto users.
This conflict between user capabilities and companies' legal exposure has led to near-universal adoption of the caveat pattern.
Caveats can be helpful. For less technical users, they are a simple way to signal the limitations of the product. In that regard, a caveat is not much different from the warning label on your hair dryer telling you not to use it in the bath.
At the same time, these models are complex and rapidly evolving. Slapping on a caveat may be sufficient as a first step, but companies will eventually need to find better ways to help users navigate the nuances of these models if they want to expand adoption.
Furthermore, we should question whether users have already gone blind to this pattern, given its ubiquity. Much like the standard UX pattern of requiring users to accept terms of service before signing up for a product, should we believe that users critically understand why a caveat exists and what it means?
While your lawyers will likely insist that you slap a caveat onto your AI interface, consider going a step further to help users get better outcomes from the model. Use wayfinders to guide them toward better prompts, surface sources and citations so they can see how the AI derived its response, and make your AI more transparent so users can understand what is happening behind the scenes.
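To make the citations idea concrete, here is a minimal sketch in TypeScript of a response payload that carries its own provenance. The `AssistantResponse` and `Citation` shapes, the `renderResponse` helper, and the example data are all hypothetical, not taken from any particular framework; the point is that a caveat scoped to a specific answer, shown next to the sources behind it, gives users more to reason with than one blanket disclaimer.

```ts
// Hypothetical shapes for an AI answer that ships with its sources
// and a caveat scoped to this answer, not a one-size-fits-all label.
interface Citation {
  title: string;
  url: string;
  snippet: string; // the passage the answer was grounded in
}

interface AssistantResponse {
  answer: string;
  citations: Citation[];
  caveat?: string; // optional: only shown when the topic warrants it
}

// Render the answer with numbered sources so users can trace claims
// back to where they came from instead of trusting a blanket warning.
function renderResponse(res: AssistantResponse): string {
  const sources = res.citations
    .map((c, i) => `[${i + 1}] ${c.title} (${c.url})`)
    .join("\n");
  const caveat = res.caveat ? `\n\nNote: ${res.caveat}` : "";
  return `${res.answer}\n\nSources:\n${sources}${caveat}`;
}

// Illustrative data only; the URL is a placeholder.
const example: AssistantResponse = {
  answer: "Hair dryers are not designed for use near water.",
  citations: [
    {
      title: "Appliance safety guide",
      url: "https://example.com/safety-guide",
      snippet: "Keep hair dryers away from sinks, tubs, and standing water.",
    },
  ],
  caveat: "Safety guidance varies by model; check your device's manual.",
};

console.log(renderResponse(example));
```

Even this small step changes the user's relationship with the answer: the caveat explains why caution applies here, and the citations give a path to verify the claim rather than simply accept or dismiss it.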