There are growing signs that AI is better at writing prompts than humans are. Even behind the scenes, AI generators take the prompt you write and make logical adjustments to get the best outcome. That's great for offloading manual work, but what if you want to understand how the AI reached a result?

Prompt transparency supports that ability.

Whether it is shared explicitly or implicitly, when users can see how the AI logically produced a generation, they can check for bias, improve their own prompting skills, and stay in control.

Implicit transparency

Most AI products don't show users what happens behind the scenes as an input is processed, so users don't know what adjustments the AI is making. Thankfully, with generative AI this is easily remedied: simply ask.

ChatGPT will tell you its revised prompt if you ask it
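One way a product could build this kind of on-demand transparency in, rather than relying on users to ask, is to wrap the user's input in a meta-prompt that asks the model to disclose its rewritten prompt alongside the answer. The sketch below is a minimal illustration; the template wording and section labels are assumptions, not any vendor's actual system prompt:

```python
def transparency_wrapper(user_prompt: str) -> str:
    """Wrap a raw user prompt in a meta-prompt asking the model to
    disclose the revised prompt it actually answered.

    The template text is illustrative only -- not the system prompt
    any real product uses.
    """
    return (
        "Before answering, rewrite the user's request into the clearer, "
        "more specific prompt you will actually follow.\n"
        "Return your response in two labeled sections:\n"
        "REVISED PROMPT: the rewritten prompt\n"
        "ANSWER: your response to the revised prompt\n\n"
        f"User request: {user_prompt}"
    )

# The wrapped string is what would be sent to the model's API;
# the two labeled sections make the rewrite visible to the user.
wrapped = transparency_wrapper("make me a picture of a dog")
print(wrapped)
```

Parsing the two labeled sections out of the response then lets the UI show the revised prompt next to the generation itself.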

Explicit transparency

Some generators share prompts up front, displaying them in galleries or alongside sample generations, or offering to have the AI improve your prompt for you.

The Udio gallery shows the prompt used to create the song

Co-generation

Foundation models and many AI products include a workspace or playground where users can test various prompts against the model. Anthropic offers the option to have AI co-write your prompt with you, taking the user's input and applying advanced prompting techniques to it. In this way, a user can see how the AI refines their prompt into an improved form to logically get the best outcome.

Example of Anthropic transparently showing the user how it would improve their prompt
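In code, a co-generation flow is essentially a two-step loop: ask the model to improve the prompt, show the improved version to the user for approval, then run whichever version they choose. Here is a hedged sketch of that flow; `call_model` and `confirm` are placeholders for whatever completion API and UI confirmation a real product would use:

```python
def improve_prompt(raw: str, call_model) -> str:
    """Step 1: ask the model to rewrite the user's prompt using stronger
    prompting techniques (role, constraints, output format).
    `call_model` stands in for a real completion API."""
    instruction = (
        "Rewrite the following prompt to be clearer and more specific. "
        "Add a role, explicit constraints, and a desired output format. "
        "Return only the rewritten prompt.\n\n" + raw
    )
    return call_model(instruction)

def co_generate(raw: str, call_model, confirm) -> str:
    """Step 2: surface the improved prompt for approval before running it,
    so the user sees exactly what the model will act on."""
    improved = improve_prompt(raw, call_model)
    final_prompt = improved if confirm(improved) else raw
    return call_model(final_prompt)

# Demo with stand-ins: an echo "model" and automatic approval.
fake_model = lambda p: f"[model output for: {p[:40]}]"
result = co_generate("write a poem", fake_model, confirm=lambda p: True)
print(result)
```

The key design choice is the `confirm` step: showing the rewritten prompt before execution is what turns an invisible rewrite into transparency.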

Details and variations

  • Prompts can be shared up front or the AI can offer them on demand
  • Prompts in galleries and suggestions might be simplified from their actual form for the sake of space
  • Consider letting the AI improve the prompt up front for the user so they don't have to guess

Considerations

Positives

Show your work

Prompt transparency makes users savvier users of AI, helping them improve their own prompting techniques or get more comfortable ceding responsibility to the AI where that makes more sense.

Potential risks

Undirected manipulation

Users may be surprised to learn that the AI is changing their words behind the scenes. When they discover this only through implicit transparency, it can reduce trust and cause confusion.

Use when:
The AI alters the original prompt or builds it in plain language from a collection of tokens

Examples

Galleries like Midjourney show you the prompt behind featured images
The website generator Musho.ai also showcases the prompt that was used
Prompts for images can be reverse engineered using Midjourney's /describe function
ChatGPT and other AI chats can be directed to share how they interpreted your prompt and changed it to be more clear
Copy.ai can improve your prompt on command from the input box
Ideogram takes a basic prompt and generates a more detailed prompt on command for the image
If you ask Dall-E to tell you how it modified your prompt behind the scenes, it will tell you