Working with generative AI can feel like wandering through a maze in the dark. Even if you start to find your way, you often don’t know how you got there in the first place.
Footprints trace the relationship between sources and results across multiple prompts. This way, a user can understand the path the AI took through its training data and has the agency to intervene.
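To make the idea concrete, here is a minimal sketch of what a footprint record might look like: each generation keeps a link back to the generation it remixed, so the chain can be walked to the original prompt. All names here are illustrative, not any product's actual schema.

```python
# Hypothetical footprint record: each generation remembers its parent,
# so a chain of remixes can be traced back to the original prompt.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Generation:
    prompt: str
    parent: Optional["Generation"] = None  # the generation this one remixed

def trace(gen: Optional[Generation]) -> list[str]:
    """Walk the chain back and return prompts from oldest to newest."""
    steps = []
    while gen is not None:
        steps.append(gen.prompt)
        gen = gen.parent
    return list(reversed(steps))

original = Generation("a lighthouse at dusk")
remix = Generation("same scene, but watercolor", parent=original)
print(trace(remix))  # ['a lighthouse at dusk', 'same scene, but watercolor']
```

The point is not the data structure itself but the guarantee it gives the user: every result carries a path back to the intent that produced it.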
This pattern manifests in a few ways:
- In the product flow: Some products, notably creative generators like Midjourney, link subsequent regenerations and remixes back to the original prompt.
- Exported from the product: Midjourney, Adobe Firefly, and other image generators include the dominant tokens from the prompt in the metadata of the images they produce.
- As annotations: GitHub Copilot takes a distinct approach: the user can instruct it to annotate its own work, prompting the AI to share its reasoning openly so the user can trace back and understand how it came to a decision in producing the code.
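The "exported from the product" flavor works because image formats have room for arbitrary text. As a rough sketch of the mechanics (not any generator's actual format), the stdlib-only Python below builds a tiny PNG with the prompt in a tEXt chunk, then shows how a downstream tool could scan the chunks and recover it; the "prompt" key is an assumption for illustration.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 grayscale PNG carrying the prompt in a tEXt chunk.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"tEXt", b"prompt\x00a lighthouse at dusk")
       + png_chunk(b"IDAT", idat)
       + png_chunk(b"IEND", b""))

def read_text_chunks(data: bytes) -> dict:
    """Scan the chunk stream and collect tEXt key/value pairs."""
    out, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode()] = val.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return out

print(read_text_chunks(png))  # {'prompt': 'a lighthouse at dusk'}
```

Because the footprint travels inside the file itself, it survives downloads and re-uploads in a way that product-flow links do not.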
Inadvertent footprints
Not all footprints are designed to help the user. AI-generated content leaves its own traces behind, which can reveal its presence in content ostensibly written by humans. As The Verge points out, you can search any social media app or content site for the phrase "as an AI language model" and find hordes of results from people who either copy-pasted straight out of the AI output or incorporated it into their work.
An opportunity
Perhaps the most notable aspect of the footprints pattern is how rare it is.
Countless AI tools let you remix your content over and over across multiple modalities, but very few maintain the scent of generation. This makes it nearly impossible to retrace your steps as you work with AI toward a result.
This is strange, given the number of patterns that exist to help users generate multiple iterations of their intent through variants and inpainting.
Take Jasper.ai, for example. It's easy and impressive to highlight some text and ask the AI to generate a different way of writing it. Once you accept the change, though, there's no going back. You can't compare the two approaches, and if you later want to revisit the instructions you gave the AI about voice and tone, content, or even reference sources, you can't.
This lack of support for tracing your footprints seems likely to become a problem, especially when we consider what other data our footprints leave behind. Tools like Content Credentials' Verify have popped up to identify the presence of AI content, but so far they aren't capable of tracing it back to the source.