Filters help users constrain and direct the AI upfront. They are a close cousin to parameters, which guide the tokens and focus of the AI. Filters, on the other hand, provide stricter boundaries, either limiting which references should be considered by the AI in the first place, or limiting the output to specific formats or modalities.
Different modes of prompting
Working with AI is a process of tuning inputs and controlling outcomes. There are generally four scenarios users find themselves in:
- Focusing: When the user has a specific outcome in mind and a clear sense of how to get there (ex: editing an existing piece of content with the intent of changing its tone, format, etc)
- Navigating: Moments when users have a goal in mind but aren't sure how to get there (ex: when you are searching for the answer to a specific question but don't know which inputs will get you there)
- Synthesizing: Drawing unknown conclusions or decisions from a fixed set of material (ex: looking for trends and insights from a known set of data)
- Browsing: Pure discovery mode, when you are playing with different inputs and seeing where it takes you (ex: creatively prompting with different tokens to see what images they produce)
Filters can help users in all of these scenarios. When the user has a clear sense of what inputs they want to direct the AI to, they can constrain the data that it pulls in as References. When they know the outcome they are targeting but want to leave the input open, they can constrain the format. Each of these is an example of filters helping the user drive the AI where they want it to go.
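As a rough sketch, these two kinds of constraint could be modeled as fields on a request object. Everything here is illustrative, assuming a hypothetical interface rather than any real product's API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PromptFilters:
    """Hypothetical filter settings attached to an AI request."""
    # Limit which references the AI may draw on (empty = no restriction)
    sources: list = field(default_factory=list)
    # Constrain the format/modality of the output (None = leave open)
    output_format: Optional[str] = None

def apply_source_filter(references, filters):
    """Keep only references the user has allowed."""
    if not filters.sources:
        return list(references)
    return [r for r in references if r in filters.sources]

# A "Focusing" user constrains both input and output;
# a "Navigating" user might constrain only the format.
f = PromptFilters(sources=["design-system-docs"], output_format="bullet-list")
print(apply_source_filter(["design-system-docs", "random-blog"], f))
# → ['design-system-docs']
```

The design choice worth noting is that an empty filter means "no restriction," which keeps open-ended modes like Browsing as the default behavior.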
Commercial considerations
The commercial implications of this are massive, and it's something companies have only just begun to explore.
Could authors assign metadata to their work that allows it to be licensed to models under specific commercial terms, allowing their content to be introduced as a premium filter?
Could this information be traced through the watermarked fingerprints of the sources aggregated in existing LLMs?
Could specific modalities, or modality formats (e.g. length), be introduced as a premium filter on the AI outcome? Or perhaps some would only be available when using data that attributes the original author.
There are many ways designers could use filters to improve both the ethics and the results of the models they work with, and of the interfaces they design to interact with those models.