Synthesis

The synthesize action combines data from multiple sources and restructures it to extract patterns, insights, or themes. This sets synthesis apart from summarization, which compresses the original information without introducing any new interpretation.

Because synthesis is more interpretive than other actions, an AI performing it can distort evidence or overstate confidence. The UX challenge is to make the act of reasoning visible and reviewable: users should see what was combined, how it was weighed, and where the AI is speculating.

Variations

All synthesis actions are sensitive because they layer synthetic interpretation onto the source material. Depending on the context, supporting features and information become critical to maintaining user agency and oversight:

  • Aggregated synthesis gathers findings from multiple sources and rephrases and reports them without added interpretation. Examples include search result interfaces like Perplexity, Google, or Notion AI. This use case is closest to the summary action, differing only in the prioritization or narrative structure the AI infers. Provide citations and a list of references so users can verify claims (a data-model sketch follows this list).
  • Comparative synthesis aligns, contrasts, or reconciles multiple viewpoints or data sets, as in document review in tools like ChatGPT or legal platforms. Provide visual diffs that support the AI's claims and let the user verify them.
  • Thematic synthesis extracts underlying patterns or categories from a set of information, like customer feedback or research notes. To let the user guide the process and check the results for bias, pair this action with a conversational interface or another surface where the AI can explain its logic. Always allow user overrides in grouping and labeling.
  • Generative synthesis allows the model to build new interpretations or implications from references and present them as objective findings, such as the “Key findings” shown after a deep research run in Perplexity or ChatGPT. This form of synthesis is highly susceptible to hallucinations and erroneous conclusions, so ensure users have access to the model's stream of thought, including reasoning steps, reference analysis, and final conclusions.
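To make the verification features above concrete, here is a minimal TypeScript sketch of one possible data model for a synthesized finding. It is an illustration only: the type and field names (SynthesizedFinding, Citation, SynthesisKind) are assumptions, not drawn from any of the products mentioned.

```typescript
// Hypothetical data model for surfacing synthesis provenance in a UI.
// All names are illustrative, not from any specific product.

/** A source the AI drew from, exposed so users can verify claims. */
interface Citation {
  sourceId: string;   // stable identifier for the underlying document
  title: string;      // human-readable label shown in the reference list
  url?: string;       // link out to the original material, if available
  excerpt?: string;   // the quoted span the claim rests on
}

/** How much interpretation the AI added, mirroring the variations above. */
type SynthesisKind = "aggregated" | "comparative" | "thematic" | "generative";

/** One synthesized finding, carrying enough context for user review. */
interface SynthesizedFinding {
  kind: SynthesisKind;
  statement: string;     // the claim shown to the user
  citations: Citation[]; // every source the statement draws on
  reasoning?: string[];  // optional step-by-step trace; most important
                         // for generative synthesis
}
```

One design implication of a model like this: an aggregated finding should ship with a populated citations array and little else, while a generative finding should also expose the reasoning trace, since that is where hallucinated leaps are most likely to hide.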

Design considerations

  • Keep reasoning visible. Treat synthesis as a transparent process, not a polished result. Show how the system grouped sources, what evidence it used, and which logic connects them. Visibility turns AI reasoning from a black box into a tool users can inspect and trust.
  • Separate evidence from interpretation. Present factual statements and inferred insights differently. Use distinct sections or visual cues to signal when the AI is quoting, summarizing, or speculating. Users should always know which ideas come from the data and which from the model’s reasoning (one way to model this is sketched after this list).
  • Expose uncertainty. Synthesis should acknowledge when confidence is low or evidence is incomplete. Visual confidence indicators work well on cards and in dashboards, while in written form cues like “limited support” or “conflicting data” are effective.
  • Verify key decisions. Let users check the model’s conclusions for accuracy before adopting synthesized findings or structure. This might take the form of verifying labels on data clusters or synthesized data columns, or reviewing key themes in user research and support tickets before adding additional data.
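The following TypeScript sketch shows one way to encode the evidence-versus-interpretation split, confidence cues, and review state described above. Every name here (SynthesisStatement, Confidence, reviewed, the CSS class strings) is a hypothetical for illustration, not an established API.

```typescript
// Hypothetical UI model separating evidence from interpretation.
// Field and type names are assumptions for illustration only.

/** Confidence bands that can drive badges on cards and dashboards. */
type Confidence = "high" | "limited-support" | "conflicting-data";

/** Verbatim or compressed source material: render as evidence. */
interface EvidenceStatement {
  kind: "quote" | "summary";
  text: string;
  sourceId: string;        // ties the statement back to its citation
}

/** Model-inferred insight: render distinctly and gate behind review. */
interface InferredStatement {
  kind: "inference";
  text: string;
  confidence: Confidence;  // shown as "limited support" etc. in prose
  reviewed: boolean;       // flipped to true once a user verifies it
}

type SynthesisStatement = EvidenceStatement | InferredStatement;

/** Choose a visual treatment based on whether the claim is evidence
 *  or interpretation, so the distinction stays legible everywhere. */
function styleFor(s: SynthesisStatement): string {
  return s.kind === "inference"
    ? "callout--interpretation"
    : "callout--evidence";
}
```

Making the distinction a type, rather than a styling afterthought, forces every surface that renders a statement to decide how it will signal speculation, and gives the review workflow a single flag to track before synthesized output is adopted.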