Giving users the ability to rate their interactions has become a table-stakes pattern in service and conversational experiences (think chat support or Uber).
On its face, this pattern is tame and familiar. Its potential risk is buried, hidden from the user: what happens after they rate their experience?
- In scenario A, the user knows they have been interacting with the model. A thumbs-up or thumbs-down signals to prompt engineers whether the design of the model itself is effective. This could be especially helpful for proprietary internal models or secure models trained on sensitive data.
- In scenario B, the user doesn’t know whether they are interacting with a human or a model, or they don’t know what experiments the company is running to potentially replace human engagement with digital engagement. Even an average person could feel put off by the ethical implications of that lack of transparency.
The effects of this may be innocuous, but with unknown actors in the space and little transparency into training data, asking for this type of input without offering the user a direct benefit can erode trust.
In practice, this pattern is fairly standardized as thumbs or stars, with a few outliers, and we should not expect its form to change much.
What we should expect to see, or at least hope to see, is more information about what happens as a result of the user's rating, and transparency about whether they are rating the response to their request or the model as a whole.
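To make that distinction concrete, here is a minimal sketch, in TypeScript, of what a more transparent feedback payload could capture. The type and field names are hypothetical, not any particular product's API; the point is that the rating's scope and the disclosures shown to the user can be recorded explicitly rather than left implicit.

```typescript
// Hypothetical sketch of a feedback event that makes rating scope explicit.
// No specific product's schema is implied.

type RatingScope = "single-response" | "conversation" | "model";

interface FeedbackEvent {
  rating: "up" | "down";                              // the familiar thumbs interaction
  scope: RatingScope;                                  // what the user was told they are rating
  respondentType: "model" | "human" | "undisclosed";  // what the user was told about who answered
  disclosureShown: boolean;                            // whether the UI explained how the rating is used
  timestamp: string;                                   // ISO 8601
}

// Example: a thumbs-down on a single model response, with disclosure shown to the user.
const example: FeedbackEvent = {
  rating: "down",
  scope: "single-response",
  respondentType: "model",
  disclosureShown: true,
  timestamp: new Date().toISOString(),
};

console.log(JSON.stringify(example, null, 2));
```

Recording scope and disclosure alongside the rating would let teams report back to users what their feedback actually influenced, which is the transparency this pattern currently lacks.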