Giving users the ability to rate their interactions has become a table-stakes pattern in service and conversational experiences (think chat support or Uber).
On its face this pattern is tame and familiar, but its risks are often hidden from the user: what actually happens after they rate their experience?
The effects may be innocuous, but with unknown actors in the space and little transparency into training data, asking for this kind of input without offering the user a direct benefit can erode trust.
In application, this pattern is fairly standardized, as thumbs or stars, with a few outliers. We should not expect it to change much.
What we should expect, or at least hope, to see is more transparency: telling users what happens as a result of their rating, and making clear whether they are rating the response to their request or the model as a whole.
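One way to make that distinction concrete is to carry the rating's scope and a user-facing disclosure alongside the rating itself. The sketch below is purely illustrative; every name and field here is a hypothetical design, not an API from any existing product.

```python
from dataclasses import dataclass
from enum import Enum

class RatingScope(Enum):
    """What the user's rating applies to (hypothetical taxonomy)."""
    RESPONSE = "response"  # rates this single reply
    MODEL = "model"        # rates the model/experience as a whole

@dataclass
class FeedbackEvent:
    """A hypothetical feedback payload that records not just the rating,
    but what it targets and what the user was told about its use."""
    conversation_id: str
    message_id: str
    rating: int            # e.g. +1 / -1 for a thumbs pattern
    scope: RatingScope
    disclosure: str        # text shown to the user explaining how the rating is used

event = FeedbackEvent(
    conversation_id="conv-123",
    message_id="msg-456",
    rating=1,
    scope=RatingScope.RESPONSE,
    disclosure="Your rating helps rank answers; it is not used to retrain the model.",
)
print(event.scope.value)
```

The point of the `scope` and `disclosure` fields is that the UI can surface them directly, so the user knows what they are rating and what happens next.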