Leveraging Implicit Feedback from Deployment Data in Dialogue

Richard Yuanzhe Pang, Stephen Roller, Kyunghyun Cho, He He, Jason Weston


Abstract
We study improving social conversational agents by learning from natural dialogue between users and a deployed model, without extra annotations. To implicitly measure the quality of a machine-generated utterance, we leverage signals such as the length, sentiment, and reaction of the future human utterances in the collected dialogue episodes. Our experiments use the publicly released deployment data from BlenderBot (Xu et al., 2023). Human evaluation indicates that our new models improve over baseline responses; however, we find that some proxy signals can also lead to more generations with undesirable properties. For example, optimizing for conversation length can lead to more controversial or unfriendly generations compared to the baseline, whereas optimizing for positive sentiment or reaction can decrease these behaviors.
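To make the implicit-feedback idea concrete, here is a minimal Python sketch that scores each bot utterance by the human reply that follows it and keeps high-scoring utterances, e.g. as fine-tuning targets. The lexicon-based sentiment proxy, the signal weights, and the helper names (Turn, implicit_score, select_positive_examples) are illustrative assumptions, not the paper's implementation, which derives these signals from real BlenderBot deployment data.

```python
from dataclasses import dataclass

# Toy lexicons standing in for a learned sentiment/reaction signal
# (illustrative assumption; the paper uses deployment-data signals).
POSITIVE = {"great", "thanks", "love", "haha", "cool", "nice"}
NEGATIVE = {"boring", "wrong", "stop", "rude", "hate"}

@dataclass
class Turn:
    bot_utterance: str  # machine-generated message
    human_reply: str    # the user's next message in the episode

def implicit_score(turn: Turn, w_len: float = 0.5, w_sent: float = 0.5) -> float:
    """Combine two proxy signals from the human reply:
    normalized response length and lexicon-based sentiment."""
    words = [w.strip(".,!?") for w in turn.human_reply.lower().split()]
    length_signal = min(len(words) / 20.0, 1.0)  # saturate at 20 words
    sentiment_signal = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return w_len * length_signal + w_sent * sentiment_signal

def select_positive_examples(turns, threshold=0.5):
    """Keep bot utterances whose ensuing human reply scores well,
    e.g. as targets for further fine-tuning."""
    return [t.bot_utterance for t in turns if implicit_score(t) >= threshold]

episode = [
    Turn("Tell me about your day!", "It was great, thanks for asking! I went hiking."),
    Turn("You are wrong about that.", "Stop, that's rude."),
]
print(select_positive_examples(episode))  # -> ['Tell me about your day!']
```

Note that the choice of proxy matters: as the abstract observes, a pure length signal can reward controversial replies, while sentiment-based signals discourage them.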
Anthology ID: 2024.eacl-short.8
Volume: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Yvette Graham, Matthew Purver
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 60–75
URL: https://aclanthology.org/2024.eacl-short.8
Cite (ACL): Richard Yuanzhe Pang, Stephen Roller, Kyunghyun Cho, He He, and Jason Weston. 2024. Leveraging Implicit Feedback from Deployment Data in Dialogue. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pages 60–75, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): Leveraging Implicit Feedback from Deployment Data in Dialogue (Pang et al., EACL 2024)
PDF: https://aclanthology.org/2024.eacl-short.8.pdf
Video: https://aclanthology.org/2024.eacl-short.8.mp4