Adi Renduchintala
2023
AutoReply: Detecting Nonsense in Dialogue with Discriminative Replies
Weiyan Shi | Emily Dinan | Adi Renduchintala | Daniel Fried | Athul Jacob | Zhou Yu | Mike Lewis
Findings of the Association for Computational Linguistics: EMNLP 2023
We show that dialogue models can detect errors in their own messages by calculating the likelihood of replies that are indicative of poor messages. For example, if an agent believes its partner is likely to respond “I don’t understand” to a candidate message, that message may not make sense, so an alternative message should be chosen. We evaluate our approach on a dataset from the game Diplomacy, which contains long dialogues richly grounded in the game state, on which existing models make many errors. We first show that handcrafted replies can be effective for the task of detecting nonsense in applications as complex as Diplomacy. We then design AutoReply, an algorithm to search for such discriminative replies automatically, given a small number of annotated dialogue examples. We find that AutoReply-generated replies outperform handcrafted replies and perform on par with supervised learning approaches.
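The core test described in the abstract, scoring a candidate message by how likely the model thinks the partner would answer with a reply such as “I don’t understand”, can be sketched roughly as follows. This is a minimal illustration only: it assumes a generic Hugging Face causal language model (`gpt2` as a placeholder) rather than the paper's Diplomacy dialogue model, and the reply list, threshold, and helper names (`reply_logprob`, `looks_like_nonsense`) are hypothetical.

```python
# Minimal sketch of reply-likelihood nonsense detection.
# Assumptions: "gpt2" stands in for the dialogue model; the reply set
# and threshold are illustrative, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Handcrafted "discriminative replies" a partner might send after a
# nonsensical message (illustrative examples only).
DISCRIMINATIVE_REPLIES = ["I don't understand.", "What do you mean?"]

@torch.no_grad()
def reply_logprob(context: str, reply: str) -> float:
    """Log-probability the model assigns to `reply` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    reply_ids = tokenizer(reply, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, reply_ids], dim=1)
    logits = model(input_ids).logits
    # Score only the reply tokens: logits at position t predict token t+1.
    reply_logits = logits[0, ctx_ids.size(1) - 1 : -1]
    logprobs = torch.log_softmax(reply_logits, dim=-1)
    token_lp = logprobs.gather(1, reply_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.sum().item()

def looks_like_nonsense(dialogue_history: str, candidate: str,
                        threshold: float = -20.0) -> bool:
    """Flag `candidate` if any discriminative reply is too likely after it."""
    context = dialogue_history + candidate
    return any(reply_logprob(context, r) > threshold
               for r in DISCRIMINATIVE_REPLIES)
```

In the paper's setting, AutoReply replaces the hand-written reply list above by searching for discriminative replies automatically from a small number of annotated dialogue examples.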