Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ?

Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar


Abstract
Predicting the next utterance in a dialogue with data-driven approaches is contingent on encoding the user's input text well enough to generate an appropriate and relevant response. Although the semantic and syntactic quality of the generated language is routinely evaluated, the encoded representation of the input, more often than not, is not. Because the encoder's representation is essential for predicting an appropriate response, evaluating that representation is a challenging yet important problem. In this work, we show that evaluating the generated text with human or automatic metrics is not sufficient to assess the soundness of a dialogue model's language understanding and, to that end, we propose a set of probe tasks to evaluate the encoder representations of different language encoders commonly used in dialogue models. In our experiments, we observe that some of the probe tasks are easy and some are hard even for sophisticated model architectures to learn. We further observe that RNN-based architectures score lower than the Transformer model on automatic text-generation metrics, yet outperform it on the probe tasks, indicating that RNNs might preserve task information better than Transformers.
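The probing setup the abstract describes amounts to training a small classifier on top of a frozen encoder's representation of the dialogue context, so probe accuracy reflects how much task information the representation already carries. Below is a minimal PyTorch sketch of that idea under stated assumptions; FrozenMeanEncoder, LinearProbe, and probe_step are hypothetical names for illustration, not taken from the paper or its repository.

import torch
import torch.nn as nn

hidden_size, num_classes = 512, 10  # illustrative sizes

class FrozenMeanEncoder(nn.Module):
    # Stand-in for a trained dialogue encoder; in the probing setup its
    # parameters stay frozen and only the probe below is trained.
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool token embeddings into one context vector per dialogue.
        return self.embed(input_ids).mean(dim=1)

class LinearProbe(nn.Module):
    # Lightweight classifier trained on frozen encoder states; its accuracy
    # on a probe task measures what the representation already encodes.
    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, encoder_state: torch.Tensor) -> torch.Tensor:
        return self.classifier(encoder_state)

probe = LinearProbe(hidden_size, num_classes)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(encoder: nn.Module, input_ids: torch.Tensor,
               labels: torch.Tensor) -> float:
    with torch.no_grad():              # encoder is never updated
        state = encoder(input_ids)     # (batch, hidden_size)
    logits = probe(state)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: random token ids and probe-task labels (e.g. dialogue acts).
encoder = FrozenMeanEncoder(vocab_size=1000, hidden_size=hidden_size)
input_ids = torch.randint(0, 1000, (8, 20))   # batch of 8 contexts, 20 tokens each
labels = torch.randint(0, num_classes, (8,))
print(probe_step(encoder, input_ids, labels))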
Anthology ID:
2021.sigdial-1.50
Volume:
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
July
Year:
2021
Address:
Singapore and Online
Editors:
Haizhou Li, Gina-Anne Levow, Zhou Yu, Chitralekha Gupta, Berrak Sisman, Siqi Cai, David Vandyke, Nina Dethlefs, Yan Wu, Junyi Jessy Li
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
477–488
URL:
https://aclanthology.org/2021.sigdial-1.50
DOI:
10.18653/v1/2021.sigdial-1.50
Cite (ACL):
Prasanna Parthasarathi, Joelle Pineau, and Sarath Chandar. 2021. Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ?. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 477–488, Singapore and Online. Association for Computational Linguistics.
Cite (Informal):
Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ? (Parthasarathi et al., SIGDIAL 2021)
PDF:
https://aclanthology.org/2021.sigdial-1.50.pdf
Video:
https://www.youtube.com/watch?v=AwHuUPEpJFA
Code:
ppartha03/Dialogue-Probe-Tasks-Public