Similarity or deeper understanding? Analyzing the TED-Q dataset of evoked questions

Matthijs Westera, Jacopo Amidei, Laia Mayol


Abstract
We take a close look at a recent dataset of TED-talks annotated with the questions they implicitly evoke, TED-Q (Westera et al., 2020). We test to what extent the relation between a discourse and the questions it evokes is merely one of similarity or association, as opposed to deeper semantic/pragmatic interpretation. We do so by turning the TED-Q dataset into a binary classification task, constructing an analogous task from explicit questions we extract from the BookCorpus (Zhu et al., 2015), and fitting a BERT-based classifier alongside models based on different notions of similarity. The BERT-based classifier, achieving close to human performance, outperforms all similarity-based models, suggesting that there is more to identifying true evoked questions than plain similarity.
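To make the abstract's similarity-vs-understanding contrast concrete: a minimal sketch (not the authors' code) of the kind of similarity-based baseline the paper compares against. Here the (context, question) pair is scored by lexical overlap — cosine similarity over bag-of-words vectors — and classified as "evoked" via a threshold; the example texts and threshold are illustrative assumptions.

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_evoked(context: str, question: str, threshold: float = 0.2) -> bool:
    """Toy similarity-based classifier: label a question as 'evoked'
    by the context iff lexical overlap exceeds the threshold."""
    return bow_cosine(context, question) >= threshold

# Hypothetical example pair (not from TED-Q):
context = "the speaker moved to barcelona to study linguistics"
print(is_evoked(context, "why did the speaker move to barcelona"))  # True (high overlap)
print(is_evoked(context, "what is the capital of france"))          # False (low overlap)
```

The paper's finding is that such surface-overlap models fall short of a BERT-based pair classifier, which is what motivates the "deeper understanding" conclusion.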
Anthology ID:
2020.coling-main.439
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5004–5012
URL:
https://aclanthology.org/2020.coling-main.439
DOI:
10.18653/v1/2020.coling-main.439
Bibkey:
Cite (ACL):
Matthijs Westera, Jacopo Amidei, and Laia Mayol. 2020. Similarity or deeper understanding? Analyzing the TED-Q dataset of evoked questions. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5004–5012, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Similarity or deeper understanding? Analyzing the TED-Q dataset of evoked questions (Westera et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.439.pdf
Code
amore-upf/ted-q_eval
Data
BookCorpus
QuAC