Competence-based Question Generation

Jingxuan Tu, Kyeongmin Rim, James Pustejovsky


Abstract
Models of natural language understanding often rely on question answering and logical inference benchmark challenges to evaluate the performance of a system. While informative, such task-oriented evaluations do not assess the broader semantic abilities that humans have as part of their linguistic competence when speaking and interpreting language. We define competence-based (CB) question generation, and focus on queries over lexical semantic knowledge involving implicit argument and subevent structure of verbs. We present a method to generate such questions and a dataset of English cooking recipes we use for implementing the generation method. Our primary experiment shows that even large pretrained language models perform poorly on CB questions until they are provided with additional contextualized semantic information. The data and the source code are available at: https://github.com/brandeis-llc/CompQG.
Anthology ID: 2022.coling-1.131
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 1521–1533
URL: https://aclanthology.org/2022.coling-1.131
Cite (ACL): Jingxuan Tu, Kyeongmin Rim, and James Pustejovsky. 2022. Competence-based Question Generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1521–1533, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal): Competence-based Question Generation (Tu et al., COLING 2022)
PDF: https://aclanthology.org/2022.coling-1.131.pdf