RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering

Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer


Abstract
We introduce RoMQA, the first benchmark for robust, multi-evidence, multi-answer question answering (QA). RoMQA contains clusters of questions that are derived from related constraints mined from the Wikidata knowledge graph. RoMQA evaluates the robustness of QA models to varying constraints by measuring worst-case performance within each question cluster. Compared to prior QA datasets, RoMQA has more human-written questions that require reasoning over more evidence text and have, on average, many more correct answers. In addition, human annotators rate RoMQA questions as more natural, or likely to be asked by people. We evaluate state-of-the-art large language models in zero-shot, few-shot, and fine-tuning settings, and find that RoMQA is challenging: zero-shot and few-shot models perform similarly to naive baselines, while supervised retrieval methods perform well below gold-evidence upper bounds. Moreover, existing models are not robust to variations in question constraints, but can be made more robust by tuning on clusters of related questions. Our results show that RoMQA is a challenging benchmark for large language models, and provides a quantifiable test for building more robust QA methods.
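To make the robustness metric concrete, below is a minimal sketch of the cluster-wise worst-case evaluation the abstract describes: a model's per-question scores are grouped by question cluster, and each cluster contributes only its minimum score. The grouping and the use of the minimum follow the abstract; the set-level F1 scorer and the field names (`id`, `cluster_id`, `answers`) are illustrative assumptions, not the paper's actual API.

```python
from collections import defaultdict


def answer_set_f1(predicted, gold):
    """Set-level F1 between predicted and gold answer sets (a common
    multi-answer QA score; the paper's exact scorer may differ)."""
    pred, gold = set(predicted), set(gold)
    if not pred or not gold:
        return float(pred == gold)  # 1.0 only if both sets are empty
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def worst_case_cluster_score(examples, predictions):
    """Average over clusters of the *minimum* per-question score, so a
    model scores well on a cluster only if it handles every constraint
    variation within it."""
    per_cluster = defaultdict(list)
    for ex in examples:
        score = answer_set_f1(predictions[ex["id"]], ex["answers"])
        per_cluster[ex["cluster_id"]].append(score)
    worst = [min(scores) for scores in per_cluster.values()]
    return sum(worst) / len(worst)
```

Taking the minimum rather than the mean within each cluster is what makes the metric a robustness test: a model that answers most constraint variants of a question correctly but fails one still receives that cluster's failing score.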
Anthology ID:
2023.findings-emnlp.470
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7055–7067
URL:
https://aclanthology.org/2023.findings-emnlp.470
DOI:
10.18653/v1/2023.findings-emnlp.470
Cite (ACL):
Victor Zhong, Weijia Shi, Wen-tau Yih, and Luke Zettlemoyer. 2023. RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7055–7067, Singapore. Association for Computational Linguistics.
Cite (Informal):
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering (Zhong et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.470.pdf