Exploring the Reasons for Non-generalizability of KBQA systems

Sopan Khosla, Ritam Dutt, Vinayshekhar Bannihatti Kumar, Rashmi Gangadharaiah


Abstract
Recent research has demonstrated impressive generalization capabilities of several Knowledge Base Question Answering (KBQA) models on the GrailQA dataset. We inspect whether these models can generalize to other datasets in a zero-shot setting. We notice a significant drop in performance and investigate its causes. We observe that the models depend not only on the structural complexity of the questions, but also on the linguistic style in which a question is framed. Specifically, the linguistic dimensions corresponding to explicitness, readability, coherence, and grammaticality have a significant impact on the performance of state-of-the-art KBQA models. Overall, our results showcase the brittleness of such models and the need for creating generalizable systems.
Anthology ID:
2023.insights-1.11
Volume:
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
88–93
URL:
https://aclanthology.org/2023.insights-1.11
DOI:
10.18653/v1/2023.insights-1.11
Cite (ACL):
Sopan Khosla, Ritam Dutt, Vinayshekhar Bannihatti Kumar, and Rashmi Gangadharaiah. 2023. Exploring the Reasons for Non-generalizability of KBQA systems. In Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, pages 88–93, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Exploring the Reasons for Non-generalizability of KBQA systems (Khosla et al., insights-WS 2023)
PDF:
https://aclanthology.org/2023.insights-1.11.pdf
Video:
https://aclanthology.org/2023.insights-1.11.mp4