Open Domain Question Answering over Tables via Dense Retrieval

Jonathan Herzig, Thomas Müller, Syrine Krichene, Julian Eisenschlos


Abstract
Recent advances in open-domain QA have led to strong models based on dense retrieval, but these have focused only on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT-based retriever.
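The dense-retrieval setup the abstract describes (score each table against the question, return the top-k candidates, and evaluate with recall@k) can be sketched as below. This is a minimal illustration, not the paper's implementation: the embeddings here are toy stand-ins for the outputs of the table-aware dense encoder, and the function names are hypothetical.

```python
import numpy as np

def retrieve_top_k(query_vec, table_vecs, k=10):
    """Score tables by inner product with the query; return top-k table indices."""
    scores = table_vecs @ query_vec
    return np.argsort(-scores)[:k]

def recall_at_k(retrieved, gold_index):
    """1.0 if the gold table appears among the retrieved indices, else 0.0."""
    return float(gold_index in retrieved)

# Toy example: four "table" embeddings and one query embedding
# (stand-ins for dense encoder outputs; real vectors are high-dimensional).
tables = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.4, 0.4],
                   [0.9, 0.1]])
query = np.array([0.85, 0.15])

top2 = retrieve_top_k(query, tables, k=2)
print(list(top2), recall_at_k(top2, gold_index=3))  # → [3, 1] 1.0
```

Mined hard negatives, as used in the paper, would correspond to tables that score highly under this inner product but are not the gold table; they are added as negative examples during retriever training.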
Anthology ID:
2021.naacl-main.43
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
512–519
URL:
https://aclanthology.org/2021.naacl-main.43
DOI:
10.18653/v1/2021.naacl-main.43
PDF:
https://aclanthology.org/2021.naacl-main.43.pdf
Code
 google-research/tapas
Data
Natural Questions