Multi-modal Retrieval of Tables and Texts Using Tri-encoder Models

Bogdan Kostić, Julian Risch, Timo Möller


Abstract
Open-domain extractive question answering works well on textual data: candidate texts are first retrieved, and the answer is then extracted from those candidates. However, some questions cannot be answered from text alone and require information stored in tables. In this paper, we present an approach for retrieving both texts and tables relevant to a question by jointly encoding texts, tables, and questions into a single vector space. To this end, we create a new multi-modal dataset based on text and table datasets from related work and compare the retrieval performance of different encoding schemata. We find that dense vector embeddings of transformer models outperform sparse embeddings on four out of six evaluation datasets. Comparing different dense embedding models, tri-encoders, with one encoder each for questions, texts, and tables, outperform bi-encoders, which use one encoder for questions and a second shared encoder for both texts and tables. We release the newly created multi-modal dataset to the community for training and evaluation.
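The tri-encoder setup described in the abstract can be illustrated with a minimal sketch: three separate encoders (for questions, texts, and linearized tables) project into one shared vector space, so a question embedding can be scored against text and table embeddings alike with a single dot product. The bag-of-characters "encoders" below are toy stand-ins for the paper's transformer models, and all names and data are illustrative, not the authors' code.

```python
import numpy as np

VOCAB = "abcdefghijklmnopqrstuvwxyz 0123456789"

def featurize(s: str) -> np.ndarray:
    """Toy stand-in for tokenization: bag-of-characters counts."""
    v = np.zeros(len(VOCAB))
    for ch in s.lower():
        idx = VOCAB.find(ch)
        if idx >= 0:
            v[idx] += 1
    return v

class Encoder:
    """One encoder per modality; all three project into the same shared
    space, so questions, texts, and tables are directly comparable."""

    def __init__(self, seed: int, dim: int = 16):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, len(VOCAB)))

    def __call__(self, s: str) -> np.ndarray:
        emb = self.W @ featurize(s)
        return emb / (np.linalg.norm(emb) + 1e-9)  # unit-normalize

# Tri-encoder: separate parameters for each of the three input types.
question_encoder = Encoder(seed=0)
text_encoder = Encoder(seed=1)
table_encoder = Encoder(seed=2)

# Index both modalities into a single candidate pool.
texts = ["paris is the capital of france"]
tables = ["country capital france paris germany berlin"]  # linearized table
index = [text_encoder(t) for t in texts] + [table_encoder(t) for t in tables]

def retrieve(question: str) -> list[tuple[int, float]]:
    """Rank all candidates (texts and tables) by dot-product similarity."""
    q = question_encoder(question)
    scores = [(i, float(q @ emb)) for i, emb in enumerate(index)]
    return sorted(scores, key=lambda x: x[1], reverse=True)

ranking = retrieve("what is the capital of france")
```

A bi-encoder baseline would reuse `text_encoder` for the tables as well; the paper's finding is that giving tables their own encoder improves retrieval.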
Anthology ID:
2021.mrqa-1.8
Volume:
Proceedings of the 3rd Workshop on Machine Reading for Question Answering
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
MRQA
Publisher:
Association for Computational Linguistics
Pages:
82–91
URL:
https://aclanthology.org/2021.mrqa-1.8
DOI:
10.18653/v1/2021.mrqa-1.8
Cite (ACL):
Bogdan Kostić, Julian Risch, and Timo Möller. 2021. Multi-modal Retrieval of Tables and Texts Using Tri-encoder Models. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, pages 82–91, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Multi-modal Retrieval of Tables and Texts Using Tri-encoder Models (Kostić et al., MRQA 2021)
PDF:
https://aclanthology.org/2021.mrqa-1.8.pdf
Data:
HybridQA, Natural Questions, OTT-QA