Simple and Efficient ways to Improve REALM

Vidhisha Balachandran, Ashish Vaswani, Yulia Tsvetkov, Niki Parmar


Abstract
Dense retrieval has been shown to be effective for Open Domain Question Answering, surpassing sparse retrieval methods like BM25. One such model, REALM (Guu et al., 2020), is an end-to-end dense retrieval system that uses MLM-based pretraining for improved downstream QA performance. However, the current REALM setup uses limited resources and is not comparable in scale to more recent systems, contributing to its lower performance. Additionally, it relies on noisy supervision for retrieval during fine-tuning. We propose REALM++, in which we improve the training and inference setups and introduce a better supervision signal, all without any architectural changes. REALM++ achieves ~5.5% absolute accuracy gains over the baseline while being faster to train. It also matches the performance of large models that have 3x more parameters, demonstrating the efficiency of our setup.
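As a rough illustration of the dense retrieval scoring that REALM-style systems build on, the sketch below ranks passages by the inner product between question and passage embeddings and keeps the top-k for the reader. The encode function here is a hypothetical stand-in (random vectors) for the trained BERT encoders the actual model uses; names and example texts are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def encode(texts, dim=128):
    # Hypothetical stand-in encoder: REALM uses trained BERT encoders
    # that map each text to a single dense vector.
    return rng.normal(size=(len(texts), dim))

questions = ["who introduced realm?"]
passages = ["REALM was introduced by Guu et al. (2020).",
            "BM25 is a sparse retrieval baseline.",
            "Dense retrievers embed text into vectors."]

q = encode(questions)   # shape (1, dim)
p = encode(passages)    # shape (3, dim)

# Relevance is the inner product between question and passage vectors;
# the top-k passages would be handed to the reader model.
scores = (p @ q.T).ravel()
top_k = np.argsort(-scores)[:2]
for i in top_k:
    print(scores[i], passages[i])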
Anthology ID:
2021.mrqa-1.16
Volume:
Proceedings of the 3rd Workshop on Machine Reading for Question Answering
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Adam Fisch, Alon Talmor, Danqi Chen, Eunsol Choi, Minjoon Seo, Patrick Lewis, Robin Jia, Sewon Min
Venue:
MRQA
Publisher:
Association for Computational Linguistics
Pages:
158–164
URL:
https://aclanthology.org/2021.mrqa-1.16
DOI:
10.18653/v1/2021.mrqa-1.16
Cite (ACL):
Vidhisha Balachandran, Ashish Vaswani, Yulia Tsvetkov, and Niki Parmar. 2021. Simple and Efficient ways to Improve REALM. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, pages 158–164, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Simple and Efficient ways to Improve REALM (Balachandran et al., MRQA 2021)
PDF:
https://aclanthology.org/2021.mrqa-1.16.pdf
Video:
https://aclanthology.org/2021.mrqa-1.16.mp4
Data
Natural Questions