VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension

Haoyang Wen, Anthony Ferritto, Heng Ji, Radu Florian, Avi Sil


Abstract
Existing models for Machine Reading Comprehension (MRC) require complex architectures to effectively model long texts with paragraph representation and classification, making inference computationally inefficient for production use. In this work, we propose VAULT: a light-weight and parallel-efficient paragraph representation for MRC based on contextualized representation from long document input, trained using a new Gaussian distribution-based objective that pays close attention to partially correct instances that are close to the ground truth. We validate VAULT with experimental results on two benchmark MRC datasets that require long context modeling: one Wikipedia-based (Natural Questions (NQ)) and the other on TechNotes (TechQA). VAULT achieves performance on NQ comparable to a state-of-the-art (SOTA) complex document modeling approach while being 16 times faster, demonstrating the efficiency of our proposed model. We also demonstrate that our model can be effectively adapted to a completely different domain – TechQA – with a large improvement over a model fine-tuned on a previously published large pre-trained language model (PLM).
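One way to picture the Gaussian distribution-based objective described above is as replacing the usual one-hot span labels with soft targets centered on the gold start and end positions, so near-miss predictions receive partial credit. The PyTorch sketch below illustrates this idea only; the function names, the choice of sigma, and the exact normalization are illustrative assumptions, not the paper's published implementation.

import torch
import torch.nn.functional as F

def gaussian_soft_targets(gold_index: int, seq_len: int, sigma: float = 1.0) -> torch.Tensor:
    # Soft label distribution: a Gaussian centered at the gold position,
    # so tokens near the ground truth get non-zero target mass.
    positions = torch.arange(seq_len, dtype=torch.float)
    scores = -((positions - gold_index) ** 2) / (2 * sigma ** 2)
    return F.softmax(scores, dim=-1)

def gaussian_span_loss(start_logits: torch.Tensor, end_logits: torch.Tensor,
                       gold_start: int, gold_end: int, sigma: float = 1.0) -> torch.Tensor:
    # Cross-entropy between the model's span distributions and the Gaussian
    # targets, in place of the standard one-hot negative log-likelihood.
    seq_len = start_logits.size(-1)
    start_targets = gaussian_soft_targets(gold_start, seq_len, sigma)
    end_targets = gaussian_soft_targets(gold_end, seq_len, sigma)
    loss_start = -(start_targets * F.log_softmax(start_logits, dim=-1)).sum()
    loss_end = -(end_targets * F.log_softmax(end_logits, dim=-1)).sum()
    return (loss_start + loss_end) / 2

Example usage (hypothetical values): for a 384-token input, gaussian_span_loss(torch.randn(384), torch.randn(384), gold_start=42, gold_end=45) returns a scalar loss in which predicting position 41 or 43 is penalized less than predicting a distant token, which is the "partially correct instances" behavior the abstract refers to.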
Anthology ID:
2021.acl-short.131
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
1035–1042
URL:
https://aclanthology.org/2021.acl-short.131
DOI:
10.18653/v1/2021.acl-short.131
Cite (ACL):
Haoyang Wen, Anthony Ferritto, Heng Ji, Radu Florian, and Avi Sil. 2021. VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1035–1042, Online. Association for Computational Linguistics.
Cite (Informal):
VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension (Wen et al., ACL-IJCNLP 2021)
PDF:
https://aclanthology.org/2021.acl-short.131.pdf
Video:
https://aclanthology.org/2021.acl-short.131.mp4
Data
Natural Questions | SQuAD | TechQA