Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection

Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti


Abstract
An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents. In this paper, we propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents, to improve the performance of transformers for AS2 and mitigate the need for large labeled datasets. Specifically, the model is tasked to predict whether: (i) two sentences are extracted from the same paragraph, (ii) a given sentence is extracted from a given paragraph, and (iii) two paragraphs are extracted from the same document. Our experiments on three public and one industrial AS2 dataset demonstrate the empirical superiority of our pre-trained transformers over baseline models such as RoBERTa and ELECTRA for AS2.
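The abstract only outlines the three objectives at a high level. As a rough illustration (not the authors' implementation), pre-training pairs for each objective could be sampled from the paragraph structure of documents along the following lines; the `Document` class and the pairing functions here are hypothetical helpers introduced purely for this sketch:

```python
import random
from dataclasses import dataclass

# Hypothetical document representation: a document is a list of paragraphs,
# each paragraph a list of sentence strings. Illustrative sketch only.
@dataclass
class Document:
    paragraphs: list  # list[list[str]]

def spp_pair(doc: Document, other_doc: Document):
    """Objective (i): do two sentences come from the same paragraph?"""
    para = random.choice(doc.paragraphs)
    if len(para) >= 2 and random.random() < 0.5:
        s1, s2 = random.sample(para, 2)                 # positive: same paragraph
        return (s1, s2, 1)
    other_para = random.choice(other_doc.paragraphs)
    return (random.choice(para), random.choice(other_para), 0)  # negative

def sip_pair(doc: Document, other_doc: Document):
    """Objective (ii): is a sentence extracted from a given paragraph?"""
    para = random.choice(doc.paragraphs)
    if random.random() < 0.5:
        sent = random.choice(para)                      # positive: sentence from the paragraph
        context = " ".join(s for s in para if s != sent)
        return (sent, context, 1)
    sent = random.choice(random.choice(other_doc.paragraphs))
    return (sent, " ".join(para), 0)                    # negative: sentence from elsewhere

def ppd_pair(doc: Document, other_doc: Document):
    """Objective (iii): do two paragraphs come from the same document?"""
    if len(doc.paragraphs) >= 2 and random.random() < 0.5:
        p1, p2 = random.sample(doc.paragraphs, 2)       # positive: same document
        return (" ".join(p1), " ".join(p2), 1)
    p1 = random.choice(doc.paragraphs)
    p2 = random.choice(other_doc.paragraphs)
    return (" ".join(p1), " ".join(p2), 0)              # negative: different documents
```

Each function yields a (text_a, text_b, label) triple that a transformer could consume as a binary sentence-pair classification example, mirroring the within- and across-document structure the objectives are meant to capture.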
Anthology ID:
2022.emnlp-main.810
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11806–11816
URL:
https://aclanthology.org/2022.emnlp-main.810
DOI:
10.18653/v1/2022.emnlp-main.810
Cite (ACL):
Luca Di Liello, Siddhant Garg, Luca Soldaini, and Alessandro Moschitti. 2022. Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11806–11816, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection (Di Liello et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.810.pdf