Cross-sentence Pre-trained Model for Interactive QA matching

Jinmeng Wu, Yanbin Hao


Abstract
Semantic matching measures the dependencies between query and answer representations and is an important criterion for evaluating whether a match is successful. Such matching should not examine each sentence in isolation: context information outside a sentence is as important as the syntactic context within it. We propose a new QA matching model built upon a cross-sentence context-aware architecture. An interactive attention mechanism with a pre-trained language model automatically selects the salient positional answer representations that contribute most to the relevance of an answer to a given question. In addition to the context information captured at each word position, we incorporate a new quantity, the context information jump, into the attention weight formulation. It reflects the amount of new information brought by the next word and is computed by modeling the joint probability between two adjacent word states. The proposed method is compared with multiple state-of-the-art methods on the TREC library, WikiQA, and Yahoo! community question datasets. Experimental results show that the proposed method satisfactorily outperforms the competing ones.
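To make the "context information jump" idea concrete, here is a minimal, hypothetical PyTorch sketch. It assumes a bilinear joint-probability score between adjacent answer word states (a low joint score means the next word brings more new information, i.e. a larger jump) and a simple additive fusion of that jump into the attention weights. The module name, shapes, and fusion scheme are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of jump-aware QA attention, per the abstract's
# description. All names, shapes, and the bilinear joint score are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JumpAwareAttention(nn.Module):
    """Attends over answer word states, boosting positions whose next
    word carries more new information (the 'context information jump')."""

    def __init__(self, hidden: int):
        super().__init__()
        # Joint-probability score of two adjacent word states (assumed bilinear form).
        self.bilinear = nn.Bilinear(hidden, hidden, 1)
        self.query_proj = nn.Linear(hidden, hidden)

    def forward(self, q: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # q: (batch, hidden) pooled question; a: (batch, len, hidden) answer states.
        # Joint probability of adjacent states; jump = 1 - p(joint):
        joint = torch.sigmoid(self.bilinear(a[:, :-1], a[:, 1:])).squeeze(-1)
        jump = 1.0 - joint                          # (batch, len-1)
        jump = F.pad(jump, (0, 1), value=0.0)       # last position has no successor
        # Standard question-to-answer relevance scores:
        scores = torch.einsum("bh,blh->bl", self.query_proj(q), a)
        # Fold the jump into the attention weights before normalizing:
        weights = F.softmax(scores + jump, dim=-1)  # (batch, len)
        return torch.einsum("bl,blh->bh", weights, a)

# Toy usage:
attn = JumpAwareAttention(hidden=64)
q = torch.randn(2, 64)
a = torch.randn(2, 10, 64)
print(attn(q, a).shape)  # torch.Size([2, 64])
```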
Anthology ID: 2020.lrec-1.666
Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
Month: May
Year: 2020
Address: Marseille, France
Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue: LREC
Publisher: European Language Resources Association
Pages: 5417–5424
Language: English
URL: https://aclanthology.org/2020.lrec-1.666
Cite (ACL): Jinmeng Wu and Yanbin Hao. 2020. Cross-sentence Pre-trained Model for Interactive QA matching. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5417–5424, Marseille, France. European Language Resources Association.
Cite (Informal): Cross-sentence Pre-trained Model for Interactive QA matching (Wu & Hao, LREC 2020)
PDF: https://aclanthology.org/2020.lrec-1.666.pdf