Refining Raw Sentence Representations for Textual Entailment Recognition via Attention

Jorge Balazs, Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo


Abstract
In this paper we present the model used by the team Rivercorners for the 2017 RepEval shared task. First, our model separately encodes a pair of sentences into variable-length representations by using a bidirectional LSTM. Then, it creates fixed-length raw representations by means of simple aggregation functions, which are further refined using an attention mechanism. Finally, it combines the refined representations of both sentences into a single vector to be used for classification. With this model we obtained test accuracies of 72.057% and 72.055% in the matched and mismatched evaluation tracks respectively, outperforming the LSTM baseline and obtaining performance similar to that of a model relying on shared information between sentences (ESIM). When using an ensemble, both accuracies increased to 72.247% and 72.827% respectively.
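The pipeline the abstract describes (pool BiLSTM states into a raw fixed-length vector, then refine it with attention) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-pooling aggregation and the dot-product scoring against the raw representation are assumptions, and the BiLSTM encoding step is stubbed out with a precomputed matrix of hidden states.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array of scores
    e = np.exp(x - x.max())
    return e / e.sum()

def refine(hidden_states):
    """Refine a raw sentence representation via attention.

    hidden_states: (seq_len, dim) array, standing in for the
    BiLSTM outputs of one sentence.
    """
    # raw fixed-length representation via simple aggregation
    # (mean pooling is an assumed choice of aggregation function)
    raw = hidden_states.mean(axis=0)
    # attention scores of each timestep against the raw vector
    # (dot-product scoring is an assumption for illustration)
    scores = hidden_states @ raw
    alpha = softmax(scores)
    # refined representation: attention-weighted sum of states
    refined = alpha @ hidden_states
    return raw, refined

# toy "hidden states" for a 3-token sentence with dim 2
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
raw, refined = refine(H)
```

In the full model, the refined vectors of premise and hypothesis would then be combined (e.g. by concatenation) into a single vector fed to the classifier.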
Anthology ID:
W17-5310
Volume:
Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Venues:
RepEval | WS
Publisher:
Association for Computational Linguistics
Pages:
51–55
URL:
https://aclanthology.org/W17-5310
DOI:
10.18653/v1/W17-5310
Cite (ACL):
Jorge Balazs, Edison Marrese-Taylor, Pablo Loyola, and Yutaka Matsuo. 2017. Refining Raw Sentence Representations for Textual Entailment Recognition via Attention. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 51–55, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Refining Raw Sentence Representations for Textual Entailment Recognition via Attention (Balazs et al., 2017)
PDF:
https://aclanthology.org/W17-5310.pdf
Code
jabalazs/repeval_rivercorners
Data
MultiNLI