CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights Based Counterfactual Detection

Rajaswa Patil, Veeky Baths


Abstract
In this paper, we describe an approach for modelling causal reasoning in natural language by detecting counterfactuals in text using multi-head self-attention weights. We use pre-trained transformer models to extract contextual embeddings and self-attention weights from the text. We show how convolutional layers can be used to extract task-specific features from these self-attention weights. Further, we describe a fine-tuning approach with a common base model for knowledge sharing between the two closely related sub-tasks of counterfactual detection. We analyze and compare the performance of various transformer models in our experiments. Finally, we perform a qualitative analysis of the multi-head self-attention weights to interpret our models’ dynamics.
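The sketch below illustrates the core idea from the abstract: expose a pre-trained transformer's per-head self-attention weight maps and treat the heads as input channels to a convolutional feature extractor, with a classification head on top. This is a minimal illustration, not the authors' released implementation; the backbone name (bert-base-uncased), the single conv layer, and the class and variable names are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the paper's released code) of classifying
# counterfactuals from multi-head self-attention weights.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AttentionCNNClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        # Shared pre-trained base model; output_attentions=True makes the
        # encoder return the per-layer multi-head self-attention weights.
        self.encoder = AutoModel.from_pretrained(model_name, output_attentions=True)
        num_heads = self.encoder.config.num_attention_heads
        # Treat the attention heads as input channels of a 2D convolution
        # over each (seq_len x seq_len) attention weight map.
        self.conv = nn.Conv2d(num_heads, 32, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # outputs.attentions: tuple of (batch, heads, seq, seq) tensors,
        # one per layer; here we use only the final layer's weights.
        attn = outputs.attentions[-1]
        features = self.pool(torch.relu(self.conv(attn))).flatten(1)
        return self.classifier(features)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AttentionCNNClassifier()
batch = tokenizer(["If I had left earlier, I would have caught the train."],
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Treating the heads as convolution channels lets the same filters scan every head's token-to-token weight map. One plausible reading of the shared-base fine-tuning described in the abstract is that both sub-tasks would reuse this same pre-trained encoder while swapping only the task-specific head on top.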
Anthology ID:
2020.semeval-1.55
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
451–457
URL:
https://aclanthology.org/2020.semeval-1.55
DOI:
10.18653/v1/2020.semeval-1.55
Cite (ACL):
Rajaswa Patil and Veeky Baths. 2020. CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights Based Counterfactual Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 451–457, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights Based Counterfactual Detection (Patil & Baths, SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.55.pdf