Sequence Classification with Human Attention

Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, Anders Søgaard


Abstract
Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior. In this paper, we show that human attention provides a good inductive bias for many attention functions in NLP. Specifically, we use estimated human attention, derived from eye-tracking corpora, to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.
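The core idea in the abstract — regularizing a network's attention distribution toward estimated human attention — can be sketched as a joint loss: the task loss plus a penalty on the divergence between model attention and a normalized human-attention target. The sketch below is an illustrative assumption, not the paper's exact formulation; the function names, the squared-error penalty, and the fixation-duration values are all hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_regularized_loss(task_loss, attn_scores, human_attn, lam=0.1):
    """Joint loss: task loss plus a penalty pulling the model's attention
    distribution toward normalized human attention (a hedged sketch;
    the paper's actual regularizer may differ)."""
    attn = softmax(attn_scores)                 # model attention distribution
    human = human_attn / human_attn.sum()       # normalize durations to a distribution
    reg = np.mean((attn - human) ** 2)          # squared-error attention penalty
    return task_loss + lam * reg

# Example: raw attention scores over 4 tokens and made-up fixation durations (ms)
scores = np.array([2.0, 0.5, 0.1, 1.0])
fixations = np.array([180.0, 60.0, 20.0, 120.0])
loss = attention_regularized_loss(0.7, scores, fixations, lam=0.1)
```

In practice the same penalty would be added to the recurrent network's training objective, so gradients flow through both the task prediction and the attention weights.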
Anthology ID:
K18-1030
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
302–312
URL:
https://aclanthology.org/K18-1030
DOI:
10.18653/v1/K18-1030
PDF:
https://aclanthology.org/K18-1030.pdf