Entity Recognition at First Sight: Improving NER with Eye Movement Information

Nora Hollenstein, Ce Zhang


Abstract
Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models. In this work, we leverage eye movement features from three corpora with recorded gaze information to augment a state-of-the-art neural model for named entity recognition (NER) with gaze embeddings. These corpora were manually annotated with named entity labels. Moreover, we show how gaze features, generalized to the word type level, eliminate the need for recorded eye-tracking data at test time. The gaze-augmented NER models using token-level and type-level features outperform the baselines. We demonstrate the benefits of eye-tracking features by evaluating the NER models both on individual datasets and in cross-domain settings.
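The type-level generalization described in the abstract can be illustrated with a minimal sketch: token-level gaze feature vectors (e.g. fixation durations and counts) are averaged over all occurrences of a word type, producing a lexicon that replaces recorded eye-tracking data at test time. The function name, feature choices, and lowercasing are illustrative assumptions, not the paper's exact method.

```python
from collections import defaultdict

def type_level_gaze(token_features):
    """Average token-level gaze feature vectors per word type.

    token_features: iterable of (token, feature_vector) pairs, where each
    feature_vector is a list of floats (e.g. fixation duration, count).
    Returns a dict mapping lowercased word types to averaged vectors, so
    no recorded eye-tracking data is needed for test sentences.
    """
    sums = defaultdict(list)
    counts = defaultdict(int)
    for token, feats in token_features:
        key = token.lower()  # lowercasing is an assumption for illustration
        if not sums[key]:
            sums[key] = [0.0] * len(feats)
        for i, value in enumerate(feats):
            sums[key][i] += value
        counts[key] += 1
    return {t: [v / counts[t] for v in vec] for t, vec in sums.items()}

# Hypothetical token-level measurements: (token, [fixation ms, fixation count])
lexicon = type_level_gaze([
    ("Obama", [200.0, 2.0]),
    ("obama", [180.0, 1.0]),
    ("the",   [90.0, 1.0]),
])
# lexicon["obama"] → [190.0, 1.5]
```

At test time, a tagger would look up each token's type in this lexicon and concatenate the averaged gaze vector to its word embedding; handling of unseen types (e.g. zero vectors) is a design choice left open here.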
Anthology ID:
N19-1001
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/N19-1001
DOI:
10.18653/v1/N19-1001
Cite (ACL):
Nora Hollenstein and Ce Zhang. 2019. Entity Recognition at First Sight: Improving NER with Eye Movement Information. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1–10, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Entity Recognition at First Sight: Improving NER with Eye Movement Information (Hollenstein & Zhang, NAACL 2019)
PDF:
https://aclanthology.org/N19-1001.pdf
Video:
https://vimeo.com/347364761
Code:
DS3Lab/ner-at-first-sight