Relative Importance in Sentence Processing

Nora Hollenstein, Lisa Beinborn


Abstract
Determining the relative importance of the elements in a sentence is a key factor for effortless natural language understanding. For human language processing, we can approximate patterns of relative importance by measuring reading fixations using eye-tracking technology. In neural language models, gradient-based saliency methods indicate the relative importance of a token for the target objective. In this work, we compare patterns of relative importance in English language processing by humans and models and analyze the underlying linguistic patterns. We find that human processing patterns in English correlate strongly with saliency-based importance in language models and not with attention-based importance. Our results indicate that saliency could be a cognitively more plausible metric for interpreting neural language models. The code is available on GitHub: https://github.com/beinborn/relative_importance.
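To illustrate the idea of gradient-based saliency mentioned in the abstract, the following is a minimal toy sketch: it computes the L2 norm of the gradient of a scalar score with respect to each token embedding and normalizes the result into a relative-importance distribution. The scoring function (a per-token sigmoid with a random weight vector) and all names here are illustrative assumptions, not the paper's actual pipeline, which uses pretrained language models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def token_saliency(embeddings, w):
    """Toy gradient-based saliency (illustrative, not the paper's method).

    Scoring function (an assumption for this sketch):
        score = sum_i sigmoid(w . e_i)
    so the gradient w.r.t. token embedding e_i is sigmoid'(w . e_i) * w,
    which we compute analytically instead of via autograd.
    """
    z = embeddings @ w                               # (n_tokens,)
    s = sigmoid(z)
    grads = (s * (1.0 - s))[:, None] * w[None, :]    # (n_tokens, dim)
    sal = np.linalg.norm(grads, axis=1)              # L2 norm per token
    return sal / sal.sum()                           # relative importance

# Usage with random "embeddings" for a 5-token sentence:
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))   # 5 tokens, 8-dim embeddings (toy values)
w = rng.normal(size=8)
scores = token_saliency(E, w)  # non-negative, sums to 1
```

In the paper's setting, such per-token importance distributions from a model would then be correlated (e.g. with a rank correlation) against relative fixation durations from eye-tracking data.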
Anthology ID:
2021.acl-short.19
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
141–150
URL:
https://aclanthology.org/2021.acl-short.19
DOI:
10.18653/v1/2021.acl-short.19
PDF:
https://aclanthology.org/2021.acl-short.19.pdf
Optional supplementary material:
 2021.acl-short.19.OptionalSupplementaryMaterial.zip