A Cross-lingual Comparison of Human and Model Relative Word Importance

Felix Morger, Stephanie Brandl, Lisa Beinborn, Nora Hollenstein


Abstract
Relative word importance is a key metric for natural language processing. In this work, we compare human and model relative word importance to investigate whether pretrained neural language models focus on the same words as humans cross-lingually. We perform an extensive study using several importance metrics (gradient-based saliency and attention-based) in monolingual and multilingual models, drawing on eye-tracking corpora from four languages (German, Dutch, English, and Russian). We find that gradient-based saliency, first-layer attention, and attention flow correlate strongly with human eye-tracking data across all four languages. We further analyze the role of word length and word frequency in determining relative importance and find that it correlates strongly with both; however, the mechanisms behind these non-linear relations remain elusive. We obtain a cross-lingual approximation of the similarity between human and computational language processing, as well as insights into the usability of several importance metrics.
Anthology ID:
2022.clasp-1.2
Volume:
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
Month:
September
Year:
2022
Address:
Gothenburg, Sweden
Venue:
CLASP
Publisher:
Association for Computational Linguistics
Pages:
11–23
URL:
https://aclanthology.org/2022.clasp-1.2
Cite (ACL):
Felix Morger, Stephanie Brandl, Lisa Beinborn, and Nora Hollenstein. 2022. A Cross-lingual Comparison of Human and Model Relative Word Importance. In Proceedings of the 2022 CLASP Conference on (Dis)embodiment, pages 11–23, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal):
A Cross-lingual Comparison of Human and Model Relative Word Importance (Morger et al., CLASP 2022)
PDF:
https://aclanthology.org/2022.clasp-1.2.pdf