Eye-Tracking Features Masking Transformer Attention in Question-Answering Tasks

Leran Zhang, Nora Hollenstein


Abstract
Eye movement features are direct, inexpensive-to-obtain signals of human attention distribution, which has inspired researchers to augment language models with eye-tracking (ET) data. In this study, we select first fixation duration (FFD) and total reading time (TRT) as the cognitive signals to guide Transformer attention in question-answering (QA) tasks. We design three ET attention masks based on these two features, derived either from human reading data or from a gaze-prediction model, and use them to augment BERT and ALBERT. We find that ET data carries linguistic information complementary to what the models capture on their own: it improves performance, but at the cost of stability. Different Transformer models benefit from different types of ET attention masks, and ALBERT performs better than BERT. Moreover, ET data collected from real-life reading events augments the models more effectively than model-predicted data.
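To make the idea of an ET attention mask concrete, below is a minimal, hypothetical sketch of turning token-level total reading time (TRT) into a binary mask that keeps only the most-fixated tokens. This is not the authors' released code: the function name, the keep-ratio heuristic, and the TRT values are all invented for illustration.

```python
# Hypothetical sketch: build a binary attention mask from token-level
# eye-tracking features. All names and values here are illustrative,
# not the paper's actual implementation.
import numpy as np

def et_attention_mask(trt, keep_ratio=0.5):
    """Keep the tokens with the highest total reading time (TRT)."""
    trt = np.asarray(trt, dtype=float)
    k = max(1, int(round(keep_ratio * len(trt))))
    keep = np.argsort(trt)[::-1][:k]   # indices of the most-fixated tokens
    mask = np.zeros(len(trt), dtype=int)
    mask[keep] = 1                      # 1 = attend to token, 0 = mask it out
    return mask

# Invented TRT values (in ms) for a six-token question:
trt = [180, 420, 95, 310, 60, 250]
print(et_attention_mask(trt, keep_ratio=0.5))  # -> [0 1 0 1 0 1]
```

In a Hugging Face setup, such a vector could plausibly be combined with the standard padding mask and passed via a BERT or ALBERT model's `attention_mask` argument, though the paper's exact integration of FFD- and TRT-based masks may differ.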
Anthology ID:
2024.lrec-main.619
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7057–7070
URL:
https://aclanthology.org/2024.lrec-main.619
Cite (ACL):
Leran Zhang and Nora Hollenstein. 2024. Eye-Tracking Features Masking Transformer Attention in Question-Answering Tasks. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7057–7070, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Eye-Tracking Features Masking Transformer Attention in Question-Answering Tasks (Zhang & Hollenstein, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.619.pdf