2025

Comparing Eye-gaze and Transformer Attention Mechanisms in Reading Tasks
Maria Mouratidi | Massimo Poesio
Proceedings of the First International Workshop on Gaze Data and Natural Language Processing

As transformers become increasingly prevalent in NLP research, evaluating their cognitive alignment with human language processing has become essential for validating them as models of human language. This study compares eye-gaze patterns in human reading with transformer attention under different attention representations (raw attention, attention flow, gradient-based saliency). We employ both statistical correlation analysis and predictive modeling using PCA-reduced representations of eye-tracking features across two reading tasks. The findings reveal lower correlations and predictive capacity for the decoder model than for the encoder model, pointing to a gap between behavioral performance and cognitive plausibility across transformer designs.
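The core comparison the abstract describes, relating per-token model attention to human reading measures, can be illustrated with a small sketch. This is not the paper's code: the token-level values below are synthetic stand-ins, and a simple Spearman rank correlation stands in for the study's full statistical analysis.

```python
# Illustrative sketch (not the study's implementation): correlate
# per-token transformer attention weights with per-token human
# fixation durations using Spearman rank correlation.

def ranks(values):
    """Return 1-based average ranks of values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values so they share an average rank
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-token values for one sentence:
attention = [0.05, 0.30, 0.10, 0.40, 0.15]   # attention mass per token
fixation  = [120,  310,  150,  420,  180]    # total fixation duration (ms)

print(round(spearman(attention, fixation), 3))  # → 1.0 (identical rankings)
```

In practice one would extract attention from a model (e.g. raw attention, attention flow, or gradient-based saliency, as the abstract lists) and align it with token-level eye-tracking features before correlating; the toy arrays here simply make the rank-correlation step concrete.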