Nadia Khlif


2024

Comparative Evaluation of Computational Models Predicting Eye Fixation Patterns During Reading: Insights from Transformers and Simpler Architectures
Alessandro Lento | Andrea Nadalini | Nadia Khlif | Vito Pirrelli | Claudia Marzi | Marcello Ferro
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)

Eye-tracking data collected during reading provide significant insights into the cognitive processes underlying language comprehension, allowing lexical, contextual, and higher-level structural effects on word identification to be estimated through metrics such as fixation duration. Although psycholinguistic experiments have elucidated these effects, the extent to which computational models can predict gaze patterns remains unclear. Recent developments in computational modeling, particularly pre-trained transformer language models, have shown promising results in mirroring human reading behavior. However, previous studies have neither adequately compared these models with alternative architectures nor comprehensively considered the range of possible input features. This paper addresses these gaps by replicating prior findings on English data, critically evaluating the performance metrics used, and proposing a stricter accuracy measure. It further compares different computational models, showing that simpler architectures can match or outperform transformers. The study also highlights the role of individual differences in reading behavior, which pose challenges for simulating natural reading tasks.