Francesca Zermiani
2024
InteRead: An Eye Tracking Dataset of Interrupted Reading
Francesca Zermiani | Prajit Dhar | Ekta Sood | Fabian Kögel | Andreas Bulling | Maria Wirzberger
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Eye movements during reading offer a window into cognitive processes and language comprehension, but the scarcity of reading data with interruptions – which learners frequently encounter in their everyday learning environments – hampers advances in the development of intelligent learning technologies. We introduce InteRead – a novel 50-participant dataset of gaze data recorded during self-paced reading of real-world text. InteRead further offers fine-grained annotations of interruptions interspersed throughout the text, as well as the resumption lags these interruptions incur. Interruptions were triggered automatically once readers reached predefined target words. We validate our dataset by reporting interdisciplinary analyses on different measures of gaze behavior. In line with prior research, our analyses show that interruptions, as well as word length and word frequency, significantly impact eye movements during reading. We also explore individual differences within our dataset, shedding light on the potential for tailored educational solutions. InteRead is accessible from our datasets webpage: https://www.ife.uni-stuttgart.de/en/llis/research/datasets/.
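To make the resumption-lag measure concrete, here is a minimal sketch of how it is commonly operationalized in eye-tracking work: the time from the end of an interruption to the onset of the first fixation back on the text. The abstract does not specify InteRead's file format, so the `Fixation` record and all field names below are hypothetical placeholders, not the dataset's actual schema.

```python
# Illustrative sketch only: InteRead's actual data format is not specified
# in the abstract, so this record layout and the field names are assumptions.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float      # fixation start, in ms from trial onset
    duration_ms: float   # fixation duration in ms
    on_text: bool        # whether the fixation landed on the reading text

def resumption_lag(fixations: list[Fixation], interruption_end_ms: float) -> float | None:
    """Time from the end of an interruption until reading resumes,
    operationalized here as the onset of the first post-interruption
    fixation on the text. Returns None if reading never resumes."""
    for fix in fixations:
        if fix.onset_ms >= interruption_end_ms and fix.on_text:
            return fix.onset_ms - interruption_end_ms
    return None

# Example: the interruption ends at 12,000 ms; the first fixation back on
# the text starts at 12,850 ms, giving a resumption lag of 850 ms.
fixes = [
    Fixation(11500, 200, True),   # before the interruption ends
    Fixation(12300, 180, False),  # off-text fixation, not a resumption
    Fixation(12850, 220, True),   # first fixation back on the text
]
print(resumption_lag(fixes, 12000.0))  # -> 850.0
```

Skipping the off-text fixation at 12,300 ms reflects the usual definition: gaze returning somewhere on screen does not count as resuming reading until it lands on the text itself.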
2021
Retrodiction as Delayed Recurrence: the Case of Adjectives in Italian and English
Raquel G. Alhama | Francesca Zermiani | Atiqah Khaliq
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
We address the question of how to account for both forward and backward dependencies in an online processing account of human language acquisition. We focus on descriptive adjectives in English and Italian, and show that the acquisition of adjectives in these languages likely relies on tracking both forward and backward regularities. Our simulations confirm that forward-predicting models like standard Recurrent Neural Networks (RNNs) cannot account for this phenomenon due to their lack of backward prediction, but the addition of a small delay (as proposed in Turek et al., 2019) endows the RNN with the ability to not only predict but also retrodict.
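The delay mechanism the abstract refers to can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' simulation code: the network's output at step t is treated as a prediction for position t - d, so the d tokens read since then can inform it. The vocabulary size, hidden size, delay, and random weights below are arbitrary placeholder values.

```python
# Minimal sketch of delayed recurrence: instead of predicting the next token
# from the hidden state at time t, the network emits its prediction for
# position t - d at time t, so d future inputs can inform that prediction
# (retrodiction). All sizes and weights here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
V, H, d = 10, 16, 2          # vocab size, hidden size, output delay (assumed)
Wxh = rng.normal(0, 0.1, (H, V))
Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def delayed_rnn(tokens):
    """Run a plain tanh RNN over the sequence; at step t, output a
    distribution over the token at position t - d, i.e., retrodict an
    earlier position using the d inputs seen since then."""
    h = np.zeros(H)
    outputs = {}
    for t, tok in enumerate(tokens):
        h = np.tanh(Wxh @ one_hot(tok) + Whh @ h)
        if t >= d:
            logits = Why @ h
            probs = np.exp(logits - logits.max())
            outputs[t - d] = probs / probs.sum()  # retrodicted distribution
    return outputs

preds = delayed_rnn([3, 1, 4, 1, 5, 9])
print(preds[0].shape)  # distribution over position 0, informed by positions 0..2
```

With d = 0 this reduces to a standard forward-predicting RNN; a positive delay is what lets later evidence (for instance, the noun following an adjective) flow into the estimate for the earlier position.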