Noga Zaslavsky
A Rate–Distortion view of human pragmatic reasoning?
Noga Zaslavsky | Jennifer Hu | Roger P. Levy
Proceedings of the Society for Computation in Linguistics 2021


Cloze Distillation: Improving Neural Language Models with Human Next-Word Prediction
Tiwalayo Eisape | Noga Zaslavsky | Roger Levy
Proceedings of the 24th Conference on Computational Natural Language Learning

Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-the-art language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show potential improvement in reading time prediction and generalization to held-out human cloze data.

Semantic categories of artifacts and animals reflect efficient coding
Noga Zaslavsky | Terry Regier | Naftali Tishby | Charles Kemp
Proceedings of the Society for Computation in Linguistics 2020