Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution

Jaap Jumelet, Willem Zuidema


Abstract
We present a setup for training, evaluating, and interpreting neural language models that uses artificial, language-like data. The data is generated by a massive probabilistic grammar (based on state-split PCFGs) that is itself derived from a large natural language corpus, but that gives us complete control over the generative process. We describe and release both grammar and corpus, and test the naturalness of our generated data. This approach allows us to define closed-form expressions to efficiently compute exact lower bounds on obtainable perplexity under both causal and masked language modelling. Our results show striking differences between neural language modelling architectures and training objectives in how closely they approach the lower bound on perplexity. Our approach also allows us to compare learned representations directly to the symbolic rules in the underlying source. We experiment with various techniques for interpreting model behaviour and learning dynamics. With access to the underlying true source, we find striking differences in learning dynamics between different classes of words.
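The perplexity lower bound mentioned in the abstract can be illustrated with a minimal sketch: if the data-generating grammar gives us the true next-token distribution for every prefix, then the best achievable per-token loss under a causal objective is the entropy of that distribution, and the lower bound on perplexity is its exponential. The toy `true_next_token_dist` dictionary and helper functions below are illustrative assumptions for this sketch, not the paper's actual closed-form PCFG computation.

```python
import math

# Hypothetical toy "true" next-token distributions for a few prefixes,
# standing in for the conditional probabilities a state-split PCFG defines.
true_next_token_dist = {
    ("<s>",): {"the": 0.6, "a": 0.4},
    ("<s>", "the"): {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    ("<s>", "the", "cat"): {"sleeps": 0.7, "runs": 0.3},
}

def entropy(dist):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0.0)

def perplexity_lower_bound(prefixes):
    """Exponential of the mean conditional entropy over the given prefixes.

    A causal LM whose predictions exactly match the true conditional
    distributions attains this value; no model can do better in expectation.
    """
    entropies = [entropy(true_next_token_dist[p]) for p in prefixes]
    return math.exp(sum(entropies) / len(entropies))

print(perplexity_lower_bound(list(true_next_token_dist)))
```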
Anthology ID:
2023.findings-emnlp.288
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4354–4369
URL:
https://aclanthology.org/2023.findings-emnlp.288
DOI:
10.18653/v1/2023.findings-emnlp.288
Cite (ACL):
Jaap Jumelet and Willem Zuidema. 2023. Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4354–4369, Singapore. Association for Computational Linguistics.
Cite (Informal):
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution (Jumelet & Zuidema, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.288.pdf