Benchmarking Neural Network Generalization for Grammar Induction

Nur Lan, Emmanuel Chemla, Roni Katzir

Abstract
How well do neural networks generalize? Even for grammar induction tasks, where the target generalization is fully known, previous works have left the question open, testing only very limited ranges beyond the training set and using different success criteria. We provide a measure of neural network generalization based on fully specified formal languages. Given a model and a formal grammar, the method assigns a generalization score representing how well the model generalizes to unseen samples, in inverse relation to the amount of data it was trained on. The benchmark includes languages such as a^n b^n, a^n b^n c^n, a^n b^m c^(n+m), and Dyck-1 and Dyck-2. We evaluate selected architectures using the benchmark and find that networks trained with a Minimum Description Length (MDL) objective generalize better, and with less data, than networks trained using standard loss functions. The benchmark is available at https://github.com/taucompling/bliss.
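Because the target languages are fully specified, membership in each one can be decided exactly, which is what makes a precise generalization score possible. The Python sketch below is illustrative only and is not taken from the authors' benchmark code; the function names, the regex-based checks, and the scoring comment are assumptions (see the GitHub repository above for the actual implementation):

import re

def is_anbn(s: str) -> bool:
    """Membership test for a^n b^n, n >= 1 (e.g. 'aabb')."""
    m = re.fullmatch(r"(a+)(b+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))

def is_anbmcnm(s: str) -> bool:
    """Membership test for a^n b^m c^(n+m) (e.g. 'aabccc': n=2, m=1)."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return m is not None and len(m.group(1)) + len(m.group(2)) == len(m.group(3))

def is_dyck1(s: str) -> bool:
    """Membership test for Dyck-1: balanced strings over '(' and ')'."""
    depth = 0
    for ch in s:
        if ch not in "()":
            return False
        depth += 1 if ch == "(" else -1
        if depth < 0:  # closed a bracket that was never opened
            return False
    return depth == 0

# Hypothetical use in scoring: train a network on strings up to some length,
# then test it on longer, unseen strings; correct accept/reject behavior
# achieved with less training data yields a higher generalization score.
if __name__ == "__main__":
    assert is_anbn("aaabbb") and not is_anbn("aabbb")
    assert is_anbmcnm("abcc") and not is_anbmcnm("abc")
    assert is_dyck1("(()())") and not is_dyck1("())(")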
Anthology ID: 2023.clasp-1.15
Volume: Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
Month: September
Year: 2023
Address: Gothenburg, Sweden
Editors: Ellen Breitholtz, Shalom Lappin, Sharid Loaiciga, Nikolai Ilinykh, Simon Dobnik
Venue: CLASP
Publisher: Association for Computational Linguistics
Pages: 131–140
URL: https://aclanthology.org/2023.clasp-1.15
Cite (ACL): Nur Lan, Emmanuel Chemla, and Roni Katzir. 2023. Benchmarking Neural Network Generalization for Grammar Induction. In Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), pages 131–140, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal): Benchmarking Neural Network Generalization for Grammar Induction (Lan et al., CLASP 2023)
PDF: https://aclanthology.org/2023.clasp-1.15.pdf