TafBERTa: Learning Grammatical Rules from Small-Scale Language Acquisition Data in Hebrew

Anita Gelboim, Elior Sulem


Abstract
We present TafBERTa, a compact RoBERTa-based language model tailored for Hebrew child-directed speech (CDS). This work builds on the BabyBERTa framework to address data scarcity and morphological complexity in Hebrew. Focusing on determiner-noun grammatical agreement, we show that TafBERTa achieves performance competitive with large-scale Hebrew language models while requiring significantly less data and compute. As part of this work, we also introduce HTBerman, a new corpus of Hebrew CDS aligned with morphological metadata, and HeCLiMP, a new minimal-pair benchmark for grammaticality evaluation in Hebrew. Our results demonstrate the effectiveness of TafBERTa in grammaticality judgments and its potential for efficient NLP in low-resource settings.
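The abstract describes grammaticality judgments over minimal pairs (HeCLiMP). A common way to run such an evaluation with a masked language model, as in BLiMP-style benchmarks, is to compare pseudo-log-likelihoods of the grammatical and ungrammatical sentence in each pair. The sketch below assumes a Hugging Face-compatible checkpoint; the model path and example sentences are hypothetical placeholders, and the paper's exact scoring procedure may differ.

```python
# Minimal sketch: minimal-pair grammaticality scoring with a masked LM.
# The checkpoint path below is hypothetical; TafBERTa itself is not assumed
# to be published at this location.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "path/to/tafberta"  # hypothetical checkpoint path

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn
    (pseudo-log-likelihood scoring, Salazar et al., 2020)."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the [CLS]/[SEP] special tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# A minimal pair counts as correct if the grammatical sentence scores higher.
good, bad = "...", "..."  # grammatical / ungrammatical Hebrew sentences
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```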
Anthology ID:
2025.babylm-main.6
Volume:
Proceedings of the First BabyLM Workshop
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Y. Hu, Jing Liu, Jaap Jumelet, Tal Linzen, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Gotlieb Wilcox, Adina Williams
Venue:
BabyLM
Publisher:
Association for Computational Linguistics
Pages:
76–90
URL:
https://aclanthology.org/2025.babylm-main.6/
Cite (ACL):
Anita Gelboim and Elior Sulem. 2025. TafBERTa: Learning Grammatical Rules from Small-Scale Language Acquisition Data in Hebrew. In Proceedings of the First BabyLM Workshop, pages 76–90, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TafBERTa: Learning Grammatical Rules from Small-Scale Language Acquisition Data in Hebrew (Gelboim & Sulem, BabyLM 2025)
PDF:
https://aclanthology.org/2025.babylm-main.6.pdf