Investigating grammatical abstraction in language models using few-shot learning of novel noun gender

Priyanka Sukumaran, Conor Houghton, Nina Kazanina


Abstract
Humans can learn a new word and infer its grammatical properties from very few examples. They have an abstract notion of linguistic properties like grammatical gender and agreement rules that can be applied to novel syntactic contexts and words. Drawing inspiration from psycholinguistics, we conduct a noun-learning experiment to assess whether an LSTM and a decoder-only transformer can achieve human-like abstraction of grammatical gender in French. Language models were tasked with learning the gender of a novel noun embedding from a few examples in one grammatical agreement context and predicting agreement in another, unseen context. We find that both language models effectively generalise novel noun gender from as few as one or two learning examples and apply the learnt gender across agreement contexts, albeit with a bias towards the masculine gender category. Importantly, the few-shot updates were applied only to the embedding layers, demonstrating that the models encode sufficient gender information within the word-embedding space. While the generalisation behaviour of the models suggests that they represent grammatical gender as an abstract category, like humans, further work is needed to explore exactly how this is implemented. For a comparative perspective with human behaviour, we conducted an analogous one-shot novel noun gender learning experiment, which revealed that native French speakers, like the language models, exhibited a masculine gender bias and were likewise not excellent one-shot learners.
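The few-shot setup described above, in which gradient updates are restricted to the embedding layer while the rest of the network stays frozen, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the tiny vocabulary, toy LSTM, and training examples are all hypothetical stand-ins, and it only demonstrates the freezing mechanism, not the paper's full experimental protocol.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy vocabulary; "<novel>" stands in for a novel noun.
vocab = ["<pad>", "le", "la", "petit", "petite", "chaise", "tableau", "<novel>"]
V = len(vocab)
novel_id = vocab.index("<novel>")

class TinyLSTMLM(nn.Module):
    """A minimal LSTM language model for illustration only."""
    def __init__(self, V, d=16):
        super().__init__()
        self.emb = nn.Embedding(V, d)
        self.lstm = nn.LSTM(d, d, batch_first=True)
        self.out = nn.Linear(d, V)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

model = TinyLSTMLM(V)

# Freeze everything except the embedding layer, mirroring the setup in
# which few-shot updates reach only the word-embedding space.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("emb")

snapshot = {n: p.detach().clone() for n, p in model.named_parameters()}

# A few "learning examples" in one agreement context (article + noun),
# e.g. "la <novel>" signals feminine gender. Targets are next tokens.
ctx = torch.tensor([[vocab.index("la"), novel_id]])
target = torch.tensor([[novel_id, vocab.index("<pad>")]])

opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(ctx).view(-1, V), target.view(-1))
    loss.backward()
    opt.step()

# After training, only the embedding matrix should have moved.
changed = {n: not torch.equal(p.detach(), snapshot[n])
           for n, p in model.named_parameters()}
```

Gradients still flow *through* the frozen LSTM and output layers back to the embeddings, so the novel noun's vector can absorb the gender signal even though no other weights change.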
Anthology ID:
2024.findings-eacl.50
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
747–765
URL:
https://aclanthology.org/2024.findings-eacl.50
Cite (ACL):
Priyanka Sukumaran, Conor Houghton, and Nina Kazanina. 2024. Investigating grammatical abstraction in language models using few-shot learning of novel noun gender. In Findings of the Association for Computational Linguistics: EACL 2024, pages 747–765, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Investigating grammatical abstraction in language models using few-shot learning of novel noun gender (Sukumaran et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.50.pdf