Emergent Word Order Universals from Cognitively-Motivated Language Models

Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin


Abstract
The world’s languages exhibit certain so-called typological or implicational universals; for example, Subject-Object-Verb (SOV) languages typically use postpositions. Explaining the source of such biases is a key goal of linguistics. We study word-order universals through a computational simulation with language models (LMs). Our experiments show that typologically-typical word orders tend to have lower perplexity estimated by LMs with cognitively plausible biases: syntactic biases, specific parsing strategies, and memory limitations. This suggests that the interplay of cognitive biases and predictability (perplexity) can explain many aspects of word-order universals. It also showcases the advantage of cognitively-motivated LMs, typically employed in cognitive modeling, in the simulation of language universals.
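To make the perplexity comparison in the abstract concrete, the sketch below is a minimal illustration, not the authors' actual setup: the paper trains cognitively-motivated LMs on artificial languages whose word orders are systematically varied, whereas here a pretrained GPT-2 and two toy English strings stand in for the LM and for a typical vs. atypical word-order variant.

# Hedged sketch of comparing perplexity across word-order variants.
# NOT the paper's pipeline: the authors use cognitively-motivated LMs
# (e.g., syntactic LMs with parsing strategies and memory limitations)
# trained on artificial languages; GPT-2 is a stand-in for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """PPL = exp(mean negative log-likelihood per token)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy (in nats per token) of next-token prediction.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Hypothetical stand-ins for two word-order variants of the same sentence.
typical = "the cat sat on the mat"
atypical = "mat the on sat cat the"

print(perplexity(typical), perplexity(atypical))
# The paper's finding, in these terms: orders that are typologically
# common tend to receive lower perplexity under cognitively plausible LMs.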
Anthology ID:
2024.acl-long.781
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14522–14543
URL:
https://aclanthology.org/2024.acl-long.781
DOI:
10.18653/v1/2024.acl-long.781
Cite (ACL):
Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, and Timothy Baldwin. 2024. Emergent Word Order Universals from Cognitively-Motivated Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14522–14543, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Emergent Word Order Universals from Cognitively-Motivated Language Models (Kuribayashi et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.781.pdf