CogMemLM: Human-Like Memory Mechanisms Improve Performance and Cognitive Plausibility of LLMs

Lukas Thoma, Ivonne Weyers, Erion Çano, Stefan Schweter, Jutta L. Mueller, Benjamin Roth

Anthology ID:
2023.conll-babylm.15
Volume:
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Month:
December
Year:
2023
Address:
Singapore
Editors:
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
180–185
URL:
https://aclanthology.org/2023.conll-babylm.15
DOI:
10.18653/v1/2023.conll-babylm.15
Cite (ACL):
Lukas Thoma, Ivonne Weyers, Erion Çano, Stefan Schweter, Jutta L. Mueller, and Benjamin Roth. 2023. CogMemLM: Human-Like Memory Mechanisms Improve Performance and Cognitive Plausibility of LLMs. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 180–185, Singapore. Association for Computational Linguistics.
Cite (Informal):
CogMemLM: Human-Like Memory Mechanisms Improve Performance and Cognitive Plausibility of LLMs (Thoma et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-babylm.15.pdf