LLMs as a synthesis between symbolic and distributed approaches to language

Gemma Boleda

Abstract
Since the middle of the 20th century, a fierce battle has been fought between symbolic and distributed approaches to language and cognition. The success of deep learning models, and LLMs in particular, has alternately been taken as showing that the distributed camp has won, or dismissed as an irrelevant engineering development. In this position paper, I argue that deep learning models for language actually represent a synthesis between the two traditions. This is because 1) deep learning architectures allow for both distributed/continuous/fuzzy and symbolic/discrete/categorical-like representations and processing; and 2) models trained on language make use of this flexibility. In particular, I review recent research in interpretability showing that a substantial part of morphosyntactic knowledge is encoded in a near-discrete fashion in LLMs. This line of research suggests that these different behaviors arise in an emergent fashion, and that models flexibly alternate between the two modes (and everything in between) as needed. This is possibly one of the main reasons for their wild success, and it makes them particularly interesting for the study of language. Is it time for peace?
Anthology ID: 2025.findings-emnlp.498
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 9365–9379
URL: https://aclanthology.org/2025.findings-emnlp.498/
Cite (ACL): Gemma Boleda. 2025. LLMs as a synthesis between symbolic and distributed approaches to language. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 9365–9379, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): LLMs as a synthesis between symbolic and distributed approaches to language (Boleda, Findings 2025)
PDF: https://aclanthology.org/2025.findings-emnlp.498.pdf
Checklist: 2025.findings-emnlp.498.checklist.pdf