From BERT to LLMs: Comparing and Understanding Chinese Classifier Prediction in Language Models

Ziqi Zhang, Jianfei Ma, Emmanuele Chersoni, You Jieshun, Zhaoxin Feng


Abstract
Classifiers are an important and defining feature of the Chinese language, and their correct prediction is key to numerous educational applications. Yet, whether the most popular Large Language Models (LLMs) possess proper knowledge of Chinese classifiers is an issue that has largely remained unexplored in the Natural Language Processing (NLP) literature. To address this question, we employ various masking strategies to evaluate the LLMs’ intrinsic ability, the contribution of different sentence elements, and the workings of the attention mechanisms during prediction. In addition, we explore fine-tuning the LLMs to enhance their classifier prediction performance. Our findings reveal that LLMs perform worse than BERT, even with fine-tuning. The prediction, as expected, greatly benefits from information about the following noun, which also explains the advantage of models with a bidirectional attention mechanism, such as BERT.
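
Conceptually, the masking setup resembles a fill-mask probe in which the classifier slot is masked and the model ranks candidate tokens. The sketch below is a minimal illustration of such a probe, assuming the Hugging Face transformers fill-mask pipeline and the bert-base-chinese checkpoint; the model choice and example sentence are illustrative assumptions, not the paper's exact experimental setup.

    # Minimal sketch: probe a Chinese BERT by masking the classifier slot.
    # Assumptions: bert-base-chinese checkpoint, toy example sentence.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-chinese")

    # "我买了一[MASK]书。" = "I bought one [CL] book."; the expected classifier is 本.
    sentence = "我买了一[MASK]书。"
    for prediction in fill_mask(sentence, top_k=5):
        # Each prediction holds a candidate token for the masked slot and its probability.
        print(prediction["token_str"], round(prediction["score"], 4))

A probe of this kind checks whether the correct classifier is ranked highly when the model can see the following noun, which is the kind of contextual cue the abstract identifies as decisive.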
Anthology ID:
2025.blackboxnlp-1.20
Volume:
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
317–329
URL:
https://aclanthology.org/2025.blackboxnlp-1.20/
Cite (ACL):
Ziqi Zhang, Jianfei Ma, Emmanuele Chersoni, You Jieshun, and Zhaoxin Feng. 2025. From BERT to LLMs: Comparing and Understanding Chinese Classifier Prediction in Language Models. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 317–329, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
From BERT to LLMs: Comparing and Understanding Chinese Classifier Prediction in Language Models (Zhang et al., BlackboxNLP 2025)
PDF:
https://aclanthology.org/2025.blackboxnlp-1.20.pdf