Zeyu Sun
2025
RankLLM: A Multi-Criteria Decision-Making Method for LLM Performance Evaluation in Sentiment Analysis
Huzhi Xue | Butian Zhao | Haihua Xie | Zeyu Sun
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
Large Language Models (LLMs) have made significant advancements in sentiment analysis, yet their quality and reliability vary widely. Existing LLM evaluation studies are limited in scope, lack a comprehensive framework for integrating diverse capabilities, and fail to quantify the impact of prompt design on performance. To address these gaps, this paper introduces a set of LLM evaluation criteria with detailed explanations and mathematical formulations, aiding users in understanding LLM limitations and selecting the most suitable model for sentiment analysis. Using these criteria, we apply the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), a classic decision-making method, to rank the performance of LLMs in sentiment analysis. We evaluate six popular LLMs on three Twitter datasets covering different topics and analyze the impact of prompt design by assessing model-prompt combinations. Additionally, a validation experiment on a publicly available annotated dataset further confirms our ranking results. Finally, our findings offer valuable insights into the evaluation and selection of LLMs for sentiment analysis.
Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs?
Qingyuan Liang | Zhao Zhang | Zeyu Sun | Zheng Lin | Qi Luo | Yueyi Xiao | Yizhou Chen | Yuqun Zhang | Haotian Zhang | Lu Zhang | Bin Chen | Yingfei Xiong
Findings of the Association for Computational Linguistics: ACL 2025
Grammar serves as a cornerstone in programming languages and software engineering, providing frameworks to define the syntactic space and program structure. Existing research demonstrates the effectiveness of grammar-based code representations in small-scale models, showing their ability to reduce syntax errors and enhance performance. However, as language models scale to the billion-parameter level or beyond, syntax-level errors become rare, making it unclear whether grammar information still provides performance benefits. To explore this, we develop a series of billion-scale GrammarCoder models, incorporating grammar rules in the code generation process. Experiments on HumanEval(+) and MBPP(+) demonstrate a notable improvement in code generation accuracy. Further analysis shows that grammar-based representations enhance LLMs' ability to discern subtle code differences, reducing semantic errors caused by minor variations. These findings suggest that grammar-based code representations remain valuable even in billion-scale models, not only by maintaining syntax correctness but also by improving semantic differentiation.