Yuxuan Jiang
2026
Beyond Math: Stories as a Testbed for Memorization-Constrained Reasoning in LLMs
Yuxuan Jiang | Francis Ferraro
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Memorization has been shown to greatly inflate Large Language Models’ (LLMs) performance on domains such as math and logic, where success should primarily rely on applying generalizable reasoning rules. In many real-world applications, however, memorization is not meant to be eliminated but selectively constrained—for example, in story understanding, where background knowledge must be integrated with narrative context. Drawing on the cognitive science distinction between “verbatim” (exact recall) and “gist” (semantic abstraction) memory, we propose a two-tier framework for analyzing how LLMs reason under different degrees of memory access. The Inductive (prompt-guided) Setting softly steers models to reason through selective, context-relevant recall, while the Restrictive Setting imposes stronger constraints by limiting verbatim memory access. Evaluating GPT-4o, LLaMA3.3-70B, and DeepSeek V3 on six character-centric story understanding benchmarks, we find up to a 45.2% accuracy drop under the Restrictive Setting, revealing strong dependence on surface recall. By contrast, the Inductive Setting maintains performance, indicating that prompting can align LLMs toward memorization-constrained reasoning.
2025
From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li | Bohan Jiang | Liangjie Huang | Alimohammad Beigi | Chengshuai Zhao | Zhen Tan | Amrita Bhattacharjee | Yuxuan Jiang | Canyu Chen | Tianhao Wu | Kai Shu | Lu Cheng | Huan Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Assessment and evaluation have long been critical challenges in artificial intelligence (AI) and natural language processing (NLP). Traditional methods, usually matching-based or small model-based, often fall short in open-ended and dynamic scenarios. Recent advancements in Large Language Models (LLMs) inspire the “LLM-as-a-judge” paradigm, where LLMs are leveraged to perform scoring, ranking, or selection for various machine learning evaluation scenarios. This paper presents a comprehensive survey of LLM-based judgment and assessment, offering an in-depth overview to review this evolving field. We first provide the definition from both input and output perspectives. Then we introduce a systematic taxonomy to explore LLM-as-a-judge along three dimensions: what to judge, how to judge, and how to benchmark. Finally, we also highlight key challenges and promising future directions for this emerging area.