Jong Inn Park
2026
Strong Memory, Weak Control: An Empirical Study of Executive Functioning in LLMs
Karin De Langis | Jong Inn Park | Khanh Chi Le | Andreas Schramm | Andrew Elfenbein | Michael C. Mensink | Dongyeop Kang
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Working memory, or the ability to hold and manipulate information in the mind, is a critical component of human intelligence and executive functioning. It is correlated with performance on various cognitive tasks, including measures of fluid intelligence, which encompasses reasoning and problem solving. We use a comprehensive set of classic working memory tasks to estimate the working memory capacity of large language models (LLMs). We find that in most cases, LLMs exceed normative human scores. However, we do not find that this increased working memory capacity is associated with higher performance on other executive functioning tasks or problem-solving benchmarks. These results suggest that LLMs may have deficits in attentional control and cognitive flexibility, resulting in difficulties with inhibiting automatic responses and adapting to shifting information. Our findings suggest that reasoning models, while they do not yet fully compensate for these deficits, may have the potential to do so in the future.
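To make the task format concrete, here is a minimal sketch of how one classic working memory task, the n-back, might be posed to an LLM. The paper's exact task battery and prompt wording are not given here, so the function name, letter alphabet, and instruction text below are illustrative assumptions rather than the study's actual materials.

```python
import random

def make_n_back_trial(n=2, length=20, alphabet="BCDFGHJKLMNPQRSTVWXZ"):
    """Generate an n-back letter sequence and its gold answers.

    At each position i >= n, the correct response is "match" iff the
    current letter equals the one shown n steps earlier; the first n
    positions are always "no match" since there is nothing to compare.
    """
    seq = [random.choice(alphabet) for _ in range(length)]
    gold = ["match" if i >= n and seq[i] == seq[i - n] else "no match"
            for i in range(length)]
    return seq, gold

# Hypothetical prompt framing; a model's accuracy against `gold`
# serves as one estimate of its working memory capacity.
seq, gold = make_n_back_trial(n=2)
prompt = (
    "You will see letters one at a time. For each letter, answer 'match' "
    "if it is the same as the letter shown 2 steps earlier, otherwise "
    "'no match'.\n" + " ".join(seq)
)
```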
2025
How LLMs Comprehend Temporal Meaning in Narratives: A Case Study in Cognitive Evaluation of LLMs
Karin De Langis | Jong Inn Park | Andreas Schramm | Bin Hu | Khanh Chi Le | Dongyeop Kang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) exhibit increasingly sophisticated linguistic capabilities, yet the extent to which these behaviors reflect human-like cognition versus advanced pattern recognition remains an open question. In this study, we investigate how LLMs process the temporal meaning of linguistic aspect in narratives that were previously used in human studies. Using an Expert-in-the-Loop probing pipeline, we conduct a series of targeted experiments to assess whether LLMs construct semantic representations and pragmatic inferences in a human-like manner. Our findings show that LLMs over-rely on prototypicality, produce inconsistent aspectual judgments, and struggle with causal reasoning derived from aspect, raising concerns about their ability to fully comprehend narratives. These results suggest that LLMs process aspect fundamentally differently from humans and lack robust narrative understanding. Beyond these empirical findings, we develop a standardized experimental framework for the reliable assessment of LLMs’ cognitive and linguistic capabilities.
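As an illustration of the kind of targeted probe such a pipeline can run, the sketch below contrasts perfective and imperfective framings of the same event and checks whether a model's completion inference flips with the aspect. The probe item, the `ask_llm` client, and the scoring rule are hypothetical stand-ins, not the paper's actual experimental materials.

```python
# One illustrative aspectual probe: perfective aspect entails the event
# completed, while imperfective aspect leaves completion underdetermined.
PROBE = {
    "perfective": "Mary mailed the letter.",
    "imperfective": "Mary was mailing the letter.",
    "question": "Did the letter definitely get mailed? Answer yes, no, or unknown.",
    "expected": {"perfective": "yes", "imperfective": "unknown"},
}

def run_probe(ask_llm, probe):
    """Query an LLM client (ask_llm: str -> str, assumed) on both framings
    and record whether each answer matches the aspect-driven expectation."""
    results = {}
    for aspect in ("perfective", "imperfective"):
        answer = ask_llm(f"{probe[aspect]} {probe['question']}")
        results[aspect] = answer.strip().lower().startswith(probe["expected"][aspect])
    return results

# Stub client for a dry run; a real study would call a model API here.
print(run_probe(lambda prompt: "unknown", PROBE))
```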
2024
Benchmarking Cognitive Biases in Large Language Models as Evaluators
Ryan Koo | Minhwa Lee | Vipul Raheja | Jong Inn Park | Zae Myung Kim | Dongyeop Kang
Findings of the Association for Computational Linguistics: ACL 2024
Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 16 LLMs spanning four size ranges and have each model rank the output responses of the others by preference (e.g., "System Star is better than System Square"). We then evaluate the quality of these ranking outputs by introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLer), a benchmark measuring six different cognitive biases in LLM evaluation outputs, such as the Egocentric bias, where a model prefers to rank its own outputs highly. We find that LLMs are biased text quality evaluators, exhibiting strong indications of bias (in 40% of the comparisons made across all models), which calls their robustness as evaluators into question. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be 44%, indicating that machine preferences are misaligned with humans. Our findings suggest that LLMs cannot yet be reliably used for automatic annotation aligned with human preferences.
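For reference, Rank-Biased Overlap compares two rankings by averaging the overlap of their top-d prefixes with geometrically decaying weights, so agreement near the top counts most. The sketch below implements the truncated form of the metric; the paper may use the extrapolated variant, and the persistence parameter p = 0.9 here is only a conventional default, not necessarily the paper's setting.

```python
def rbo(list_s, list_t, p=0.9):
    """Truncated Rank-Biased Overlap (Webber et al., 2010).

    At each depth d, agreement is the fraction of items shared by the
    two rankings' top-d prefixes; depths are weighted by p^(d-1) and
    the sum is normalized by (1 - p), yielding a score in [0, 1].
    """
    depth = min(len(list_s), len(list_t))
    score = 0.0
    for d in range(1, depth + 1):
        agreement = len(set(list_s[:d]) & set(list_t[:d])) / d
        score += (p ** (d - 1)) * agreement
    return (1 - p) * score

# Illustrative usage: comparing a human preference ranking of systems
# against a machine-produced one (system names are made up).
human = ["sysA", "sysB", "sysC", "sysD"]
machine = ["sysB", "sysA", "sysD", "sysC"]
print(f"RBO = {rbo(human, machine):.2f}")
```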