Bimsara Pathiraja
2026
ExpressivityBench: Can LLMs Communicate Implicitly?
Joshua Tint | Som Sagar | Aditya Taparia | Kelly Raines | Bimsara Pathiraja | Caleb Liu | Ransalu Senanayake
Findings of the Association for Computational Linguistics: EACL 2026
Human communication is often implicit, conveying tone, identity, and intent beyond literal meanings. While large language models have achieved strong performance on explicit tasks such as summarization and reasoning, their capacity for expressivity, or implicit communication, remains underexplored. We introduce ExpressivityBench, a framework for evaluating the expressivity of LLMs using information-theoretic communication models. Our approach quantifies how well LLM-generated text communicates target properties without explicit mention, across nine tasks spanning emotion, identity, and tone. To enable scalable and reproducible evaluation, we employ LLM-based graders validated against human judgments. Our results reveal that while models are adept at expressing affective content, they struggle with sociolinguistic signals, lagging behind human baselines. This study provides a necessary step toward evaluating human-like implicit communication, with implications for applications such as education, mental health support, and socially aware dialogue systems. We provide code and data for our benchmark alongside our paper.
2025
Investigating the Shortcomings of LLMs in Step-by-Step Legal Reasoning
Venkatesh Mishra | Bimsara Pathiraja | Mihir Parmar | Sat Chidananda | Jayanth Srinivasa | Gaowen Liu | Ali Payani | Chitta Baral
Findings of the Association for Computational Linguistics: NAACL 2025
The reasoning abilities of LLMs have been a key focus in recent years. One challenging reasoning domain with interesting nuances is legal reasoning, which requires careful application of rules and precedents, balancing deductive and analogical reasoning while resolving conflicts between rules. Although there have been a few works on using LLMs for legal reasoning, their focus has been on overall accuracy. In this paper, we dig deeper, performing a step-by-step analysis to pinpoint where they commit errors. We use the college-level Multiple Choice Question-Answering (MCQA) task from the Civil Procedure dataset and propose a new error taxonomy, derived from an initial manual analysis of reasoning chains across several LLMs, along with two objective measures: soundness and correctness scores. We then develop an LLM-based automated evaluation framework to identify reasoning errors and evaluate the performance of LLMs. Computing soundness and correctness on the dataset with this auto-evaluator framework reveals several interesting insights. Furthermore, we show that incorporating the error taxonomy as feedback in popular prompting techniques marginally increases LLM performance. Our work also serves as an evaluation framework for detailed error analysis of reasoning chains in logic-intensive complex tasks.