Aditya Kalyanpur


2025

From Generating Answers to Building Explanations: Integrating Multi-Round RAG and Causal Modeling for Scientific QA
Victor Barres | Clifton James McFate | Aditya Kalyanpur | Kailash Karthik Saravanakumar | Lori Moon | Natnael Seifu | Abraham Bautista-Castillo
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

The application of LLMs to complex causal question answering can be stymied by their opacity and propensity for hallucination. Although recent approaches such as Retrieval Augmented Generation and Chain of Thought prompting have improved reliability, we argue that current approaches are insufficient and further fail to satisfy key criteria humans use to select and evaluate causal explanations. Inspired by findings from the social sciences, we present an implemented causal QA approach that combines iterative RAG with guidance from a formal model of causation. Our causal model is backed by the Cogent reasoning engine, allowing users to interactively perform counterfactual analysis and refine their answers. Our approach has been integrated into a deployed Collaborative Research Assistant (Cora), and we present a pilot evaluation in the life sciences domain.

2020

GLUCOSE: GeneraLized and COntextualized Story Explanations
Nasrin Mostafazadeh | Aditya Kalyanpur | Lori Moon | David Buchanan | Lauren Berkowitz | Or Biran | Jennifer Chu-Carroll
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions. First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected a total of ~670K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE’s rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans’ mental models.

2012

Multi-Dimensional Feature Merger for Question Answering
Apoorv Agarwal | J. William Murdock | Jennifer Chu-Carroll | Adam Lally | Aditya Kalyanpur
Proceedings of COLING 2012

Natural Language Processing in Watson
Alfio M. Gliozzo | Aditya Kalyanpur | James Fan
Tutorial Abstracts at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX)
James Fan | Raphael Hoffman | Aditya Kalyanpur | Sebastian Riedel | Fabian Suchanek | Partha Pratim Talukdar

2011

Relation Extraction with Relation Topics
Chang Wang | James Fan | Aditya Kalyanpur | David Gondek
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

PRISMATIC: Inducing Knowledge from a Large Scale Lexicalized Relation Resource
James Fan | David Ferrucci | David Gondek | Aditya Kalyanpur
Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading