Michael JQ Zhang
2025
Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs
Michael JQ Zhang | Eunsol Choi
Findings of the Association for Computational Linguistics: NAACL 2025
In this work, we explore the challenges of developing interactive assistants that resolve ambiguity by asking their users clarifying questions. Specifically, we develop a task-agnostic framework for evaluating a system’s ability to determine when to ask for clarification. Determining when to ask for clarification is a challenging task that requires systems to consider the demands of the individual user (i.e., how much they prioritize speed and usability versus carefulness) and the distribution of interpretations for a given request (i.e., whether an ambiguous request has one dominant, inferable interpretation). Using this framework, we evaluate systems for determining when to clarify across three NLP applications: question answering (QA), machine translation (MT), and natural language inference (NLI). Finally, we introduce a novel uncertainty estimation approach, IntentSim, that determines the utility of asking a clarifying question by estimating the entropy over user intents. Our method consistently outperforms existing uncertainty estimation approaches at identifying predictions that will benefit from clarification. Furthermore, we find that IntentSim is robust, demonstrating improvements across a wide range of NLP tasks and LMs. Together, our work lays the foundation for further studies on clarifying interactions with LM assistants.
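The abstract describes the core idea of IntentSim at a high level: estimate the entropy over a model’s distribution of user intents and ask for clarification when that entropy is high. The sketch below is an illustration of that idea only, not the paper’s implementation; the function names (`intent_entropy`, `should_clarify`), the `sample_intent` interface, and the threshold value are all hypothetical.

```python
import math
from collections import Counter

def intent_entropy(sample_intent, n_samples=20):
    """Estimate entropy over user intents for a possibly ambiguous request.

    `sample_intent` is a caller-supplied function (hypothetical here) that
    queries an LM once and returns a normalized interpretation string;
    repeated calls approximate the model's distribution over intents.
    """
    samples = [sample_intent() for _ in range(n_samples)]
    counts = Counter(samples)
    total = sum(counts.values())
    # Shannon entropy of the empirical intent distribution: low entropy
    # indicates one dominant, inferable interpretation.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def should_clarify(sample_intent, threshold=1.0):
    """Ask a clarifying question only when intent uncertainty is high.

    The threshold trades off user effort (extra questions) against
    carefulness, mirroring the user preferences discussed in the abstract.
    """
    return intent_entropy(sample_intent) > threshold
```

Under this reading, a request whose sampled interpretations all agree yields near-zero entropy and is answered directly, while a request that splits across several interpretations crosses the threshold and triggers a clarifying question.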
2024
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
Sha Li | Manling Li | Michael JQ Zhang | Eunsol Choi | Mor Geva | Peter Hase | Heng Ji