Michelle Yuan
2026
Barriers to Discrete Reasoning with Transformers: A Survey Across Depth, Exactness, and Bandwidth
Michelle Yuan | Weiyi Sun | Amir H. Rezaeian | Jyotika Singh | Sandip Ghoshal | Yao-Ting Wang | Miguel Ballesteros | Yassine Benajiba
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Transformers have become the foundational architecture for a broad spectrum of sequence modeling applications, underpinning state-of-the-art systems in natural language processing, vision, and beyond. However, their theoretical limitations in discrete reasoning tasks, such as arithmetic, logical inference, and algorithmic composition, remain a critical open problem. In this survey, we synthesize recent advances from three theoretical perspectives (circuit complexity, approximation theory, and communication complexity) to clarify the structural and computational barriers that transformers face when performing symbolic computations. By connecting these established theoretical frameworks, we provide an accessible and unified account of why current transformer architectures struggle to implement exact discrete algorithms, even as they excel at pattern matching and interpolation. We review key definitions, seminal results, and illustrative examples, highlighting challenges such as depth constraints, difficulty approximating discontinuities, and bottlenecks in inter-token communication. Finally, we discuss implications for model design and suggest promising directions for overcoming these foundational limitations.
2025
ADAPTIVE IE: Investigating the Complementarity of Human-AI Collaboration to Adaptively Extract Information on-the-fly
Ishani Mondal | Michelle Yuan | Anandhavelu N | Aparna Garimella | Francis Ferraro | Andrew Blair-Stanek | Benjamin Van Durme | Jordan Boyd-Graber
Proceedings of the 31st International Conference on Computational Linguistics
Information extraction (IE) needs vary over time, making a flexible IE system useful. Despite this, existing IE systems are either fully supervised, requiring expensive human annotations, or fully unsupervised, extracting information that often does not cater to users’ needs. To address these issues, we formally introduce the task of “IE on-the-fly” and address it with our proposed Adaptive IE framework, which uses human-in-the-loop refinement to adapt to changing user questions. Through human experiments on three diverse datasets, we demonstrate that Adaptive IE is a domain-agnostic, responsive, efficient framework that helps users access useful information while quickly reorganizing it in response to evolving information needs.
MemInsight: Autonomous Memory Augmentation for LLM Agents
Rana Salama | Jason Cai | Michelle Yuan | Anna Currey | Monica Sunkara | Yi Zhang | Yassine Benajiba
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language model (LLM) agents have evolved to intelligently process information, make decisions, and interact with users or tools. A key capability is the integration of long-term memory, enabling these agents to draw upon historical interactions and knowledge. However, the growing memory size and need for semantic structuring pose significant challenges. In this work, we propose an autonomous memory augmentation approach, MemInsight, to enhance semantic data representation and retrieval mechanisms. By leveraging autonomous augmentation of historical interactions, LLM agents are shown to deliver more accurate and contextualized responses. We empirically validate the efficacy of our proposed approach in three task scenarios: conversational recommendation, question answering, and event summarization. On the LLM-REDIAL dataset, MemInsight boosts the persuasiveness of recommendations by up to 14%. Moreover, it outperforms a RAG baseline by 34% in recall for LoCoMo retrieval. Our empirical results show the potential of MemInsight to enhance the contextual performance of LLM agents across multiple tasks.
A Study on Leveraging Search and Self-Feedback for Agent Reasoning
Karthikeyan K | Michelle Yuan | Elman Mansimov | Katerina Margatina | Anurag Pratik | Daniele Bonadiman | Monica Sunkara | Yi Zhang | Yassine Benajiba
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Recent works have demonstrated that incorporating search during inference can significantly improve the reasoning capabilities of language agents. Some approaches make use of ground-truth feedback, while others rely on the model’s own generated feedback. The search algorithm uses this feedback to produce values that update its criterion for exploring and exploiting various reasoning paths. In this study, we investigate how search and the model’s self-feedback can be leveraged for reasoning tasks. First, we explore differences between ground-truth feedback and self-feedback during search for math reasoning. Second, we observe limitations in applying search techniques to more complex tasks like tool-calling and design domain-specific approaches to address these gaps. Our experiments reveal challenges related to generalization when relying solely on self-feedback during search. For search to work effectively, either access to the ground truth is needed or feedback mechanisms must be carefully designed for the specific task.
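The search-plus-feedback pattern the abstract describes can be illustrated with a minimal sketch: a best-first search over reasoning paths whose priority comes from a pluggable feedback function, which could be a ground-truth checker or the model's own self-evaluation. All names here are hypothetical illustrations of the general pattern, not the paper's exact algorithm.

```python
import heapq

def best_first_search(root, expand, feedback, is_goal, max_steps=100):
    """Explore reasoning paths in order of a feedback score.

    `feedback` is the pluggable value function: it could check against
    ground truth, or be the model's own self-assessment of a partial path.
    """
    # Max-heap via negated scores; the tick breaks ties so paths
    # themselves are never compared.
    frontier = [(-feedback(root), 0, root)]
    tick = 1
    for _ in range(max_steps):
        if not frontier:
            break
        _, _, path = heapq.heappop(frontier)
        if is_goal(path):
            return path
        for child in expand(path):
            heapq.heappush(frontier, (-feedback(child), tick, child))
            tick += 1
    return None
```

As the abstract notes, the quality of `feedback` is decisive: swapping a ground-truth checker for noisy self-feedback changes which paths the search prioritizes.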
2022
Adapting Coreference Resolution Models through Active Learning
Michelle Yuan | Patrick Xia | Chandler May | Benjamin Van Durme | Jordan Boyd-Graber
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Neural coreference resolution models trained on one dataset may not transfer to new, low-resource domains. Active learning mitigates this problem by sampling a small subset of data for annotators to label. While active learning is well-defined for classification tasks, its application to coreference resolution is neither well-defined nor fully understood. This paper explores how to actively label coreference, examining sources of model uncertainty and document reading costs. We compare uncertainty sampling strategies and their advantages through thorough error analysis. In both synthetic and human experiments, labeling spans within the same document is more effective than annotating spans across documents. The findings contribute to a more realistic development of coreference resolution models.
2020
Interactive Refinement of Cross-Lingual Word Embeddings
Michelle Yuan | Mozhi Zhang | Benjamin Van Durme | Leah Findlater | Jordan Boyd-Graber
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Cross-lingual word embeddings transfer knowledge between languages: models trained on high-resource languages can predict in low-resource languages. We introduce CLIME, an interactive system to quickly refine cross-lingual word embeddings for a given classification problem. First, CLIME ranks words by their salience to the downstream task. Then, users mark similarity between keywords and their nearest neighbors in the embedding space. Finally, CLIME updates the embeddings using the annotations. We evaluate CLIME on identifying health-related text in four low-resource languages: Ilocano, Sinhalese, Tigrinya, and Uyghur. Embeddings refined by CLIME capture more nuanced word semantics and have higher test accuracy than the original embeddings. CLIME often improves accuracy faster than an active learning baseline and can be easily combined with active learning to improve results.
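The refinement step in the loop above (users mark neighbors as similar or dissimilar, then the embeddings are updated) can be sketched as follows. This is a simplified illustration, not CLIME's actual update rule: the function name, the fixed step size, and the plain-list vectors are all assumptions for the sake of a runnable example.

```python
def refine_embeddings(emb, annotations, lr=0.1):
    """Nudge embeddings based on user similarity judgments.

    `annotations` is a list of (keyword, neighbor, is_similar) triples:
    marked-similar neighbors are pulled toward the keyword, and
    marked-dissimilar neighbors are pushed away.
    """
    emb = {word: list(vec) for word, vec in emb.items()}  # copy
    for keyword, neighbor, is_similar in annotations:
        sign = 1.0 if is_similar else -1.0
        emb[neighbor] = [
            n + sign * lr * (k - n)
            for k, n in zip(emb[keyword], emb[neighbor])
        ]
    return emb
```

The intuition matches the abstract: annotations on a few salient keywords locally reshape the embedding space so task-relevant neighbors move closer and spurious neighbors move apart.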
Cold-start Active Learning through Self-supervised Language Modeling
Michelle Yuan | Hsuan-Tien Lin | Jordan Boyd-Graber
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Active learning strives to reduce annotation costs by choosing the most critical examples to label. Typically, the active learning strategy is contingent on the classification model. For instance, uncertainty sampling depends on poorly calibrated model confidence scores. In the cold-start setting, active learning is impractical because of model instability and data scarcity. Fortunately, modern NLP provides an additional source of information: pre-trained language models. The pre-training loss can find examples that surprise the model and should be labeled for efficient fine-tuning. Therefore, we treat the language modeling loss as a proxy for classification uncertainty. With BERT, we develop a simple strategy based on the masked language modeling loss that minimizes labeling costs for text classification. Compared to other baselines, our approach reaches higher accuracy in fewer sampling iterations and with less computation time.
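One simplified reading of the idea above can be sketched in a few lines: score each unlabeled example with a pre-trained masked LM's loss and send the most "surprising" ones to annotators first. The paper's actual strategy is built on BERT's masked-LM loss; `surprisal` below is a toy stand-in so the selection logic is runnable without a model, and both function names are hypothetical.

```python
def surprisal(text: str) -> float:
    # Toy stand-in for a masked-LM loss: pretend rare domain terms are
    # harder for the pre-trained model to predict than common words.
    rare_terms = {"arraignment", "tort", "estoppel"}
    words = text.lower().split()
    return sum(3.0 if w in rare_terms else 1.0 for w in words) / len(words)

def cold_start_select(pool: list[str], budget: int) -> list[str]:
    # Rank the unlabeled pool by surrogate uncertainty (highest LM loss
    # first) and pick the top `budget` examples for annotation.
    return sorted(pool, key=surprisal, reverse=True)[:budget]
```

No trained classifier is needed to make the first selection, which is exactly what makes this viable in the cold-start setting.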
Co-authors
- Jordan Lee Boyd-Graber 4
- Yassine Benajiba 3
- Benjamin Van Durme 3
- Monica Sunkara 2
- Yi Zhang 2
- Miguel Ballesteros 1
- Andrew Blair-Stanek 1
- Daniele Bonadiman 1
- Jason Cai 1
- Anna Currey 1
- Francis Ferraro 1
- Leah Findlater 1
- Aparna Garimella 1
- Sandip Ghoshal 1
- Karthikeyan K 1
- Hsuan-Tien Lin 1
- Elman Mansimov 1
- Katerina Margatina 1
- Chandler May 1
- Ishani Mondal 1
- Anandhavelu N 1
- Anurag Pratik 1
- Amir H. Rezaeian 1
- Rana Salama 1
- Jyotika Singh 1
- Weiyi Sun 1
- Yao-Ting Wang 1
- Patrick Xia 1
- Mozhi Zhang 1