2025
From Selection to Generation: A Survey of LLM-based Active Learning
Yu Xia | Subhojyoti Mukherjee | Zhouhang Xie | Junda Wu | Xintong Li | Ryan Aponte | Hanjia Lyu | Joe Barrow | Hongjie Chen | Franck Dernoncourt | Branislav Kveton | Tong Yu | Ruiyi Zhang | Jiuxiang Gu | Nesreen K. Ahmed | Yu Wang | Xiang Chen | Hanieh Deilamsalehy | Sungchul Kim | Zhengmian Hu | Yue Zhao | Nedim Lipka | Seunghyun Yoon | Ting-Hao Kenneth Huang | Zichao Wang | Puneet Mathur | Soumyabrata Pal | Koyel Mukherjee | Zhehao Zhang | Namyong Park | Thien Huu Nguyen | Jiebo Luo | Ryan A. Rossi | Julian McAuley
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Active Learning (AL) has been a powerful paradigm for improving model efficiency and performance by selecting the most informative data points for labeling and training. In recent active learning frameworks, Large Language Models (LLMs) have been employed not only for selection but also for generating entirely new data instances and providing more cost-effective annotations. Motivated by the increasing importance of high-quality data and efficient model training in the era of LLMs, we present a comprehensive survey on LLM-based Active Learning. We introduce an intuitive taxonomy that categorizes these techniques and discuss the transformative roles LLMs can play in the active learning loop. We further examine the impact of AL on LLM learning paradigms and its applications across various domains. Finally, we identify open challenges and propose future research directions. This survey aims to serve as an up-to-date resource for researchers and practitioners seeking to gain an intuitive understanding of LLM-based AL techniques and deploy them to new applications.
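The loop below is a minimal, illustrative sketch of how the three LLM roles surveyed here (selection, generation, annotation) slot into one active learning round; the helpers llm_select, llm_generate, and llm_annotate are hypothetical placeholders, not an API from the paper.

```python
# Illustrative single round of LLM-based active learning (not code from the survey).
# llm_select / llm_generate / llm_annotate are hypothetical stand-ins for LLM calls.
import random

def llm_select(pool, k):
    # Role 1 (selection): rank unlabeled items by informativeness; random as a placeholder.
    return random.sample(pool, min(k, len(pool)))

def llm_generate(labeled, k):
    # Role 2 (generation): synthesize new instances informed by already-labeled data.
    return [f"paraphrase of: {text}" for text, _ in labeled[:k]]

def llm_annotate(items):
    # Role 3 (annotation): produce (cheaper) labels for the chosen items.
    return [(item, "llm-provided label") for item in items]

def active_learning_round(pool, labeled, budget=4):
    chosen = llm_select(pool, budget) + llm_generate(labeled, budget // 2)
    labeled = labeled + llm_annotate(chosen)
    pool = [x for x in pool if x not in chosen]
    return pool, labeled  # a task model would be retrained on `labeled` here

pool = [f"unlabeled example {i}" for i in range(10)]
pool, labeled = active_learning_round(pool, [("seed example", "gold label")])
print(len(pool), len(labeled))
```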
GUICourse: From General Vision Language Model to Versatile GUI Agent
Wentong Chen | Junbo Cui | Jinyi Hu | Yujia Qin | Junjie Fang | Yue Zhao | Chongyi Wang | Jun Liu | Guirong Chen | Yupeng Huo | Yuan Yao | Yankai Lin | Zhiyuan Liu | Maosong Sun
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Utilizing Graphical User Interfaces (GUIs) for human-computer interaction is essential for accessing various digital tools. Recent advancements in Vision Language Models (VLMs) reveal significant potential for developing versatile agents that assist humans in navigating GUIs. However, current VLMs face challenges related to fundamental abilities, such as OCR and grounding, as well as a lack of knowledge about the functionalities and control methods of GUI elements. These limitations hinder their effectiveness as practical GUI agents. To address these challenges, we introduce GUICourse, a series of datasets for training visual-based GUI agents using general VLMs. First, we enhance the OCR and grounding capabilities of VLMs using the GUIEnv dataset. Next, we enrich the GUI knowledge of VLMs using the GUIAct and GUIChat datasets. Our experiments demonstrate that even a small-sized GUI agent (with 3.1 billion parameters) performs effectively on both single-step and multi-step GUI tasks. We further finetune our GUI agents on other GUI tasks with different action spaces (AITW and Mind2Web), and the results show that our agents outperform their baseline VLMs. Additionally, we analyze the impact of OCR and grounding capabilities through an ablation study, revealing a positive correlation with GUI navigation ability.
Convert Language Model into a Value-based Strategic Planner
Xiaoyu Wang | Yue Zhao | Qingqing Gu | Zhonglin Jiang | Yong Chen | Luo Ji
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Emotional support conversation (ESC) aims to alleviate the emotional distress of individuals through effective conversations. Although large language models (LLMs) have made remarkable progress on ESC, most of these studies do not frame the dialogue from a state-model perspective and therefore provide a suboptimal solution for long-term satisfaction. To address this issue, we apply Q-learning to LLMs and propose a framework called straQ*. Our framework allows a plug-and-play LLM to bootstrap planning during ESC, determine the optimal strategy based on long-term returns, and finally guide the LLM to respond. Extensive experiments on ESC datasets suggest that straQ* outperforms many baselines, including direct inference, self-refine, chain of thought, finetuning, and finite state machines.
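As a rough illustration of value-based strategy planning (not the authors' straQ* code; estimate_q and generate_reply are hypothetical placeholders), the planner scores each candidate support strategy by its estimated long-term return and conditions the response LLM on the best one:

```python
# Hedged sketch of value-based strategy selection for emotional support dialogue.
# This is not the straQ* implementation; estimate_q and generate_reply are placeholders.
STRATEGIES = ["Question", "Reflection of Feelings", "Providing Suggestions", "Affirmation"]

def estimate_q(dialogue_state: str, strategy: str) -> float:
    # Assumed: an LLM-backed value function scoring the long-term return of `strategy`.
    return (hash((dialogue_state, strategy)) % 100) / 100.0  # dummy score

def generate_reply(dialogue_state: str, strategy: str) -> str:
    # Placeholder for the plug-and-play response LLM conditioned on the chosen strategy.
    return f"[{strategy}] A supportive reply to: {dialogue_state}"

def plan_and_respond(dialogue_state: str) -> str:
    best = max(STRATEGIES, key=lambda s: estimate_q(dialogue_state, s))
    return generate_reply(dialogue_state, strategy=best)

print(plan_and_respond("User: I failed my exam and I feel hopeless."))
```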
AD-LLM: Benchmarking Large Language Models for Anomaly Detection
Tiankai Yang | Yi Nian | Li Li | Ruiyao Xu | Yuangang Li | Jiaqi Li | Zhuo Xiao | Xiyang Hu | Ryan A. Rossi | Kaize Ding | Xia Hu | Yue Zhao
Findings of the Association for Computational Linguistics: ACL 2025
Anomaly detection (AD) is an important machine learning task with many real-world uses, including fraud detection, medical diagnosis, and industrial monitoring. Within natural language processing (NLP), AD helps detect issues like spam, misinformation, and unusual user activity. Although large language models (LLMs) have had a strong impact on tasks such as text generation and summarization, their potential in AD has not been studied enough. This paper introduces AD-LLM, the first benchmark that evaluates how LLMs can help with NLP anomaly detection. We examine three key tasks: (i) zero-shot detection, using LLMs’ pre-trained knowledge to perform AD without task-specific training; (ii) data augmentation, generating synthetic data and category descriptions to improve AD models; and (iii) model selection, using LLMs to suggest unsupervised AD models. Through experiments with different datasets, we find that LLMs can work well in zero-shot AD, that carefully designed augmentation methods are useful, and that explaining model selection for specific datasets remains challenging. Based on these results, we outline six future research directions on LLMs for AD.
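Task (i) above reduces to prompting; a minimal zero-shot sketch might look like the following (the prompt wording and the call_llm wrapper are assumptions, not the benchmark's exact setup):

```python
# Minimal zero-shot anomaly detection prompt in the spirit of task (i).
# The prompt text and call_llm wrapper are illustrative assumptions.
def call_llm(prompt: str) -> str:
    # Stand-in for any chat-completion API; returns a canned answer here.
    return "normal, confidence 0.2"

def zero_shot_ad(text: str, normal_class: str) -> str:
    prompt = (
        f"The normal class of this dataset is: {normal_class}.\n"
        f"Text: {text}\n"
        "Is this text anomalous with respect to the normal class? "
        "Answer 'anomaly' or 'normal' with a confidence between 0 and 1."
    )
    return call_llm(prompt)  # parse into a numeric score downstream

print(zero_shot_ad("Congratulations, you won a free prize, click here!", "movie reviews"))
```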
NLP-ADBench: NLP Anomaly Detection Benchmark
Yuangang Li | Jiaqi Li | Zhuo Xiao | Tiankai Yang | Yi Nian | Xiyang Hu | Yue Zhao
Findings of the Association for Computational Linguistics: EMNLP 2025
Anomaly detection (AD) is an important machine learning task with applications in fraud detection, content moderation, and user behavior analysis. However, AD is relatively understudied in a natural language processing (NLP) context, limiting its effectiveness in detecting harmful content, phishing attempts, and spam reviews. We introduce NLP-ADBench, the most comprehensive NLP anomaly detection (NLP-AD) benchmark to date, which includes eight curated datasets and 19 state-of-the-art algorithms. These span 3 end-to-end methods and 16 two-step approaches that adapt classical, non-AD methods to language embeddings from BERT and OpenAI. Our empirical results show that no single model dominates across all datasets, indicating a need for automated model selection. Moreover, two-step methods with transformer-based embeddings consistently outperform specialized end-to-end approaches, with OpenAI embeddings outperforming those of BERT. We release NLP-ADBench at https://github.com/USC-FORTIS/NLP-ADBench, providing a unified framework for NLP-AD and supporting future investigations.
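A two-step baseline of the kind benchmarked here can be sketched as follows; the embedding model, detector choice, and example texts are our assumptions rather than the benchmark's configuration:

```python
# Sketch of the two-step recipe: embed texts, then run a classical anomaly detector.
# Model name, detector, and data are illustrative assumptions, not NLP-ADBench code.
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import IsolationForest

texts = [
    "The plot was slow but the acting carried the film.",
    "A solid soundtrack and decent pacing overall.",
    "FREE $$$ CLICK THIS LINK NOW TO CLAIM YOUR PRIZE!!!",
]

# Step 1: map each text to a dense embedding (BERT- or OpenAI-style in the benchmark).
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

# Step 2: fit an off-the-shelf, non-NLP detector on the embeddings.
detector = IsolationForest(random_state=0).fit(embeddings)
scores = -detector.score_samples(embeddings)  # higher score = more anomalous
print(scores)
```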
Dream to Chat: Model-based Reinforcement Learning on Dialogues with User Belief Modeling
Yue Zhao | Xiaoyu Wang | Dan Wang | Zhonglin Jiang | Qingqing Gu | Teng Chen | Ningyuan Xi | Jinxian Qu | Yong Chen | Luo Ji
Findings of the Association for Computational Linguistics: EMNLP 2025
World models have been widely utilized in robotics, gaming, and autonomous driving. However, their applications to natural language tasks remain relatively limited. In this paper, we construct a dialogue world model that predicts future utterances and user beliefs, including emotion, sentiment, and intention. We propose a framework called DreamCUB, which shows that both this user belief modeling and the full dialogue world model can be established by LLM post-training. By defining a POMDP, we apply model-based reinforcement learning to the dialogue system and solve it by maximizing the information bottleneck. Experiments show that the pretrained dialogue world model achieves state-of-the-art performance on emotion classification and sentiment identification, while dialogue quality is also enhanced by joint training of the policy, critic, and dialogue world model. Further analysis reveals that DreamCUB maintains a reasonable exploration-exploitation balance and transfers well to out-of-domain scenarios such as empathetic dialogues.
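The interface of such a dialogue world model can be pictured as a single predictive call that returns both the next utterance and the user's belief state; the class and field names below are illustrative assumptions, not taken from the paper:

```python
# Illustrative interface for a dialogue world model with user belief prediction.
# Class and field names are assumptions; a post-trained LLM would back `predict`.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UserBelief:
    emotion: str
    sentiment: str
    intention: str

class DialogueWorldModel:
    def predict(self, history: List[str]) -> Tuple[str, UserBelief]:
        # Placeholder outputs; the real model would be an LLM after post-training.
        next_utterance = "That sounds really hard. Do you want to talk about it?"
        belief = UserBelief(emotion="sadness", sentiment="negative", intention="seek comfort")
        return next_utterance, belief

utterance, belief = DialogueWorldModel().predict(["I lost my job today."])
print(utterance, belief)
```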
Treble Counterfactual VLMs: A Causal Approach to Hallucination
Li Li | Jiashu Qu | Linxin Song | Yuxiao Zhou | Yuehan Qin | Tiankai Yang | Yue Zhao
Findings of the Association for Computational Linguistics: EMNLP 2025
Vision-Language Models (VLMs) excel at tasks such as image captioning and visual question answering but frequently produce hallucinated outputs that deviate from the actual visual input or prompt. While prior work links hallucination to biases in data or representation, their causal origins remain unclear. We propose a causal framework to analyze and mitigate hallucination in VLMs. Our key hypothesis is that hallucinations arise from unintended direct influences of the vision or text modality that bypass the intended multi-modal fusion. To examine this, we construct a causal graph of the VLM and use counterfactual analysis to estimate the Natural Direct Effect (NDE) of each modality and their interaction. By systematically identifying and suppressing these direct effects, we encourage outputs that are more faithfully grounded in true cross-modal reasoning. Our approach consists of three steps: (1) designing structural causal graphs to distinguish correct fusion pathways from spurious modality shortcuts, (2) estimating modality-specific and cross-modal NDE using perturbed image representations, hallucinated text embeddings, and degraded visual inputs, and (3) implementing a test-time intervention module to dynamically adjust the model’s dependence on each modality. Experimental results demonstrate that our method significantly reduces hallucination while preserving task performance, providing a robust and interpretable framework for improving VLM reliability.
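For reference, the textbook counterfactual definition of the Natural Direct Effect of switching an input from x to x' while the fused (mediator) representation M is held at its value under x is given below; the paper's modality-specific estimators are analogous, and treating them as this exact form is our assumption.

```latex
% Textbook (Pearl-style) Natural Direct Effect; the paper's modality-specific
% estimators are assumed to follow this form.
\mathrm{NDE}_{x \to x'}(Y) \;=\; \mathbb{E}\big[\,Y_{x',\,M_x}\,\big] \;-\; \mathbb{E}\big[\,Y_{x,\,M_x}\,\big]
```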
TRUSTEVAL: A Dynamic Evaluation Toolkit on Trustworthiness of Generative Foundation Models
Yanbo Wang | Jiayi Ye | Siyuan Wu | Chujie Gao | Yue Huang | Xiuying Chen | Yue Zhao | Xiangliang Zhang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)
Ensuring the trustworthiness of Generative Foundation Models (GenFMs) is a pressing challenge as they gain widespread use. Existing evaluation toolkits are often limited in scope, dynamism, and flexibility. This paper introduces TRUSTEVAL, a dynamic and comprehensive toolkit designed for evaluating GenFMs across various dimensions. TRUSTEVAL supports both dynamic dataset generation and evaluation, offering advanced features including comprehensiveness, usability, and flexibility. TRUSTEVAL integrates diverse generative models, datasets, evaluation methods, metrics, inference efficiency enhancement, and evaluation report generation. Through case studies, we demonstrate TRUSTEVAL’s potential to advance the trustworthiness evaluation of GenFMs.
2024
LLM Factoscope: Uncovering LLMs’ Factual Discernment through Measuring Inner States
Jinwen He | Yujia Gong | Zijin Lin | Cheng’an Wei | Yue Zhao | Kai Chen
Findings of the Association for Computational Linguistics: ACL 2024
Large Language Models (LLMs) have revolutionized various domains with extensive knowledge and creative capabilities. However, a critical issue with LLMs is their tendency to produce outputs that diverge from factual reality. This phenomenon is particularly concerning in sensitive applications such as medical consultation and legal advice, where accuracy is paramount. Inspired by human lie detectors using physiological responses, we introduce the LLM Factoscope, a novel Siamese network-based model that leverages the inner states of LLMs for factual detection. Our investigation reveals distinguishable patterns in LLMs’ inner states when generating factual versus non-factual content. We demonstrate its effectiveness across various architectures, achieving over 96% accuracy on our custom-collected factual detection dataset. Our work opens a new avenue for utilizing LLMs’ inner states for factual detection and encourages further exploration into LLMs’ inner workings for enhanced reliability and transparency.
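A toy version of the Siamese idea, scoring how close a candidate generation's inner states are to those of a known-factual anchor, might look like this (the architecture and dimensions are assumptions, not the authors' model):

```python
# Toy Siamese scorer over LLM inner-state vectors, in the spirit of the Factoscope.
# Architecture, dimensions, and usage are illustrative assumptions.
import torch
import torch.nn as nn

class SiameseScorer(nn.Module):
    def __init__(self, hidden_dim: int = 4096, embed_dim: int = 128):
        super().__init__()
        # One shared encoder applied to both inner-state vectors.
        self.encoder = nn.Sequential(
            nn.Linear(hidden_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, state_a: torch.Tensor, state_b: torch.Tensor) -> torch.Tensor:
        # Smaller distance = the candidate's inner states resemble the factual anchor's.
        return torch.norm(self.encoder(state_a) - self.encoder(state_b), dim=-1)

scorer = SiameseScorer()
factual_anchor, candidate = torch.randn(1, 4096), torch.randn(1, 4096)
print(scorer(factual_anchor, candidate))
```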
2022
Clues Before Answers: Generation-Enhanced Multiple-Choice QA
Zixian Huang | Ao Wu | Jiaying Zhou | Yu Gu | Yue Zhao | Gong Cheng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.
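The clue-then-read pipeline can be summarized in two calls; generate_clue and score_option below are hypothetical stand-ins for the encoder-decoder's two roles, not the released GenMC code:

```python
# Two-step sketch of generation-enhanced MCQA: generate a clue, then read with it.
# generate_clue and score_option are hypothetical placeholders, not GenMC's API.
def generate_clue(question: str) -> str:
    # The decoder would generate background knowledge here; canned text as a placeholder.
    return "Plants release oxygen during photosynthesis."

def score_option(question: str, clue: str, option: str) -> float:
    # The reader scores each option conditioned on the question and the clue.
    return float(len(set(option.split()) & set(clue.split())))  # dummy overlap score

def answer(question: str, options: list) -> str:
    clue = generate_clue(question)
    return max(options, key=lambda o: score_option(question, clue, o))

print(answer("What gas do plants give off?", ["carbon dioxide", "oxygen", "nitrogen"]))
```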
2018
Document Embedding Enhanced Event Detection with Hierarchical and Supervised Attention
Yue Zhao | Xiaolong Jin | Yuanzhuo Wang | Xueqi Cheng
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Document-level information is very important for event detection even at sentence level. In this paper, we propose a novel Document Embedding Enhanced Bi-RNN model, called DEEB-RNN, to detect events in sentences. This model first learns event detection oriented embeddings of documents through a hierarchical and supervised attention based RNN, which pays word-level attention to event triggers and sentence-level attention to those sentences containing events. It then uses the learned document embedding to enhance another bidirectional RNN model to identify event triggers and their types in sentences. Through experiments on the ACE-2005 dataset, we demonstrate the effectiveness and merits of the proposed DEEB-RNN model via comparison with state-of-the-art methods.
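A compact sketch of the hierarchical attention used to build the event-detection-oriented document embedding (word-level attention within sentences, then sentence-level attention over the document); layer sizes, the GRU choice, and the attention form are our assumptions, and the supervision of the attention weights is omitted.

```python
# Sketch of a hierarchical attention document encoder in the spirit of DEEB-RNN.
# Layer sizes, GRU choice, and attention form are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalDocEncoder(nn.Module):
    def __init__(self, emb_dim=100, hid=64):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)
        self.sent_rnn = nn.GRU(2 * hid, hid, bidirectional=True, batch_first=True)
        self.word_attn = nn.Linear(2 * hid, 1)  # word-level attention (trigger-focused)
        self.sent_attn = nn.Linear(2 * hid, 1)  # sentence-level attention (event-focused)

    def forward(self, doc):  # doc: (sentences, words, emb_dim)
        w_out, _ = self.word_rnn(doc)
        w_alpha = torch.softmax(self.word_attn(w_out), dim=1)
        sent_vecs = (w_alpha * w_out).sum(dim=1)              # (sentences, 2*hid)
        s_out, _ = self.sent_rnn(sent_vecs.unsqueeze(0))
        s_alpha = torch.softmax(self.sent_attn(s_out), dim=1)
        return (s_alpha * s_out).sum(dim=1).squeeze(0)        # document embedding

doc = torch.randn(5, 12, 100)  # 5 sentences, 12 words each, 100-dim word embeddings
print(HierarchicalDocEncoder()(doc).shape)  # torch.Size([128])
```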
2015
Interactive Second Language Learning from News Websites
Tao Chen | Naijia Zheng | Yue Zhao | Muthu Kumar Chandrasekaran | Min-Yen Kan
Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications