Liam Van Der Poel
Also published as: Liam van der Poel
2023
An Active Learning Pipeline for NLU Error Detection in Conversational Agents
Damian Pascual | Aritz Bercher | Akansha Bhardwaj | Mingbo Cui | Dominic Kohler | Liam Van Der Poel | Paolo Rosso
Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)
High-quality labeled data is paramount to the performance of modern machine learning models. However, annotating data is a time-consuming and costly process that requires human experts to examine large collections of raw data. For conversational agents in production settings with access to large amounts of user-agent conversations, the challenge is to decide what data should be annotated first. We consider the Natural Language Understanding (NLU) component of a conversational agent deployed in a real-world setup with limited resources. We present an active learning pipeline for offline detection of classification errors that leverages two strong classifiers. Then, we perform topic modeling on the potentially misclassified samples to ease data analysis and to reveal error patterns. In our experiments on a real-world dataset, we show that by using our method to prioritize data annotation we reach 100% of the performance while annotating only 36% of the data. Finally, we present an analysis of some of the error patterns revealed and argue that our pipeline is a valuable tool to detect critical errors and reduce the workload of annotators.
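The abstract describes the pipeline only at a high level. The sketch below is a minimal illustration of that general idea, not the paper's implementation: two classifiers (here a TF-IDF logistic regression and a linear SVM, chosen purely for illustration) are trained on the labeled data, production samples on which they disagree are flagged as potentially misclassified, and topic modeling (here LDA, also an assumption) is run on the flagged samples to surface error patterns.

```python
# Hedged sketch: flag potentially misclassified NLU samples via disagreement
# between two classifiers, then topic-model the flagged samples. The classifier
# choices, the disagreement criterion, and the use of LDA are illustrative
# assumptions; they are not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.decomposition import LatentDirichletAllocation

def flag_and_cluster(train_texts, train_intents, unlabeled_texts, n_topics=5):
    # Shared bag-of-words features for both classifiers.
    vec = TfidfVectorizer()
    X_train = vec.fit_transform(train_texts)
    X_unl = vec.transform(unlabeled_texts)

    # Two classifiers trained on the same labeled data.
    clf_a = LogisticRegression(max_iter=1000).fit(X_train, train_intents)
    clf_b = LinearSVC().fit(X_train, train_intents)

    # Disagreement on production traffic marks samples to prioritize for annotation.
    pred_a, pred_b = clf_a.predict(X_unl), clf_b.predict(X_unl)
    flagged = [t for t, a, b in zip(unlabeled_texts, pred_a, pred_b) if a != b]
    if not flagged:
        return flagged, []

    # Topic modeling (here LDA) over the flagged samples to group error patterns.
    cvec = CountVectorizer()
    counts = cvec.fit_transform(flagged)
    lda = LatentDirichletAllocation(n_components=min(n_topics, len(flagged)),
                                    random_state=0).fit(counts)
    vocab = cvec.get_feature_names_out()
    topics = [[vocab[i] for i in comp.argsort()[-5:][::-1]] for comp in lda.components_]
    return flagged, topics
```

In this reading of the abstract, the flagged samples (grouped by topic) would be handed to annotators first, which is what yields the reported reduction in annotation workload.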
2022
Mutual Information Alleviates Hallucinations in Abstractive Summarization
Liam van der Poel | Ryan Cotterell | Clara Meister
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Despite significant progress in the quality of language generated from abstractive summarization models, these models still exhibit the tendency to hallucinate, i.e., output content not supported by the source document. A number of works have tried to fix—or at least uncover the source of—the problem with limited success. In this paper, we identify a simple criterion under which models are significantly more likely to assign more probability to hallucinated content during generation: high model uncertainty. This finding offers a potential explanation for hallucinations: models default to favoring text with high marginal probability, i.e., high-frequency occurrences in the training set, when uncertain about a continuation. It also motivates possible routes for real-time intervention during decoding to prevent such hallucinations. We propose a decoding strategy that switches to optimizing for pointwise mutual information of the source and target token—rather than purely the probability of the target token—when the model exhibits uncertainty. Experiments on the dataset show that our method decreases the probability of hallucinated tokens while maintaining the Rouge and BERT-S scores of top-performing decoding strategies.
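As a rough illustration of the decoding criterion described above, the sketch below scores a single decoding step: when the entropy of the conditional next-token distribution exceeds a threshold, candidates are ranked by pointwise mutual information (conditional log-probability minus the marginal log-probability of the token); otherwise they are ranked by conditional log-probability alone. The entropy threshold and the use of a source-free distribution as the marginal are assumptions made for illustration, not the paper's exact formulation.

```python
# Hedged sketch of uncertainty-triggered PMI scoring at one decoding step.
# p_cond approximates p(y_t | x, y_<t); p_marg approximates p(y_t | y_<t).
# The threshold value and marginal estimate are illustrative assumptions.
import numpy as np

def step_scores(p_cond, p_marg, entropy_threshold=3.0):
    p_cond = np.asarray(p_cond, dtype=float)
    p_marg = np.asarray(p_marg, dtype=float)

    # Model uncertainty measured as the entropy of the conditional distribution.
    entropy = -np.sum(p_cond * np.log(p_cond + 1e-12))

    if entropy > entropy_threshold:
        # High uncertainty: PMI(x; y_t) = log p(y_t | x, y_<t) - log p(y_t | y_<t),
        # which down-weights tokens that are merely frequent under the language
        # model regardless of the source document.
        return np.log(p_cond + 1e-12) - np.log(p_marg + 1e-12)

    # Otherwise fall back to ordinary log-probability scoring.
    return np.log(p_cond + 1e-12)
```

The intended effect, per the abstract, is that under uncertainty the decoder stops defaulting to high-frequency continuations and instead prefers tokens that are specifically supported by the source.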
Co-authors
- Ryan Cotterell 1
- Clara Meister 1
- Damian Pascual 1
- Aritz Bercher 1
- Akansha Bhardwaj 1
- Mingbo Cui 1
- Dominic Kohler 1
- Paolo Rosso 1