2024
Two-tiered Encoder-based Hallucination Detection for Retrieval-Augmented Generation in the Wild
Ilana Zimmerman | Jadin Tredup | Ethan Selfridge | Joseph Bradley
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Detecting hallucinations, where Large Language Models (LLMs) are not factually consistent with a Knowledge Base (KB), is a challenge for Retrieval-Augmented Generation (RAG) systems. Current solutions rely on public datasets to develop prompts or fine-tune a Natural Language Inference (NLI) model. However, these approaches are not focused on developing an enterprise RAG system; they do not consider latency, do not train or evaluate on production data, and do not handle non-verifiable statements such as small talk or questions. To address this, we leverage the customer service conversation data of four large brands to evaluate existing solutions and propose a set of small encoder models trained on a new dataset. We find that the proposed models outperform existing methods, and we highlight the value of combining a small amount of in-domain data with public datasets.
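A minimal sketch of what such a two-tiered check can look like in practice, not the paper's released models: the first tier, here a crude heuristic standing in for the paper's small verifiability encoder, screens out non-verifiable statements, and the second tier uses an off-the-shelf NLI cross-encoder (microsoft/deberta-large-mnli is assumed as a stand-in) to test whether a generated statement is entailed by the retrieved KB passage. The heuristic and the threshold are placeholders.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in NLI cross-encoder; the paper instead trains small encoders on in-domain data.
NLI_NAME = "microsoft/deberta-large-mnli"
nli_tokenizer = AutoTokenizer.from_pretrained(NLI_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(NLI_NAME)

def is_verifiable(statement: str) -> bool:
    # Placeholder for the first-tier encoder: treat questions and greetings as
    # non-verifiable so they are never scored for factual consistency.
    s = statement.strip().lower()
    return not (s.endswith("?") or s in {"hi", "hello", "thanks", "thank you"})

def flag_hallucination(statement: str, kb_passage: str, threshold: float = 0.5) -> bool:
    """True if a verifiable statement is not entailed by the retrieved KB passage."""
    if not is_verifiable(statement):
        return False
    inputs = nli_tokenizer(kb_passage, statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli_model(**inputs).logits.softmax(dim=-1)[0]
    # Locate the entailment class by name rather than hard-coding an index.
    entail_idx = next(i for i, name in nli_model.config.id2label.items()
                      if "entail" in name.lower())
    return probs[entail_idx].item() < threshold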
2023
The economic trade-offs of large language models: A case study
Kristen Howell | Gwen Christian | Pavel Fomitchov | Gitit Kehat | Julianne Marzulla | Leanne Rolston | Jadin Tredup | Ilana Zimmerman | Ethan Selfridge | Joseph Bradley
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. With their ability to handle large context windows, Large Language Models (LLMs) are a natural fit for this use case. However, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model’s utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM — prompt engineering, fine-tuning, and knowledge distillation — using feedback from the brand’s customer service agents. We find that the usability of a model’s responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.
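As a hypothetical illustration of the kind of trade-off such a cost framework captures (the numbers below are invented, not the paper's measurements): if agents can use far more of an expensive model's responses, its cost per usable response can still come out lower than a cheaper model's.

def cost_per_usable_response(inference_cost: float, usability_rate: float) -> float:
    """Inference cost divided by the fraction of responses agents can actually use."""
    return inference_cost / usability_rate

# Invented numbers for illustration only.
small = cost_per_usable_response(inference_cost=0.002, usability_rate=0.15)  # ~$0.0133
large = cost_per_usable_response(inference_cost=0.010, usability_rate=0.85)  # ~$0.0118
print(f"small model: ${small:.4f} per usable response")
print(f"large model: ${large:.4f} per usable response")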
2022
Domain-specific knowledge distillation yields smaller and better models for conversational commerce
Kristen Howell | Jian Wang | Akshay Hazare | Joseph Bradley | Chris Brew | Xi Chen | Matthew Dunn | Beth Hockey | Andrew Maurer | Dominic Widdows
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
We demonstrate that knowledge distillation can be used not only to reduce model size but also to simultaneously adapt a contextual language model to a specific domain. We use Multilingual BERT (mBERT; Devlin et al., 2019) as a starting point and follow the knowledge distillation approach of Sanh et al. (2019) to train a smaller multilingual BERT model that is adapted to the domain at hand. We show that for in-domain tasks, the domain-specific model shows an average 2.3% improvement in F1 score relative to a model distilled on domain-general data. Whereas much previous work with BERT has fine-tuned the encoder weights during task training, we show that the model improvements from distillation on in-domain data persist even when the encoder weights are frozen during task training, allowing a single encoder to support classifiers for multiple tasks and languages.
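A minimal sketch of the frozen-encoder setup described above, assuming a generic distilled checkpoint (distilbert-base-multilingual-cased) as a stand-in for the paper's domain-distilled model: the encoder is frozen and shared, and only lightweight per-task classifier heads are trained. The task names and label counts are made up for illustration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
for p in encoder.parameters():
    p.requires_grad = False  # freeze the distilled encoder; only heads get trained

class TaskHead(torch.nn.Module):
    """A lightweight per-task classifier on top of the shared, frozen encoder."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = torch.nn.Linear(hidden_size, num_labels)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        return self.classifier(last_hidden_state[:, 0])  # pool the [CLS]-position token

# Hypothetical tasks sharing one encoder; heads would be trained on their own task data.
intent_head = TaskHead(encoder.config.hidden_size, num_labels=20)
sentiment_head = TaskHead(encoder.config.hidden_size, num_labels=3)

inputs = tokenizer("Where is my order?", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state
print(intent_head(hidden).shape, sentiment_head(hidden).shape)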