Xuan Guo
2024
Retrieval Augmented Spelling Correction for E-Commerce Applications
Xuan Guo | Rohit Patki | Dante Everaert | Christopher Potts
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
The rapid introduction of new brand names into everyday language poses a unique challenge for e-commerce spelling correction services, which must distinguish genuine misspellings from novel brand names that use unconventional spelling. We seek to address this challenge via Retrieval Augmented Generation (RAG). In this approach, product names are retrieved from a catalog and incorporated into the context used by a large language model (LLM) that has been fine-tuned to do contextual spelling correction. Through quantitative evaluation and qualitative error analyses, we find that the RAG framework improves spelling correction beyond a stand-alone LLM. We also demonstrate the value of additional fine-tuning of the LLM to incorporate retrieved context.
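The retrieve-then-correct pipeline described in the abstract can be sketched minimally. The toy catalog, the lexical retriever (`difflib`), and the prompt format below are illustrative assumptions standing in for the paper's production retrieval system and fine-tuned LLM, not its actual implementation:

```python
import difflib

def retrieve_context(query, catalog, k=3):
    """Retrieve catalog product names lexically similar to query tokens.

    A simple stand-in retriever: real systems would use a scalable
    index over the full catalog. The cutoff value is an assumption.
    """
    names = [p.lower() for p in catalog]
    hits = []
    for token in query.lower().split():
        hits.extend(difflib.get_close_matches(token, names, n=k, cutoff=0.6))
    # Deduplicate while preserving retrieval order.
    seen = set()
    return [h for h in hits if not (h in seen or seen.add(h))]

def build_prompt(query, context):
    """Assemble a context-augmented prompt for a spelling-correction LLM.

    The template is hypothetical; the fine-tuned model call is omitted.
    """
    return (f"Product names: {', '.join(context)}\n"
            f"Query: {query}\n"
            f"Corrected query:")

catalog = ["Nike", "Adidas", "Lululemon", "Hoka"]
query = "nikee running shoes"
ctx = retrieve_context(query, catalog)      # brand-like token matches "nike"
prompt = build_prompt(query, ctx)
```

Retrieved names like `nike` let the downstream model see that `nikee` is close to a real brand rather than a generic misspelling, which is the core intuition behind the RAG framing.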
2023
Multi-teacher Distillation for Multilingual Spelling Correction
Jingfen Zhang | Xuan Guo | Sravan Bodapati | Christopher Potts
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
Accurate spelling correction is a critical step in modern search interfaces, especially in an era of mobile devices and speech-to-text interfaces. For services that are deployed around the world, this poses a significant challenge for multilingual NLP: spelling errors need to be caught and corrected in all languages, and even in queries that use multiple languages. In this paper, we tackle this challenge using multi-teacher distillation. In our approach, a monolingual teacher model is trained for each language/locale, and these individual models are distilled into a single multilingual student model intended to serve all languages/locales. In experiments using open-source data as well as customer data from a worldwide search service, we show that this leads to highly effective spelling correction models that can meet the tight latency requirements of deployed services.
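The core training signal in multi-teacher distillation can be illustrated with a minimal sketch: each locale's teacher produces temperature-softened targets, and the student is penalized by the KL divergence from those targets. The locale routing, toy logits, and temperature below are illustrative assumptions, not the paper's configuration:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def teacher_targets(teacher_logits_by_locale, locale, T=2.0):
    """Soft targets from the monolingual teacher for this locale.

    Each example is routed to the teacher for its language/locale;
    the student trains on the union of these soft labels.
    """
    return softmax(teacher_logits_by_locale[locale], T=T)

def kl_divergence(p, q):
    """KL(p || q): the per-example distillation loss term."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical per-locale teacher logits over the same candidate set.
teachers = {"en": [2.0, 0.5, -1.0], "de": [0.1, 1.5, 0.3]}
student_logits = [1.8, 0.4, -0.8]

target = teacher_targets(teachers, "en")
loss = kl_divergence(target, softmax(student_logits, T=2.0))
```

Because the student sees soft targets from every locale's teacher during training, a single compact model can cover all languages, which is what makes the tight latency budgets of a deployed service attainable.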