Björn Hoffmeister
Also published as: Bjorn Hoffmeister
2022
Low-Resource Adaptation of Open-Domain Generative Chatbots
Greyson Gerhard-Young | Raviteja Anantha | Srinivas Chappidi | Bjorn Hoffmeister
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
Recent work building open-domain chatbots has demonstrated that increasing model size improves performance (Adiwardana et al., 2020; Roller et al., 2020). On the other hand, latency and connectivity considerations dictate moving digital assistants onto the device (Verge, 2021). Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything means the chatbot model must be small enough to fit on the user's device. We demonstrate that low-parameter models can retain their general-knowledge conversational abilities while simultaneously improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks references throughout multi-turn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (Perplexity) and human (SSA: Sensibleness and Specificity Average) evaluation metrics and establish comparable performance while reducing model parameters by 90%.
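The abstract describes generating replies with a compact model and filtering out unsafe or inconsistent candidates. A minimal sketch of that generate-then-filter pattern is below; the off-the-shelf microsoft/DialoGPT-small checkpoint and the toy blocklist are illustrative assumptions, not the paper's actual models or safety filters:

```python
# Sketch of a candidate-generate-then-filter loop with a small dialogue model.
# The checkpoint and blocklist are stand-ins, not the paper's components.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

BLOCKLIST = {"idiot", "stupid"}  # toy stand-in for a real toxicity classifier

def respond(history: str, num_candidates: int = 5) -> str:
    inputs = tokenizer(history + tokenizer.eos_token, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        num_return_sequences=num_candidates,
        max_new_tokens=40,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # causal LM outputs include the prompt; decode only the new tokens
    prompt_len = inputs["input_ids"].shape[-1]
    candidates = [
        tokenizer.decode(o[prompt_len:], skip_special_tokens=True)
        for o in outputs
    ]
    # keep the first candidate that passes the (toy) safety filter
    safe = [c for c in candidates if not set(c.lower().split()) & BLOCKLIST]
    return safe[0] if safe else "I'd rather not answer that."

print(respond("What's your favorite season?"))
```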
2019
Neural Text Normalization with Subword Units
Courtney Mansfield | Ming Sun | Yuzong Liu | Ankur Gandhe | Björn Hoffmeister
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)
Text normalization (TN) is an important step in conversational systems. It converts written text to its spoken form to facilitate speech recognition, natural language understanding, and text-to-speech synthesis. Finite state transducers (FSTs) are commonly used to build grammars that handle text normalization, but translating linguistic knowledge into grammars requires extensive effort. In this paper, we frame TN as a machine translation task and tackle it with sequence-to-sequence (seq2seq) models. Previous research focused on normalizing a word (or phrase) using limited word-level context, whereas our approach directly normalizes full sentences. We find that subword models with additional linguistic features yield the best performance (a word error rate of 0.17%).
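To make the framing concrete, here is a toy sketch of the task setup: parallel written/spoken sentence pairs (the "translation" view of TN) and a minimal byte-pair-encoding learner to show what subword units are. Both the example pairs and the tiny BPE routine are invented for illustration and are not the paper's data or code:

```python
# Illustrative only: TN as translation over parallel written/spoken pairs,
# plus a tiny BPE learner to demonstrate subword units.
from collections import Counter

# toy parallel corpus: written form -> spoken form
pairs = [
    ("Dr. Smith lives at 123 Main St.",
     "doctor smith lives at one twenty three main street"),
    ("The meeting is at 3:30 PM.",
     "the meeting is at three thirty p m"),
]

def merge_pair(word, a, b):
    """Fuse every adjacent occurrence of symbols (a, b) in a symbol list."""
    out, i = [], 0
    while i < len(word):
        if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(word[i])
            i += 1
    return out

def bpe_merges(corpus, num_merges=10):
    """Learn BPE merges: repeatedly fuse the most frequent symbol pair."""
    words = [list(w) + ["</w>"] for line in corpus for w in line.split()]
    merges = []
    for _ in range(num_merges):
        counts = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                counts[(a, b)] += 1
        if not counts:
            break
        (a, b), _ = counts.most_common(1)[0]
        merges.append((a, b))
        words = [merge_pair(w, a, b) for w in words]
    return merges

corpus = [src for src, _ in pairs] + [tgt for _, tgt in pairs]
print(bpe_merges(corpus)[:5])  # learned subword merge operations
```

A seq2seq model trained on such pairs, with both sides segmented into subword units, would then map full written sentences directly to their spoken forms.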
Co-authors
- Greyson Gerhard-Young 1
- Raviteja Anantha 1
- Srinivas Chappidi 1
- Courtney Mansfield 1
- Ming Sun 1
- Yuzong Liu 1
- Ankur Gandhe 1