Leon Zhang
2024
SambaLingo: Teaching Large Language Models New Languages
Zoltan Csaki | Bo Li | Jonathan Lingjie Li | Qiantong Xu | Pian Pawakapan | Leon Zhang | Yun Du | Hengyu Zhao | Changran Hu | Urmish Thakker
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language adaptation, many questions around best practices and methodology remain open. In this paper, we present a comprehensive investigation into the adaptation of LLMs to new languages. Our study covers the key components in this process, including vocabulary extension, direct preference optimization, and the data scarcity problem for human alignment in low-resource languages. We scale these experiments across 9 languages and 2 parameter scales (7B and 70B). We compare our models against Llama 2, Aya-101, XGLM, BLOOM and existing language experts, outperforming all prior published baselines. Additionally, all evaluation code and checkpoints are made public to facilitate future research.
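The abstract lists vocabulary extension among the adaptation steps. As a rough illustration only, the sketch below shows what extending a base tokenizer and resizing the embedding matrix can look like with the Hugging Face transformers API; the base model, the placeholder tokens, and the API choice are assumptions for illustration, not SambaLingo's actual recipe.

```python
# Minimal sketch of vocabulary extension, assuming a Hugging Face workflow.
# The model name and token list are illustrative placeholders only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical new-language tokens mined from a target-language corpus.
new_tokens = ["newlang_token_1", "newlang_token_2"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the newly added token ids have trainable rows.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```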
2023
STEER: Semantic Turn Extension-Expansion Recognition for Voice Assistants
Leon Zhang | Jiarui Lu | Joel Ruben Antony Moniz | Aditya Kulkarni | Dhivya Piraviperumal | Tien Dung Tran | Nick Tzou | Hong Yu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
In the context of a voice assistant system, steering refers to the phenomenon in which a user issues a follow-up command attempting to direct or clarify a previous turn. We propose STEER, a steering detection model that predicts whether a follow-up turn is a user’s attempt to steer the previous command. Constructing a training dataset for steering use cases poses challenges due to the cold-start problem. To overcome this, we developed heuristic rules to sample opt-in usage data, approximating positive and negative samples without any annotation. Our experimental results show promising performance in identifying steering intent, with over 95% accuracy on our sampled data. Moreover, STEER, in conjunction with our sampling strategy, aligns effectively with real-world steering scenarios, as evidenced by its strong zero-shot performance on a human-graded evaluation set. In addition to relying solely on user transcripts as input, we introduce STEER+, an enhanced version of the model. STEER+ utilizes a semantic parse tree to provide more context on out-of-vocabulary words, such as named entities that often occur at the sentence boundary. This further improves model performance, reducing the error rate in domains where entities frequently appear, such as messaging. Lastly, we present a data analysis that highlights the improvement in user experience when voice assistants support steering use cases.
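The abstract frames steering detection as a binary decision over a previous command and its follow-up. As a rough illustration only, the sketch below poses that decision as sequence-pair classification with an off-the-shelf encoder; the model choice, the label convention, and the example turns are assumptions, not the architecture or data reported in the paper, and the classification head here is untrained.

```python
# Minimal sketch of steering detection as sequence-pair classification.
# bert-base-uncased and the label convention (1 = steering) are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = new request, 1 = steering
)

previous_turn = "set a timer for ten minutes"   # hypothetical example
follow_up = "actually make it fifteen"          # hypothetical follow-up

# Encode the (previous turn, follow-up) pair jointly and classify it.
inputs = tokenizer(previous_turn, follow_up, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
is_steering = logits.argmax(dim=-1).item() == 1
print("steering" if is_steering else "new request")
```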