Yuzhuang Xu
2024
UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
Haoyu Wang | Shuo Wang | Yukun Yan | Xujia Wang | Zhiyu Yang | Yuzhuang Xu | Zhenghao Liu | Liner Yang | Ning Ding | Xu Han | Zhiyuan Liu | Maosong Sun
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fine-tuning dataset. Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs. Firstly, we introduce a knowledge-grounded data augmentation approach to elicit more language-specific knowledge of LLMs, improving their ability to serve users from different countries. Moreover, we find modern LLMs possess strong cross-lingual transfer capabilities, thus repeatedly learning identical content in various languages is not necessary. Consequently, we can substantially prune the language-agnostic supervised fine-tuning (SFT) data without any performance degradation, making multilingual SFT more efficient. The resulting UltraLink dataset comprises approximately 1 million samples across five languages (i.e., En, Zh, Ru, Fr, Es), and the proposed data construction method can be easily extended to other languages. UltraLink-LM, which is trained on the UltraLink dataset, outperforms several representative baselines across many tasks.
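The pruning idea in the abstract (keep all language-specific data, but subsample the language-agnostic data that cross-lingual transfer makes redundant) can be illustrated with a minimal sketch. This is not the authors' released pipeline; the field names ("type", "lang") and the keep ratio are illustrative assumptions.

```python
# Minimal sketch: prune language-agnostic SFT examples so identical content is
# not repeated across every language, while keeping all language-specific,
# knowledge-grounded examples in full. Field names and ratio are assumptions.
import random

def prune_multilingual_sft(samples, keep_ratio=0.25, seed=0):
    """samples: list of dicts like {"lang": "fr", "type": "agnostic" or "specific", ...}"""
    rng = random.Random(seed)
    pruned = []
    for s in samples:
        if s["type"] == "specific":
            pruned.append(s)                # language-specific data is kept in full
        elif rng.random() < keep_ratio:     # language-agnostic data is subsampled,
            pruned.append(s)                # relying on cross-lingual transfer
    return pruned
```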
Pluggable Neural Machine Translation Models via Memory-augmented Adapters
Yuzhuang Xu | Shuo Wang | Peng Li | Xuebo Liu | Xiaolong Wang | Weidong Liu | Yang Liu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Although neural machine translation (NMT) models perform well in the general domain, it remains rather challenging to control their generation behavior to satisfy the requirement of different users. Given the expensive training cost and the data scarcity challenge of learning a new model from scratch for each user requirement, we propose a memory-augmented adapter to steer pretrained NMT models in a pluggable manner. Specifically, we construct a multi-granular memory based on the user-provided text samples and propose a new adapter architecture to combine the model representations and the retrieved results. We also propose a training strategy using memory dropout to reduce spurious dependencies between the NMT model and the memory. We validate our approach on both style- and domain-specific experiments and the results indicate that our method can outperform several representative pluggable baselines.
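A rough sketch of the adapter idea described in the abstract is given below: an adapter fuses the NMT model's hidden states with retrieved memory vectors, and the memory is randomly dropped during training ("memory dropout") to reduce spurious dependencies. Layer sizes, the fusion design, and the dropout rate here are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal PyTorch sketch of a memory-augmented, pluggable adapter with memory dropout.
import torch
import torch.nn as nn

class MemoryAugmentedAdapter(nn.Module):
    def __init__(self, d_model=512, d_bottleneck=128, memory_dropout=0.3):
        super().__init__()
        self.memory_dropout = memory_dropout
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.down = nn.Linear(2 * d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, hidden, memory):
        # hidden: (batch, tgt_len, d_model); memory: (batch, mem_len, d_model)
        if self.training and torch.rand(()) < self.memory_dropout:
            mem_out = torch.zeros_like(hidden)              # drop the memory this step
        else:
            mem_out, _ = self.attn(hidden, memory, memory)  # attend over retrieved memory
        fused = self.down(torch.cat([hidden, mem_out], dim=-1))
        return hidden + self.up(torch.relu(fused))          # residual, pluggable update

hidden = torch.randn(2, 10, 512)
memory = torch.randn(2, 32, 512)
out = MemoryAugmentedAdapter()(hidden, memory)  # shape: (2, 10, 512)
```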