Wei Xiong


2024

LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Shizhe Diao | Rui Pan | Hanze Dong | KaShun Shum | Jipeng Zhang | Wei Xiong | Tong Zhang
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)

Foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-domain and specialized-task applications, where a step of domain- and task-aware finetuning is still required to obtain scientific language models. As the number of available foundation models and specialized tasks keeps growing, the job of training scientific language models becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the domain- and task-aware finetuning of general foundation models. LMFlow offers a complete finetuning workflow for a foundation model to support specialized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, inference acceleration, long context generalization, model customization, and even multimodal finetuning, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
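A minimal sketch of the kind of parameter-efficient finetuning step the abstract refers to, written with Hugging Face Transformers and PEFT rather than LMFlow's own API; the base model name and LoRA hyperparameters are illustrative placeholders, not values from the paper.

```python
# Sketch: parameter-efficient finetuning (LoRA) of a causal LM.
# Illustrates the workflow a toolkit like LMFlow automates; this is
# NOT LMFlow's API, and "gpt2" / the LoRA settings are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Wrap the base model so only the low-rank adapter weights are trainable.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```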

2022

ZhichunRoad at SemEval-2022 Task 2: Adversarial Training and Contrastive Learning for Multiword Representations
Xuange Cui | Wei Xiong | Songlin Wang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

This paper presents our contribution to SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding. We explore the impact of three different pre-trained multilingual language models in Subtask A. To enhance model generalization and robustness, we use the exponential moving average (EMA) method and an adversarial attack strategy. In Subtask B, we add an effective cross-attention module for modeling the relationships between two sentences. We jointly train the model with a contrastive learning objective and employ a momentum contrast to enlarge the number of negative pairs. Additionally, we use the alignment and uniformity properties to measure the quality of sentence embeddings. Our approach obtained competitive results in both subtasks.
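A short sketch of the alignment and uniformity measures (Wang &amp; Isola, 2020) that the abstract cites for judging sentence-embedding quality; the batch shapes and the constants alpha=2 and t=2 are the commonly used defaults, not values taken from this paper.

```python
# Sketch: alignment and uniformity of L2-normalized embeddings.
# Lower values are better for both measures.
import torch
import torch.nn.functional as F

def alignment(x: torch.Tensor, y: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    """Mean powered distance between embeddings of positive pairs."""
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the average Gaussian potential over all embedding pairs."""
    x = F.normalize(x, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Example with random stand-ins for two views of 128 sentence embeddings.
emb_a, emb_b = torch.randn(128, 768), torch.randn(128, 768)
print(alignment(emb_a, emb_b).item(), uniformity(emb_a).item())
```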