Chuancheng Lv
2024
HyperLoRA: Efficient Cross-task Generalization via Constrained Low-Rank Adapters Generation
Chuancheng Lv | Lei Li | Shitou Zhang | Gang Chen | Fanchao Qi | Ningyu Zhang | Hai-Tao Zheng
Findings of the Association for Computational Linguistics: EMNLP 2024
Adapting pre-trained language models (PLMs) for cross-task generalization is a crucial research area within the field of NLP. While fine-tuning and in-context learning are effective approaches for adapting PLMs to emerging tasks, they can be costly and inefficient. Recently, some researchers have focused on achieving efficient task adaptation via hypernetworks: meta-networks that generate task-specific weights from task-oriented information without any optimization. However, the training of hypernetworks is often unstable, since the optimization signal is not straightforward and the task information is not sufficiently representative. Moreover, previous works train hypernetworks on general corpora, which struggle with few-shot adaptation. To address these issues, we introduce HyperLoRA, a hypernetwork for LoRA parameter generation that involves hypernetwork pre-training on instruction-following data and generalization fine-tuning on sparse task data. Furthermore, we utilize a constrained training loss and a gradient-based demonstration selection strategy to enhance training stability and performance. Experimental results and analysis across four benchmark datasets (P3, S-NI, BBH, and SuperGLUE) demonstrate that the proposed approach has flexible generalization ability and superior performance.
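To make the core idea concrete, below is a minimal PyTorch sketch of a hypernetwork that maps a task embedding to the low-rank factors of a LoRA update. This is an illustration of the general technique, not the authors' released implementation; the class name LoRAHypernet and all dimensions (task_dim, hidden, d_model, rank) are assumptions.

```python
# Minimal sketch: a hypernetwork emits LoRA parameters (A and B)
# for one linear layer, conditioned on task-oriented information.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class LoRAHypernet(nn.Module):
    def __init__(self, task_dim=768, hidden=256, d_model=1024, rank=8):
        super().__init__()
        self.rank = rank
        self.d_model = d_model
        self.encoder = nn.Sequential(
            nn.Linear(task_dim, hidden),
            nn.ReLU(),
        )
        # Separate heads emit the flattened low-rank factors.
        self.to_A = nn.Linear(hidden, rank * d_model)
        self.to_B = nn.Linear(hidden, d_model * rank)

    def forward(self, task_emb):
        h = self.encoder(task_emb)
        A = self.to_A(h).view(self.rank, self.d_model)  # (r, d)
        B = self.to_B(h).view(self.d_model, self.rank)  # (d, r)
        return A, B

# Usage: generate adapter weights from an encoded task description
# (e.g., instructions or demonstrations), then apply the low-rank
# update to a frozen layer without any gradient steps.
hyper = LoRAHypernet()
task_emb = torch.randn(768)   # stand-in for an encoded task description
A, B = hyper(task_emb)
x = torch.randn(1024)
delta = B @ (A @ x)           # low-rank adaptation term added to W @ x
```

The key property this illustrates is that adaptation to a new task requires only a forward pass through the hypernetwork, rather than any per-task optimization.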
2022
Sememe Prediction for BabelNet Synsets using Multilingual and Multimodal Information
Fanchao Qi | Chuancheng Lv | Zhiyuan Liu | Xiaojun Meng | Maosong Sun | Hai-Tao Zheng
Findings of the Association for Computational Linguistics: ACL 2022
In linguistics, a sememe is defined as the minimum semantic unit of language. Sememe knowledge bases (KBs), which are built by manually annotating words with sememes, have been successfully applied to various NLP tasks. However, existing sememe KBs cover only a few languages, which hinders the wide utilization of sememes. To address this issue, the task of sememe prediction for BabelNet synsets (SPBS) is presented, aiming to build a multilingual sememe KB based on BabelNet, a multilingual encyclopedic dictionary. By automatically predicting sememes for a BabelNet synset, the words in the synset across many languages obtain sememe annotations simultaneously. However, previous SPBS methods have not taken full advantage of the abundant information in BabelNet. In this paper, we utilize the multilingual synonyms, multilingual glosses, and images in BabelNet for SPBS. We design a multimodal information fusion model to encode and combine this information for sememe prediction. Experimental results show that our model substantially outperforms previous methods (by about 10 points in both MAP and F1 score). All the code and data of this paper can be obtained at https://github.com/thunlp/MSGI.
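As a rough illustration of the fusion idea, here is a short PyTorch sketch that projects text and image features into a shared space, fuses them, and scores each candidate sememe as a multi-label classification. It is a sketch under stated assumptions, not the released MSGI code; all module names and dimensions are hypothetical.

```python
# Illustrative sketch of multimodal fusion for sememe prediction:
# encode multilingual glosses/synonyms and images, fuse the two
# modalities, and predict a set of sememes (multi-label).
# Names and sizes are assumptions, not the MSGI implementation.
import torch
import torch.nn as nn

class MultimodalSememePredictor(nn.Module):
    def __init__(self, text_dim=768, img_dim=512, hidden=512, n_sememes=2000):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.img_proj = nn.Linear(img_dim, hidden)
        self.classifier = nn.Linear(hidden, n_sememes)

    def forward(self, text_feat, img_feat):
        # Simple additive fusion of the projected modality features.
        fused = torch.tanh(self.text_proj(text_feat) + self.img_proj(img_feat))
        return self.classifier(fused)  # one logit per candidate sememe

model = MultimodalSememePredictor()
logits = model(torch.randn(768), torch.randn(512))
predicted = torch.sigmoid(logits) > 0.5  # predicted sememe set
```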