Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning

Zhaojiang Lin, Andrea Madotto, Pascale Fung


Abstract
Fine-tuning pre-trained generative language models on downstream language generation tasks has shown promising results. However, this comes at the cost of maintaining a separate large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune multiple downstream generation tasks simultaneously using a single, large pre-trained model. Experiments on five diverse language generation tasks show that, by using only an additional 2–3% of parameters per task, our model can match or even improve on the performance of fine-tuning the whole model.
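The abstract alludes to adding a small set of task-specific parameters (roughly 2–3% of the model) on top of a shared, frozen pre-trained backbone. As a rough illustration only, the sketch below shows a generic residual bottleneck adapter in PyTorch; the module name, layer sizes, and placement are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Small per-task bottleneck module; the shared backbone stays frozen.

    Illustrative sketch only: sizes and placement are assumptions,
    not the configuration used in the paper.
    """
    def __init__(self, hidden_size: int = 768, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down to a small bottleneck
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up to the model width
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen backbone's representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage sketch: freeze the shared backbone and train only the per-task adapter,
# so each additional task costs only the adapter's weights.
adapter = ResidualAdapter()
trainable = sum(p.numel() for p in adapter.parameters())
print(f"extra trainable parameters per task (this sketch): {trainable}")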
Anthology ID: 2020.findings-emnlp.41
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Editors: Trevor Cohn, Yulan He, Yang Liu
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 441–459
URL: https://aclanthology.org/2020.findings-emnlp.41
DOI: 10.18653/v1/2020.findings-emnlp.41
Cite (ACL): Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2020. Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 441–459, Online. Association for Computational Linguistics.
Cite (Informal): Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning (Lin et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.41.pdf
Optional supplementary material: 2020.findings-emnlp.41.OptionalSupplementaryMaterial.pdf
Code: zlinao/VGLM
Data: CoQA