%0 Conference Proceedings
%T Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning
%A Lin, Zhaojiang
%A Madotto, Andrea
%A Fung, Pascale
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Findings of the Association for Computational Linguistics: EMNLP 2020
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F lin-etal-2020-exploring
%X Fine-tuning pre-trained generative language models to down-stream language generation tasks has shown promising results. However, this comes with the cost of having a single, large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune multiple down-stream generation tasks simultaneously using a single, large pretrained model. The experiments on five diverse language generation tasks show that by just using an additional 2-3% parameters for each task, our model can maintain or even improve the performance of fine-tuning the whole model.
%R 10.18653/v1/2020.findings-emnlp.41
%U https://aclanthology.org/2020.findings-emnlp.41
%U https://doi.org/10.18653/v1/2020.findings-emnlp.41
%P 441-459