Learning to Generate Task-Specific Adapters from Task Description

Qinyuan Ye, Xiang Ren


Abstract
Pre-trained text-to-text transformers such as BART have achieved impressive performance across a range of NLP tasks. Recent studies further show that they can learn to generalize to novel tasks by including task descriptions as part of the source sequence and training the model with (source, target) examples. At test time, these fine-tuned models can make inferences on new tasks using the new task descriptions as part of the input. However, this approach has potential limitations, as the model learns to solve individual (source, target) examples (i.e., at the instance level), instead of learning to solve tasks by taking all examples within a task as a whole (i.e., at the task level). To address this, we introduce Hypter, a framework that improves a text-to-text transformer's generalization ability to unseen tasks by training a hypernetwork to generate task-specific, light-weight adapters from task descriptions. Experiments on the ZEST dataset and a synthetic SQuAD dataset demonstrate that Hypter improves upon fine-tuning baselines. Notably, when using BART-Large as the main network, Hypter brings 11.3% comparative improvement on the ZEST dataset.
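The core idea in the abstract can be illustrated with a minimal sketch: a hypernetwork maps an embedding of the task description to the weights of a small bottleneck adapter, which is then applied on top of a frozen main-network layer. All names, shapes, and the ReLU bottleneck below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: hidden size, adapter bottleneck, description embedding.
d_model, d_adapter, d_desc = 16, 4, 8

# Frozen main-network layer (stand-in for one transformer feed-forward layer).
W_main = rng.standard_normal((d_model, d_model)) * 0.1

# Hypernetwork parameters: one linear map per generated adapter matrix.
# These are the only weights that would be trained in this sketch.
H_down = rng.standard_normal((d_desc, d_model * d_adapter)) * 0.1
H_up = rng.standard_normal((d_desc, d_adapter * d_model)) * 0.1

def generate_adapter(task_desc_emb):
    """Generate task-specific adapter weights from a task-description embedding."""
    W_down = (task_desc_emb @ H_down).reshape(d_model, d_adapter)
    W_up = (task_desc_emb @ H_up).reshape(d_adapter, d_model)
    return W_down, W_up

def forward(x, task_desc_emb):
    """Frozen layer plus the generated adapter, with a residual connection."""
    h = x @ W_main
    W_down, W_up = generate_adapter(task_desc_emb)
    return h + np.maximum(h @ W_down, 0.0) @ W_up  # ReLU bottleneck adapter

x = rng.standard_normal((2, d_model))   # a toy batch of token representations
desc = rng.standard_normal(d_desc)      # embedding of the task description
out = forward(x, desc)
print(out.shape)  # (2, 16)
```

The key point the sketch captures is that the adapter weights are not trained per task; they are *generated* from the description, so a new task description immediately yields a new set of adapter weights without further fine-tuning.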
Anthology ID:
2021.acl-short.82
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
646–653
URL:
https://aclanthology.org/2021.acl-short.82
DOI:
10.18653/v1/2021.acl-short.82
PDF:
https://aclanthology.org/2021.acl-short.82.pdf
Optional supplementary material:
 2021.acl-short.82.OptionalSupplementaryMaterial.zip
Code
 INK-USC/hypter
Data
Natural Questions | NewsQA | SQuAD | ZEST