Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across tasks. In this paper, we show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks, which condition on task, adapter position, and layer id in a transformer model. This parameter-efficient multi-task learning framework allows us to achieve the best of both worlds by sharing knowledge across tasks via hypernetworks while enabling the model to adapt to each individual task through task-specific adapters. Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0.29% parameters per task. We additionally demonstrate substantial performance improvements in few-shot domain generalization across a variety of tasks. Our code is publicly available at https://github.com/rabeehk/hyperformer.


Introduction
Transfer learning from pretrained large-scale language models yields state-of-the-art results on a variety of tasks (Devlin et al., 2019; Radford et al., 2018; Liu et al., 2019b). Raffel et al. (2020) explored the landscape of transfer learning by converting text-based natural language processing (NLP) problems into a highly expressive and abstract sequence-to-sequence format, training a unified model on several tasks simultaneously. Multi-task learning with pretrained language models (Ruder, 2017) is appealing for multiple reasons: 1) Training individual models per task results in higher computational costs, which hinders deployment and maintenance; these costs are substantially reduced by training a single model. 2) Fine-tuning the model across multiple tasks allows sharing information between the different tasks and positive transfer to other related tasks. Specifically, when target datasets have limited training data, multi-task learning improves performance compared to individually trained models (Liu et al., 2019a; Ratner et al., 2018). However, multi-task fine-tuning can result in models underperforming on high-resource tasks due to constrained capacity (Arivazhagan et al., 2019; McCann et al., 2018).

Figure 1: Our HYPERFORMER adapter architecture. Following Houlsby et al. (2019), we include adapter modules after the two feed-forward layers. The adapter hypernetwork h^l_A produces the weights (U^l_τ and D^l_τ) for task-specific adapter modules conditioned on an input task embedding I_τ. Similarly, the layer normalization hypernetwork h^l_LN generates the conditional layer normalization parameters (β_τ and γ_τ). During training, we only update layer normalizations in T5, hypernetworks, and task embeddings. The compact HYPERFORMER++ shares the same hypernetworks across all layers and tasks and computes the task embedding based on task, layer id, and position of the adapter module (§2.4).

* Work done while the author was at Google.
An additional issue with multi-task fine-tuning is the potential for task interference or negative transfer, where achieving good performance on one task can hinder performance on another (Wang et al., 2019c).

As an alternative to fine-tuning (Howard and Ruder, 2018), adapter layers (Houlsby et al., 2019) insert a small number of additional parameters per task into the model. During fine-tuning, only the adapter modules, layer normalizations, and parameters of the final classification layer are updated, while the original pretrained model parameters remain frozen. Such task-specific adapters eliminate negative task interference by encapsulating task-specific information (Pfeiffer et al., 2020). However, so far there has not been an effective and parameter-efficient way to share information across multiple adapters to enable positive transfer to low-resource and related tasks.
To address this problem and to enable sharing information across tasks while reaping the benefits of adapter layers, as depicted in Figure 1, we propose HYPERFORMER++, which employs a compact hypernetwork (Ha et al., 2017;Oswald et al., 2020) shared across tasks and layers. The hypernetwork learns to generate task and layer-specific adapter parameters, conditioned on task and layer id embeddings. The hypernetwork is jointly learned between all tasks and is thus able to share information across them, while negative interference is minimized by generating separate adapter layers for each task. For each new task, our model only requires learning an additional task embedding, reducing the number of trained parameters.
We use the encoder-decoder T5 model (Raffel et al., 2020) as the underlying model for our experiments and evaluate on the standard GLUE benchmark (Wang et al., 2019b). We achieve strong gains over both the T5 BASE model and adapters (Houlsby et al., 2019). To our knowledge, this is the first time that adapters have been successfully integrated into a state-of-the-art encoder-decoder model beyond machine translation, demonstrating that our method effectively balances sharing information across tasks with minimizing negative transfer.
In summary, we make the following contributions: (1) We propose a parameter-efficient method for multi-task fine-tuning based on hypernetworks and adapter layers.
(2) We demonstrate that our method scales more efficiently than prior work. (3) We provide empirical results on GLUE demonstrating the effectiveness of the proposed method on multi-task learning.
(4) We perform extensive few-shot domain transfer experiments, which reveal that the captured shared knowledge can positively transfer to unseen in-domain tasks. We release our code to facilitate future work.

HYPERFORMER
In this section, we present our HYPERFORMER model, which integrates hypernetwork-based adapter layers into a multi-task transformer model. In §2.4, we introduce a parameter-efficient variant of this model, called HYPERFORMER++.
Problem formulation: We consider a general multi-task learning problem, where we are given data from a set of tasks {D_τ}^T_{τ=1}, where T is the total number of tasks and D_τ = {(x^i_τ, y^i_τ)}^{N_τ}_{i=1} denotes the training data of the τ-th task with N_τ samples. We assume we are also given a large-scale pretrained language model f_θ(.) parameterized by θ that computes the output for input x^i_τ. Standard multi-task fine-tuning minimizes the following loss on the training set:

    min_θ L(θ) = Σ^T_{τ=1} w_τ Σ_{(x^i_τ, y^i_τ) ∈ D_τ} l(f_θ(x^i_τ), y^i_τ),

where l is typically the cross-entropy loss and w_τ denotes the sampling weight for the τ-th task. Our goal is to fine-tune the pretrained model in a multi-task learning setup efficiently, while allowing information sharing across tasks and, at the same time, enabling the model to adapt to each individual task.
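As a concrete, framework-free sketch of this objective, the snippet below computes the weighted multi-task loss; the model `f`, per-example loss `l`, and weights are placeholders, not the paper's actual implementation.

```python
# Sketch of the standard multi-task objective: a weighted sum of per-task
# losses over tasks D_1..D_T. `f` plays the role of f_theta, `l` the
# per-example loss, and `weights` the sampling weights w_tau.
def multitask_loss(f, datasets, weights, l):
    """datasets: one list of (x, y) examples per task."""
    total = 0.0
    for w, data in zip(weights, datasets):
        total += w * sum(l(f(x), y) for x, y in data)
    return total
```

In practice the sum is estimated stochastically by sampling tasks and minibatches, but the objective being minimized is the same weighted sum.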
The key idea of our approach, depicted in Figure 1, is to learn a parametric task embedding {I_τ}^T_{τ=1} for each task, and then feed these task embeddings to hypernetworks parameterized by ν that generate the task-specific adapter layers (Houlsby et al., 2019). We insert the adapter modules within the layers of the pretrained model, yielding the final model X_ν(x^i_τ, θ, I_τ) parameterized by ν, which computes the output for input x^i_τ. During training, we only train the hypernetwork parameters ν, the task embeddings {I_τ}^T_{τ=1}, and the layer normalizations in f_θ(.), while the remaining pretrained model parameters θ are fixed:

    min_{ν, {I_τ}^T_{τ=1}} Σ^T_{τ=1} w_τ Σ_{(x^i_τ, y^i_τ) ∈ D_τ} l(X_ν(x^i_τ, θ, I_τ), y^i_τ).

The hypernetworks capture the shared information across tasks in a multi-task learning model, enabling positive transfer between related domains and tasks, while the adapters reduce negative interference by encapsulating task-specific information.
Base model: All of our models are built on top of the state-of-the-art T5 transformer model (Raffel et al., 2020). This model frames text-based language tasks as sequence-to-sequence problems. T5 consists of an encoder-decoder Transformer (Vaswani et al., 2017) with minor modifications (Raffel et al., 2020). The model is trained simultaneously on multiple tasks, obtaining state-of-the-art performance across a diverse set of tasks. We use the T5 framework as it enables training a universal model that interfaces with many language tasks. Our model has three main components: 1) task conditional adapter layers; 2) task conditional layer normalizations; and 3) hypernetworks that generate task-specific parameters. We next describe these components.

Task Conditional Adapter Layers
Prior work has shown that fine-tuning all parameters of a model can result in a sub-optimal solution, particularly for resource-limited datasets (Peters et al., 2019). As an alternative to fine-tuning all the model's parameters, prior work (Houlsby et al., 2019; Rebuffi et al., 2018; Stickland and Murray, 2019) inserted small modules called adapter layers within the layers of a pretrained model, as shown in Figure 1. Adapters introduce no change to the structure or parameters of the original model. In this work, we propose conditional adapter modules, in which we generate the adapter weights based on input task embeddings using shared hypernetworks (Ha et al., 2017), which capture information across tasks that can be used to transfer positively to other relevant tasks.
Each layer of a transformer model consists of an attention block and a feed-forward block, each followed by a skip connection. Following Houlsby et al. (2019), as depicted in Figure 1, we introduce a conditional adapter layer after each block before the skip connection. The conditional adapter layer A^l_τ for layer l consists of a down-projection D^l_τ ∈ R^{h×d}, a GeLU non-linearity (Hendrycks and Gimpel, 2016), and an up-projection U^l_τ ∈ R^{d×h}, where h is the input dimension and d is the bottleneck dimension of the adapter layer:

    A^l_τ(x) = LN^l_τ(U^l_τ(GeLU(D^l_τ(x)))) + x,

where x is the input hidden state and LN^l_τ is the conditional layer normalization defined in the next section. We generate the adapter weights (U^l_τ, D^l_τ) through a hypernetwork, described in §2.3.
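A minimal NumPy sketch of this adapter computation follows. The shapes and names are illustrative; in the paper, `D`, `U`, `gamma`, and `beta` are produced by hypernetworks rather than learned directly.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def adapter(x, D, U, gamma, beta, eps=1e-6):
    """Task-conditional adapter: conditional-LN(U(GeLU(D(x)))) + x.
    Shapes here: x in R^h, D: (d, h) down-projection, U: (h, d)
    up-projection, with bottleneck dimension d < h. gamma/beta are the
    task-conditional layer-norm parameters."""
    z = U @ gelu(D @ x)                          # down-project, GeLU, up-project
    mu, sigma = z.mean(), z.std()
    z = gamma * (z - mu) / (sigma + eps) + beta  # task-conditional layer norm
    return z + x                                 # residual (skip) connection
```

Because d ≪ h, the adapter adds only 2hd projection parameters per module, which is the source of the method's parameter efficiency.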

Task Conditional Layer Normalization
Conventional layer normalization (Ba et al., 2016) is defined as:

    LN^l_τ(x^i_τ) = γ^l_τ ⊙ (x^i_τ − μ_τ) / σ_τ + β^l_τ,

where ⊙ is element-wise multiplication between two vectors, and γ^l_τ and β^l_τ are learnable parameters with the same dimension as x^i_τ. μ_τ and σ_τ denote the mean and standard deviation of the training data for the τ-th task.
To allow the layer normalization inside adapters to adapt to each task, inspired by Perez et al. (2018); De Vries et al. (2017), we generate γ l τ , β l τ via a hypernetwork as a function of task embeddings ( §2.3).
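As an illustrative sketch, task-conditional layer normalization differs from the standard operation only in where the scale and shift come from, i.e., a hypernetwork rather than parameters learned separately per task:

```python
import numpy as np

def conditional_layer_norm(x, gamma_t, beta_t, eps=1e-6):
    """Standard layer-norm arithmetic with task-specific scale (gamma_t)
    and shift (beta_t); in HYPERFORMER these are generated from the task
    embedding by the layer-norm hypernetwork."""
    mu = x.mean()
    sigma = x.std()
    return gamma_t * (x - mu) / (sigma + eps) + beta_t
```

With `gamma_t = 1` and `beta_t = 0` this reduces to plain normalization; any other values re-scale and re-center the output per task.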

Task Conditioned Hypernetworks
In order to have a model that can share information while being able to adapt to each individual task, we generate the parameters of task conditional adapter layers and layer normalization using hypernetworks. A hypernetwork is a network that generates the weights of another network (Ha et al., 2017).
The hypernetworks capture the shared information, while the generated task conditional adapters and layer normalization allow the model to adapt to each individual task to reduce negative task interference.
Learned task embedding: We first compute a task embedding I_τ ∈ R^t for each individual task using a task projector network h_I(.), which is a multi-layer perceptron consisting of two feed-forward layers and a ReLU non-linearity:

    I_τ = h_I(z_τ),

where z_τ ∈ R^t can be a learnable parameter or any pretrained task features (Vu et al., 2020), and the task projector network h_I(.) learns a suitable compressed task embedding from the input task features. In this work, we consider a parametric z_τ to allow end-to-end training, which is convenient in practice.

Removing task prefixes: The T5 model prepends task-specific prefixes to the input sequence for conditioning. For instance, when training on CoLA (Warstadt et al., 2019), "cola sentence:" is prepended to each sample. Instead, we remove task prefixes and use task embeddings for conditioning.
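A sketch of the task projector under these definitions; the weight shapes and hidden size are placeholders:

```python
import numpy as np

def task_projector(z, W1, b1, W2, b2):
    """Two-layer MLP h_I with a ReLU non-linearity: maps raw task features
    z in R^t to the compressed task embedding I_tau in R^t.
    W1: (e, t), W2: (t, e) for an illustrative hidden size e."""
    hidden = np.maximum(0.0, W1 @ z + b1)  # ReLU
    return W2 @ hidden + b2
```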
Task conditioned hypernetworks: We consider simple linear layers as hypernetworks that are functions of the input task embeddings I_τ, and introduce these hypernetworks in each layer of the transformer. The hypernetwork h^l_A(.) generates the task conditional adapter weights (U^l_τ, D^l_τ):

    (U^l_τ, D^l_τ) = h^l_A(I_τ) = (W^{U_l} I_τ, W^{D_l} I_τ),

where W^{U_l} ∈ R^{(h·d)×t} and W^{D_l} ∈ R^{(d·h)×t} are the respective hypernetwork parameters. We additionally define the hypernetwork h^l_LN(.) that computes the layer normalization parameters:

    (γ^l_τ, β^l_τ) = h^l_LN(I_τ) = (W^{γ_l} I_τ, W^{β_l} I_τ),

where W^{γ_l} ∈ R^{h×t} and W^{β_l} ∈ R^{h×t}.
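The linear hypernetworks are then just matrices applied to the task embedding, with the adapter outputs reshaped into projection matrices; a hedged NumPy sketch (shapes illustrative):

```python
import numpy as np

def adapter_hypernetwork(I_tau, W_U, W_D, h, d):
    """Linear hypernetwork for adapters: W_U (h*d, t) and W_D (d*h, t)
    are the trained parameters; outputs are reshaped into the up- and
    down-projections of the task-specific adapter."""
    U = (W_U @ I_tau).reshape(h, d)
    D = (W_D @ I_tau).reshape(d, h)
    return U, D

def layernorm_hypernetwork(I_tau, W_gamma, W_beta):
    """Linear hypernetwork for layer norm: W_gamma, W_beta of shape (h, t)
    produce the task-conditional scale and shift."""
    return W_gamma @ I_tau, W_beta @ I_tau
```

Note that the number of trained parameters lives entirely in the W matrices, which are shared across tasks; each new task only adds one embedding vector.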

HYPERFORMER++
A downside of introducing a separate hypernetwork in each layer of the transformer is that it increases the overall number of parameters. We therefore propose to share the hypernetworks across transformer layers, which results in a substantial reduction in the number of parameters. However, reapplying the same hypernetwork across all layers introduces weight sharing across the target parameters, which may not be desirable. To allow for a flexible parameterization of the task conditional adapters and layer normalizations, for a transformer of L layers we introduce a set of layer id embeddings I = {l_i}^L_{i=1} and adapter position embeddings P = {p_j}^2_{j=1}, which specify the position of the adapter layer in each transformer block (after the attention layer or the feed-forward layer) and serve as additional inputs to the hypernetworks. For simplicity, we consider l_i ∈ R^t, p_j ∈ R^t, and z_τ ∈ R^t. We feed the concatenation (z_τ, l_i, p_j) to a task projector network h_I similar to the one in Eq. (5):

    I_τ = h_I(z_τ, l_i, p_j),

followed by a shared layer normalization to compute the final task embedding I_τ ∈ R^t fed to the hypernetwork. This way, the hypernetwork is able to produce distinct weights for each task, adapter position, and layer of the transformer. Furthermore, the layer id and adapter position embeddings are parameters learned via back-propagation, allowing us to train the whole model end-to-end conveniently.
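Under the definitions above, the HYPERFORMER++ input side can be sketched as a shared projector over the concatenated task, layer-id, and position embeddings, followed by a shared layer norm (all weights are illustrative placeholders):

```python
import numpy as np

def hyperformer_pp_embedding(z_tau, layer_emb, pos_emb, W1, b1, W2, b2, eps=1e-6):
    """Concatenate (z_tau, l_i, p_j), each in R^t, project with a shared
    two-layer ReLU MLP (W1: (e, 3t), W2: (t, e)), and layer-normalize to
    get the final embedding I_tau fed to the single shared hypernetwork."""
    inp = np.concatenate([z_tau, layer_emb, pos_emb])  # R^{3t}
    hidden = np.maximum(0.0, W1 @ inp + b1)
    I = W2 @ hidden + b2
    return (I - I.mean()) / (I.std() + eps)            # shared layer norm
```

One shared hypernetwork consuming this embedding can thus emit distinct adapter weights for every (task, layer, position) triple without per-layer parameter copies.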

Experiments
Datasets: Following Raffel et al. (2020), we evaluate the performance of the models on the GLUE benchmark (Wang et al., 2019b). This benchmark covers multiple tasks of paraphrase detection (MRPC, QQP), sentiment classification (SST-2), natural language inference (MNLI, RTE, QNLI), sentence similarity (STS-B), and linguistic acceptability (CoLA). The original test sets are not publicly available; following Zhang et al. (2021), for datasets with fewer than 10K samples (RTE, MRPC, STS-B, CoLA), we divide the original validation set in half, using one half for validation and the other for testing. For the larger datasets, we split 1K samples from the training set as our validation data and test on the original validation set.
Experimental details: We use the HuggingFace implementation (Wolf et al., 2020a) of the T5 model (Raffel et al., 2020). We fine-tune all models with a constant learning rate of 0.0003 and, following Raffel et al. (2020), train for 2^18 = 262,144 steps in all experiments. We save a checkpoint every 1000 steps for all models (see also §A). Raffel et al. (2020) report results based on the best checkpoint for each task independently. In contrast, we focus on the more realistic setting of reporting results from a single checkpoint with the highest average validation performance across all tasks. Hyperparameters are selected in the same manner. In contrast to prior work (Houlsby et al., 2019), we do not learn a separate output layer for each task but instead share a frozen output layer across all tasks, which makes our setting more parameter-efficient than prior work and is an advantage of multi-task learning with encoder-decoder models.

Baselines: We compare to the strong adapter baseline of Houlsby et al. (2019): we add adapter modules for each task after the two feed-forward modules in each transformer block of the T5 model and, as suggested in Houlsby et al. (2019), train the layer normalization parameters inside the T5 model per task. We refer to this method as Adapters. We additionally propose a variant of this model in which we share all layer normalization parameters (T5 and adapters) across tasks, referred to as Adapters†. Finally, we compare to the state-of-the-art T5 model, in which we fine-tune all parameters of the model on all tasks, referred to as T5 SMALL /T5 BASE in the experiments.
Sampling tasks: During training, we sample tasks using conventional temperature-based sampling with temperature T = 10 for all methods, i.e., we sample the τ-th task with probability proportional to p_τ^{1/T}, where p_τ = N_τ / Σ^T_{i=1} N_i and N_τ is the number of training samples for the τ-th task. We did not experiment with more complex sampling strategies (Raffel et al., 2020).

Table 1 shows the results on GLUE for single-task and multi-task training. We experiment with reduction factors r = {8, 16, 32} for all adapter-based methods, where r = h/d. We report results both with T5 SMALL (6 layers and 60M parameters) and T5 BASE (12 layers and 222M parameters).
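The temperature-based task sampling described in this section can be sketched as follows; the task sizes are illustrative:

```python
def task_sampling_probs(sizes, temperature=10):
    """Sample the tau-th task with probability proportional to
    p_tau^(1/T), where p_tau = N_tau / sum_i N_i. temperature=1 recovers
    proportional sampling; larger temperatures flatten toward uniform."""
    total = sum(sizes)
    scaled = [(n / total) ** (1.0 / temperature) for n in sizes]
    norm = sum(scaled)
    return [s / norm for s in scaled]
```

The flattening effect is the point: with T = 10, low-resource tasks such as RTE are sampled far more often than their raw share of the data would dictate.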

Results on the GLUE Benchmark
Overall, our proposed HYPERFORMER++ obtains strong gains over Adapters (82.51 versus 79.53 for T5 SMALL and 86.48 versus 84.88 for T5 BASE ) while being more parameter-efficient.
Our variant Adapters†, which shares layer norms across tasks, outperforms prior work (Houlsby et al., 2019), which does not share such information (80.85 versus 79.53 for T5 SMALL and 85.83 versus 84.88 for T5 BASE). This demonstrates that in encoder-decoder models such as T5, sharing more information across tasks is beneficial.
Our proposed HYPERFORMER obtains consistent improvements over our Adapters† variant. We attribute this improvement to its ability to learn shared information across tasks through the hypernetworks. Interestingly, HYPERFORMER++ obtains similar performance to HYPERFORMER while being more than an order of magnitude more parameter-efficient. Adapter modules thus seem to be similar enough that much of their information can be modeled by a single, appropriately conditioned network.

We also report the total number of parameters and trainable parameters for all methods in Table 1. For adapter-based methods, the number of parameters varies based on the adapter size (we report all numbers with r = 32). HYPERFORMER++ BASE has 1.02× the parameters of T5 BASE, with only 0.29% trainable parameters per task. Note that by keeping the output layer frozen, Adapters SMALL and Adapters BASE require 5.51× and 2.53× fewer parameters respectively compared to a direct application of prior work (Houlsby et al., 2019). Despite these more efficient baselines, HYPERFORMER++ BASE requires 3× fewer trainable parameters compared to Adapters BASE.
Few-shot Domain Transfer

For CB and BoolQ, since test sets are not available, we divide the validation sets in half, using one half for validation and the other for testing. For Yelp polarity, TREC, and IMDB, since validation sets are not available, we similarly divide the test sets to form validation sets. For the remaining datasets, we report results on the original test sets.
We consider the models trained on GLUE reported in Table 1 and evaluate them on the test set after few-shot fine-tuning on each target training set. For Adapters† and our method, we use the adapter and the task embedding, respectively, trained on the most similar GLUE task for initialization, i.e., MNLI for NLI, QNLI for QA, SST-2 for sentiment analysis, and QQP for paraphrase detection. Following prior evidence of positive transfer from NLI to other tasks (Conneau and Kiela, 2018; Yin et al., 2020; Phang et al., 2018), we initialize the out-of-domain TREC from MNLI. Table 2 shows the results of full fine-tuning of all the model's parameters, Adapters†, and HYPERFORMER++. Our method significantly surpasses the baselines in the majority of settings. Given that HYPERFORMER++ BASE has substantially fewer trainable parameters than T5 BASE, we also investigate whether it generalizes better in a low-resource setting: we subsample each individual task in GLUE for varying training sizes and train the models for 15,000 steps, which we found sufficient for convergence. Figure 2 shows the results. HYPERFORMER++ BASE substantially improves results with limited training data, indicating more effective fine-tuning in this regime.

Parameter Efficiency
In this section, we compare the number of parameters of HYPERFORMER++ with Adapters.
Adapters parameters: The standard setting (Houlsby et al., 2019) employs two adapters per layer for each task. Each adapter layer has 2hd parameters for the projection matrices (U^l_τ and D^l_τ) and 2h parameters for the layer normalization. The total number of parameters for Adapters across L transformer layers in both the encoder and decoder and T tasks is therefore 4TL(2hd + 2h), which scales linearly with the number of tasks times the number of layers.
HYPERFORMER++ parameters: Our approach learns a task feature embedding per task, totaling Tt parameters. We additionally employ layer id and adapter position embeddings in the encoder and decoder, which require 2(2+L)t parameters, with a fixed embedding size t for all of these feature embeddings. We use a separate task projector network h_I for the encoder and the decoder, each a two-layer MLP, consisting of a total of 2(3te + et) parameters, where e = 128 is the hidden dimension of the task projector network. Our adapter hypernetworks for the encoder and decoder consist of 2(2thd) parameters, and our layer normalization hypernetworks of 2(2th) parameters. In total, this results in

    t(T + 4 + 2L) [task features] + 8te + 2t(2hd + 2h) [hypernetworks]

parameters.
The total number of parameters of the hypernetworks remains constant, while the task feature parameters scale with the number of tasks and layers times t, where t = 64 in our experiments.

In settings with a large number of layers and tasks, since t ≪ 2hd + 2h and T + L ≪ TL, our method is much more parameter-efficient than Adapters. In the current setting, hd is the largest term, and the factor 2TL for Adapters is larger than the factor t for HYPERFORMER++.
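These counts can be checked with a small script. The values below (T = 8 tasks, L = 12 layers, h = 768, and d = h/r = 24 for r = 32) are illustrative, chosen to resemble the T5 BASE setting rather than taken from the paper's tables:

```python
def adapter_params(T, L, h, d):
    """Adapters baseline: 4*T*L adapter modules (two per layer, encoder
    and decoder), each with 2*h*d projection weights + 2*h layer-norm
    parameters."""
    return 4 * T * L * (2 * h * d + 2 * h)

def hyperformer_pp_params(T, L, h, d, t=64, e=128):
    """HYPERFORMER++: t(T + 4 + 2L) task-feature parameters, plus
    2(3te + et) = 8te projector parameters, plus 2t(2hd + 2h)
    hypernetwork parameters."""
    task_features = t * (T + 4 + 2 * L)
    projector = 2 * (3 * t * e + e * t)
    hypernets = 2 * t * (2 * h * d + 2 * h)
    return task_features + projector + hypernets
```

With these values, the Adapters count is about 14.7M while the HYPERFORMER++ count is about 5.0M, and only the small task-feature term of the latter grows with T and L.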

Do Extra Parameters Make a Difference?
While HYPERFORMER++ is more parameter-efficient than the baselines, HYPERFORMER has more parameters per task than Adapters†. To confirm that the improvements of HYPERFORMER are due to its capability of sharing information across tasks and not to its number of parameters, as an ablation we run Adapters† with r = {2, 4} and choose the model performing best on the validation set. This allows Adapters† to have more parameters than HYPERFORMER. We report the results in Table 3 and compare them with the results of HYPERFORMER in Table 1. Even with an increased number of parameters, Adapters† does not reach the performance of HYPERFORMER; HYPERFORMER performs substantially better.

Impact of the Framework Components
We investigate the impact of the components of our framework including: (1) task conditional adapter blocks; (2) task conditional layer normalization; (3) task projection network; (4) fine-tuning of layer normalizations in the T5 model; (5) task conditional layer normalization in adapter modules and fine-tuning of layer normalizations inside the T5 model. We consider our small model of Table 1 and train different variants of it. Table 4 shows the results on GLUE, demonstrating that each component of the model contributes positively to its final performance.

Visualization of Task Embeddings
To analyze what HYPERFORMER++ BASE has learned about the relations between different tasks, we visualize the learned task embeddings of the models trained with the largest number of samples in Tables 1 and 2. Figure 3 illustrates 2D projections of the task embeddings obtained with PCA (Wold et al., 1987). Interestingly, the observed groupings correspond to similar tasks, showing that the task embeddings learned by HYPERFORMER++ BASE are meaningful. For CB, an NLI dataset, despite being initialized from MNLI, the task embedding after few-shot training is closest to RTE, another NLI dataset. This is plausible, as premises and hypotheses in both the discourse-based CB and the news- and Wikipedia-based RTE are more complex than in MNLI. The sentence similarity dataset STS-B is grouped close to the MRPC paraphrase dataset. CoLA, which focuses on linguistic acceptability, is very different from the other tasks and is not grouped with any of the observed task embeddings. In addition, the task embeddings of 1) all the sentiment analysis datasets, namely SST-2, Yelp polarity, and IMDB; 2) the two large-scale NLI datasets, namely MNLI and SciTail; 3) the question answering datasets, i.e., BoolQ and QNLI; and 4) the paraphrase datasets, namely QQP and PAWS, are each grouped together.

Related Work
Multi-task learning: Multi-task learning, i.e., learning a unified model to perform well on multiple different tasks, is a challenging problem in NLP. It requires addressing multiple challenges such as catastrophic forgetting and handling disproportionate task sizes, which cause a model to overfit low-resource tasks while underfitting high-resource ones (Arivazhagan et al., 2019). Liu et al. (2019a) proposed the Multi-Task Deep Neural Network (MT-DNN) for learning from multiple NLU tasks. Although MT-DNN obtains impressive results on GLUE, it applies multi-task learning as a form of pretraining followed by task-specific fine-tuning. Concurrently with our work, Tay et al. (2021) propose a multi-task learning method that trains task-conditioned hypernetworks; however, their method is 43× less parameter-efficient than ours. In another line of research, Clark et al. (2019b) proposed to learn multi-task models with knowledge distillation. Houlsby et al. (2019) trained adapters for each task separately, keeping the model fixed. Stickland and Murray (2019) share the model parameters across tasks and introduce task-specific adapter parameters, which is less parameter-efficient than our method.
Hypernetworks and contextual parameter generation: Our work is closely related to hypernetworks (Ha et al., 2017). In a continual learning setup, where tasks are learned sequentially, Oswald et al. (2020) proposed a task-conditioned hypernetwork to generate all the weights of the target model. Our method is substantially more efficient as we do not generate all the weights of the target model but a very small number of parameters for adapter modules to allow the model to adapt to each individual task efficiently. Similarly, Jin et al. (2020) generate the full model from task-specific descriptions in different domains whereas we efficiently generate only small adapter modules for each task.
Prior work also proposed meta-learning or Bayesian approaches to generate softmax layer parameters for new settings (Bansal et al., 2020;Ponti et al., 2020). Meta-learning approaches are notoriously slow to train. In addition, generating softmax parameters requires a substantially higher number of parameters, leaves the method unable to adapt the lower layers of the model, and restricts their application to classification tasks.
In contemporaneous work, Üstün et al. (2020) proposed a multilingual dependency parsing method based on adapters and contextual parameter generator networks (Platanios et al., 2018), where they generate adapter parameters conditioned on trained input language embeddings. Their study is limited to multilingual dependency parsing, while our work studies multi-task learning and applies to several tasks thanks to the general sequence-to-sequence nature of our model. Moreover, their number of trainable parameters is 2.88× larger than their base model, since they employ a contextual parameter generator in each layer. In contrast, we use a single compact hypernetwork, allowing us to efficiently condition on multiple tasks and layers of a transformer model.

Conclusion
We propose a parameter-efficient method for multi-task fine-tuning. Our approach trains shared hypernetworks to generate task-specific adapters conditioned on task, layer id, and adapter position embeddings. The shared hypernetworks capture knowledge across tasks and enable positive transfer to low-resource and related tasks, while the task-specific layers allow the model to adapt to each individual task. Extensive experiments show that our method obtains strong improvements over multi-task learning on the GLUE benchmark and substantially improves in-domain task generalization.