Control Prefixes for Parameter-Efficient Text Generation

Prefix-tuning is a parameter-efficient and powerful technique for adapting a pre-trained language model to a downstream application. However, it uses the same dataset-level tuned set of parameters for all examples in the dataset. We extend the framework with a dynamic method, Control Prefixes, which allows for the effective inclusion of input-dependent information, thereby demonstrating how prefix-tuning can be used for controlled text generation tasks. The method incorporates attribute-level learnable representations into different layers of a pre-trained Transformer, enabling the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Using only 0.1–2% additional trainable parameters, we show Control Prefixes can even outperform full fine-tuning methods, and present state-of-the-art results on several data-to-text datasets, including WebNLG. We also examine the common case where input-dependent information is unavailable at test time and show that Control Prefixes can excel in this setting as well.


Introduction
Recently, approaches in text generation have been dominated by adapting one large-scale, pre-trained language model (PLM) to various downstream tasks. Such adaptation is often performed via fine-tuning, which necessitates updating and storing all of the parameters, resulting in multiple new language models (LMs), one for each task. This poses a considerable challenge to the deployment of NLP systems in practice, especially as the scale of PLMs continues to climb from millions to billions of parameters. Moreover, full fine-tuning has been shown to be unnecessarily profligate through overwriting natural language understanding (NLU) that could otherwise be shared among tasks (Peters et al., 2019); it has also been shown that fine-tuned networks do not deviate substantially from the pretrained one in parameter space (Aghajanyan et al., 2020; Radiya-Dixit and Wang, 2020), implying the existence of parameter-efficient alternatives.
Many researchers have sought to alleviate these issues by using fixed-LM techniques, where the parameters of the base LM remain unchanged. An ever-growing subset of these methods can be considered prompt tuning, where language models are adapted to downstream tasks with the aid of a tuned prompt accompanying the input. A recent survey on prompt tuning (Liu et al., 2021a), however, notes the dearth of research exploring dynamic prompts, which are input-dependent. This work fills this gap in the literature and considers such dynamic prompts. Existing controlled generation techniques either aim to generate text with specific target qualities, independent of overall task performance, or update not only the attribute-level parameters but all the parameters of the language model.
We propose the dynamic prompting method CONTROL PREFIXES. The method extends prefix-tuning and integrates static task-specific prompts at every layer of a model, adding only 0.1-3% additional parameters to the base LM. With CONTROL PREFIXES we aim to preserve the fixed-LM property, while also allowing datapoint-specific attributes to act as guidance signals at the input-level. This is done by employing modular control prefixes, which change alongside the input according to the guidance signal. Operating together with the static prompt parameters, these dynamic prompts can steer the frozen PLM to extend finer-grained control. The chosen attributes can provide additional information about the input, for example the domain of a data-to-text triple set, or they can specify some aspect of the desired output, such as the target length for text simplification.
We evaluate our method on an array of text generation tasks, leveraging additional input-level information specific to each dataset. Our results show that our parameter-efficient architecture outperforms previous approaches, many of them based on full fine-tuning, on the WebNLG (Gardent et al., 2017), DART (Radev et al., 2020) and E2E Clean (Dušek et al., 2019) data-to-text datasets. In addition, our method attains higher human-assessed performance than existing systems for summarization on XSum (Narayan et al., 2018). Although CONTROL PREFIXES does not operate in the standard setting for NLG tasks, since it is not confined to using only the textual input, we focus on datasets where the attribute-level information is available as part of the task.
We also consider the common case where the attribute-level information is not available, and demonstrate that zero-shot learning with CONTROL PREFIXES can be effective. We show that similar control prefix representations are learned by the model for semantically similar attribute labels.

Related Work
Prompt Tuning Unlike the discrete text prompts used by GPT-3 (Brown et al., 2020), in prompt tuning, soft prompts are learned through backpropagation to maximize the information from labelled data. This work focuses on tuning methods as zero-shot prompting performance lags far behind tuned models on supervised datasets (Lester et al., 2021). Several successive works (Logeswaran et al., 2020; Liu et al., 2021b; Lester et al., 2021) employ prompt-embedding tuning, which trains continuous embeddings prepended to the input embeddings. Li and Liang (2021) discovered that prefix-tuning was more effective than prompt-embedding tuning for text generation. In prefix-tuning, additional trainable key-value pairs, which are fixed across all examples, are used to augment the left context in every attention computation. Therefore, the prompt has constituents at every layer rather than being confined to steer the frozen LM only through the input as in embedding tuning.
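As a point of reference for the distinction drawn above, the following is a minimal sketch of prompt-embedding tuning, where the only trainable parameters are ρ soft-prompt embeddings prepended to the input embeddings. The dimensions and initialization are illustrative assumptions rather than any particular published implementation.

```python
import torch
from torch import nn

# Minimal sketch of prompt-embedding tuning: rho trainable embeddings are simply
# prepended to the input embeddings, so the prompt can influence the frozen LM
# only through the input layer (unlike prefix-tuning, which acts at every layer).
rho, d = 10, 1024
soft_prompt = nn.Parameter(torch.randn(rho, d) * 0.02)

def prepend_soft_prompt(input_embeds):                 # input_embeds: (batch, N, d)
    batch = input_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    return torch.cat([prompt, input_embeds], dim=1)    # (batch, rho + N, d)

print(prepend_soft_prompt(torch.randn(2, 7, d)).shape)  # torch.Size([2, 17, 1024])
```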
Controlled generation A complementary field to prompt learning is controlled generation, which aims to incorporate various types of guidance (e.g. length specifications (Kikuchi et al., 2016) or highlighted phrases (Grangier and Auli, 2018)) beyond the input text into the generation model. Johnson et al. (2016) successfully trained a multilingual translation model with control tokens to encode each language. Keskar et al. (2019) pre-trained a 1.63B parameter model, also alongside conditional control tokens, and demonstrated these learnt to govern style, content, and task-specific behaviour. However, these models require the whole underlying LM to be fine-tuned alongside the control tokens for a particular task.
Alternatives exist, such as plug-and-play perturbations of the LM hidden states towards a target attribute (Nguyen et al., 2016; Dathathri et al., 2020). These methods use fixed LMs and are able to control target qualities such as sentiment and topic. However, they are slow at inference time due to requiring multiple passes for a single batch. The shift in conditional probability has also been shown to increase text degeneration (Holtzman et al., 2019).

Dynamic prompts
There have been few works exploring dynamic prompts (Liu et al., 2021a; Tsimpoukelli et al., 2021), which are input-dependent. Perhaps most similar to our work is that of Yu et al. (2021), who use an attribute alignment function to form dynamic prompts. Unlike our work, the prompt does not have a static component and aims to generate text with specific target attributes, independent of task performance. With CONTROL PREFIXES, the intention is to also maximize task-specific performance, which is why we maintain a large static prompt component to specify the task itself.

Background
This work considers sequence-to-sequence tasks where the objective is to model the conditional probability P(Y | X), with X and Y representing the tokenized input and output sequences respectively. For example, in summarization, X could be an article and Y would be a short target summary.
In this work we experiment with T5-large (Raffel et al., 2020) and BART LARGE (Lewis et al., 2020) as the underlying pre-trained LMs with parameters φ; and as we consider fixed-LM methods, φ always remains frozen. These models are Transformer encoder-decoder models where decoding proceeds auto-regressively. Let us denote by d the hidden state dimension and by L the number of layers. We use (E, Dc, Dm) to denote the three classes of attention present in each layer: self-attention in the encoder (E), decoder cross-attention (Dc) and decoder masked-attention (Dm). For an attention computation in the l-th layer, the query, key and value matrices are denoted Q_l ∈ R^{N×d}, and K_l, V_l ∈ R^{M×d}, where N is the number of tokens in the series relating to queries, and M is the number of tokens in the series relating to keys and values.

Intuition
Using a fixed PLM that captures broad natural language understanding provides the model with a parameter-efficient starting point which can be shared by many different tasks. Combining this with a trainable task representation allows the model to learn information relevant to one particular task. Furthermore, introducing attribute-level parameters allows us to guide the generation in a required direction and provide the model with datapoint-level information. The general task-specific parameters can themselves adapt to the modular control prefixes, which change according to the guidance signal for each input X. This demarcation of parameters enables fine-grained control to be extended to aid performance on downstream tasks. CONTROL PREFIXES can therefore leverage input-level information while being a fixed-LM, parameter-efficient method. For this work, we only consider discrete labels as attributes for the guidance signal.

Description
The model uses a general task prefix P_θ ("task-specific parameters") and also trains a set of control prefixes C_θ that change depending on the input ("attribute-level parameters"). This requires attribute-level information, or guidance G, to indicate which control prefixes to use while processing a given input X. Let us consider the parallel corpus Z = {(X_j, Y_j, G_j)}_{j=1,…,N}, where G_j indicates all the conditional attribute-level information for sample j. The goal is to optimize through gradient descent the final inference parameters, θ, whilst the underlying φ parameters of the pre-trained LM remain frozen:

max_θ Σ_{(X,Y,G) ∈ Z} log P_{φ,θ}(Y | X, G).    (1)
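To make the division of parameters in Eq. (1) concrete, the following is a minimal sketch of this setup using the Transformers library. The prefix shapes, initialization, and the choice of BART LARGE are illustrative assumptions rather than the authors' exact implementation; the point is that the base LM parameters φ are frozen and only the prefix parameters θ are registered with the optimizer.

```python
import torch
from torch import nn
from transformers import BartForConditionalGeneration

# Sketch of the parameter split in Eq. (1): phi (the LM) is frozen; only the
# prefix parameters theta receive gradients. Shapes follow Section 3: rho
# key-value pairs per layer for each of the three attention classes.
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
for p in model.parameters():
    p.requires_grad_(False)                        # phi stays frozen

d, L, rho = model.config.d_model, model.config.encoder_layers, 10
general_prefix = nn.ParameterDict({
    name: nn.Parameter(torch.randn(rho, 2 * d * L) * 0.02)
    for name in ("enc_self", "dec_cross", "dec_self")   # (E, Dc, Dm)
})
optimizer = torch.optim.AdamW(general_prefix.parameters(), lr=5e-5)

trainable = sum(p.numel() for p in general_prefix.values())
total = sum(p.numel() for p in model.parameters())
print(f"trainable prefix parameters: {trainable} ({100 * trainable / total:.2f}% of the LM)")
```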

General Prefix
For each attention class (E, Dc, Dm), a distinct prefix of key-value pairs is learnt, P = {P_1, …, P_L}, where P_l ∈ R^{ρ×2d} ∀l ∈ {1, …, L}, so that P ∈ R^{ρ×2dL}, and ρ is the prompt length, i.e. the number of additional key-value pairs in each attention computation. In prefix-tuning, for an attention computation in the l-th layer, K_l and V_l are augmented to become

K'_l = [P_l^K ; K_l],   V'_l = [P_l^V ; V_l],

where K'_l, V'_l ∈ R^{(ρ+M)×d} and P_l^K, P_l^V ∈ R^{ρ×d} are the key and value halves of P_l. The overall general prefix, parameterized by θ, is P_θ = (P^E, P^{Dc}, P^{Dm}), where P_θ ∈ R^{ρ×6dL}.
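A minimal sketch of this augmentation with illustrative tensor shapes (this is not the authors' code, and real implementations operate per attention head):

```python
import torch

# The learnt prefix contributes rho extra key-value pairs that are prepended
# along the sequence dimension of every attention computation.
def augment_kv(keys, values, prefix_k, prefix_v):
    """keys, values: (batch, M, d); prefix_k, prefix_v: (rho, d)."""
    batch = keys.size(0)
    pk = prefix_k.unsqueeze(0).expand(batch, -1, -1)   # (batch, rho, d)
    pv = prefix_v.unsqueeze(0).expand(batch, -1, -1)
    keys = torch.cat([pk, keys], dim=1)                # (batch, rho + M, d)
    values = torch.cat([pv, values], dim=1)
    return keys, values

# Example with illustrative sizes (d = 8, M = 5, rho = 3):
k, v = torch.randn(2, 5, 8), torch.randn(2, 5, 8)
pk, pv = torch.randn(3, 8), torch.randn(3, 8)
k_aug, v_aug = augment_kv(k, v, pk, pv)
print(k_aug.shape)   # torch.Size([2, 8, 8])
```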
Control Prefixes Let us consider one attribute with R possible labels, such as the news domain of an article (e.g. sport, technology etc.): C_θ = {C_{θ,1}, …, C_{θ,R}}, where C_{θ,r} ∈ R^{ρ_c×6dL} ∀r ∈ {1, …, R}. C_{θ,r} represents the control prefix learnt for the r-th attribute label, and the parameter ρ_c denotes the control prompt length for this particular attribute. Let A be a function which returns the corresponding control prefix for the attribute label indicated by G. In CONTROL PREFIXES, K_l and V_l are augmented to become

K'_l = [A(G)_l^K ; P_l^K ; K_l],   V'_l = [A(G)_l^V ; P_l^V ; V_l],

where K'_l, V'_l ∈ R^{(ρ_c+ρ+M)×d}.
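Extending the sketch above, the control prefix selected by the guidance label is prepended in front of the general prefix; the shapes and the batched indexing are again illustrative assumptions:

```python
import torch

# The guidance label g selects one of R control prefixes, which is prepended
# before the general prefix and the ordinary keys/values of the layer.
def augment_kv_with_control(keys, values, gen_k, gen_v, ctrl_k, ctrl_v, g):
    """keys/values: (batch, M, d); gen_*: (rho, d); ctrl_*: (R, rho_c, d);
    g: (batch,) integer attribute labels."""
    batch = keys.size(0)
    ck, cv = ctrl_k[g], ctrl_v[g]                      # (batch, rho_c, d)
    gk = gen_k.unsqueeze(0).expand(batch, -1, -1)      # (batch, rho, d)
    gv = gen_v.unsqueeze(0).expand(batch, -1, -1)
    keys = torch.cat([ck, gk, keys], dim=1)            # (batch, rho_c + rho + M, d)
    values = torch.cat([cv, gv, values], dim=1)
    return keys, values

# Example: R = 4 attribute labels, rho_c = 2, rho = 3, M = 5, d = 8.
k, v = torch.randn(2, 5, 8), torch.randn(2, 5, 8)
gk, gv = torch.randn(3, 8), torch.randn(3, 8)
ck, cv = torch.randn(4, 2, 8), torch.randn(4, 2, 8)
g = torch.tensor([0, 3])                               # per-example guidance
print(augment_kv_with_control(k, v, gk, gv, ck, cv, g)[0].shape)  # torch.Size([2, 10, 8])
```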

Shared Re-parameterization
Li and Liang (2021) found that prefix optimization is stabilized by increasing the number of trainable parameters. This is achieved by introducing a feed-forward network to re-parameterize the prefix. Rather than one network, we use three distinct two-layered large feed-forward neural networks, one for each attention class, applied row-wise. For each attention class (E, Dc, Dm), P = MLP(P̃), where P̃ ∈ R^{ρ×d} is smaller than the matrix P ∈ R^{ρ×2dL}, and each MLP has an intermediate dimension k, which we set to 800. The distinct MLPs and each P̃ are parameterized by training parameters θ̃; thus, θ is a function of θ̃ and |θ| < |θ̃|. Once training is complete, the final θ parameters can be saved for use at inference and the re-parameterization parameters dispensed with.
As described for the general prefix, P_θ, each control prefix, C_{θ,r}, comprises three constituents, one for each attention class: C_{θ,r} = (C_r^E, C_r^{Dc}, C_r^{Dm}). The re-parameterization of C_{θ,r} occurs in the same manner as for P_θ, sharing the same MLP^E, MLP^{Dc} and MLP^{Dm}. When using a disjoint set of re-parameterizations for the control prefixes, learning becomes unstable and performance degrades. Recent work by Buhai et al. (2020) shows that over-parameterization can smooth the optimization landscape. With this in mind, the three distinct re-parameterizations compel each prefix element to coordinate control for the particular attention class. For example, the rows of P^E and C_r^E lie in a vector space better coordinated for moderating the processing of the input sequence X than those of P^{Dm} and C_r^{Dm}, due to being formed from the shared mapping MLP^E.
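A minimal sketch of the shared re-parameterization, showing only the encoder self-attention class; the Tanh activation and the initialization scale are assumptions, since the text specifies only two-layer MLPs with intermediate dimension 800.

```python
import torch
from torch import nn

# Illustrative dimensions: hidden size, layers, MLP intermediate dim,
# general / control prompt lengths, and number of attribute labels.
d, L, k = 1024, 12, 800
rho, rho_c, R = 10, 2, 4

# One MLP per attention class; it is shared between the general prefix and
# every control prefix of that class (only the encoder class is shown here).
mlp_enc = nn.Sequential(nn.Linear(d, k), nn.Tanh(), nn.Linear(k, 2 * d * L))

general_small = nn.Parameter(torch.randn(rho, d) * 0.02)        # P~ for class E
control_small = nn.Parameter(torch.randn(R, rho_c, d) * 0.02)   # C~_r for class E

# Applied row-wise: each row of the small matrix maps to 2dL values
# (a key and a value of size d for each of the L layers).
general_prefix = mlp_enc(general_small)       # (rho, 2dL)
control_prefixes = mlp_enc(control_small)     # (R, rho_c, 2dL)
print(general_prefix.shape, control_prefixes.shape)

# After training, the expanded prefixes are saved and the MLPs discarded.
```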

Datasets, Guidance and Metrics
Examples of specific attribute labels for each task are found in the Appendix.

Data-to-text The objective of data-to-text generation is to produce fluent text from structured input, such as a triple set (a set of subject-predicate-objects). Following Li and Liang (2021), we evaluate on the data-to-text datasets DART (Radev et al., 2020) and WebNLG (Gardent et al., 2017). However, we implement prefix-tuning for T5-large rather than GPT-2, as T5-large provides a stronger baseline and enables comparison with state-of-the-art (SOTA) systems. We also report results on E2E Clean (Dušek et al., 2019), a dataset focused on the restaurant domain. We use the official evaluation scripts and report BLEU (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), and TER (Snover et al., 2006) metrics. WebNLG contains triple sets from DBpedia (Auer et al., 2007). The test set is divided into two partitions: "Seen", which contains 10 DBpedia categories present in the training set, and "Unseen", which covers 5 categories never seen during training. These categories, such as Airport or Food, are used as a guidance signal in our experiments (indicated by A1 in Table 1); our approach for unseen categories is discussed in §6.2.
Providing the category explicitly as guidance with CONTROL PREFIXES may enable properties of triples belonging to a specific WebNLG category to be captured more effectively. This intuition is supported by studies showing a clear disparity in the performance of different model types between different categories (Moryossef et al., 2019; Castro Ferreira et al., 2020). DART is an open-domain, multi-source corpus, with six sources: internal and external human annotation of both Wikipedia tables and WikiSQL, as well as the two existing datasets WebNLG and E2E Clean. Radev et al. (2020) showed fine-tuning T5-large on the WebNLG dataset with only the human-annotated portion of DART achieves SOTA performance, whilst using the whole DART dataset is not as effective. Nevertheless, this inspired the idea of using the six DART sub-dataset sources as a controllable attribute, represented by A2 in Table 1. This strategy was inspired by previous work which incorporates auxiliary scaffold tasks (Swayamdipta et al., 2018; Cohan et al., 2019; Cachola et al., 2020).
Simplification We use WikiLarge (Zhang and Lapata, 2017) as the training data and evaluate on two simplification benchmarks: TurkCorpus (Xu et al., 2016) and ASSET (Alva-Manchego et al., 2020). Both benchmarks are composed of the same 2000 validation source and 359 test source sentences. However, the 10 ASSET references per source focus on a more diverse set of rewriting simplifications than the 8 TurkCorpus references per source. Martin et al. (2020) introduced 'BART LARGE with ACCESS', which is a fine-tuned BART LARGE model trained alongside control tokens to condition on four simplification-specific attributes, such as the length compression ratio (the length of the target sequence relative to the source sequence). We use the same controllable attributes in this work to directly compare with Martin et al. (2020) (Table 2). The control ratios are discretized into bins of fixed width 0.05, capped to a maximum ratio of 2. At inference time, once the model has been trained with these oracle controls, the control ratios are set to desired values by tuning on the respective validation set.
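For concreteness, a small sketch of the binning described above; the rounding convention and the character-level length measure are assumptions for illustration only.

```python
# Ratios are placed into fixed-width 0.05 bins and capped at 2.0; the bin label
# then selects the corresponding control prefix.
def ratio_to_bin(ratio, width=0.05, cap=2.0):
    ratio = min(max(ratio, 0.0), cap)
    return round(round(ratio / width) * width, 2)

# Oracle control at training time: the length compression ratio of each pair.
source = "The cat, which had been missing for days, finally returned home."
target = "The missing cat finally came home."
print(ratio_to_bin(len(target) / len(source)))   # 0.55
```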
Summarization As in Li and Liang (2021), we report results on the XSum dataset (Narayan et al., 2018) using BART LARGE. XSum comprises 226,711 British Broadcasting Corporation (BBC) articles coupled with their single-sentence summaries, where each sample corresponds to a unique URL. The URL contains information on whether the sub-directory is from the BBC Sport or BBC News page (A1 in Table 3), and further sub-directory information (A2 in Table 3, where A2 has 40 labels), for example ('sport', 'formula1') or ('news', 'science'). The motivation for using this as guidance is that different sub-directories are likely to share properties relating to how the information is presented; journalists are also usually confined to one domain. We report on the customary ROUGE scores (Lin, 2004).
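A sketch of how the two XSum guidance attributes could be derived from a sample's BBC URL; the exact parsing used by the authors is not specified, so the helper below is purely illustrative.

```python
from urllib.parse import urlparse

# A1 is the top-level news/sport page; A2 is a finer sub-directory label.
def xsum_guidance(url):
    parts = [p for p in urlparse(url).path.split("/") if p]
    a1 = "sport" if parts and parts[0] == "sport" else "news"
    # Second-level label: for sport URLs a sub-directory (e.g. 'formula1');
    # for news URLs the leading word of the article slug (e.g. 'science').
    sub = parts[1].split("-")[0] if len(parts) > 1 else "unknown"
    return a1, (a1, sub)

print(xsum_guidance("https://www.bbc.co.uk/sport/formula1/12345678"))
# ('sport', ('sport', 'formula1'))
```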

Training Details
For the data-to-text datasets, we follow Ribeiro et al. (2020) and linearize the triples, prepending the special tokens <H>, <R>, and <T> before the subject, predicate, and object of an individual triple. We also prepend "translate Graph to English: " to every input (Raffel et al., 2020). Full training and hyperparameter details can be found in Appendix D.
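A minimal sketch of this linearization; the triple shown is a representative WebNLG-style example and the whitespace handling is an assumption.

```python
# Each (subject, predicate, object) triple is flattened with <H>, <R>, <T>
# markers and the task prompt is prepended to the whole input.
def linearize(triples):
    body = " ".join(f"<H> {s} <R> {p} <T> {o}" for s, p, o in triples)
    return "translate Graph to English: " + body

triples = [("Aarhus_Airport", "cityServed", "Aarhus, Denmark")]
print(linearize(triples))
# translate Graph to English: <H> Aarhus_Airport <R> cityServed <T> Aarhus, Denmark
```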

Data-to-Text
Results in Table 1 show that for DART, both CONTROL PREFIXES (A2) and prefix-tuning attain higher performance than the current SOTA, a fine-tuned T5-large (Radev et al., 2020), by 1.29 and 0.54 BLEU points respectively. This indicates that CONTROL PREFIXES can exert control over the frozen T5-large more effectively than prefix-tuning.
The SOTA for WebNLG is a T5-large model fine-tuned on WebNLG and the human-annotated portion of DART (Radev et al., 2020). Compared to this model, CONTROL PREFIXES achieves a 0.83 higher BLEU overall, and 1.33 on the Seen categories. Notably, CONTROL PREFIXES (A1) outperforms CONTROL PREFIXES (A1, A2) on the Seen component of the dataset, but does not generalize as well to the unseen categories. We argue that this illustrates the benefit of using both controllable attributes. The prefix-tuning model with additional DART data, like the SOTA, is trained on only the human-annotated portion and yields a minor performance increase of 0.05 BLEU compared to prefix-tuning trained solely on WebNLG. We believe this indicates that for fine-tuning, training on a complementary type of additional data allows the PLM to maintain more NLU by not over-fitting to a narrow distribution, leading to better LM generalization. In contrast, for prefix-tuning, much of this gain has already been realized by retaining the original frozen parameters.
The SOTA (Harkous et al., 2020) for E2E Clean consists of a fine-tuned GPT-2 with a semantic fidelity classifier trained on additional generated data.CONTROL PREFIXES (A 2 ), which can leverage the heterogeneous DART datasets, outperforms this model in terms of the BLEU score.

Simplification
Table 2 reveals that prefix-tuning BART performs comparably to fine-tuning BART. When comparing our CONTROL PREFIXES to the fine-tuned 'BART LARGE with ACCESS', there is comparable performance in terms of SARI for ASSET, and better FKGL results on ASSET. For text simplification, Martin et al. (2020) indicate that the gains from using the controllable attributes, as assessed by SARI and FKGL, are mostly due to being able to calibrate the length ratio, with validation and test sets being drawn from the same distribution, as opposed to the WikiLarge training distribution. CONTROL PREFIXES also achieves higher SARI and FKGL scores on TurkCorpus compared to the Gold Reference, which evaluates against other human annotators.

Summarization
There is considerable inconsistency regarding author-conducted human evaluation for NLG (van der Lee et al., 2021). Therefore, we opted to submit our CONTROL PREFIXES model outputs to an externally run evaluation framework, GENIE (Khashabi et al., 2021), which provides an unbiased attestation of performance. Their sample size of 300 examples is larger than the 50 or 100 examples previously used for XSum, which is typical of human evaluation experiments (Narayan et al., 2018; Dou et al., 2020). Both human evaluation and automated ROUGE metrics can be seen in Table 3. The confidence intervals indicate that this result is not necessarily definitive, but it also highlights that the quality of generations in this domain is not captured fully by ROUGE. For the datasets considered, the automatic metrics are the least reliable for XSum, as it is the only dataset with a single gold reference.
The results also show that CONTROL PREFIXES performs better than prefix-tuning in terms of ROUGE. We are not able to report the same human-assessment results for prefix-tuning, as each participant of GENIE is limited to one submission and there is no existing result for prefix-tuning.

Visualizing Control Prefixes
Fig. 2 displays t-SNE (Maaten and Hinton, 2008) visualizations of the length compression control prefixes learnt as part of our simplification CONTROL PREFIXES model. We plot only the decoder self-attention constituent of each control prefix (comprising multiple key-value pairs at each layer), as the length ratio directly concerns the target; plots for the encoder and decoder cross-attention constituents can be found in Appendix E. The relationship learnt by the control prefixes is very manifest, aided by the near-uniform distribution of length ratios in the WikiLarge training dataset from 0 to 1.1.

Fig. 2 establishes that for this simplistic attribute, different control prefixes corresponding to similar attribute labels (i.e. varying length ratios for the length attribute) share properties. Interestingly, the decoder cross-attention constituent of the control prefix is not as manifest. We believe this is due to BART LARGE being accustomed to the same cross-attention key-value pairs in each layer.

Table 3: Summarization results on XSum. The human-assessed results are from the GENIE benchmark, where the 95% confidence intervals are computed with bootstrap re-sampling. Note the BART LARGE and PEGASUS fine-tuned results for the human-assessed dimensions are transcribed from Khashabi et al. (2021), whilst the automatic metric results, indicated by *, are from Lewis et al. (2020) and Zhang et al. (2019). Prefix-tuning and CONTROL PREFIXES (A1, A2) use BART LARGE as the fixed LM. A1 refers to the BBC news/sport page attribute and A2 the further sub-directory attribute. We bold the best results of parameter-efficient models in the results tables for ROUGE, with fully fine-tuned models as reference. The public GENIE leaderboard is available at https://leaderboard.allenai.org/genie-xsum/.

Zero-shot Learning
We argue that even for more complicated attributes, such as the WebNLG category attribute, if the attribute labels are semantically similar, the respective control prefixes will similarly assist the general task-specific prefix and the frozen LM during generation. Previous work has discussed the notion of task similarity (Achille et al., 2019) for prompt learning methods (Lester et al., 2021); however, we argue prefixes concerning different labels of one attribute are more likely to overlap in terms of learnable properties than different tasks or whole datasets.
In the case of WebNLG, although no examples of the unseen categories are present during training, a textual label for each category exists. These labels were available to all competition participants. This gives us some prior on the properties of the unseen categories, which we show is enough to successfully zero-shot transfer with control prefixes. For each WebNLG model with the category attribute, we map each category's textual label, including for the unseen categories, to a GloVe embedding (Pennington et al., 2014; Common Crawl, 840B tokens, 2.2M vocab, cased, 300d vectors). Then, for each unseen category, we map to the seen category with the highest cosine similarity in embedding space, and use that control prefix at inference for the corresponding unseen sample. For example, the control prefix for the seen category SportsTeam is used for examples relating to the unseen category Athlete (Appendix H displays model output for WebNLG along with the zero-shot procedure).

Table 4 shows a comparison of using an out-of-vocabulary (OOV) control prefix for each example with an unseen category, and the zero-shot transfer method, for both WebNLG datasets (we also report results on WebNLG+ 2020 (Castro Ferreira et al., 2020), the second official WebNLG competition, in Appendix B). The OOV control prefix is trained on a random 2% of the data for each accumulated batch. These results indicate that zero-shot transfer is more promising than a learned OOV representation. The result fundamentally depends on the WebNLG categories, and on whether similar textual labels pertain to similar triple sets that CONTROL PREFIXES can utilize.
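A sketch of the zero-shot category mapping described above; the GloVe file parsing and the camel-case handling of category labels are illustrative assumptions, not the authors' exact procedure.

```python
import re
import numpy as np

def load_glove(path, dim=300):
    # Each line of the GloVe file is "<token> v1 ... v300".
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            pieces = line.rstrip().split(" ")
            vectors[" ".join(pieces[:-dim])] = np.asarray(pieces[-dim:], dtype=np.float32)
    return vectors

def embed(label, vectors):
    # Split CamelCase labels (e.g. "SportsTeam" -> ["sports", "team"]) and average.
    words = [w.lower() for w in re.findall(r"[A-Z][a-z]*|[a-z]+", label)]
    return np.mean([vectors[w] for w in words if w in vectors], axis=0)

def nearest_seen_category(unseen_label, seen_labels, vectors):
    # Map the unseen label to the seen label with highest cosine similarity.
    u = embed(unseen_label, vectors)
    sims = {}
    for s in seen_labels:
        v = embed(s, vectors)
        sims[s] = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(sims, key=sims.get)

# vectors = load_glove("glove.840B.300d.txt")
# nearest_seen_category("Athlete", ["SportsTeam", "Food", "Airport"], vectors)
# -> an unseen category such as Athlete would reuse the SportsTeam control prefix.
```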

Discussion
We also investigated a simpler architecture, 'prefix-tuning + control tokens', which informs the model of the identical guidance signal as in CONTROL PREFIXES, but with trainable control tokens instead of control prefixes. Appendix F reveals that CONTROL PREFIXES consistently outperforms prefix-tuning + control tokens on the data-to-text and summarization datasets, while the results of both are comparable to the Gold References on the simplification datasets. This indicates that CONTROL PREFIXES is a superior parameter-efficient framework for leveraging additional information, whilst maintaining the fixed-LM property.
The alternative method is less expressive than CONTROL PREFIXES, exerting control only through the embeddings rather than through each layer. CONTROL PREFIXES fundamentally depends on the strength of the guidance signal, and by adding the constraint that attribute information must be available with the dataset, the guidance signal is naturally weaker. However, we show that CONTROL PREFIXES is a powerful general method which can utilize this signal to achieve a modest but consistent improvement across an array of tasks.

Conclusion
We introduce CONTROL PREFIXES, a parameter-efficient controlled generation technique, which integrates a task-specific prompt alongside dynamic prompts to leverage additional input-level information. The method extends prefix-tuning, enabling the model to have finer-grained control over generated text, and assists in maximizing downstream task performance.
We demonstrate that CONTROL PREFIXES outperforms prefix-tuning and prefix-tuning with embedding-level guidance, as well as existing approaches, on an array of natural language generation tasks. Our method attains state-of-the-art results on several data-to-text datasets, including WebNLG. This is despite learning <2% additional parameters relative to the underlying LM parameters (which remain fixed). Additionally, our method holds the highest human evaluation ranking on the external platform GENIE for the summarization dataset XSum.

A Additional Results
Additional results using the official evaluation scripts for the data-to-text datasets are reported in Tables 5, 6 and 7 to supplement the results in Table 1.

B WebNLG+ 2020 Results
As NLG is notoriously challenging to evaluate, this work assesses model performance on five of the eleven datasets comprising GEM (Gehrmann et al., 2021), a benchmark that intends to provide robust datasets and reproducible standards across an array of NLG tasks. The GEM datasets used in this study are DART, E2E Clean, ASSET, TurkCorpus and WebNLG+ 2020. WebNLG+ 2020 is not a component of DART; it was used for the second official WebNLG competition (Castro Ferreira et al., 2020). There are 16 training categories (the 15 categories from WebNLG, but with new examples), alongside 3 unseen categories. Table 8 displays WebNLG+ 2020 results using the same model architectures as used for WebNLG. A similar pattern is revealed, in that CONTROL PREFIXES outperforms prefix-tuning, with CONTROL PREFIXES (A1, A2) as the top-performing model. This illustrates again the benefit of using both controllable attributes.
In the WebNLG and WebNLG+ 2020 training sets, multiple distinct lexicalizations exist for the same tripleset. In our experiments, examples sharing identical tripleset inputs have the same triple order after linearization, to aid comparison with current systems for WebNLG, DART and E2E Clean. Future work would have to assess whether architecture-independent improvements in test-set performance can arise from randomly permuting the order of triples for training-set examples with identical tripleset inputs. The motivation is that this may improve the generalizability of the model, since the model would not learn the order of particular tripleset inputs.

C Prefix-tuning
We make two previously unremarked-upon observations about the benefits conferred by the key-value pair prefix-tuning described in §3.3, compared to prefix-tuning that augments the activations directly (Hu et al., 2021) or prompt-embedding tuning with prompt length ρ. i) The form discussed does not restrict the input length of the base LM. ii) The time complexity at inference is reduced; for example, for a multi-head self-attention computation (M = N), the queries remain of length N while the keys and values grow to N + ρ, so the attention computation scales as O(N(N + ρ)d) rather than O((N + ρ)²d).

D Additional Training Details
All implementations in this study are built on top of the Transformers library (Wolf et al., 2020). As T5 has relative position biases, we set these to zero in all layers for offsets where the key is part of a prefix. For BART LARGE we adapt the original implementation (Li and Liang, 2021). Table 10 displays the hyperparameters used when training the models reported in this paper.
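The sentence about T5's relative position biases admits the following reading, sketched below with illustrative shapes: a prefix key has no natural relative position, so the bias entries for prefix key positions are simply zero. This is our interpretation rather than a verbatim excerpt of the implementation.

```python
import torch

def pad_position_bias(position_bias, rho):
    """position_bias: (batch, heads, N, M) bias for the real tokens; returns a
    (batch, heads, N, rho + M) tensor with zeros for the rho prefix key positions."""
    batch, heads, n, _ = position_bias.shape
    zeros = torch.zeros(batch, heads, n, rho,
                        dtype=position_bias.dtype, device=position_bias.device)
    return torch.cat([zeros, position_bias], dim=-1)

print(pad_position_bias(torch.randn(1, 16, 7, 7), rho=10).shape)
# torch.Size([1, 16, 7, 17])
```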
The general prompt length and each control prompt length are architecture-specific parameters that we choose based on performance on the validation set. We use gradient accumulation across batches to maintain an effective batch size above 64, a linear learning rate scheduler for all models, and beam-search decoding. AdamW (Loshchilov and Hutter, 2017) and AdaFactor (Shazeer and Stern, 2018) were used for optimization. We chose the checkpoint with the highest validation score, using BLEU for data-to-text, SARI for simplification and ROUGE-2 for summarization. For all tasks, we train our models on single Tesla V100-SXM2-16GB machines, with mixed precision for BART LARGE based models (fp16) and full precision for T5-large based models (fp32).
The CONTROL PREFIXES models with the DART sub-dataset source attribute (A2) use DART as additional data and were trained in two stages: i) on DART, ii) solely on the downstream dataset. The WebNLG prefix-tuning model with DART data shown in Table 10 uses only the human-annotated portion of DART. The prefix-tuning models using all of the DART data for WebNLG and E2E Clean were similarly trained in two stages, with identical hyperparameters to the CONTROL PREFIXES models using A2. Training prefix-tuning on all of DART for WebNLG yielded lower performance than with only the human-annotated DART portion as additional data, so it is not reported in Table 1.
Decoding-specific parameters were not tuned; we instead mirrored what the top-performing fine-tuning-based system used for the particular LM and dataset, for example a beam width of 5, as in Ribeiro et al. (2020), for T5-large on all data-to-text datasets.
For XSum the source articles are truncated to 512 BPE tokens.

E Simplification Length Control
Fig. 4 depicts the length compression ratio output distribution on the validation set for CONTROL PREFIXES, where a length control prefix of a specific attribute value (0.25, 0.5, 0.75 or 1.0) is specified. This clearly demonstrates that CONTROL PREFIXES is capable of controlling the target length with respect to the input. Table 11 displays example output generations with each of the 0.25, 0.5, 0.75 and 1.0 values specified.
Fig. 5 is supplementary to §6.1, showing all constituents of the length compression control prefixes for all attribute values. In the WikiLarge training data, there are far fewer training samples where the simplified output is much longer than the complex, original input. This explains why the representations are not as interpretable for values greater than 1.2.

E.1 QuestEval
The Gold Reference results for QuestEval are higher for TurkCorpus compared to ASSET in Table 2 (although QuestEval can take references, its authors maintain that any improvement in correlation with human performance is very minor). We argue this is because the test set gold references are on average 114 characters for TurkCorpus, as opposed to 98 for ASSET. Therefore, the ASSET references contain less information with which to answer the generated queries during QuestEval evaluation, and thus performance is lower. We argue this shows a limitation of using QuestEval as a reference-less metric for simplification: it favours longer generations.

F Prefix-tuning + Control Tokens
We propose another architecture, 'prefix-tuning + control tokens', where all of the original LM parameters, φ, remain fixed, including the embedding matrix. Control has to be exerted through the few control embeddings and prefix-tuning's ability to steer the frozen φ parameters through <2% additional parameters. We use this method to inform the model of the same discrete guidance information as in CONTROL PREFIXES, but with control tokens instead of control prefixes; only the embeddings pertaining to the controllable attributes and the prefix are trained. This alternative method is less expressive than CONTROL PREFIXES, in much the same way as prefix-tuning is more expressive than prompt-embedding tuning. Prefix-tuning + control tokens also does not benefit from the shared re-parameterizations (§3.3) that we argue allow for more effective demarcation of control of the fixed LM in each attention class subspace.
Table 9 reveals that CONTROL PREFIXES outperforms prefix-tuning + control tokens on the data-to-text and summarization datasets, while the results of both are comparable to the Gold References on the simplification datasets. This indicates that CONTROL PREFIXES is better able to integrate and leverage the guidance signal at the input-level, whilst maintaining the fixed-LM property, than prefix-tuning + control tokens.

Figure 6: Prefix-tuning results of a model parameter search on several datasets for the optimal prompt length per dataset. These results are for the metric monitored per task on the respective validation sets indicated in the legend. φ% denotes the % of additional parameters relative to the number of fixed-LM parameters required at inference time. The y-axis is a relative measure: the validation set performance as a % of the maximum attained in the parameter search.

G Varying Prompt Length
Our research is not solely focused on parameter efficiency, but also on the effectiveness of adapting an already parameter-efficient, fixed-LM method (adding <3% additional parameters). The only way to add parameters with prefix-tuning is to increase the prompt length. XSum is the only dataset considered where performance does not plateau when increasing prompt length (a similar observation is described by Hu et al. (2021) when utilizing different forms of prefix-tuning; this is shown in Appendix G); therefore, we ensure CONTROL PREFIXES does not have more parameters than prefix-tuning to ensure a fair comparison.
Fig. 6 illustrates how performance saturation is observed: after a certain prompt length, performance plateaus. Different datasets require varying prompt lengths to attain near-maximum performance in a parameter search over prompt length. For the data-to-text datasets, near-maximum performance (>99% of the maximum validation score in the search) is reached with a prompt length of 1 or 2.

H Qualitative Examples
For data-to-text, Table 13 displays example CONTROL PREFIXES output for WebNLG input belonging to unseen categories, along with the zero-shot procedure. Table 14 depicts example CONTROL PREFIXES (A1, A2) output alongside prefix-tuning model output for WebNLG+ 2020 input. For simplification, Table 12 compares the fixed-LM guided generations of CONTROL PREFIXES to the fine-tuned BART LARGE with ACCESS (Martin et al., 2020). For summarization, Table 15 depicts cherry-picked CONTROL PREFIXES generated summaries for XSum input, alongside T5-large fine-tuned summaries that have higher ROUGE scores. This is to illustrate how CONTROL PREFIXES can achieve higher human assessment through GENIE than top-performing fine-tuned models, whilst attaining lower automatic metric scores.
Figure 1: High-level diagram contrasting prefix-tuning and CONTROL PREFIXES in the single-task setup for a PLM such as BART LARGE. The same single-task batch (examples 1, 2, 3, 4 and 5) is considered for both setups. Left: Prefix-tuning has one general prefix P for all examples. Right: CONTROL PREFIXES utilizes additional attribute information at the input-level, G, in i). This conditional information is used in ii) to dictate which control prefix (C_A, C_B, C_C) to use for a particular example in a batch. This takes advantage of prefix-tuning's capacity to include different prefixes in one forward pass.

Figure 2
Figure 2: t-SNE visualizations for the decoder self-attention constituent of the simplification model's length compression control prefixes. Each circle represents a control prefix corresponding to each length ratio (bins of fixed width 0.05, from 0 to 1.1).

Figure 3 :
Figure 3: t-SNE visualizations for the encoder constituent of control prefixes representing WebNLG categories seen during training. Each circle represents a category seen during training for the CONTROL PREFIXES (A1) model. All 15 categories are seen categories in WebNLG+ 2020, along with the category Company. WebNLG+ 2020 has 3 additional unseen categories to those shown.

Figure 4 :
Figure 4: Histogram illustrating the influence of different target length ratios on the actual length compression ratio output distribution for the simplification CONTROL PREFIXES model on the TurkCorpus validation set.
Figure 5: t-SNE visualizations for constituents of the length compression control prefixes learnt as part of the simplification CONTROL PREFIXES model. Each diagram depicts representations of control prefixes corresponding to each length value (41 bins of fixed width 0.05, from 0 to 2) for a particular attention mechanism. The dimension represented on the x-axis is stretched from a 1:1 to 2:1 aspect ratio for labelling clarity.

Table 1:
Data-to-text results on the GEM (Gehrmann et al., 2021) datasets WebNLG, DART and E2E Clean, reported on the respective official evaluation scripts. φ% denotes the % of additional parameters relative to the number of fixed-LM parameters required at inference time. T5-large fine-tuned results for WebNLG are from Ribeiro et al. (2020) and for DART are from Radev et al. (2020). Note the results in the main body of the GEM paper (Gehrmann et al., 2021) are reported on the validation set, rather than the test set as is done here. Several of the baseline results were only reported to the significant figures shown. A1 signifies models trained with control prefixes for the WebNLG category attribute, and A2 with control prefixes for the DART sub-dataset source attribute. For WebNLG, S, U and A refer to BLEU scores for the Seen, Unseen and All portions of the dataset. The DART results are reported on the official evaluation script for v1.1.1, the same version as the official leaderboard. A CONTROL PREFIXES model attains state-of-the-art results for each dataset.

Table 2:
Results on the ASSET and TurkCorpus test sets. † This model is from Martin et al. (2020), where the authors fine-tuned a BART LARGE model alongside control tokens for the four attributes. The CONTROL PREFIXES model is trained with control prefixes for these same four attributes. Prefix-tuning and CONTROL PREFIXES use BART LARGE as the fixed LM. The * denotes baseline results calculated in this study; the model outputs of Martin et al. (2020) are publicly available. The BART LARGE with ACCESS and CONTROL PREFIXES results are averages over 5 random seeds on the test set. We bold the best results of parameter-efficient models in the results tables, while fully fine-tuned models and human performance are reported for reference.