Domain Incremental Lifelong Learning in an Open World

Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain incremental LL scenarios since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose \textbf{Diana}: a \underline{d}ynam\underline{i}c \underline{a}rchitecture-based lifelo\underline{n}g le\underline{a}rning model that tries to learn a sequence of tasks with a prompt-enhanced language model. Four types of hierarchically organized prompts are used in Diana to capture knowledge from different granularities. Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across input samples to improve the model's generalization performance. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art LL models, especially in handling unseen tasks. We release the code and data at \url{https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana}.


Introduction
An essential ability of humans is to learn new tasks continuously in their lifetime, since our surrounding world is ever evolving (Thrun and Mitchell, 1995). Humans need to learn inputs from unseen new tasks every day. However, neural-network-based NLP models tend to rapidly lose previously acquired knowledge when trained on new tasks. This phenomenon is referred to as catastrophic forgetting (French, 1999), and it is important to equip NLP models with the lifelong learning (LL) ability to alleviate this issue in advanced AI applications.

Figure 1: An overview of Diana. A pre-trained language model is used to learn tasks in different formats with hierarchically organized prompts.
An effective method to build LL models is the architecture-based approach (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017; Wiwatcharakoses and Berrar, 2020), in which task-specific components are used to isolate knowledge for each separate task (Mancini et al., 2018). Recently, to leverage the power of pre-trained language models (PLMs), some architecture-based LL models convert NLP tasks into a unified language modeling (LM) format (Sanh et al., 2021; Xie et al., 2022) and learn these tasks using a PLM. Separate prompts (Qin and Joty, 2022) or adapters (Madotto et al., 2021b) are allocated for different tasks to avoid the catastrophic forgetting issue.
However, despite the reported effectiveness, most of the above models are designed for the task incremental learning scenario, in which task IDs for testing samples are assumed to be available (Wang et al., 2022a,b). This setting limits the application of LL models because practical applications usually follow a more general domain incremental learning scenario (van de Ven et al., 2022), i.e., we cannot access the task IDs of most input samples.
There are generally two approaches to building LL models for domain incremental learning. One is to predict the task ID of each testing sample (Wortsman et al., 2020) and activate specified components based on the prediction (Figure 2a). This scheme achieves high LL performance if the predicted ID is correct (Madotto et al., 2021a). However, these models cannot handle samples from unseen tasks, since no components are designated for these samples and thus no task IDs can be predicted. This hinders the application of LL models because we often encounter samples from unseen tasks in practical situations (Dietterich, 2017).
Another approach to building domain incremental LL models is to organize model components at the instance level, i.e., a pool of fine-grained components is dynamically combined in the forward pass for each input instance (Figure 2b). This approach avoids the trouble of explicitly determining task IDs. However, it usually yields low LL performance because there are no dedicated components for each task to capture task-specific knowledge (Wang et al., 2022a).
In this study, we combine the advantages of the above two approaches and propose Diana: a dynamic architecture-based lifelong learning model. We convert different NLP tasks into a unified LM format and propose to learn these tasks using a prompt-enhanced PLM (Figure 1). Specifically, Diana maintains four types of prompts to capture task knowledge at different granularities: 1. A general prompt P_g is used for all tasks; 2. Format prompts P_f are shared between tasks in a similar format; 3. A task prompt P_t is assigned to each incoming task; 4. A pool of meta prompts P_m is dynamically combined for each input instance. These four types of prompts present a hierarchical structure with decreasing knowledge granularity, i.e., P_g captures global knowledge shared across all tasks, while P_m captures local knowledge that is shared between instances.
Diana can better generalize to unseen tasks while achieving high LL performance since its components are organized at both the task and instance level. Moreover, we maintain key vectors for P_t and P_m to better share task knowledge, and allocate separate task prompts to explicitly model samples from unseen tasks. Extensive experiments on benchmark NLP tasks indicate that Diana outperforms state-of-the-art (SOTA) baselines, especially in handling unseen tasks. Our main contributions are: 1. We propose Diana: a novel architecture-based domain incremental LL model that uses hierarchically organized prompts to capture knowledge at different granularities.
2. We are the first to consider unseen tasks in the testing phase of LL models. Specific prompts are designated in Diana to handle unseen tasks, and prompt keys are built to facilitate the sharing of task knowledge.
3. Extensive experiments show that Diana outperforms SOTA baselines.

Related Work
Experiment settings of LL methods can be generally classified into three scenarios based on whether the task ID is provided for testing samples and whether it must be inferred (van de Ven and Tolias, 2019): task-incremental learning (Mallya and Lazebnik, 2018; Ebrahimi et al., 2020), domain-incremental learning (Pu et al., 2021; Gao et al., 2022), and class-incremental learning (Zhang et al., 2020). In this work, we focus on the domain-incremental learning setting, where the task ID is not provided for each testing sample. One line of methods in this category attempts to detect the task ID of each input sample (Madotto et al., 2021a). However, these methods fail to generalize to unseen tasks (Wang et al., 2022a). Another line of methods tries to build a dynamic architecture for each input sample, for example, by maintaining a pool of prompts that can be dynamically combined (Wang et al., 2022b). However, these methods yield sub-optimal performance since no task-specific parameters are used. Our model Diana is the first attempt to combine the advantages of the two aforementioned types of methods.

Pre-trained LMs are becoming the de facto standard component for NLP models. To encourage knowledge sharing, existing approaches attempt to cast all NLP tasks into a unified text-to-text format (McCann et al., 2019) and learn these tasks by fine-tuning a PLM. A work similar to ours is ProQA (Zhong et al., 2022a), in which different QA tasks are unified and a set of structured prompts is used. However, ProQA only considers two QA tasks and is limited to the task incremental learning scenario, while our model is designed to tackle more general NLP tasks in the more general domain incremental learning scenario.

Task Formulation
In this study, we aim to sequentially learn N tasks T_1, · · · , T_N. Each task T_i is presented in a specific format F_j (such as "Classification" or "Summarization"), and each training sample of T_i is a tuple of a context C, a question Q, and an answer A: (C, Q, A). Note that the format of each task can be easily inferred from the context-question pair (C, Q). Our model g_θ is built to predict A based on C and Q. We also consider a more challenging open domain lifelong learning setting, i.e., the model needs to predict answers for unseen tasks. Therefore, we collect another N′ unseen tasks T_{N+1}, · · · , T_{N+N′} that are only used for testing. We assume that the task identities of inputs are not available in the testing phase.

Framework of Hierarchical Prompts
We follow previous approaches to serialize the context C, question Q, and answer A into text sequences (Khashabi et al., 2020; Zhong et al., 2022a) and use a prompt-enhanced encoder-decoder model g_θ to learn each task T_i in Diana. We use soft prompts (Liu et al., 2021; Lester et al., 2021; Vu et al., 2022) in our study, i.e., each prompt is a sequence of trainable embeddings that are randomly initialized and learned in the training process. For each training sample (C, Q, A) from task T_i, we first construct a prompt P(C, Q) based on (C, Q). Then the encoder takes in the concatenation of P(C, Q), C, and Q, and the decoder predicts A, i.e., A = g_θ([P(C, Q); C; Q]), in which "[;]" denotes the sequence concatenation operation.
Four types of prompts are contained in P(C, Q), i.e., P(C, Q) = [P_g; P_f(F_j); P_t(T_i); P_m(C, Q)] (Figure 2c). Specifically, P_g is a general prompt, P_f(F_j) is a format prompt (where F_j is the format of task T_i), P_t(T_i) is a task prompt, and P_m(C, Q) is a combined meta prompt. These four types of prompts are organized hierarchically so that they are shared by samples at different granularities: 1. General Prompt P_g is shared by all training tasks so that it encodes global task knowledge.
2. Format Prompt P_f(F_j) is shared between tasks in the same format F_j so that it captures format-related knowledge, i.e., knowledge that is shared between tasks in the format F_j.
3. Task Prompt P_t(T_i) is specifically allocated to the task T_i and is only shared by samples from T_i. We use P_t(T_i) to learn task-specific knowledge. Moreover, to explicitly model samples from unseen tasks, we enlarge the set of task prompts with L extra prompts P̃_t(F_1), · · · , P̃_t(F_L), in which each prompt P̃_t(F_j) models the unseen tasks of a particular format F_j.
4. Meta Prompt P_m(C, Q) is a dynamic combination of instance-level prompts. Specifically, we maintain M instance-level meta prompts {P_m^i}_{i=1}^M and dynamically combine these prompts based on (C, Q) to obtain P_m(C, Q). P_m(C, Q) captures the knowledge shared between similar training instances.
We expect these four types of prompts to capture knowledge at different granularities since they are shared in different scopes. Moreover, to facilitate knowledge sharing, we allocate key vectors k_t(T_i) and k_m^j to each task prompt P_t(T_i) and each meta prompt P_m^j, respectively, and build a fixed text encoder h to map a context-question pair (C, Q) to a query vector q = h(C, Q). A two-stage learning process is introduced in Diana to learn these keys and P(C, Q). Specifically, the first stage focuses on learning a representation space for prompt keys so that we can determine the proper prompts to construct P(C, Q). The second stage optimizes the constructed prompt P(C, Q) and the backbone language model. These two stages are detailed in the following sections.
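The prompt-assembly step described above can be sketched as follows. This is a minimal illustrative sketch, not the released implementation: `build_prompt`, `cosine_distance`, and the toy list-valued "prompts" are our assumptions; in Diana the prompts are trainable embedding matrices and h is a fixed T5 encoder.

```python
import math

def cosine_distance(u, v):
    """Cosine distance ||u, v|| = 1 - cos(u, v), as used for prompt keys."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def build_prompt(query, general, format_prompts, task_keys, task_prompts,
                 meta_keys, meta_prompts, task_format, n_meta=2):
    """Assemble P(C, Q) = [P_g; P_f(F_j); P_t(T_i); P_m(C, Q)].

    `query` is q = h(C, Q); the task prompt is the one whose key is nearest
    to q, and P_m(C, Q) concatenates the n_meta (M' in the paper) meta
    prompts whose keys are nearest to q.
    """
    # Task prompt: the task whose prompt key is closest to the query vector.
    task_id = min(task_keys, key=lambda t: cosine_distance(query, task_keys[t]))
    # Meta prompts: indices of the n_meta meta prompt keys closest to the query.
    chosen = sorted(range(len(meta_keys)),
                    key=lambda j: cosine_distance(query, meta_keys[j]))[:n_meta]
    p_m = [tok for j in sorted(chosen) for tok in meta_prompts[j]]
    return general + format_prompts[task_format] + task_prompts[task_id] + p_m
```

In the real model the selected prompt embeddings would be prepended to the input token embeddings before encoding.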

Key Vector Space Learning
We first optimize the key vectors assigned to each task prompt and meta prompt, which are used to construct the prompt P(C, Q) for each input (C, Q). Note that these key vectors are only used to determine the task prompt and meta prompt in P(C, Q), because the general prompt P_g is shared by all tasks in Diana, and the format prompt P_f(F_j) can be determined directly from the format of C and Q.
Task Prompt Keys help to determine the task prompt in P(C, Q). Specifically, for a given input (C, Q), we first calculate its query vector q and then determine the task prompt key k_t(T_i) most similar to q; the corresponding task prompt P_t(T_i) is selected for P(C, Q). Ideally, the key vector k_t(T_i) for a task prompt P_t(T_i) should be located near samples from task T_i and distant from samples of other tasks T_j (j ≠ i). Therefore, when learning each task T_i, we maintain a small memory buffer M for samples from previously learned tasks T_j (j < i), and design an exponential angular triplet loss (Ye et al., 2021) (Eq. 1) to enforce the above property, in which the operator ||·, ·|| determines the distance between two input vectors (here we use the cosine distance) and (C_n, Q_n) is a negative sample extracted from the memory buffer M.

Meta Prompt Keys help to combine the instance-level meta prompts {P_m^i}_{i=1}^M to produce P_m(C, Q). Specifically, for each input (C, Q), we select the M′ meta prompt keys that are closest to its query vector q = h(C, Q). Then P_m(C, Q) is obtained by concatenating these M′ meta prompts. Intuitively, the knowledge associated with (C, Q, A) is distributed over these M′ meta prompts. When learning meta prompt keys, we expect the distribution of these keys to balance two properties: diversity and locality (Figure 3). Specifically, the diversity property aims to distribute the keys over the whole vector space so that every meta prompt can be involved in the training process. The locality property aims to cluster similar meta prompt keys so that the knowledge of each sample can be better shared. For each input C and Q, we propose a loss (Eq. 3) to enforce the above two properties, where S(C, Q) is the index set of the M′ meta prompt keys that are closest to h(C, Q), and η and γ are scalar hyper-parameters for the distance margins. Specifically, the first term in Eq. 3 enforces the locality property by pulling these M′ meta prompt keys toward the query vector. The second term enforces the diversity property by pushing the meta prompt keys away from each other to occupy the whole vector space. Note that Eq. 3 only involves a single query h(C, Q) from the current task. This may limit the learned meta prompt keys since samples from previously learned tasks are not considered. In this study, we extend Eq. 3 to better shape the distribution of meta prompt keys with the help of the memory buffer M, which contains samples from previously learned tasks. Specifically, when learning task T_i, we first calculate query vectors for the samples in M and then group these query vectors into B clusters (we set B = 5 × i in our experiments, where i is the number of received tasks). The centroids of these B clusters are denoted as c_1, · · · , c_B. For each sample (C, Q) from M, a subsequent loss L′_m (Eq. 4) is optimized, where c_k is the centroid of the cluster to which (C, Q) belongs.
The above loss enforces the global diversity by scattering meta prompt keys to each centroid.
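The display forms of the key-learning losses are elided in this text. As a purely hypothetical illustration (the paper's exact losses differ; the margin placement and hinge form below are our assumptions), a locality/diversity objective over meta prompt keys with margins η and γ could look like:

```python
import math

def cosine_distance(u, v):
    """Cosine distance ||u, v|| = 1 - cos(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def meta_key_loss(query, meta_keys, n_selected, eta, gamma):
    """Illustrative (not the paper's exact) locality + diversity objective.

    Locality term: pull the n_selected keys nearest to the query toward it,
    up to a margin eta. Diversity term: push every pair of keys apart until
    their pairwise distance exceeds a margin gamma.
    """
    selected = sorted(range(len(meta_keys)),
                      key=lambda j: cosine_distance(query, meta_keys[j]))[:n_selected]
    locality = sum(max(0.0, cosine_distance(query, meta_keys[j]) - eta)
                   for j in selected)
    diversity = sum(max(0.0, gamma - cosine_distance(meta_keys[i], meta_keys[j]))
                    for i in range(len(meta_keys))
                    for j in range(i + 1, len(meta_keys)))
    return locality + diversity
```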

Model Training
Scheduled Sampling of Task Prompts When training Diana, the task ID of each sample (C, Q) is given, so we can directly use the task prompt P_t(T_i). However, naively using ground-truth task IDs leads to an exposure bias issue, i.e., task IDs inferred in testing may not always be correct.
In this study, we introduce a scheduled sampling process to tackle the exposure bias issue. Specifically, for a given sample (C, Q, A) in the k-th training step, we toss a coin and use the ground-truth task ID with probability ϵ_k, or use the task ID inferred from task prompt keys with probability 1 − ϵ_k (Bengio et al., 2015). Note that when we start to learn each task, the prompt keys are not yet well optimized, and thus the selected task ID is not accurate. Therefore, we set the value of ϵ_k to favor the ground-truth task ID at the beginning (i.e., when k is small) and gradually switch to the inferred task ID as training proceeds (i.e., when k is large), i.e., a linear decrement of ϵ_k is scheduled (Eq. 5), in which α and β are scalar hyper-parameters. Note that LL models may encounter another source of exposure bias since we may receive inputs from unseen tasks in the testing phase. In this study, we use the L extra prompts P̃_t(F_1), · · · , P̃_t(F_L) to explicitly model unseen tasks. Specifically, for each training sample (C, Q, A), we first determine its task format F_j based on (C, Q), and allocate a small probability of using P̃_t(F_j) as its task prompt in P(C, Q). In this way, we can capture general knowledge about all tasks of a given format in P̃_t(F_j) and expect this knowledge to facilitate handling unseen tasks.
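Since Eq. 5 itself is elided here, the sketch below assumes the simplest linear decrement ϵ_k = max(0, α − βk); the function name and the clipping at zero are our assumptions, with α and β matching the values reported in the implementation details:

```python
import random

def choose_task_id(step, gold_id, predicted_id, alpha=0.9, beta=3e-4, rng=random):
    """Scheduled sampling for task prompts.

    With probability eps_k the ground-truth task ID is used, otherwise the
    ID inferred from task prompt keys. A linear decrement is assumed:
    eps_k = max(0, alpha - beta * k), so training starts with mostly gold
    IDs and gradually switches to inferred ones.
    """
    eps = max(0.0, alpha - beta * step)
    return gold_id if rng.random() < eps else predicted_id
```

With α = 0.9 and β = 3e-4, the schedule relies purely on inferred IDs after 3000 steps.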
Train with LM Loss For each training sample (C, Q, A), we first construct the prompt P(C, Q) using the approaches introduced above, and then optimize P(C, Q) together with the encoder-decoder model g_θ using an LM loss. The overall loss that we optimize for Diana combines the LM loss with the prompt key losses introduced above. After learning each task T_i, we select a small number of samples from T_i based on the query vector of each sample to update the memory M. This selection process aims to maintain diverse samples in M. More details are given in Appendix B.
The overall training process is summarized in Algorithm 1.

Model Inference
When testing, we determine the prompt P(C, Q) for each input context C and question Q, and use the learned model g_θ to predict the answer A.
Adaptive Decision Boundaries (ADB) are used to select proper task prompts in the testing phase. Specifically, for each task T_i, a scalar boundary δ_i is constructed following the approach proposed by Zhang et al. (2021). An input (C, Q) is regarded as a sample from unseen tasks if its query vector h(C, Q) falls outside the boundary of every seen task. For samples from unseen tasks, we use the prompt P̃_t(F_j) as the task prompt in P(C, Q), where F_j is the format of (C, Q).
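A minimal sketch of this decision rule, assuming cosine distance between the query vector and task prompt keys; `detect_task` and the toy data are illustrative, and the learned boundaries δ_i are passed in as plain numbers:

```python
import math

def cosine_distance(u, v):
    """Cosine distance ||u, v|| = 1 - cos(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def detect_task(query, task_keys, boundaries):
    """Return the matched task ID, or None for an unseen-task sample.

    A sample is treated as unseen if its query vector falls outside the
    decision boundary delta_i of every seen task T_i; otherwise the
    nearest in-boundary task is returned.
    """
    best, best_d = None, float("inf")
    for tid, key in task_keys.items():
        d = cosine_distance(query, key)
        if d <= boundaries[tid] and d < best_d:
            best, best_d = tid, d
    return best
```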
Answer Prediction is performed with a greedy decoding process.

Experiments

Datasets
We use two sets of tasks to evaluate Diana: 1. decaNLP tasks: We follow Sun et al. (2019a) to select 5 tasks from decaNLP (McCann et al., 2018) to train Diana. These tasks cover 3 different formats: Span Extraction, Sequence Generation, and Text Classification. We also collect N′ = 3 additional tasks from decaNLP, one for each of these 3 formats, to serve as unseen tasks in the testing phase, i.e., our model is trained on N = 5 seen tasks and tested on 8 tasks; 2. QA tasks: The second set focuses on question answering (QA) benchmarks. Specifically, we use 8 QA datasets covering 3 QA formats, i.e., Extractive QA, Abstractive QA, and Multiple-Choice QA, to train Diana. We also collect N′ = 3 additional QA datasets, one for each of these three formats, as unseen tasks, i.e., our model is trained on N = 8 seen tasks and tested on 11 tasks.
Note that task IDs are not available for any testing samples in our experiments. See Appendices C and J for more details of our dataset settings.

Evaluation Metrics
Individual tasks from the above two task sets are evaluated following McCann et al. (2018) and Zhong et al. (2022a), respectively (see Appendix C). To evaluate the LL performance of Diana, we build a performance matrix R ∈ R^{N×(N+N′)}, where R_{i,j} is the model performance on task T_j after learning task T_i. The following LL metrics are computed: 1. Average Performance A_N and A_{N′} are defined as the average performance of the final model on the N seen tasks and the N′ unseen tasks, respectively. 2. Average Forget F_N is defined as the average performance decrease of each task after it is learned. In our experiments, we perform five runs with different random seeds and task orders. All reported metric scores are averages over these five runs. Ideally, we expect a strong LL model to yield high A_N and A_{N′} scores and low F_N scores.
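The metric equations are elided in this text; the sketch below assumes the standard definitions (final-row averages for A_N and A_{N′}, and the mean drop from the just-after-learning diagonal for F_N), which may differ in detail from the paper's exact formulas:

```python
def ll_metrics(R, n_seen):
    """Lifelong-learning metrics from a performance matrix R.

    R[i][j] is the performance on task j after learning task i (rows: the
    n_seen learned tasks; extra columns: unseen tasks). Assumed standard
    definitions:
      A_N  : mean final performance over the n_seen seen tasks,
      A_N' : mean final performance over the unseen tasks,
      F_N  : mean drop from just-after-learning (R[j][j]) to final performance.
    """
    final = R[n_seen - 1]                      # row after the last task is learned
    a_seen = sum(final[:n_seen]) / n_seen
    unseen = final[n_seen:]
    a_unseen = sum(unseen) / len(unseen) if unseen else 0.0
    forget = sum(R[j][j] - final[j] for j in range(n_seen - 1)) / (n_seen - 1)
    return a_seen, a_unseen, forget
```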

Implementation Details
We use T5-base (Raffel et al., 2020) to initialize our encoder-decoder model, and set the lengths of the soft prompts P_g, P_f, P_t, and P_m to 20, 40, 40, and 20, respectively. We maintain a total of M = 30 meta prompts, and for each sample (C, Q) we choose M′ = 5 meta prompts to construct P_m(C, Q). We use the AdamW (Loshchilov and Hutter, 2017) optimizer with a learning rate of 1e-4 and a batch size of 64. Each task is trained for five epochs. We set η = 0.15 and γ = 0.3 in Eq. 3, and α = 0.9 and β = 3e−4 in Eq. 5. We maintain 50 samples from each learned task in the memory M. All experiments are performed on 4 V100 GPUs, and the computational cost of our model is analyzed in Appendix G. See more details in Appendix A.

Baselines
We use the following competitive baselines covering all three types of LL models: 1. Regularization-based methods: EWC (Kirkpatrick et al., 2017) adopts the elastic weight consolidation approach to add regularization on parameter changes; FLCB (Gao et al., 2022). We combine ProQA and ER to implement a stronger baseline, ProQA+ER, in which samples from previous tasks are replayed for the ProQA model, and we also implement a variant of Diana without the memory buffer, Diana w/o M. We further report the performance of sequentially fine-tuning the LL model on all tasks (Finetune) and of multi-task learning (Multitask). Note that the performance of Multitask is generally regarded as the upper bound for LL models when only seen tasks are considered.
All the above baselines are implemented following the same settings as our model, including the same backbone PLM, prompt size, and memory size used for replay. Note that for the ProQA baseline, we follow its original setting and provide task IDs for testing samples during evaluation.

Experiment Results
Results on Seen Tasks Table 1 shows the results on seen tasks from our two task sets. It can be seen that Diana outperforms all competitive baselines. Specifically, in the more general domain incremental learning scenario, i.e., when task IDs are unavailable in testing, Diana outperforms the best-performing baseline AFPER by a large margin. On QA tasks, Diana achieves a 6.15% relative improvement on the A_N score and a 27.26% relative decrease on the F_N score. A similar trend is observed on decaNLP tasks. This means that Diana obtains higher performance with less forgetting in the LL process compared with other baselines.
We can also observe that: (1) Diana even outperforms the ProQA+ER baseline, which leaks task IDs in testing. This proves the superiority of our model design. (2) When task IDs are unavailable, Diana w/o M outperforms all baselines that do not use the memory buffer. This demonstrates that Diana's hierarchical prompts help to improve the LL performance even without the memory buffer.
Results on Unseen Tasks Table 2 shows the results on unseen tasks from our two task sets. Note that we cannot compute the average forget score for unseen tasks since these tasks are never learned. Diana yields the best performance in all settings. It achieves relative improvements of 9.49% and 11.04% on the A_{N′} score compared with the best baseline DER++ on these two task sets.
We can also observe that: (1) When M is unavailable, models that share knowledge through fine-grained components (i.e., Diana and L2P) generally obtain high performance, and our model, which allocates extra prompts for unseen tasks, achieves the best performance. This validates our approach of using hierarchical prompts to explicitly model unseen tasks. (2) It is interesting to see that Diana even outperforms Multitask, which is usually regarded as the upper bound of traditional LL models when only seen tasks are considered. This indicates that traditional LL models have limited generalization ability to unseen tasks, and it also proves that our model is effective in modeling unseen tasks.
See Appendix D for detailed experimental results of all tasks.

Ablation Studies
We conduct ablation studies on different components of Diana. Specifically, three types of variants are implemented: 1. Each of the four prompt types is ablated: w/o general prompt, w/o format prompt, w/o task prompt, w/o meta prompt.
2. Schemes to enhance task prompts are ablated: w/o Sched. Sampling removes the scheduled sampling scheme and only uses ground-truth task IDs in training; w/o G.T. Identity is similar to the above variant but only uses predicted task IDs in training; w/o Neg. Samples only uses positive samples to train task prompt keys, i.e., the second term in Eq. 1 is removed; w/o ADB uses fixed decision boundaries instead of ADBs to detect unseen tasks.
3. Schemes to enhance meta prompts are ablated: w/o Sample Dive. does not enforce the diversity property of the meta prompt keys, i.e., the second term in Eq. 3 is removed; w/o Memory Dive. does not use samples from previous tasks to enhance the diversity property, i.e., the loss L′_m (Eq. 4) is removed; w/o Loc. does not enforce the locality property of the meta prompt keys, i.e., the first term in Eq. 3 is removed; w/o Cluster does not cluster samples in M, i.e., c_k in Eq. 4 is replaced with the query vector of each sample from M.
Table 3 shows the performance of the above variants on QA tasks. It can be observed that Diana outperforms all the above variants. We can also see that "w/o Meta Prompt" lowers the LL performance by a large margin. More analysis of task ID detectors can be found in Appendix E.

Distribution of Meta Prompt Keys
We also analyze the distribution of the meta prompt keys K = {k_m^j}_{j=1}^M constructed in Diana, which are expected to balance the locality and diversity properties. Specifically, we introduce two metrics to quantify these two properties. For the diversity property, we follow Mansoury et al. (2020) and measure whether the meta prompt keys cover the whole vector space, where N_Z(k_m^j, M) represents the set of the top-Z nearest samples in M around k_m^j, and |·| returns the sample count of a set. High diversity scores are obtained if we can scatter meta prompt keys near every query vector from M. For the locality property, we follow Scellato et al. (2010) and measure whether there are keys clustered around each query vector q in M. High locality scores are obtained if the meta prompt keys in K are tightly clustered.
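The two metric formulas are elided here; as a rough, assumed instantiation only (coverage of memory queries by the keys' top-Z neighborhoods for diversity, and average similarity between each query and its nearest keys for locality; both function names and forms are ours):

```python
import math

def cosine_distance(u, v):
    """Cosine distance ||u, v|| = 1 - cos(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def diversity_score(meta_keys, memory_queries, z):
    """Illustrative coverage-style diversity: fraction of memory queries
    appearing in the top-z neighborhood N_Z of at least one key."""
    covered = set()
    for k in meta_keys:
        order = sorted(range(len(memory_queries)),
                       key=lambda i: cosine_distance(k, memory_queries[i]))
        covered.update(order[:z])
    return len(covered) / len(memory_queries)

def locality_score(meta_keys, memory_queries, z):
    """Illustrative locality: average cosine similarity between each query
    and its z nearest meta prompt keys (higher = tighter clustering)."""
    total = 0.0
    for q in memory_queries:
        nearest = sorted(cosine_distance(q, k) for k in meta_keys)[:z]
        total += sum(1.0 - d for d in nearest) / z
    return total / len(memory_queries)
```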
On the QA tasks, we compare the above two metrics between Diana and our ablation variants for meta prompts under different values of Z.As can be seen from Table 4, the strategies we introduced in Diana (Section 3.3) help to enforce the locality and diversity properties of meta prompt keys.

Conclusion
We propose Diana, a novel LL model for the domain incremental learning scenario. Diana converts different NLP tasks into a unified sequence generation format and uses a prompt-enhanced PLM to learn these tasks. We introduce four types of hierarchically organized prompts in Diana to capture knowledge at different granularities. These prompts are shared between different scopes of samples and are dynamically combined based on a set of key vectors. The space of key vectors is learned with several distance-based regularization terms. Dedicated components are also allocated in Diana to model samples from unseen tasks. Experiments and empirical analysis on two sets of tasks show that Diana outperforms SOTA LL models, especially in handling samples from unseen tasks.

Limitations
One major limitation of this study is its input modality. Specifically, our model is limited to textual inputs and ignores other modalities (e.g., vision and audio). Open and domain incremental lifelong learning across modalities is more realistic and challenging. Fortunately, we can obtain robust features of different modalities via multi-modal pre-training models (Xu et al., 2021; Huo et al., 2021). In future work, we will try to tackle multimodal tasks in an open (including out-of-distribution data (Lang et al., 2022, 2023a,b)) and domain incremental lifelong learning scenario with better approaches.

Ethics Statement
This work does not raise any direct ethical issues. In the proposed work, we seek to develop a model for domain incremental lifelong learning in an open world, and we believe this work leads to intellectual merits that benefit from a realistic and efficient lifelong learning model. All experiments are conducted on open datasets.

A More Implementation Details
We use T5-base (Raffel et al., 2020) to initialize our encoder-decoder model (12 layers, 768-dimensional hidden size, and 12 attention heads), and set the lengths of the soft prompts P_g, P_f, P_t, and P_m to 20, 40, 40, and 20, respectively. We use a fixed T5-base encoder with an average pooling layer to obtain the query vector. We maintain a pool of M = 30 meta prompts, and for each sample (C, Q) we choose M′ = 5 meta prompts to construct P_m(C, Q). We use the AdamW (Loshchilov and Hutter, 2017) optimizer for training. All hyper-parameters are tuned according to the average score on the validation sets of NarQA, RACE, OBQA, SIQA, and Dream. We tried epoch numbers of {2, 3, 4, 5, 6, 7, 8} and learning rates of {1e−5, 5e−5, 1e−4, 5e−4, 1e−3}. We finally set the learning rate to 1e-4 and the number of training epochs to 5. We set η = 0.15 and γ = 0.3 in Eq. 3, and α = 0.9 and β = 3e−4 in Eq. 5. For η and γ, we perform a grid search between 0 and 0.5 with an interval of 0.05. For α and β, α is searched among {0.9, 0.7, 0.5}, while β is searched among {1e−5, 3e−5, 1e−4, 3e−4, 1e−3}. All experiments are performed on 4 V100 GPUs (32GB). The batch size is set to 64. For each set of tasks, we perform 5 runs with different task orders by setting the random seed to {42, 43, 44, 45, 46}, respectively, and report the average score of each method. Note that we only use the random seed 42 for tuning hyper-parameters.
To train the extra task prompts {P̃_t(F_1), · · · , P̃_t(F_L)} for unseen tasks, we allocate a small probability ω = 5% for each training sample (C, Q, A) to use P̃_t(F_j) as its task prompt in P(C, Q), where F_j is the task format of (C, Q, A). To implement the variant "w/o ADB" for the ablation study, we use a fixed decision boundary instead of ADB: if the distance ||h(C, Q), k_t(T_i)|| > 0.35 for every task T_i, we regard the sample as coming from unseen tasks.
The adaptive decision boundary for each task is determined following the approach proposed by Zhang et al. (2021). We use the AdamW optimizer with a learning rate of 0.02 to learn each decision boundary. To obtain the ROUGE-L score, we use the NLTK package for sentence tokenization and the Python rouge-score package for evaluation.

B Memory Update
After learning task T_i, we select E diverse samples (we set E = 50 in our experiments) from T_i to update the memory M based on the query vector of each sample. Specifically, our selection criteria are built on the distances between prompt keys and query vectors. For each meta prompt key k_m^j (j = 1, · · · , M), we select the top-⌈E/M⌉ samples (⌈·⌉ is the ceiling function) whose query vectors are closest to k_m^j. After accumulating the M⌈E/M⌉ memory candidates selected by the M meta prompt keys, we rank these samples by their distance to the corresponding meta prompt keys and choose the top-E samples with the smallest distances to be added to M. In this way, the memory M covers the whole space of prompt keys.
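This selection procedure can be sketched as follows. A simplified sketch only: samples and queries are parallel lists, distances are cosine, and the helper names are ours.

```python
import math

def cosine_distance(u, v):
    """Cosine distance ||u, v|| = 1 - cos(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def update_memory(meta_keys, samples, queries, e):
    """Select up to e diverse samples for the memory buffer M.

    For each of the M meta prompt keys, take the ceil(e / M) samples whose
    query vectors are nearest to that key, then keep the e candidates with
    the smallest key-sample distance overall.
    """
    m = len(meta_keys)
    per_key = math.ceil(e / m)
    candidates = {}  # sample index -> smallest distance to any selecting key
    for k in meta_keys:
        order = sorted(range(len(samples)),
                       key=lambda i: cosine_distance(k, queries[i]))
        for i in order[:per_key]:
            d = cosine_distance(k, queries[i])
            candidates[i] = min(d, candidates.get(i, float("inf")))
    ranked = sorted(candidates, key=lambda i: candidates[i])
    return [samples[i] for i in ranked[:e]]
```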
Note that the memory buffer M is optional in Diana. Without M, the loss in Eq. 4 is not optimized, and the second term in Eq. 1 is removed.

D Detailed Experimental Results
We provide the detailed performance of Diana on each single task compared with competitive baselines. The results on the five seen tasks of the decaNLP task set and the eight seen tasks of the QA task set are shown in Table 6 and Table 7, respectively. The results on unseen tasks for the decaNLP task set and the QA task set are shown in Table 8 and Table 9.

E More Analysis of Task Identity Detection Performance
Architecture-based LL models need to detect the task identities of input samples when these identities are unavailable in the testing phase. To verify the performance of the task identity detector implemented in Diana, we compare our approach with other task identity detectors: (1) The perplexity-based detector implemented in the baseline "AdapterCL" determines task identities based on the perplexity of the PLM when different adapter modules are activated.
(2) Distance-based detector implemented in our variant "w/o Neg.Samples" determines the task identity based on the distance between each key and query vectors.(3) Advanced distance-based detector implemented in our variant "w/o ADB" utilizes negative samples based on the above detector.Note that we do not apply ADB in the above two distance-based detectors.
The above approaches are trained and evaluated on the QA tasks under two scenarios: (1) Closed-world: detectors are only required to detect samples from seen tasks. Note that in this setting, the advanced distance-based detector used in "w/o ADB" is the same as the task identity detector implemented in Diana. (2) Open-world: detectors are required to handle unseen task samples as well. When tested in the open-world scenario, the two distance-based detectors adopt a fixed decision boundary of 0.35 (see Appendix A), and the perplexity-based detector adopts a perplexity threshold of 4, i.e., samples with a perplexity score above 4 are regarded as unseen task samples. This perplexity threshold is selected based on the model performance on the validation set.
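A distance-based detector with a fixed open-world boundary, as described above, reduces to a nearest-key lookup with a rejection threshold. A minimal sketch (function and argument names are assumptions):

```python
import numpy as np

def detect_task(query, task_keys, boundary=0.35):
    """Distance-based task identity detection with a fixed boundary (sketch).

    Returns the index of the nearest task prompt key, or -1 ("unseen task")
    when the query falls outside every key's decision boundary.
    """
    dists = np.linalg.norm(task_keys - query, axis=-1)
    nearest = int(np.argmin(dists))
    return nearest if dists[nearest] <= boundary else -1
```

ADB replaces the single fixed `boundary` with a learned, per-task radius, which is what allows Diana's detector to handle seen and unseen tasks with task-specific thresholds.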
We report the task identity detection accuracy and Macro F1 scores for seen and unseen samples separately in Table 10. We can observe that: (1) the task identity detector used in Diana achieves the best performance in both scenarios, which proves the effectiveness of our task prompt keys in detecting task identities; (2) the negative samples used in the advanced distance-based detector significantly improve the task identity detection performance on seen tasks; (3) ADB is effective in improving the task identity detection performance on unseen tasks.

F More Analysis of Scheduled Sampling
We perform a more detailed analysis of the scheduled sampling scheme introduced in Diana. Specifically, in the ablation variant "w/o G.T. Identity", the model only uses predicted task identities in training. This scheme helps to alleviate the discrepancy between training and testing at the cost of the model's convergence speed. In the ablation variant "w/o Sched. Sampling", the model only uses ground truth task identities in the training process. This scheme leads to a discrepancy between training and testing. Both schemes underperform our model Diana.
In this section, we analyze the task identity detection accuracy yielded by the above schemes in Figure 4, which tracks the accuracy when learning the last task T_N in the input task sequence of the QA task set. We can observe that the task identity detection accuracy achieved by "w/o G.T. Identity" is extremely low in earlier iterations, which hinders task prompts from sharing task-specific knowledge in the early training stage. The scheduled sampling process introduced in Diana effectively compromises between detecting correct task identities and alleviating the train-test discrepancy, and thus results in the best LL performance among these variants. Note that the task identity detection accuracy of "w/o Sched. Sampling" is almost zero in the first 1,000 iterations when learning task T_N. This is because the task prompt keys for the previous N − 1 tasks are already well learned, while the randomly initialized prompt key for task T_N needs to be pulled into the query vector space before it becomes functional.
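Scheduled sampling interpolates between the two ablation variants: early in training the ground truth identity is used, and the model's own prediction is used increasingly often as training proceeds. A minimal sketch, assuming a linear schedule (the actual schedule used in Diana may differ):

```python
import random

def choose_task_identity(gold_id, predicted_id, step, total_steps):
    """Scheduled sampling between gold and predicted task identities (sketch).

    The probability of trusting the model's own prediction grows linearly
    with the training step; the linear decay is an assumption.
    """
    p_predicted = min(1.0, step / total_steps)
    return predicted_id if random.random() < p_predicted else gold_id
```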

G More Analysis of Computational Cost
We analyze the computational cost of Diana when learning the QA tasks, including the number of tunable parameters, the time used for training and testing, and the size of the memory retained from previous tasks. As indicated in Table 11, Diana does not introduce much computational overhead.

H Effect of PLM Size
We evaluate Diana and the best-performing baseline DER++ with different-sized PLMs on the QA datasets. As shown in Table 12, Diana obtains better performance with larger PLMs and consistently outperforms the baseline.

I Analysis of Training Method
During training, we follow a full-tuning scheme that updates the parameters of the backbone language model (T5) along with the prompts. We also investigate the performance of prompt tuning, which freezes the backbone language model and only updates the prompts. As indicated in Table 13, prompt tuning dramatically degrades the performance of Diana.
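The difference between the two schemes amounts to which parameter groups receive gradients. A minimal sketch with PyTorch (`prompt_params` is an assumed name for Diana's prompt embeddings; the optimizer settings are illustrative):

```python
import torch

def configure_optimizer(model, prompt_params, prompt_tuning=False, lr=1e-4):
    """Build an optimizer for full tuning or prompt tuning (sketch)."""
    if prompt_tuning:
        # freeze the backbone; only the prompts receive gradients
        for p in model.parameters():
            p.requires_grad = False
        trainable = list(prompt_params)
    else:
        # full tuning: backbone and prompts are updated together
        trainable = list(model.parameters()) + list(prompt_params)
    return torch.optim.AdamW(trainable, lr=lr)
```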

J Cases
We list samples for the tasks we model from the decaNLP task set and the QA task set in Table 14 and Table 15, respectively.

K Training Process
Details about the training process of Diana are shown in Algorithm 1.
Table 14: Samples extracted from different decaNLP tasks. Each task contains a context, a question, and an answer. Note that SQuAD is in the QA task set as well.

Figure 2 :
Figure 2: Different prompt organization schemes. (a) Each task is assigned a separate prompt, and the prompt closest to the query vector is activated. (b) A pool of prompts is maintained, and the top-M′ prompts closest to the query vector are activated and combined. (c) Four kinds of prompts are hierarchically organized and combined based on the task format and the distances between the query vector and prompt keys.

Figure 3 :
Figure 3: Illustration of the diversity and locality properties. (a) The diversity property distributes key vectors across the whole space. (b) The locality property clusters similar keys to facilitate knowledge sharing. (c) Diana aims to achieve a balance between diversity and locality.

Figure 4 :
Figure 4: The task identity detection accuracy for samples from the last task T_N when learning T_N of the QA task set.

Table 1 :
Model performance on seen tasks. Best results (except the upper bound Multitask) are bolded. Our model Diana significantly outperforms other baselines on all metrics with p-value < 0.05 (t-test).
[…] uses knowledge from previous tasks to guide future task learning; 2. Rehearsal-based methods: ER (Chaudhry et al., 2019b) replays memory samples from previous tasks to consolidate learned knowledge; DER++ (Buzzega et al., 2020) augments ER with an L2 loss on the soft labels; AFPER (Mi et al., 2020) combines ER with an adaptive elastic weight consolidation mechanism; 3. Architecture-based methods: AdapterCL (Madotto et al., 2021a) allocates separate adapters for different tasks; L2P (Wang et al., 2022b) attaches a group of prompts to a pre-trained model to share fine-grained knowledge; DualPrompt (Wang et al., 2022a) uses different prompts to encode task-invariant and task-specific knowledge; ProQA (Zhong et al., 2022a) uses a unified structural prompt to implement LL models. Note that ProQA is designed for task incremental learning, which requires access to task IDs in the testing phase.

Table 2 :
Model performance on unseen tasks. Best results are bolded. Diana significantly outperforms other baselines on all metrics with p-value < 0.05 (t-test).

Table 3 :
Ablation studies of model components and training strategies on QA tasks. Each result is an average of 5 random runs.

Table 4 :
Quantitative analysis of the locality and diversity for meta prompt keys on QA tasks.

Table 5 :
Dataset Statistics of the decaNLP task set and the QA task set.

Table 6 :
Model performance on seen tasks in decaNLP. Best results (except the upper bound Multitask) are bolded. Our model Diana significantly outperforms other baselines on all metrics with p-value < 0.05 (t-test).

Table 7 :
Model performance on seen QA tasks. Best results (except the upper bound Multitask) are bolded. Our model Diana significantly outperforms other baselines on all metrics with p-value < 0.05 (t-test).

Table 10 :
Task identity detection performance of different models under the QA tasks.


Table 11 :
Computational cost of Diana and baselines for the QA task set. "Train Time" is the average time cost per batch. "Test Time" is the total time cost to evaluate all 11 tasks. Both train and test times are in seconds.

Table 12 :
Performance with different sized PLMs on QA tasks.

Table 13 :
Performance with different training methods on QA tasks.