Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation

A critical challenge faced by supervised word sense disambiguation (WSD) is the lack of large annotated datasets with sufficient coverage of words in their diversity of senses. This inspired recent research on few-shot WSD using meta-learning. While such work has successfully applied meta-learning to learn new word senses from very few examples, its performance still lags behind its fully-supervised counterpart. Aiming to further close this gap, we propose a model of semantic memory for WSD in a meta-learning setting. Semantic memory encapsulates prior experiences seen throughout the lifetime of the model, which aids better generalization in limited data settings. Our model is based on hierarchical variational inference and incorporates an adaptive memory update rule via a hypernetwork. We show our model advances the state of the art in few-shot WSD, supports effective learning in extremely data scarce (e.g. one-shot) scenarios and produces meaning prototypes that capture similar senses of distinct words.


Introduction
Disambiguating word meaning in context is at the heart of any natural language understanding task or application, whether it is performed explicitly or implicitly. Traditionally, word sense disambiguation (WSD) has been defined as the task of explicitly labeling word usages in context with sense labels from a pre-defined sense inventory. The majority of approaches to WSD rely on (semi-)supervised learning (Yuan et al., 2016; Raganato et al., 2017a,b; Hadiwinoto et al., 2019; Huang et al., 2019; Scarlini et al., 2020; Bevilacqua and Navigli, 2020) and make use of training corpora manually annotated for word senses. Typically, these methods require a fairly large number of annotated training examples per word. This problem is exacerbated by the dramatic imbalances in sense frequencies, which further increase the need for annotation to capture a diversity of senses and to obtain sufficient training data for rare senses.
This motivated recent research on few-shot WSD, where the objective of the model is to learn new, previously unseen word senses from only a small number of examples. Holla et al. (2020a) presented a meta-learning approach to few-shot WSD, as well as a benchmark for this task. Meta-learning makes use of an episodic training regime, where a model is trained on a collection of diverse few-shot tasks and is explicitly optimized to perform well when learning from a small number of examples per task (Snell et al., 2017; Finn et al., 2017; Triantafillou et al., 2020). Holla et al. (2020a) have shown that meta-learning can be successfully applied to learn new word senses from as few as one example per sense. Yet, the overall model performance in settings where data is highly limited (e.g. one- or two-shot learning) still lags behind that of fully supervised models.
In the meantime, machine learning research demonstrated the advantages of a memory component for meta-learning in limited data settings (Santoro et al., 2016a; Munkhdalai and Yu, 2017a; Munkhdalai et al., 2018; Zhen et al., 2020). The memory stores general knowledge acquired in learning related tasks, which facilitates the acquisition of new concepts and recognition of previously unseen classes with limited labeled data (Zhen et al., 2020). Inspired by these advances, we introduce the first model of semantic memory for WSD in a meta-learning setting. In meta-learning, prototypes are embeddings around which other data points of the same class are clustered (Snell et al., 2017). Our semantic memory stores prototypical representations of word senses seen during training, generalizing over the contexts in which they are used. This rich contextual information aids in learning new senses of previously unseen words that appear in similar contexts, from very few examples.
The design of our prototypical representation of word sense takes inspiration from prototype theory (Rosch, 1975), an established account of category representation in psychology. It stipulates that semantic categories are formed around prototypical members, new members are added based on resemblance to the prototypes and category membership is a matter of degree. In line with this account, our models learn prototypical representations of word senses from their linguistic context. To do this, we employ a neural architecture for learning probabilistic class prototypes: variational prototype networks, augmented with a variational semantic memory (VSM) component (Zhen et al., 2020). Unlike deterministic prototypes in prototypical networks (Snell et al., 2017), we model class prototypes as distributions and perform variational inference of these prototypes in a hierarchical Bayesian framework. Unlike deterministic memory access in memory-based meta-learning (Santoro et al., 2016b; Munkhdalai and Yu, 2017a), we access memory by Monte Carlo sampling from a variational distribution. Specifically, we first perform variational inference to obtain a latent memory variable and then perform another step of variational inference to obtain the prototype distribution. Furthermore, we enhance the memory update of vanilla VSM with a novel adaptive update rule involving a hypernetwork (Ha et al., 2016) that controls the weight of the updates. We call our approach β-VSM to denote the adaptive weight β for memory updates.
We experimentally demonstrate the effectiveness of this approach for few-shot WSD, advancing the state of the art in this task. Furthermore, we observe the highest performance gains on word senses with the least training examples, emphasizing the benefits of semantic memory for truly few-shot learning scenarios. Our analysis of the meaning prototypes acquired in the memory suggests that they are able to capture related senses of distinct words, demonstrating the generalization capabilities of our memory component. We make our code publicly available to facilitate further research (https://github.com/YDU-uva/VSM_WSD).


Related work

Word sense disambiguation Knowledge-based approaches to WSD (Lesk, 1986; Agirre et al., 2014; Moro et al., 2014) rely on lexical resources such as WordNet (Miller et al., 1990) and do not require a corpus manually annotated with word senses. Alternatively, supervised learning methods treat WSD as a word-level classification task for ambiguous words and rely on sense-annotated corpora for training. Early supervised learning approaches trained classifiers with hand-crafted features (Navigli, 2009; Zhong and Ng, 2010) and word embeddings (Rothe and Schütze, 2015; Iacobacci et al., 2016) as input. Raganato et al. (2017a) proposed a benchmark for WSD based on the SemCor corpus (Miller et al., 1994) and found that supervised methods outperform the knowledge-based ones.
Neural models for supervised WSD include LSTM-based (Hochreiter and Schmidhuber, 1997) classifiers (Kågebäck and Salomonsson, 2016; Melamud et al., 2016; Raganato et al., 2017b), a nearest neighbor classifier with ELMo embeddings (Peters et al., 2018), as well as a classifier based on pretrained BERT representations (Hadiwinoto et al., 2019). Recently, hybrid approaches incorporating information from lexical resources into neural architectures have gained traction. GlossBERT (Huang et al., 2019) fine-tunes BERT with WordNet sense definitions as additional input. EWISE (Kumar et al., 2019) learns continuous sense embeddings as targets, aided by dictionary definitions and lexical knowledge bases. Scarlini et al. (2020) present a semi-supervised approach for obtaining sense embeddings with the aid of a lexical knowledge base, enabling WSD with a nearest neighbor algorithm. By further exploiting the graph structure of WordNet and integrating it with BERT, EWISER (Bevilacqua and Navigli, 2020) achieves the current state-of-the-art performance on the benchmark by Raganato et al. (2017a): an F1 score of 80.1%.
Unlike few-shot WSD, these works do not fine-tune the models on new words during testing. Instead, they train on a training set and evaluate on a test set where words and senses might have been seen during training.
Meta-learning Meta-learning, or learning to learn (Schmidhuber, 1987; Bengio et al., 1991; Thrun and Pratt, 1998), is a learning paradigm where a model is trained on a distribution of tasks so as to enable rapid learning on new tasks. By solving a large number of different tasks, it aims to leverage the acquired knowledge to learn new, unseen tasks. The training set, referred to as the meta-training set, consists of episodes, each corresponding to a distinct task. Every episode is further divided into a support set containing just a handful of examples for learning the task, and a query set containing examples for task evaluation. In the meta-training phase, for each episode, the model adapts to the task using the support set, and its performance on the task is evaluated on the corresponding query set. The initial parameters of the model are then adjusted based on the loss on the query set. By repeating the process on several episodes/tasks, the model produces representations that enable rapid adaptation to a new task. The test set, referred to as the meta-test set, also consists of episodes with a support and query set. The meta-test set corresponds to new tasks that were not seen during meta-training. During meta-testing, the meta-trained model is first fine-tuned on a small number of examples in the support set of each meta-test episode and then evaluated on the accompanying query set. The average performance on all such query sets measures the few-shot learning ability of the model.
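The support/query split described above can be sketched as follows. This is a minimal toy illustration of the episode structure, not the setup of any cited work; the function name and data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(examples_by_class, k_support, k_query):
    """Split each class's examples into a small support set (for adaptation)
    and a disjoint query set (for evaluating the adapted model)."""
    support, query = [], []
    for label, examples in examples_by_class.items():
        idx = rng.permutation(len(examples))
        support += [(examples[i], label) for i in idx[:k_support]]
        query += [(examples[i], label) for i in idx[k_support:k_support + k_query]]
    return support, query

# Toy data: two classes of 2-d feature vectors standing in for sense-annotated examples.
data = {
    0: [np.array([0.0, 1.0]), np.array([0.1, 0.9]), np.array([0.0, 0.8])],
    1: [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([1.1, 0.0])],
}
support, query = make_episode(data, k_support=2, k_query=1)
```

A meta-learner would adapt on `support` and compute its loss on `query`, repeating this over many episodes.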
Metric-based meta-learning methods (Koch et al., 2015; Vinyals et al., 2016; Sung et al., 2018; Snell et al., 2017) learn a kernel function and make predictions on the query set based on the similarity with the support set examples. Model-based methods (Santoro et al., 2016b; Munkhdalai and Yu, 2017a) employ external memory and make predictions based on examples retrieved from the memory. Optimization-based methods (Ravi and Larochelle, 2017; Finn et al., 2017; Nichol et al., 2018; Antoniou et al., 2019) directly optimize for generalizability over tasks in their training objective.
Meta-learning has been applied to a range of tasks in NLP, including machine translation (Gu et al., 2018), relation classification (Obamuyide and Vlachos, 2019), text classification (Yu et al., 2018; Geng et al., 2019), hypernymy detection (Yu et al., 2020), and dialog generation (Qian and Yu, 2019). It has also been used to learn across distinct NLP tasks (Dou et al., 2019; Bansal et al., 2019) as well as across different languages (Nooralahzadeh et al., 2020; Li et al., 2020). Bansal et al. (2020) show that meta-learning during self-supervised pretraining of language models leads to improved few-shot generalization on downstream tasks. Holla et al. (2020a) propose a framework for few-shot word sense disambiguation, where the goal is to disambiguate new words during meta-testing. Meta-training consists of episodes formed from multiple words, whereas meta-testing has one episode corresponding to each of the test words. They show that prototype-based methods, prototypical networks (Snell et al., 2017) and first-order ProtoMAML (Triantafillou et al., 2020), obtain promising results, in contrast with model-agnostic meta-learning (MAML) (Finn et al., 2017).
Memory-based models Memory mechanisms (Weston et al., 2014; Graves et al., 2014; Krotov and Hopfield, 2016) have recently drawn increasing attention. In memory-augmented neural networks (Santoro et al., 2016b), given an input, the memory read and write operations are performed by a controller, using soft attention for reads and a least recently used access module for writes. Meta Networks (Munkhdalai and Yu, 2017b) use two memory modules: a key-value memory in combination with slow and fast weights for one-shot learning. An external memory was introduced to enhance recurrent neural networks in Munkhdalai et al. (2019), in which memory is conceptualized as an adaptable function and implemented as a deep neural network. Semantic memory was recently introduced by Zhen et al. (2020) for few-shot learning to enhance prototypical representations of objects, with memory recall cast as a variational inference problem.
In NLP, Tang et al. (2016) use content and location-based neural attention over external memory for aspect-level sentiment classification. Das et al. (2017) use key-value memory for question answering on knowledge bases. Mem2Seq (Madotto et al., 2018) is an architecture for task-oriented dialog that combines attention-based memory with pointer networks (Vinyals et al., 2015). Geng et al. (2020) propose Dynamic Memory Induction Networks for few-shot text classification, which utilizes dynamic routing (Sabour et al., 2017) over a static memory module. Episodic memory has been used in lifelong learning on language tasks, as a means to perform experience replay (d'Autume et al., 2019;Han et al., 2020;Holla et al., 2020b).

Task and dataset
We treat WSD as a word-level classification problem where ambiguous words are to be classified into their senses given the context. In traditional WSD, the goal is to generalize to new contexts of word-sense pairs; specifically, the test set consists of word-sense pairs that were seen during training. In few-shot WSD, on the other hand, the goal is to generalize to new words and senses altogether. The meta-testing phase involves further adapting the models (on the small support set) to new words that were not seen during training and evaluating them on new contexts (using the query set). It deviates from the standard N-way, K-shot classification setting in few-shot learning since the words may have a different number of senses and each sense may have a different number of examples (Holla et al., 2020a), making it a more realistic few-shot learning setup (Triantafillou et al., 2020).
Dataset We use the few-shot WSD benchmark provided by Holla et al. (2020a). It is based on the SemCor corpus (Miller et al., 1994), annotated with senses from the New Oxford American Dictionary by Yuan et al. (2016). The dataset consists of words grouped into meta-training, meta-validation and meta-test sets. The meta-test set consists of new words that were not part of the meta-training and meta-validation sets. There are four setups varying in the number of sentences in the support set, |S| = 4, 8, 16, 32. |S| = 4 corresponds to an extreme few-shot learning scenario for most words, whereas |S| = 32 comes closer to the number of sentences per word encountered in standard WSD setups. For |S| = 4, 8, 16, 32, the number of unique words in the meta-training/meta-validation/meta-test sets is 985/166/270, 985/163/259, 799/146/197 and 580/85/129, respectively. We use the publicly available standard dataset splits.

Episodes The meta-training episodes were created by first sampling a set of words and a fixed number of senses per word, followed by sampling example sentences for these word-sense pairs. This strategy allows for a combinatorially large number of episodes. Every meta-training episode has |S| sentences in both the support and query sets, and corresponds to the distinct task of disambiguating between the sampled word-sense pairs. The total number of meta-training episodes is 10,000. In the meta-validation and meta-test sets, each episode corresponds to the task of disambiguating a single, previously unseen word between all its senses. For every meta-test episode, the model is fine-tuned on a few examples in the support set and its generalizability is evaluated on the query set. In contrast to the meta-training episodes, the meta-test episodes reflect the natural distribution of senses in the corpus, including class imbalance, providing a realistic evaluation setting.
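The two-stage sampling of meta-training episodes (words, then senses per word, then sentences per word-sense pair) can be sketched as follows. This is an illustrative toy version with invented names and placeholder data, not the benchmark's actual sampling code.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_meta_training_episode(corpus, n_words, n_senses, n_per_pair):
    """Sample an episode: pick words, then a fixed number of senses per word,
    then example sentences for each sampled word-sense pair."""
    episode = []
    for w in rng.choice(list(corpus), size=n_words, replace=False):
        for s in rng.choice(list(corpus[w]), size=n_senses, replace=False):
            sents = corpus[w][s]
            for i in rng.choice(len(sents), size=n_per_pair, replace=False):
                episode.append((sents[i], (str(w), str(s))))
    return episode

# Toy corpus: word -> sense -> example sentences (placeholders).
corpus = {
    "bank": {"river": ["s1", "s2", "s3"], "money": ["s4", "s5"], "tilt": ["s6", "s7"]},
    "bat": {"animal": ["s8", "s9"], "club": ["s10", "s11"]},
}
episode = sample_meta_training_episode(corpus, n_words=2, n_senses=2, n_per_pair=2)
```

Because the three sampling stages are independent, the number of distinct episodes grows combinatorially with the corpus size.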

Model architectures
We experiment with the same model architectures as Holla et al. (2020a). The model f_θ, with parameters θ, takes words x_i as input and produces a per-word representation vector f_θ(x_i) for i = 1, ..., L, where L is the length of the sentence. Sense predictions are only made for ambiguous words, using the corresponding word representation.
GloVe+GRU A single-layer bi-directional GRU (Cho et al., 2014) followed by a single linear layer, taking GloVe embeddings (Pennington et al., 2014) as input. GloVe embeddings capture all senses of a word; we thus evaluate a model's ability to disambiguate from sense-agnostic input.
ELMo+MLP A multi-layer perceptron (MLP) that receives contextualized ELMo embeddings (Peters et al., 2018) as input. Their contextualized nature makes ELMo embeddings better suited to capturing meaning variation than static ones. Since ELMo is not fine-tuned, this model has the fewest learnable parameters.
BERT A pretrained BERT_BASE (Devlin et al., 2019) model followed by a linear layer, fully fine-tuned on the task. BERT underlies state-of-the-art approaches to WSD.

Prototypical Network
Our few-shot learning approach builds upon prototypical networks (Snell et al., 2017), which are widely used for few-shot image classification and have been shown to be successful in WSD (Holla et al., 2020a). For each sense k, a prototype is computed as the mean of the embedded support examples of that sense, z_k = (1/|S_k|) Σ_{x_i ∈ S_k} f_θ(x_i), where S_k is the set of support examples labeled with sense k; query examples are classified by the distance to the nearest prototype. However, the resulting prototypes may not be sufficiently representative of word senses as semantic categories when using a single deterministic vector, computed as the average of only a few examples. Such representations lack expressiveness and may not encompass the intra-class variance needed to distinguish between different fine-grained word senses. Moreover, large uncertainty arises in the single prototype due to the small number of samples.
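The deterministic prototype computation and nearest-prototype classification can be sketched in a few lines of numpy (illustrative names and toy data; not the authors' implementation):

```python
import numpy as np

def prototypes(support_x, support_y):
    """Deterministic class prototypes: the mean embedding per class."""
    classes = sorted(set(support_y.tolist()))
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(query_x, classes, protos):
    """Classify each query point by its nearest prototype (squared Euclidean)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return np.array(classes)[d.argmin(axis=1)]

# Toy 2-d "sense embeddings": two senses, two support examples each.
support_x = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
support_y = np.array([0, 0, 1, 1])
classes, protos = prototypes(support_x, support_y)
preds = predict(np.array([[0.05, 0.95], [0.95, 0.05]]), classes, protos)
```

Averaging only two points per class, as here, is exactly the regime where a single deterministic prototype underestimates intra-class variance.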

Variational Prototype Network
Variational prototype network (VPN) (Zhen et al., 2020) is a powerful model for learning latent representations from small amounts of data, where the prototype z of each class is treated as a distribution. Given a task with a support set S and query set Q, the objective of VPN takes the following form:

L_VPN = Σ_{(x_i, y_i) ∈ Q} [ (1/L_z) Σ_{l_z=1}^{L_z} log p(y_i | x_i, z^(l_z)) − D_KL(q(z|S) ‖ p(z|x_i)) ], (2)

where q(z|S) is the variational posterior over z, p(z|x_i) is the prior, and L_z is the number of Monte Carlo samples for z. The prior and posterior are assumed to be Gaussian. The re-parameterization trick (Kingma and Welling, 2013) is adopted to enable back-propagation with gradient descent, i.e., z^(l_z) = f(S, ε^(l_z)), ε^(l_z) ∼ N(0, I), with f(S, ε^(l_z)) = µ_z + σ_z ⊙ ε^(l_z), where the mean µ_z and diagonal covariance σ_z are generated from the posterior inference network with S as input. The amortization technique is employed for the implementation of VPN. The posterior network takes the mean word representations in the support set S as input and returns the parameters of q(z|S). Similarly, the prior network produces the parameters of p(z|x_i) by taking the query word representation x_i ∈ Q as input. The conditional predictive log-likelihood is implemented as a cross-entropy loss.
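A small numpy sketch of the two ingredients above, reparameterized sampling of z and the KL term between diagonal Gaussians, might look as follows. The function names and the fixed µ_z, σ_z values are illustrative; in the model they come from the amortized inference networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prototype(mu_z, sigma_z, n_samples):
    """Reparameterization trick: z = mu_z + sigma_z * eps with eps ~ N(0, I),
    so sampling stays differentiable w.r.t. mu_z and sigma_z."""
    eps = rng.standard_normal((n_samples,) + mu_z.shape)
    return mu_z + sigma_z * eps

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return float((np.log(sig_p / sig_q)
                  + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2)
                  - 0.5).sum())

mu_z, sigma_z = np.zeros(3), 0.1 * np.ones(3)
z_samples = sample_prototype(mu_z, sigma_z, n_samples=5)
```

The log-likelihood term in Eq. (2) is then a Monte Carlo average over such `z_samples`, implemented as a cross-entropy loss.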

β-Variational Semantic Memory
In order to leverage the shared common knowledge between different tasks to improve disambiguation in future tasks, we incorporate variational semantic memory (VSM), as in Zhen et al. (2020). It consists of two main processes: memory recall, which retrieves relevant information that fits with specific tasks based on the support set of the current task; and memory update, which effectively collects new information from the task and gradually consolidates the semantic knowledge in the memory. We adopt a similar memory mechanism and introduce an improved update rule for memory consolidation.

Figure 1: Computational graph of variational semantic memory for few-shot WSD. M is the semantic memory module, S the support set, x and y are the query sample and label, and z is the word sense prototype.
Memory recall The memory recall of VSM aims to choose the related content from the memory, and is accomplished by variational inference. It introduces latent memory m as an intermediate stochastic variable, and infers m from the addressed memory M. The approximate variational posterior q(m|M, S) over the latent memory m is obtained empirically as

q(m|M, S) = Σ_{a=1}^{|M|} ω_a q(m|M_a), ω_a = exp(⟨M_a, s̄⟩) / Σ_{a'=1}^{|M|} exp(⟨M_{a'}, s̄⟩), (3)

where ⟨·, ·⟩ is the dot product, |M| is the number of memory slots, M_a is the memory content at slot a, which stores the prototype of the samples in each class, and s̄ is the mean representation of the samples in S.
The variational posterior over the prototype then becomes

q(z|S, M) = (1/L_m) Σ_{l_m=1}^{L_m} q(z|S, m^(l_m)),

where m^(l_m) is a Monte Carlo sample drawn from the distribution q(m|M, S), and L_m is the number of samples. By incorporating the latent memory m from Eq. (3), we achieve the objective for variational semantic memory as follows:

L_VSM = Σ_{(x_i, y_i) ∈ Q} [ (1/(L_m L_z)) Σ_{l_m=1}^{L_m} Σ_{l_z=1}^{L_z} log p(y_i | x_i, z^(l_z)) − λ_z D_KL(q(z|S, m) ‖ p(z|x_i)) ] − λ_m D_KL(q(m|M, S) ‖ p(m|S)),

where p(m|S) is the introduced prior over m, and λ_z and λ_m are hyperparameters. The overall computational graph of VSM is shown in Figure 1. Similarly, the posterior and prior over m are also assumed to be Gaussian and obtained by using amortized inference networks; more details are provided in Appendix A.1.
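A schematic numpy illustration of dot-product memory addressing and Monte Carlo sampling of the latent memory m (the weighted-sum addressing and the fixed σ are simplifying assumptions; in the model the posterior parameters come from an inference network):

```python
import numpy as np

def address_memory(memory, s_bar):
    """Softmax over dot products between each memory slot and the mean
    support representation s_bar (the addressing weights omega_a)."""
    scores = memory @ s_bar
    w = np.exp(scores - scores.max())  # subtract max for numerical stability
    return w / w.sum()

def recall(memory, s_bar, sigma, n_samples, rng):
    """Draw Monte Carlo samples of the latent memory m around the addressed
    content (a weighted sum of slots); sigma stands in for the variance an
    inference network would produce."""
    w = address_memory(memory, s_bar)
    mu_m = w @ memory
    eps = rng.standard_normal((n_samples, memory.shape[1]))
    return mu_m + sigma * eps

memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # one slot per stored sense
s_bar = np.array([0.9, 0.1])                             # mean support representation
weights = address_memory(memory, s_bar)
m_samples = recall(memory, s_bar, sigma=0.05, n_samples=4, rng=np.random.default_rng(0))
```

Each sampled m then conditions a second inference step that produces the prototype distribution q(z|S, m).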

Memory update
The memory update should effectively absorb new useful information to enrich the memory content. VSM employs the following update rule:

M_c ← (1 − β) M_c + β M̂_c,

where M_c is the memory content corresponding to class c, M̂_c is obtained using graph attention (Veličković et al., 2017), and β ∈ (0, 1) is a hyperparameter.
Adaptive memory update Although VSM was shown to be promising for few-shot image classification, the experiments of Zhen et al. (2020) show that different values of β have considerable influence on the performance. β determines the extent to which memory is updated at each iteration. In the original VSM, β is treated as a hyperparameter obtained by cross-validation, which is time-consuming and inflexible in dealing with different datasets. To address this problem, we propose an adaptive memory update rule that learns β from data using a lightweight hypernetwork (Ha et al., 2016). More specifically, we obtain β via a function f_β(·), implemented as an MLP with a sigmoid activation function in the output layer. The hypernetwork takes M̂_c as input and returns the value of β:

β = f_β(M̂_c).

Moreover, to prevent endless growth of the memory values, we scale down the memory whenever ‖M_c‖₂ > 1:

M_c ← M_c / ‖M_c‖₂.

When we update the memory, we feed the newly obtained memory M̂_c into the hypernetwork f_β(·), which outputs the adaptive β for the update. We provide a more detailed implementation of β-VSM in Appendix A.1.
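The adaptive update can be sketched as follows. This is a toy numpy illustration with randomly initialized hypernetwork weights; the interpolation direction (β weighting the new content M̂_c) is our reading of the update rule, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hyper_beta(m_hat, W1, b1, W2, b2):
    """Hypernetwork f_beta: a small MLP with a sigmoid output, mapping the
    candidate memory content to an update weight beta in (0, 1)."""
    h = np.maximum(0.0, W1 @ m_hat + b1)       # ReLU hidden layer
    return float(sigmoid((W2 @ h + b2)[0]))    # scalar in (0, 1)

def update_memory(M_c, m_hat, W1, b1, W2, b2):
    beta = hyper_beta(m_hat, W1, b1, W2, b2)
    M_new = (1.0 - beta) * M_c + beta * m_hat  # adaptive interpolation
    norm = np.linalg.norm(M_new)
    if norm > 1.0:                             # rescale whenever ||M_c||_2 > 1
        M_new = M_new / norm
    return M_new, beta

rng = np.random.default_rng(0)
dim, hidden = 4, 8
W1, b1 = rng.standard_normal((hidden, dim)), np.zeros(hidden)
W2, b2 = rng.standard_normal((1, hidden)), np.zeros(1)
M_new, beta = update_memory(np.zeros(dim), rng.standard_normal(dim), W1, b1, W2, b2)
```

Because the sigmoid bounds β in (0, 1) and the rescaling caps the slot norm, the memory content stays well-conditioned across many updates.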

Experiments and results
Experimental setup The size of the shared linear layer and of the memory content of each word sense is 64, 256, and 192 for GloVe+GRU, ELMo+MLP and BERT respectively. The activation function of the shared linear layer is tanh for GloVe+GRU and ReLU for the rest. The inference networks g_φ(·) for the prototype distribution and g_ψ(·) for the memory distribution are all three-layer MLPs, with the size of each hidden layer being 64, 256, and 192 for GloVe+GRU, ELMo+MLP and BERT. The activation function of their hidden layers is ELU (Clevert et al., 2016), and the output layer does not use any activation function. Each batch during meta-training includes 16 tasks. The hypernetwork f_β(·) is also a three-layer MLP, with the size of the hidden states consistent with that of the memory contents; its linear layer activation function is ReLU. For BERT, with |S| = {4, 8}, λ_z = 0.001, λ_m = 0.0001 and the learning rate is 5e−6; with |S| = 16, λ_z = 0.0001, λ_m = 0.0001 and the learning rate is 1e−6; with |S| = 32, λ_z = 0.001, λ_m = 0.0001 and the learning rate is 1e−5. Hyperparameters for the other models are reported in Appendix A.2. All hyperparameters are chosen using the meta-validation set. The number of slots in memory is consistent with the number of senses in the meta-training set: 2915 for |S| = 4 and 8; 2452 for |S| = 16; 1937 for |S| = 32. The evaluation metric is the word-level macro F1 score, averaged over all episodes in the meta-test set. The parameters are optimized using Adam (Kingma and Ba, 2014).
We compare our methods against several baselines and state-of-the-art approaches. The nearest neighbor classifier baseline (NearestNeighbor) predicts a query example's sense as the sense of the support example closest in the word embedding space (ELMo and BERT) in terms of cosine distance. The episodic fine-tuning baseline (EF-ProtoNet) is one where only meta-testing is performed episodically, without any episodic meta-training.

Results
In Table 1, we show the average macro F1 scores of the models, with their mean and standard deviation obtained over five independent runs. Our proposed β-VSM achieves the new state-of-the-art performance on few-shot WSD with all the embedding functions, across all the setups with varying |S|. For GloVe+GRU, where the input is sense-agnostic embeddings, our model improves disambiguation compared to ProtoNet by 1.8% for |S| = 4 and by 2.4% for |S| = 32. With contextual embeddings as input, β-VSM with ELMo+MLP also leads to improvements over the previous best ProtoFOMAML for all |S|. Holla et al. (2020a) obtained state-of-the-art performance with BERT, and β-VSM further advances this, resulting in a gain of 0.9-2.2%. The consistent improvements with different embedding functions and support set sizes suggest that β-VSM is effective for few-shot WSD across varying numbers of shots and senses as well as across model architectures.

Analysis and discussion
To analyze the contributions of different components in our method, we perform an ablation study by comparing ProtoNet, VPN, VSM and β-VSM and present the macro F1 scores in Table 2.
Role of variational prototypes VPN consistently outperforms ProtoNet with all embedding functions (by around 1% F1 score on average). The results indicate that probabilistic prototypes provide more informative representations of word senses than deterministic vectors. The highest gains were obtained in the case of GloVe+GRU (1.7% F1 score with |S| = 8), suggesting that probabilistic prototypes are particularly useful for models that rely on static word embeddings, as they capture uncertainty in contextual interpretation.

Role of variational semantic memory We show the benefit of VSM by comparing it with VPN. VSM consistently surpasses VPN with all three embedding functions. According to our analysis, VSM makes the prototypes of different word senses more distinctive and distant from each other. The senses in memory provide more contextual information, enabling larger intra-class variations to be captured, and thus lead to improvements over VPN.
Role of adaptive β To demonstrate the effectiveness of the hypernetwork for adaptive β, we compare β-VSM with VSM, where β is tuned by cross-validation. Table 2 shows a consistent improvement over VSM. The learned adaptive β thus acquires the ability to determine how much of the memory content needs to be updated based on the current new memory. By better absorbing information from data through adaptive updates, β-VSM makes the memory content of different word senses more representative, resulting in improved performance.

Variation of performance with the number of senses In order to further probe into the strengths of β-VSM, we analyze the macro F1 scores of the different models averaged over all the words in the meta-test set with a particular number of senses.
In Figure 2, we show a bar plot of the scores obtained from BERT for |S| = 16. For words with a low number of senses, the task corresponds to a higher number of effective shots, and vice versa. The different models perform roughly the same for words with fewer senses, i.e., 2-4. VPN is comparable to ProtoNet in its distribution of scores. With semantic memory, however, VSM improves the performance on words with a higher number of senses, and β-VSM further boosts the scores for such words on average. The same trend is observed for |S| = 8 (see Appendix A.3). The improvements of β-VSM over ProtoNet therefore come from tasks with fewer shots, indicating that VSM is particularly effective at disambiguation in low-shot scenarios.
Visualization of prototypes To study the distinction between the prototype distributions of word senses obtained by β-VSM, VSM and VPN, we visualize them using t-SNE (Van der Maaten and Hinton, 2008). Figure 3 shows prototype distributions based on BERT for the word draw. Colored ellipses indicate the distributions of its different senses obtained from the support set; colored points indicate the representations of the query examples. β-VSM makes the prototypes of different senses of the same word more distinctive and distant from each other, with less overlap, compared to the other models. Notably, the representations of query examples are closer to their corresponding prototype distributions for β-VSM, resulting in improved performance. We also visualize the prototype distributions of similar vs. dissimilar senses of multiple words in Figure 4 (see Appendix A.4 for example sentences). The blue ellipse corresponds to the 'set up' sense of launch from the meta-test samples. The green and gray ellipses correspond to a similar sense of the words start and establish from the memory; we can see that they are close to each other. The orange and purple ellipses correspond to other senses of start and establish from the memory, and they are well separated. For a given query word, our model is thus able to retrieve related senses from the memory and exploit them to make its word sense distribution more representative and distinctive.

Conclusion
In this paper, we presented a model of variational semantic memory for few-shot WSD. We use a variational prototype network to model the prototype of each word sense as a distribution. To leverage the shared common knowledge between tasks, we incorporate semantic memory into the probabilistic model of prototypes in a hierarchical Bayesian framework. VSM is able to acquire long-term, general knowledge that enables learning new senses from very few examples. Furthermore, we propose β-VSM, which learns an adaptive memory update rule from data using a lightweight hypernetwork. The consistent new state-of-the-art performance with three different embedding functions shows the benefit of our model in boosting few-shot WSD.
Since meaning disambiguation is central to many natural language understanding tasks, models based on semantic memory are a promising direction in NLP, more generally. Future work might investigate the role of memory in modeling meaning variation across domains and languages, as well as in tasks that integrate knowledge at different levels of linguistic hierarchy.

A.3 Variation of performance with the number of senses
To further demonstrate that β-VSM achieves better performance in extremely data-scarce scenarios, we also analyze the variation of macro F1 scores with the number of senses for BERT and |S| = 8. In Figure 5, we observe a similar trend as with |S| = 16: β-VSM shows improved performance for words with many senses, which corresponds to a low-shot scenario. For example, with 8 senses, the task is essentially one-shot.

A.4 Example sentences to visualize prototypes
In Table 4, we provide some example sentences used to generate the plots in Figure 4. These examples correspond to the words launch, start and establish, and cover the senses 'set up', 'begin' and 'build up'.

A.5 Results on the meta-validation set
We provide the results on the meta-validation set in Table 5, to better facilitate reproducibility.

Word | Sense | Sentence
launch | set up | The Corinthian Yacht Club in Tiburon launches its winter races Nov. 5.
launch | set up | The most infamous of all was launched by the explosion of the island of Krakatoa in 1883; it raced across the Pacific at 300 miles an hour, devastated the coasts of Java and Sumatra with waves 100 to 130 feet high, and pounded the shore as far away as San Francisco.
launch | set up | In several significant cases, such as India, a decade of concentrated effort can launch these countries into a stage in which they can carry forward their own economic and social progress with little or no government-to-government assistance.
start | set up | With these maps completed, the inventory phase of the plan has been started.
start | begin | Congress starts another week tomorrow with sharply contrasting forecasts for the two chambers.
establish | set up | For the convenience of guests bundle centers have been established throughout the city and suburbs where the donations may be deposited between now and the date of the big event.
establish | build up | From the outset of his first term, he established himself as one of the guiding spirits of the House of Delegates.