One Sense per Translation

Word sense disambiguation (WSD) is the task of determining the sense of a word in context. Translations have been used in WSD as a source of knowledge, and even as a means of delimiting word senses. In this paper, we define three theoretical properties of the relationship between senses and translations, and argue that they constitute necessary conditions for using translations as sense inventories. The key property of One Sense per Translation (OSPT) provides a foundation for a translation-based WSD method. The results of an intrinsic evaluation experiment indicate that our method achieves a precision of approximately 93% compared to manual corpus annotations. Our extrinsic evaluation experiments demonstrate WSD improvements of up to 4.6% F1-score on difficult WSD datasets.


Introduction
Lexical semantics is the study of word meaning. Word sense disambiguation (WSD) is the task of determining the meaning of a word in context, which is crucially dependent on a sense inventory, a discrete enumeration of word meanings. The standard WSD sense inventory, WordNet (Miller, 1995), is considered to be excessively fine-grained (Navigli, 2006). Since different senses of a word are often translated differently, it has been hypothesized that different translations of a word could be used to define its senses (Brown et al., 1991; Gale et al., 1992; Resnik, 1997; Diab and Resnik, 2002). This approach can be concisely described as "translations as sense inventories", or TSI.
In order to understand why the TSI approach has failed to produce any widely-used sense inventories, we seek to establish proper theoretical foundations to guide WSD engineering efforts. This problem is even more important now, when the field is moving forward with increasingly complex neural models. While some researchers downplay the need for discrete sense inventories, focusing instead on continuous semantic spaces and contextualized embeddings, many practical applications continue to depend on lexical resources such as WordNet and its multi-lingual generalization BabelNet (Navigli and Ponzetto, 2010, 2012). For example, Loureiro and Jorge (2019) depend on WordNet to derive representations of senses not found in their training corpus, while Scarlini et al. (2020) use WordNet to induce contextualized sense embeddings.
In this paper, we propose a theoretical framework that allows us to explore the limits of using translations as sense inventories. We formally define the properties of "One Sense Per Translation" (OSPT) and "One Translation Per Sense" (OTPS), and argue that they are indispensable for TSI. We prove several novel propositions that establish the relationship between these two properties and the polysemy and synonymy assumptions of Yao et al. (2012). We also consider the implications of the alternative assumptions of "one sense per word" and "one sense per context." The framework enables us to explain, clarify, and correct the findings and claims in previous work.
In addition to the theoretical contributions, we empirically quantify and validate our theoretical results on BabelNet, a large multi-lingual semantic knowledge base. Our analysis is focused on English, with three languages of translation: Italian, Polish, and Chinese. We demonstrate that TSI is not a viable paradigm because the necessary properties hold for only a small number of senses. However, the subset of senses that do satisfy OSPT can be reliably annotated in an unsupervised fashion, to partially alleviate the WSD knowledge bottleneck.
This paper is structured as follows. We start with theoretical preliminaries in Section 2, followed by the bulk of the theory in Section 3. In Sections 4 and 5, respectively, we propose answers to the questions of why TSI failed and why fine-grained inventories such as BabelNet succeeded.

Preliminaries

In this section, we provide an overview of the semantic knowledge structures which underlie both our theoretical models and the concrete lexical resources on which they are evaluated.

Wordnets and Synsets
The Princeton WordNet, henceforth WordNet, was originally developed for psycholinguistic research, on the basis of the separability hypothesis, which states that the study of the lexicon on its own can contribute to the study of language as a whole (Fellbaum, 1998). It has since found widespread use as a knowledge base for computational lexical semantics research, having been cited thousands of times in scientific publications. It also inspired the creation of wordnets in many other languages.
Wordnets are composed of synonym sets, or synsets, with each synset containing one or more words. For each synset, there exists a context in which any word in the synset can be replaced with any other without altering the meaning of the sentence. Absolute synonyms can replace one another in any context. Each synset corresponds to a unique lexicalized concept, which is expressed by each word in the synset. Any set of synsets also induces a sense inventory, a representation of the possible meanings a word may have, with each synset of a word corresponding to one of its senses. Synonymy of the word senses within a synset is always absolute.
These observations are consistent with the synset properties formulated by Hauer and Kondrak (2020b), which will be used in the proofs that follow:
1. A word is monosemous iff it is in a single synset. A word is polysemous iff it is in multiple synsets.
2. Words are synonyms iff they share at least one synset. Words are absolute synonyms iff they share all their synsets.
3. Word senses are synonymous iff they are in the same synset.
4. Every word sense belongs to exactly one synset.
5. Every sense of a polysemous word belongs to a different synset.
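To make these properties concrete, here is a minimal illustrative sketch (our own toy inventory, not actual WordNet data) that models synsets as sets of words and checks properties 1 and 2 directly; the words test and trial reappear as an example in Section 3.1.

```python
# A minimal illustrative sketch (toy data, not actual WordNet synsets).
# Synsets are modelled as frozensets of words; a word's senses are its memberships.
synsets = [
    frozenset({"test", "trial", "run"}),   # toy concept: "the act of testing"
    frozenset({"trial", "proceeding"}),    # toy concept: "a legal proceeding"
]

def senses(word):
    # Properties 4 and 5: each sense of a word corresponds to exactly one synset.
    return [s for s in synsets if word in s]

def is_monosemous(word):                   # Property 1
    return len(senses(word)) == 1

def are_synonyms(w1, w2):                  # Property 2: share at least one synset
    return any(w1 in s and w2 in s for s in synsets)

def are_absolute_synonyms(w1, w2):         # Property 2: share all their synsets
    return senses(w1) == senses(w2) != []

print(is_monosemous("test"))                   # True: "test" is in a single synset
print(are_synonyms("test", "trial"))           # True: they share the first synset
print(are_absolute_synonyms("test", "trial"))  # False: "trial" is polysemous
```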

Multi-wordnets and Multi-synsets
The applicability of WordNet is limited by its monolinguality: it covers English exclusively.
This has inspired attempts to construct multilingual wordnets, or multi-wordnets, such as BabelNet, the standard sense inventory for multilingual WSD (Navigli et al., 2013). The multilingual analogue of the synset is the multi-synset. As with synsets, each multi-synset corresponds to a lexicalized concept; the distinction is that multi-synsets may contain words from multiple languages, each capable of expressing that concept (Navigli and Ponzetto, 2012). A subset of a multi-synset containing only words from a single language constitutes a synset, and maintains the synset properties.
Words in different languages sharing a multi-synset can be interpreted as contextual translations, each capable of serving as a translation of any other in some context (Navigli and Ponzetto, 2010). Multi-synsets are constrained to contain senses from different languages that are translationally equivalent. Hauer and Kondrak (2020b) refer to this property as the multi-wordnet assumption.
Finally, a lexical gap occurs when a concept is not lexicalized in a particular language. In a multi-wordnet, a multi-synset corresponding to a lexical gap in some language will contain no words from that language.

Theory
In this section, we state and prove several novel propositions characterizing the relations between concepts, senses, and translations. Translations are taken to be meaning-preserving or literal, such as those found in bilingual dictionaries, where the source word and its translation express the same concept.

Synonymy and Polysemy

Yao et al. (2012) observed that methods which involve senses and translations in bitexts tend to be based on one of two, often unstated, assumptions about what a translation distinction indicates:

(Weak) Synonymy Assumption: If two different words f_1 and f_2 in language F are aligned to the same word e in language E, then f_1 and f_2 are synonyms.

(Weak) Polysemy Assumption: If two different words f_1 and f_2 in language F are aligned to the same word e in language E, then e is polysemous.
We refer to the formulations of Yao et al. (2012) as "weak" because they are insufficient to provide theoretical support for methods proposed by previous work. In particular, these assumptions do not allow us to conclude that f_1 and f_2 signal the same sense of e, as postulated by Lefever et al. (2011), or different senses of e, as postulated by Gale et al. (1992). For example, the fact that the Italian word prova (which is polysemous) can be aligned to both English test and trial (which are synonyms) does not imply that we can use those English words as indicators of different Italian senses.
To remedy this deficiency, we formulate two stronger assumptions which explicitly depend not only on the identities of the words, but also on the concepts they express. In the following, the term translation refers to a dictionary translation, whereas the term translating refers to translation in context, such as a bitext word alignment.
(Strong) Synonymy Assumption (SSA): If a word e in language E has two different translations f_1 and f_2 in language F, then f_1 and f_2 express the same concept when translating e.

(Strong) Polysemy Assumption (SPA): If a word e in language E has two different translations f_1 and f_2 in language F, then f_1 and f_2 express different concepts when translating e.

It is clear that our "strong" formulations imply the "weak" formulations of Yao et al. (2012). Indeed, SSA implies that there exists a multi-synset that contains e, f_1, and f_2; therefore, by synset property #2, f_1 and f_2 are synonyms. Similarly, SPA implies that there exists a multi-synset that contains e and f_1, and a different multi-synset that contains e and f_2; therefore, by synset property #1, e is polysemous.
It is worth noting that neither the strong nor the weak versions of the two assumptions are strictly complementary. For example, each of the two senses of the English verb overcome that are glossed in BabelNet as "win a victory over" and "deal with successfully" can be translated by both Italian battere and vincere; in this case, neither the polysemy nor the synonymy assumption holds. Furthermore, the assumptions are not symmetrical with respect to the two languages; that is, they may hold for only one of the translation directions.
The "strong" versions of assumptions can be further generalized to involve all translations of a given source word: General Synonymy Assumption: All translations f 1 , . .., f n of a given word e express the same concept when translating e.
General Polysemy Assumption (GPA): Each distinct translation f_1, ..., f_n of a given word e expresses a unique concept when translating e.
Although the general versions of the two assumptions imply their strong versions, the reverse implications do not hold. For example, both senses of the English noun memory that are glossed in BabelNet as "a specific recollection" and "a cognitive faculty" can be translated into Italian as ricordo; in this case, GSA is violated, but SSA is not, as there are no two distinct translations to consider. We refer to a situation in which different multi-synsets contain the same pair of translations as parallel polysemy.
The gap between the General and Strong Synonymy Assumptions can be bridged by disallowing parallel polysemy using the following symmetrical assumption.
Pair Synonymy Assumption (PSA): If a word e in language E and a word f in language F are mutual translations, then f always expresses the same concept when translating e, and vice versa.
It can be shown that General Synonymy is equivalent to the conjunction of Strong Synonymy and Pair Synonymy (Observation 1), and, further, that General Polysemy is equivalent to the conjunction of Strong Polysemy and Pair Synonymy (Observation 2).
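As an illustration, the following sketch (our own, with toy data rather than BabelNet entries) encodes the multi-synsets of a single source word e as its translation sets and checks the assumptions above; the Boolean identities on the last two lines of check() correspond to the two equivalences just stated.

```python
# An illustrative sketch (toy data): the multi-synsets of a source word e are
# given as the sets of target-language words they contain (translation sets).

def concepts_of(translation, multisynsets):
    """Indices of the multi-synsets in which the translation co-occurs with e."""
    return {i for i, ts in enumerate(multisynsets) if translation in ts}

def check(multisynsets):
    translations = sorted(set().union(*multisynsets))
    pairs = [(f1, f2) for i, f1 in enumerate(translations)
                      for f2 in translations[i + 1:]]
    psa = all(len(concepts_of(f, multisynsets)) == 1 for f in translations)
    ssa = all(concepts_of(f1, multisynsets) & concepts_of(f2, multisynsets)
              for f1, f2 in pairs)
    spa = not any(concepts_of(f1, multisynsets) & concepts_of(f2, multisynsets)
                  for f1, f2 in pairs)
    return dict(SSA=ssa, SPA=spa, PSA=psa,
                GSA=ssa and psa,    # Observation 1: GSA <=> SSA and PSA
                GPA=spa and psa)    # Observation 2: GPA <=> SPA and PSA

# Toy rendering of the "memory" example: suppose "ricordo" translates both
# senses, while "memoria" translates only the "cognitive faculty" sense.
print(check([{"ricordo"}, {"ricordo", "memoria"}]))
# PSA fails (parallel polysemy via "ricordo"), so GSA and GPA fail as well.
```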

One Sense Per Translation (OSPT)
The OSPT assumption can be traced back to Gale et al. (1992), who expanded upon the idea of translation-based word sense annotation, introduced by Brown et al. (1991) and Dagan et al. (1991). Although no concrete algorithm is formulated, they construct a proof-of-concept set of six English words, each with a pair of senses such that the word can be disambiguated by its French translation in a bitext. For example, the two senses of duty, which refer to "tax" and "obligation", respectively, correspond to the French translations droit and devoir.
We explicitly formulate the OSPT assumption using the notion of translation sets, which correspond to the target-language words in the multi-synset for a given concept.
One Sense Per Translation Assumption: Senses of a given word in language E have disjoint translation sets in language F.
Note that OSPT is a directional assumption, which reflects the objective of annotating the senses of a word in the source language using its translations in the target language. As noted by Gale et al. (1992), the OSPT assumption does not hold universally; for example, several different senses of the English word interest are all translated by French intérêt (an instance of parallel polysemy). However, since OSPT does hold for a subset of the set of all senses in a lexicon, word senses that belong to such a subset can be disambiguated in an unsupervised manner, as we demonstrate in Section 5.
Surprisingly, our theoretical approach reveals that the OSPT assumption is exactly equivalent to the Pair Synonymy Assumption (PSA), which we introduced in Section 3.1 as the bridge between Strong and General Synonymy. Indeed, both OSPT and PSA convey that each pair of mutual translations must correspond to a unique concept. We consider this one of the principal theoretical results of this work, as it links the polysemy and synonymy assumptions of Yao et al. (2012) with the seminal TSI idea of Gale et al. (1992).
Theorem 1. OSPT ⇔ PSA

Proof. OSPT states that senses of any given word e in language E have disjoint translation sets in language F. By the multi-wordnet assumption, this is equivalent to the statement that no two multi-synsets that contain e can contain the same word in language F, which in turn is equivalent to asserting that e shares exactly one multi-synset with each of its translations. This is equivalent to PSA: each word in language F can only express a single concept when translating e.
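The equivalence can also be checked mechanically. The sketch below (our own, on toy data built from the duty and interest examples in this section) implements the two formulations independently; they agree on both inputs.

```python
# An illustrative check of Theorem 1 on toy data: senses of a source word are
# given as their translation sets in the target language.
from itertools import combinations

def ospt(multisynsets):
    """OSPT: the translation sets of the senses are pairwise disjoint."""
    return all(not (s & t) for s, t in combinations(multisynsets, 2))

def psa(multisynsets):
    """PSA: every translation shares exactly one multi-synset with the word."""
    translations = set().union(*multisynsets)
    return all(sum(f in ts for ts in multisynsets) == 1 for f in translations)

# "duty": the "tax" sense translates as "droit", the "obligation" sense as "devoir".
duty = [{"droit"}, {"devoir"}]
# "interest": several senses are all translated by "intérêt" (parallel polysemy).
interest = [{"intérêt"}, {"intérêt"}, {"intérêt"}]
print(ospt(duty), psa(duty))            # True True
print(ospt(interest), psa(interest))    # False False
```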
The fact that PSA is bidirectional (i.e., swapping the languages does not alter its truth value) immediately implies another unexpected finding: OSPT is also bidirectional. In other words, OSPT is satisfied in the E→F direction if and only if it is satisfied in the F→E direction. This finding is important: if we can always determine the sense of an English word using its French translation, then we can likewise determine the corresponding sense of the French words that are translated by the English word.

One Concept Per Word (OCPW)
Let us consider an extreme scenario: a hypothetical language in which polysemy does not exist. By synset property #1, every word in such a language would be monosemous, expressing exactly one concept, and belonging to exactly one synset. Such a language would thus conform to the One Concept Per Word (OCPW) assumption. Although no actual natural language satisfies this requirement, a subset of a language lexicon may do so.
A related view, referred to as monosemism in linguistics, holds that different observed senses of a polysemous word result from a combination of its core meaning with the pragmatics of each specific context (François, 2008). A similar position in computational linguistics would correspond to the exclusive use of static word embeddings, such as those learned by word2vec (Mikolov et al., 2013), without any allowance for discrete senses or sense embeddings. We could also refer to this position as "one sense per word".
We can show that One Concept Per Word implies General Synonymy:

Observation 3. OCPW ⇒ GSA

Proof. Consider a word e in language E that has one or more different translations in language F. OCPW implies that word e belongs to a single multi-synset. Therefore, all translations of e belong to the same multi-synset. Thus, all translations of e express the same concept when translating e, and so GSA holds.

Furthermore, since GSA implies both PSA and SSA (Observation 1), One Concept Per Word implies all four Synonymy Assumptions (i.e., Weak, Strong, General, and Pair SA), as well as One Sense Per Translation, which is equivalent to PSA (Theorem 1).
Interestingly, the Diab and Resnik (2002) algorithm for WSD, which aims to disambiguate English words based on their French translations, is based on the assumption that all target-language words are monosemous. By the above observation, this assumption is sufficient to guarantee that OSPT holds.
The General Synonymy Assumption is almost equivalent to One Concept Per Word, except for the concepts that correspond to lexical gaps in the target language. We first formalize the (unidirectional) assumption of the lack of lexical gaps, which we refer to as NoLG.
No Lexical Gaps Assumption (NoLG): For each sense of a word in language E, there exists at least one word in language F that translates it.

Observation 4. GSA ∧ NoLG ⇒ OCPW

Proof. GSA states that all translations of a word e express the same concept when translating e. Therefore, all translations of word e are in the same multi-synset, corresponding to that translated concept, together with word e itself. By NoLG, there are no lexical gaps in language F, and so e cannot belong to any other multi-synset, which implies that OCPW holds for language E.
Via our theoretical investigation of translations as sense inventories, we have arrived at a surprising conclusion: the position that rejects any partitioning of word meanings logically leads to One Sense Per Translation, a proposition which fails to hold for a substantial portion of the lexicon, as we will demonstrate in Section 4. These findings underscore the importance of formal approaches to lexical semantics, as well as computational linguistics in general.

One Translation Per Sense (OTPS)
We now define OTPS, which can be seen as a dual of OSPT, reversing the roles of senses and translations. OTPS is a unidirectional and asymmetric assumption.
One Translation Per Sense Assumption: Each sense of a word in language E has at most one translation in language F.

Surprisingly, it can be shown that OTPS is equivalent to the Strong Polysemy Assumption (SPA).
Theorem 2. OTPS ⇔ SPA

Proof. OTPS states that each sense of word e in language E has at most one translation in language F. By the multi-wordnet assumption, this means that each multi-synset that contains e contains at most one word in language F, which in turn is equivalent to asserting SPA: every pair of distinct translations of e in language F expresses different concepts when translating e.
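Analogously to the sketch after Theorem 1, the check below (toy data, reusing the overcome example from Section 3.1) verifies OTPS and SPA independently.

```python
# An illustrative check of Theorem 2 on toy data, using the same representation
# as before: each sense of a source word is given as its translation set.
from itertools import combinations

def otps(multisynsets):
    """OTPS: each sense has at most one translation in the target language."""
    return all(len(ts) <= 1 for ts in multisynsets)

def spa(multisynsets):
    """SPA: two distinct translations never share a multi-synset with the word."""
    translations = sorted(set().union(*multisynsets))
    return all(not any(f1 in ts and f2 in ts for ts in multisynsets)
               for f1, f2 in combinations(translations, 2))

# "overcome": both senses can be translated by both "battere" and "vincere".
overcome = [{"battere", "vincere"}, {"battere", "vincere"}]
print(otps(overcome), spa(overcome))    # False False
```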

One Word Per Concept (OWPC)
Consider a hypothetical language in which synonymy does not exist. By synset property #2, every synset in such a language would contain only one word, and therefore there would exist an unambiguous mapping of concepts to words. Such a language would conform to the One Word Per Concept (OWPC) assumption. Although no actual natural language satisfies this requirement, a subset of any language may be identified that does satisfy it. In fact, approximately 56% of WordNet 3.0 synsets contain only one word.
In particular, if we take the position that discrete senses do not exist, every occurrence of a given word in a distinct context would correspond to a unique sense exclusive to that word, and no meaning-preserving substitution of the focus word would be possible. This approach is diametrically opposite to the position described in Section 3.3. A similar position in computational linguistics argues in favor of the exclusive use of contextual embeddings, without any role for discrete senses or sense embeddings, which we could refer to as "one sense per context".
We can prove that One Word Per Concept implies One Translation Per Sense:

Observation 5. OWPC ⇒ OTPS

Proof. (By contraposition.) Suppose that OTPS does not hold. Then there exists a sense of a word e in language E that has more than one translation in language F. By the multi-wordnet assumption, the corresponding multi-synset must contain more than one word in language F. This implies that OWPC does not hold in language F.

The converse of Observation 5 above is true if there are no lexical gaps, in which case One Word Per Concept and One Translation Per Sense become equivalent. This is captured by the following observation:

Observation 6. OTPS ∧ NoLG ⇒ OWPC

Proof. (By contradiction.) Suppose that there exists a multi-synset S with two words in language F. If there is no lexical gap in language E for this concept, S must contain a word in language E. This would imply that there is a sense in language E with two translations in language F. Contradiction.
We have again arrived at a surprising conclusion: the assumption that discrete senses do not exist (OWPC) logically leads to another unrealistic proposition (OTPS). Indeed, if every distinct usage of a word constitutes a unique sense, there can be no synonymy. Yet, machine translation needs to account for synonymy if its goal is to produce fully fluent, rather than merely semantically correct, translations. Just as our formalism contributes to the understanding of leveraging translations for WSD, we interpret these findings as evidence that a theory-oriented investigation may also benefit machine translation.

Why TSI Failed
In this section, we attempt to answer an important open question in computational lexical semantics: why was the TSI idea abandoned? We establish which of the assumptions discussed in Section 3 need to hold for TSI to work, and then investigate the extent to which these assumptions hold in practice.

Theoretical Analysis of TSI
To address the knowledge-acquisition bottleneck in WSD, the TSI approach advocates using translations to define sense inventories (Resnik and Yarowsky, 1997). The basic idea is to sense-annotate all content words on the source side of a bitext by using their translations on the target side as sense tags. Our theoretical formalization specifies that three assumptions need to hold in order to make this idea feasible: OSPT, OTPS, and NoLG. We discuss these three requirements in turn in the following paragraphs.
First, if One Sense Per Translation (OSPT) does not hold consistently, there is no obvious way to distinguish between senses that share translations. For example, in BabelNet, the French word ordre translates all 15 senses of the English word order. So, an instance of order is not disambiguated by virtue of being translated into French as ordre. This is an instance of extreme parallel polysemy. In general, a bitext, which by definition offers only a single language of translation, cannot be construed as a sense-annotated corpus.
Second, when One Translation Per Sense (OTPS) does not hold, synonymous translations that express a single concept would lead to the creation of duplicate senses. For example, according to BabelNet, the French words rive and berge both translate bank, but correspond to the same sense (as in "river bank"). In order to avoid creating spurious senses, synonymous translations would need to be either manually identified, or looked up in a multi-wordnet. No effective algorithm for automatically assigning translations to senses has been proposed to date.
Finally, there is no obvious way to annotate senses that correspond to lexical gaps in the target language, or, more generally, any violations of the No Lexical Gaps (NoLG) assumption. For example, since the English noun performer cannot be translated precisely into French (Sagot and Fišer, 2008), it cannot be sense-annotated on the basis of its French translations.
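In combination, the three requirements amount to a bijection between the senses and the translations of a word. The following sketch (toy data, in the same representation as the earlier sketches) makes this explicit; the duty example assumes, for illustration, that droit and devoir are its only French translations.

```python
# An illustrative sketch: TSI is workable for a word only if its senses and
# translations are in one-to-one correspondence (OSPT, OTPS and NoLG together).
from itertools import combinations

def tsi_viable(multisynsets):
    ospt = all(not (s & t) for s, t in combinations(multisynsets, 2))
    otps = all(len(ts) <= 1 for ts in multisynsets)   # at most one translation
    nolg = all(len(ts) >= 1 for ts in multisynsets)   # no lexical gaps
    return ospt and otps and nolg

# "order": all 15 senses share the French translation "ordre" (OSPT fails).
print(tsi_viable([{"ordre"}] * 15))                   # False
# "duty" (toy view): two senses, each with a single distinct French translation.
print(tsi_viable([{"droit"}, {"devoir"}]))            # True
```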

Polysemy and Synonymy in BabelNet
Our objective is to empirically test the properties defined in Section 3 in BabelNet (Navigli and Ponzetto, 2010, 2012), a large, commonly-used multi-wordnet. We focus on English, with three languages of translation: Italian, Polish, and Chinese. These languages represent various degrees of similarity to English. For each language, we compute the proportion of English words for which each property holds. We consider only words with at least two senses in WordNet 3.0, and at least one translation in the target language in BabelNet 4.0. There are 20,426 such words in Italian, 17,404 in Polish, and 19,973 in Chinese.
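As a rough, freely reproducible approximation of this test, the sketch below uses the Open Multilingual Wordnet bundled with NLTK instead of BabelNet (so its coverage and counts will not match Table 1) to estimate the proportion of polysemous English words that satisfy OSPT with respect to Italian.

```python
# A rough approximation of the OSPT test using the Open Multilingual Wordnet
# bundled with NLTK instead of BabelNet; the counts will not match Table 1.
# Requires: nltk.download('wordnet') and nltk.download('omw-1.4').
from nltk.corpus import wordnet as wn

def translation_sets(word, lang="ita"):
    """One Italian translation set per sense (synset) of the English word."""
    return [set(s.lemma_names(lang)) for s in wn.synsets(word)]

def satisfies_ospt(word, lang="ita"):
    """OSPT: no Italian word translates two different senses of the English word."""
    seen = set()
    for ts in translation_sets(word, lang):
        if ts & seen:
            return False
        seen |= ts
    return True

# Polysemous English words with at least one Italian translation.
words = [w for w in wn.all_lemma_names()
         if len(wn.synsets(w)) >= 2 and any(translation_sets(w))]
print(sum(satisfies_ospt(w) for w in words) / len(words))
```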
The results are presented in Table 1. They indicate that properties based on synonymy are relatively more reliable compared to properties based on polysemy. For all three languages of translation, we found no exceptions to any of the theorems proven in Section 3. Since this theory was not explicitly used to construct BabelNet or WordNet, the results provide evidence for the applicability of our work to modern semantic resources, and confirm that our underlying assumptions are reflective of reliable linguistic phenomena. In particular, the results are in agreement with Observations 1 and 2 in Section 3, which state that the General versions of the Synonymy and Polysemy Assumptions, respectively, imply their Strong versions: GSA ⇒ SSA, and GPA ⇒ SPA.

Table 2 further illustrates the relations between One Sense Per Translation (OSPT), One Translation Per Sense (OTPS), and No Lexical Gaps (NoLG) for the same polysemous words. If the three assumptions all hold for a particular word type, then there exists a bijective mapping between the senses and translations of that word, and so all instances of the word could be annotated using only their translations. The last row in Table 2 shows that the proportion of words that have this property is very low.

Discussion
Our experimental validation shows that only a very small proportion of words in the lexical resources satisfy all three necessary assumptions. Together with our theoretical results, this provides an explanation for the failure of TSI as a general solution for the WSD knowledge bottleneck problem: the relations between senses and translations which would make TSI viable simply do not hold in practice.
The TSI idea may still be applicable to hand-crafted lexical samples, such as the six-word sample of Gale et al. (1992). Indeed, the SemEval 2010 shared task on cross-lingual WSD (Lefever and Hoste, 2010) was limited to small lexical samples, and involved substantial manual annotation effort for each tested language pair. However, there exists an alternative approach that is applicable to the entire lexicon, which we discuss in the next section.

Why Multi-Wordnets Work
Wordnets and multi-wordnets have supplanted TSI as the dominant paradigm for defining the sense inventories used in WSD, with BabelNet in particular emerging as the de facto language-independent sense inventory (Navigli et al., 2013; Moro and Navigli, 2015). In this section, we show how translations can nevertheless be leveraged for high-precision sense annotation. We have demonstrated that OSPT does not hold in general, thereby precluding the TSI approach to annotating word senses. However, for any source and target language, there does exist a subset of source-language words such that at least one translation indicates a specific sense of the source word. According to our theory, we can precisely annotate such word tokens on the basis of their lexical translations.

Experimental Setup
We test this idea on MultiSemCor, an English-Italian word-aligned bitext which is tagged with gold-standard sense annotations. We obtain the sense-to-translation mapping from a multi-wordnet which covers both languages of the bitext. For each source word which is aligned to a target word, we consult the multi-wordnet to determine how many multi-synsets the two words share. That is, we identify the set of concepts which can be expressed by both the word and its translation. If there is exactly one such multi-synset, i.e., if OSPT holds for this word-translation pair, we annotate that word with the sense corresponding to that multi-synset. We evaluate these annotations by comparing them to the gold-standard sense tags included with the bitext.
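The procedure can be summarized by the following sketch; the data structures are hypothetical stand-ins (MultiWordNet and MultiSemCor are not actually loaded here), and shared_multisynsets is assumed to return the identifiers of the multi-synsets containing both words.

```python
# A sketch of the annotation procedure with hypothetical data structures
# (MultiWordNet and MultiSemCor are not actually loaded here).

def annotate(aligned_tokens, shared_multisynsets):
    """aligned_tokens: (token_id, english_word, italian_word) triples.
    shared_multisynsets(e, f): IDs of the multi-synsets containing both e and f."""
    annotations = {}
    for token_id, english_word, italian_word in aligned_tokens:
        shared = shared_multisynsets(english_word, italian_word)
        if len(shared) == 1:                   # OSPT holds for this pair,
            annotations[token_id] = shared[0]  # so the shared ID is the sense tag
    return annotations

# Toy usage: "bank" aligned with "riva" shares only the river-bank multi-synset,
# while "ordine" shares several multi-synsets with "order", so it is skipped.
toy = {("bank", "riva"): ["bank#river"], ("order", "ordine"): ["order#1", "order#2"]}
tags = annotate([(1, "bank", "riva"), (2, "order", "ordine")],
                lambda e, f: toy.get((e, f), []))
print(tags)                                    # {1: 'bank#river'}
```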
As our multi-wordnet, we use MultiWordNet (MWN, Pianta et al., 2002) Version 1.5.0, wherein each multi-synset is associated with a part of speech and a unique multi-synset ID. Each multi-synset contains at least one English or Italian word. Each English word in each multi-synset is associated with a corresponding WordNet 1.6 sense.
Our word-aligned sense-annotated bitext is MultiSemCor (MSC, Bentivogli and Pianta, 2005), which was created by professionally translating SemCor (Miller et al., 1993), and word-aligning the resulting bitext with a knowledge-based aligner. There are 91,937 English word tokens in MSC which are annotated with exactly one WordNet 1.6 sense, and aligned with a single Italian word.

Results
Of the 78,247 annotated tokens which are polysemous, 19,179 can be disambiguated using our method. These are the tokens for which OSPT holds, and so a single multi-synset can be identified from the translation alone. 15,545 of those annotations are correct, yielding 25% coverage and 81% precision. If we include monosemous words, the total number of word tokens rises to 91,937. The experimental procedure annotates 30,634 English word tokens, approximately 33% of all sense-annotated English tokens in MSC. Compared to the gold-standard annotations, 27,000 of these annotations are correct, giving an overall precision of approximately 88%.
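For reference, the rounded figures follow directly from the token counts reported above:

```latex
% Coverage and precision derived from the token counts reported above.
\mathrm{coverage}_{\mathrm{poly}} = \tfrac{19{,}179}{78{,}247} \approx 24.5\%
\qquad
\mathrm{precision}_{\mathrm{poly}} = \tfrac{15{,}545}{19{,}179} \approx 81.1\%
\\
\mathrm{coverage}_{\mathrm{all}} = \tfrac{30{,}634}{91{,}937} \approx 33.3\%
\qquad
\mathrm{precision}_{\mathrm{all}} = \tfrac{27{,}000}{30{,}634} \approx 88.1\%
```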

Discussion
Our method, which is entirely unsupervised, depending only on translation and alignment information, achieves higher accuracy (measured on the subset of instances which satisfy OSPT) than state-of-the-art supervised WSD systems achieve on standard English datasets (Barba et al., 2021). While these results are not directly comparable, we interpret this as a strong proof of concept.
Moreover, it has been demonstrated that, with modern methods of propagating information between senses and synsets, annotations on a lexical sample can improve WSD accuracy even on words not included in that sample (Loureiro and Camacho-Collados, 2020). Note that the set of tokens which can be annotated depends on the language of translation; for a given token, OSPT may not hold for its Italian translation, but may hold for a translation into, for example, Polish. Therefore, we posit that our method of automatically tagging a substantial proportion of the words in any aligned bitext could be used to quickly generate high-precision sense annotations, which could benefit WSD and other semantic tasks. We leave an exploration of this idea for future work.
As a final note, Hauer and Kondrak (2020a) argue that homonym distinctions constitute the coarsest possible sense inventory, and, further, that each homonymous word (that is, each word with multiple homonymous senses), with very few exceptions, has disjoint translation sets for its homonymous senses. Therefore, while One Sense Per Translation (OSPT) does not hold in general, One Homonym Per Translation does hold. So, using our approach, the homonymous words in any corpus could be disambiguated, at the homonym level, with near-perfect accuracy using translation information alone.

Conclusion
We have formulated and proved several propositions related to senses, translations, synonymy, and polysemy. We have shown empirically that the assumptions that would allow translations to serve as a sense inventory hold simultaneously only for a small fraction of senses. Finally, we demonstrated that, given a bitext and a multi-wordnet, translations can be used for high-precision unsupervised sense annotation. We hope that these contributions will inspire further theoretical analysis of multi-lingual lexical semantics and investigation of other open issues, and will guide empirical research toward more explainable models and understandable results.

Table 1: The percentage of English polysemous words in BabelNet that conform to the formal assumptions.