2023
A Natural Bias for Language Generation Models
Clara Meister | Wojciech Stokowiec | Tiago Pimentel | Lei Yu | Laura Rimell | Adhiguna Kuncoro
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, making it difficult to estimate the probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a heuristic raises the question: Can we initialise our models with this behaviour and save precious compute resources and model capacity? Here we show that we can effectively endow standard neural language generation models with a separate module that reflects unigram frequency statistics as prior knowledge, simply by initialising the bias term in a model’s final linear layer with the log-unigram distribution. We use neural machine translation as a test bed for this simple technique and observe that it: (i) improves learning efficiency; (ii) achieves better overall performance; and perhaps most importantly (iii) appears to disentangle strong frequency effects by encouraging the model to specialise in non-frequency-related aspects of language.
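The initialisation described above can be illustrated with a minimal PyTorch-style sketch. The toy corpus, vocabulary size, and hidden dimension below are assumptions for illustration, not the authors' setup; only the idea of copying log-unigram frequencies into the final layer's bias comes from the abstract.

# Minimal sketch (assumed PyTorch API): initialise the final projection bias
# with the log-unigram distribution of the target training corpus.
import torch
import torch.nn as nn
from collections import Counter

def log_unigram_bias(target_corpus, vocab_size, smoothing=1.0):
    """Return log relative frequencies over the vocabulary (with add-k smoothing)."""
    counts = Counter(tok for sent in target_corpus for tok in sent)  # token ids
    freqs = torch.full((vocab_size,), smoothing)
    for tok, c in counts.items():
        freqs[tok] += c
    probs = freqs / freqs.sum()
    return probs.log()

# Hypothetical toy corpus of token-id sequences and a final output projection.
vocab_size = 8
corpus = [[1, 2, 3, 2], [2, 5, 1], [7, 2, 2, 4]]
output_layer = nn.Linear(512, vocab_size, bias=True)
with torch.no_grad():
    output_layer.bias.copy_(log_unigram_bias(corpus, vocab_size))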
2020
The DeepMind Chinese–English Document Translation System at WMT2020
Lei Yu | Laurent Sartran | Po-Sen Huang | Wojciech Stokowiec | Domenic Donato | Srivatsan Srinivasan | Alek Andreev | Wang Ling | Sona Mokra | Agustin Dal Lago | Yotam Doron | Susannah Young | Phil Blunsom | Chris Dyer
Proceedings of the Fifth Conference on Machine Translation
This paper describes the DeepMind submission to the Chinese→English constrained data track of the WMT2020 Shared Task on News Translation. The submission employs a noisy channel factorization as the backbone of a document translation system. This approach allows the flexible combination of a number of independent component models, which are further augmented with back-translation, distillation, fine-tuning with in-domain data, Monte-Carlo Tree Search decoding, and improved uncertainty estimation. To address persistent issues with the premature truncation of long sequences, we included specialized length models and sentence segmentation techniques. Our final system provides a 9.9 BLEU point improvement over a baseline Transformer on our test set (newstest2019).
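The noisy channel backbone can be sketched as a simple candidate-rescoring step. The scoring functions, interpolation weights, and toy inputs below are placeholders, not the submission's actual component models; the sketch only shows the shape of the log-linear combination of a reverse translation model, a target language model, and a length term.

# Sketch of noisy channel candidate rescoring (illustrative weights and scorers).
def noisy_channel_score(src, cand, log_p_src_given_tgt, log_p_tgt,
                        lm_weight=0.5, len_weight=0.1):
    # Rank candidate y for source x by log p(x|y) + w_lm * log p(y) + w_len * |y|.
    return (log_p_src_given_tgt(src, cand)
            + lm_weight * log_p_tgt(cand)
            + len_weight * len(cand.split()))

def rerank(src, candidates, log_p_src_given_tgt, log_p_tgt):
    return max(candidates,
               key=lambda c: noisy_channel_score(src, c, log_p_src_given_tgt, log_p_tgt))

# Toy usage with dummy scorers (a real system would plug in trained neural models).
best = rerank("source sentence", ["the cat sat", "a cat sat down"],
              log_p_src_given_tgt=lambda x, y: -abs(len(x.split()) - len(y.split())),
              log_p_tgt=lambda y: -0.1 * len(y.split()))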
Better Document-Level Machine Translation with Bayes’ Rule
Lei Yu | Laurent Sartran | Wojciech Stokowiec | Wang Ling | Lingpeng Kong | Phil Blunsom | Chris Dyer
Transactions of the Association for Computational Linguistics, Volume 8
We show that Bayes’ rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit because parallel documents are not always available. In our formulation, the posterior probability of a candidate translation is the product of the unconditional (prior) probability of the candidate output document and the “reverse translation probability” of translating the candidate output back into the source language. Our proposed model uses a powerful autoregressive language model as the prior on target language documents, but it assumes that each sentence is translated independently from the target to the source language. Crucially, at test time, when a source document is observed, the document language model prior induces dependencies between the translations of the source sentences in the posterior. The model’s independence assumption not only enables efficient use of available data, but it additionally admits a practical left-to-right beam-search algorithm for carrying out inference. Experiments show that our model benefits from using cross-sentence context in the language model, and it outperforms existing document translation approaches.
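The posterior described above factorises as a document-level prior times sentence-independent reverse translation terms. A minimal sketch of that scoring, assuming the caller supplies a document language model and a reverse translation model as log-probability functions (both are assumptions here, not the paper's code):

# Sketch: log p(y_doc | x_doc) = log p_LM(y_doc) + sum_i log p(x_i | y_i) + const.
def document_posterior(src_sents, tgt_sents, doc_lm_logprob, rev_tm_logprob):
    assert len(src_sents) == len(tgt_sents)
    prior = doc_lm_logprob(tgt_sents)                      # document LM prior over targets
    channel = sum(rev_tm_logprob(x, y)                     # sentence-independent
                  for x, y in zip(src_sents, tgt_sents))   # reverse translation terms
    return prior + channel

At test time the prior term couples the sentence translations, which is what allows a left-to-right beam search over the target document even though the channel model treats sentences independently.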
2016
LanguageCrawl: A Generic Tool for Building Language Models Upon Common-Crawl
Szymon Roziewski | Wojciech Stokowiec
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
The web contains an immense amount of data: hundreds of billions of words are waiting to be extracted and used for language research. In this work we introduce our tool LanguageCrawl, which allows NLP researchers to easily construct a web-scale corpus from the Common Crawl Archive: a petabyte-scale, open repository of web crawl information. Three use-cases are presented: filtering Polish websites, building N-gram corpora, and training a continuous skip-gram language model with hierarchical softmax. Each of them has been implemented within the LanguageCrawl toolkit, with the possibility of adjusting the specified language and N-gram ranks. Special effort has been put into high computing efficiency by applying highly concurrent multitasking. We make our tool publicly available to enrich NLP resources. We strongly believe that our work will help to facilitate NLP research, especially in under-resourced languages, where the lack of appropriately sized corpora is a serious hindrance to applying data-intensive methods such as deep neural networks.
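The skip-gram use-case can be reproduced with a short sketch using gensim (assuming gensim >= 4.0); the toy sentences stand in for tokenised output of the crawler pipeline and are not part of the LanguageCrawl toolkit itself.

# Sketch: train a continuous skip-gram model with hierarchical softmax
# on sentences extracted from a filtered Common Crawl dump.
from gensim.models import Word2Vec

# Hypothetical corpus: an iterable of tokenised sentences produced by the crawler.
sentences = [["to", "jest", "przykładowe", "zdanie"],
             ["drugie", "zdanie", "z", "korpusu"]]

model = Word2Vec(sentences,
                 sg=1,          # skip-gram
                 hs=1,          # hierarchical softmax
                 negative=0,    # disable negative sampling
                 vector_size=100,
                 window=5,
                 min_count=1,
                 workers=4)
print(model.wv.most_similar("zdanie"))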