Miruna Pislar


2021

Machine Translation Decoding beyond Beam Search
Rémi Leblond | Jean-Baptiste Alayrac | Laurent Sifre | Miruna Pislar | Jean-Baptiste Lespiau | Ioannis Antonoglou | Karen Simonyan | Oriol Vinyals
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Beam search is the go-to method for decoding auto-regressive machine translation models. While it yields consistent improvements in terms of BLEU, it is only concerned with finding outputs with high model likelihood, and is thus agnostic to whatever end metric or score practitioners care about. Our aim is to establish whether beam search can be replaced by a more powerful metric-driven search technique. To this end, we explore numerous decoding algorithms, including some which rely on a value function parameterised by a neural network, and report results on a variety of metrics. Notably, we introduce a Monte-Carlo Tree Search (MCTS)-based method and showcase its competitiveness. We provide a blueprint for how to use MCTS fruitfully in language applications, which opens promising future directions. We find that the best algorithm depends heavily on the characteristics of the goal metric; we believe that our extensive experiments and analysis will inform further research in this area.
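The core idea, running MCTS over partial translations guided by a value estimate of the end metric rather than by likelihood alone, can be sketched in a few dozen lines. The following is a minimal illustration, not the paper's implementation: `model.next_token_probs` and `metric_value` are hypothetical stand-ins for the translation model's next-token distribution and the learned value function.

```python
import math

class Node:
    """One node per partial output sequence (prefix of token ids)."""
    def __init__(self, prefix, prior):
        self.prefix = prefix      # tokens generated so far
        self.prior = prior        # model probability of the last token
        self.children = {}        # token id -> Node
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.0):
    """Pick the child maximising an AlphaZero-style PUCT score."""
    total = sum(ch.visits for ch in node.children.values())
    return max(
        node.children.values(),
        key=lambda ch: ch.value()
        + c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits),
    )

def mcts_decode(model, metric_value, eos_id, num_simulations=100, max_len=50):
    """Metric-driven decoding: each simulation selects a path with PUCT,
    expands a leaf with the model's next-token distribution, evaluates the
    resulting prefix, and backs the value up the path."""
    root = Node(prefix=[], prior=1.0)
    for _ in range(num_simulations):
        node, path = root, [root]
        # 1. Selection: descend while the node is already expanded.
        while node.children:
            node = puct_select(node)
            path.append(node)
        # 2. Expansion: children come from the model's next-token probabilities.
        if node.prefix[-1:] != [eos_id] and len(node.prefix) < max_len:
            for tok, p in model.next_token_probs(node.prefix):  # hypothetical API
                node.children[tok] = Node(node.prefix + [tok], prior=p)
        # 3. Evaluation: value-network / metric estimate for the prefix.
        v = metric_value(node.prefix)  # hypothetical value function
        # 4. Backup: propagate the value to every node on the path.
        for n in path:
            n.visits += 1
            n.value_sum += v
    # Read out the most-visited path from the root.
    seq, node = [], root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        seq.append(node.prefix[-1])
        if seq[-1] == eos_id:
            break
    return seq
```

In practice a decoder would re-root the tree after committing each token; for brevity the sketch runs all simulations from the empty prefix and reads out the most-visited path at the end.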

2020

Seeing Both the Forest and the Trees: Multi-head Attention for Joint Classification on Different Compositional Levels
Miruna Pislar | Marek Rei
Proceedings of the 28th International Conference on Computational Linguistics

In natural languages, words are used in association to construct sentences. It is not words in isolation, but the appropriate use of hierarchical structures, that conveys the meaning of the whole sentence. Neural networks can capture expressive language features; however, insights into the link between words and sentences are difficult to acquire automatically. In this work, we design a deep neural network architecture that explicitly wires together lower and higher linguistic components; we then evaluate its ability to perform the same task at different hierarchical levels. Focusing on broad text classification tasks, we show that our model, MHAL, learns to solve them simultaneously at different levels of granularity by fluidly transferring knowledge between hierarchies. Using a multi-head attention mechanism to tie the representations of single words to those of full sentences, MHAL systematically outperforms equivalent models that are not incentivized to develop compositional representations. Moreover, we demonstrate that, with the proposed architecture, sentence information flows naturally down to individual words, allowing the model to behave like a sequence labeler (a lower, word-level task) even without any word supervision, in a zero-shot fashion.
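As a rough illustration of the tying mechanism described above, here is a minimal sketch, not the authors' code: it assumes one attention head per sentence label, so each head's per-word attention scores double as soft word-level label evidence, which is what enables the zero-shot sequence labeling. All module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class MHALSketch(nn.Module):
    """Minimal sketch of label-tied multi-head attention: one head per
    sentence label, so per-word attention evidence doubles as a soft
    word-level label (the basis of zero-shot sequence labeling)."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.head_scores = nn.Linear(2 * hidden_dim, num_labels)
        self.sentence_out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))  # (B, T, 2H)
        word_logits = self.head_scores(states)           # (B, T, L): per-word evidence
        attn = torch.softmax(word_logits, dim=1)         # per-label attention over words
        # Each head pools the words it attends to into a label-specific summary.
        pooled = torch.einsum('btl,bth->blh', attn, states)               # (B, L, 2H)
        sent_logits = self.sentence_out(pooled).diagonal(dim1=1, dim2=2)  # (B, L)
        # sent_logits classifies the sentence; a per-token argmax over
        # word_logits yields word labels with no word-level supervision.
        return sent_logits, word_logits
```

With only sentence-level supervision on `sent_logits`, the per-word attention scores still acquire label structure, which is the zero-shot behaviour the abstract describes.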