2020
University of Tsukuba’s Machine Translation System for IWSLT20 Open Domain Translation Task
Hongyi Cui | Yizhen Wei | Shohei Iida | Takehito Utsuro | Masaaki Nagata
Proceedings of the 17th International Conference on Spoken Language Translation
In this paper, we introduce the University of Tsukuba’s submission to the IWSLT20 Open Domain Translation Task. We participate in both the Chinese→Japanese and Japanese→Chinese directions. For both directions, our machine translation systems are based on the Transformer architecture. Several techniques are integrated to boost the performance of our models: data filtering, large-scale noised training, model ensembling, reranking and postprocessing. Consequently, our systems achieve BLEU scores of 33.0 for Chinese→Japanese translation and 32.3 for Japanese→Chinese translation.
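As a rough illustration of the data-filtering stage mentioned above, the following Python sketch applies common parallel-corpus heuristics (empty-line removal, sentence-length bounds, and a source/target length-ratio cap). The function names and thresholds are illustrative assumptions, not the exact criteria used in the submitted systems.

```python
# Minimal sketch of parallel-corpus filtering for Zh-Ja NMT training data.
# MAX_LEN and MAX_RATIO are illustrative assumptions, not the paper's values.

MAX_LEN = 150     # drop overly long sentences (measured in characters)
MAX_RATIO = 3.0   # drop pairs whose length ratio looks misaligned

def keep_pair(src: str, tgt: str) -> bool:
    """Return True if a source/target sentence pair passes all heuristics."""
    if not src.strip() or not tgt.strip():
        return False                                  # empty line
    if len(src) > MAX_LEN or len(tgt) > MAX_LEN:
        return False                                  # overlong sentence
    ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
    return ratio <= MAX_RATIO                         # implausible length ratio

def filter_corpus(pairs):
    """Yield only the sentence pairs that survive the filters."""
    for src, tgt in pairs:
        if keep_pair(src, tgt):
            yield src, tgt
```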
2019
Attention over Heads: A Multi-Hop Attention for Neural Machine Translation
Shohei Iida | Ryuichiro Kimura | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
In this paper, we propose a multi-hop attention for the Transformer. It refines the attention for an output symbol by integrating the attention of each head, and consists of two hops. The first hop is the scaled dot-product attention used in the original Transformer. The second hop combines multi-layer perceptron (MLP) attention with a head gate, which efficiently increases the complexity of the model by adding dependencies between heads. We demonstrate that the proposed multi-hop attention significantly outperforms the baseline Transformer in translation accuracy: +0.85 BLEU points on the IWSLT-2017 German-to-English task and +2.58 BLEU points on the WMT-2017 German-to-English task. We also find that multi-hop attention requires fewer parameters than stacking another self-attention layer, and that the proposed model converges significantly faster than the original Transformer.
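Interpreting the abstract loosely, the second hop can be sketched as attention over the per-head outputs of the first (scaled dot-product) hop: an MLP scores each head, a sigmoid gate modulates the scores, and the heads are combined with the resulting weights. The PyTorch module below is such a sketch under those assumptions; it is not the authors’ exact formulation, and in practice a projection back to the model dimension would follow.

```python
import torch
import torch.nn as nn

class SecondHopOverHeads(nn.Module):
    """Illustrative second hop: MLP attention plus a head gate applied to the
    per-head outputs of standard scaled dot-product (first-hop) attention."""

    def __init__(self, d_head: int):
        super().__init__()
        self.score_mlp = nn.Sequential(          # MLP attention over heads
            nn.Linear(d_head, d_head), nn.Tanh(), nn.Linear(d_head, 1))
        self.head_gate = nn.Linear(d_head, 1)    # per-head gate

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: [batch, seq_len, n_heads, d_head] from the first hop
        scores = self.score_mlp(head_outputs).squeeze(-1)            # [B, T, H]
        gates = torch.sigmoid(self.head_gate(head_outputs)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1) * gates              # attend over heads
        # Combine heads with the learned weights instead of plain concatenation.
        return (weights.unsqueeze(-1) * head_outputs).sum(dim=2)     # [B, T, d_head]
```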
Mixed Multi-Head Self-Attention for Neural Machine Translation
Hongyi Cui | Shohei Iida | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of the 3rd Workshop on Neural Generation and Translation
Recently, the Transformer has become a state-of-the-art architecture in the field of neural machine translation (NMT). A key to its high performance is multi-head self-attention, which is supposed to allow the model to independently attend to information from different representation subspaces. However, there is no explicit mechanism to ensure that different attention heads indeed capture different features, and in practice redundancy occurs across heads. In this paper, we argue that using the same global attention in multiple heads limits multi-head self-attention’s capacity for learning distinct features. To improve the expressiveness of multi-head self-attention, we propose a novel Mixed Multi-Head Self-Attention (MMA) which models not only global and local attention but also forward and backward attention in different attention heads. This enables the model to explicitly learn distinct representations across heads. In our experiments on both the WAT17 English-Japanese and the IWSLT14 German-English translation tasks, we show that, without increasing the number of parameters, our models yield consistent and significant improvements (0.9 BLEU points on average) over the strong Transformer baseline.
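The central idea, assigning global, local, forward, and backward contexts to different heads, can be pictured as per-head attention masks. The sketch below builds those four mask patterns in PyTorch under assumed conventions (a fixed local window; forward meaning the current and preceding positions); it illustrates the mask shapes only, not the paper’s implementation.

```python
import torch

def mixed_head_masks(seq_len: int, window: int = 3):
    """Illustrative attention masks for four kinds of heads:
    global, local (fixed window), forward, and backward.
    True marks key positions a query position may attend to."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape [T, 1]
    j = torch.arange(seq_len).unsqueeze(0)   # key positions,   shape [1, T]
    return {
        "global":   torch.ones(seq_len, seq_len, dtype=torch.bool),
        "local":    (i - j).abs() <= window,   # only nearby tokens
        "forward":  j <= i,                    # current and preceding tokens
        "backward": j >= i,                    # current and following tokens
    }

# Each mask would be assigned to a different subset of heads, e.g. by setting
# masked-out attention logits to -inf before the softmax.
```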
Selecting Informative Context Sentence by Forced Back-Translation
Ryuichiro Kimura | Shohei Iida | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of Machine Translation Summit XVII: Research Track
A Multi-Hop Attention for RNN based Neural Machine Translation
Shohei Iida | Ryuichiro Kimura | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of the 8th Workshop on Patent and Scientific Literature Translation