Ryuichiro Kimura


2019

Attention over Heads: A Multi-Hop Attention for Neural Machine Translation
Shohei Iida | Ryuichiro Kimura | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

In this paper, we propose a multi-hop attention for the Transformer. It refines the attention for an output symbol by integrating the attention of each head and consists of two hops. The first hop attention is the scaled dot-product attention, the same attention mechanism used in the original Transformer. The second hop attention is a combination of multi-layer perceptron (MLP) attention and a head gate, which efficiently increases the complexity of the model by adding dependencies between heads. We demonstrate that the proposed multi-hop attention significantly outperforms the baseline Transformer in translation accuracy, by +0.85 BLEU points on the IWSLT-2017 German-to-English task and +2.58 BLEU points on the WMT-2017 German-to-English task. We also find that a multi-hop attention requires fewer parameters than stacking another self-attention layer, and that the proposed model converges significantly faster than the original Transformer.
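
The sketch below (in PyTorch, which the paper does not prescribe) illustrates one way the two hops described in the abstract could be wired together: a standard scaled dot-product attention within each head, followed by an MLP attention over the head outputs combined with a head gate. All module and variable names are illustrative assumptions, not the authors' implementation.

    # Minimal two-hop attention sketch; names and layer sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoHopAttention(nn.Module):
        def __init__(self, d_model: int, n_heads: int):
            super().__init__()
            assert d_model % n_heads == 0
            self.n_heads, self.d_head = n_heads, d_model // n_heads
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            # Second hop: MLP attention scoring each head's output, plus a head gate.
            self.mlp_score = nn.Sequential(
                nn.Linear(self.d_head, self.d_head), nn.Tanh(),
                nn.Linear(self.d_head, 1))
            self.head_gate = nn.Linear(d_model, n_heads)
            self.out_proj = nn.Linear(d_model, d_model)

        def forward(self, x):                        # x: (batch, seq, d_model)
            b, t, _ = x.shape
            def split(h):                            # -> (batch, heads, seq, d_head)
                return h.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
            q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))

            # First hop: ordinary scaled dot-product attention within each head.
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
            heads = F.softmax(scores, dim=-1) @ v    # (batch, heads, seq, d_head)

            # Second hop: attend over the heads themselves and gate each head.
            head_attn = F.softmax(self.mlp_score(heads).squeeze(-1), dim=1)  # (b, heads, seq)
            gate = torch.sigmoid(self.head_gate(x)).transpose(1, 2)          # (b, heads, seq)
            weights = (head_attn * gate).unsqueeze(-1)                       # (b, heads, seq, 1)
            mixed = (weights * heads).transpose(1, 2).reshape(b, t, -1)      # weighted concat
            return self.out_proj(mixed)

    # Example usage:
    attn = TwoHopAttention(d_model=512, n_heads=8)
    y = attn(torch.randn(2, 10, 512))                # -> (2, 10, 512)
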

Selecting Informative Context Sentence by Forced Back-Translation
Ryuichiro Kimura | Shohei Iida | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of Machine Translation Summit XVII: Research Track

A Multi-Hop Attention for RNN based Neural Machine Translation
Shohei Iida | Ryuichiro Kimura | Hongyi Cui | Po-Hsuan Hung | Takehito Utsuro | Masaaki Nagata
Proceedings of the 8th Workshop on Patent and Scientific Literature Translation

2017

Neural Machine Translation Model with a Large Vocabulary Selected by Branching Entropy
Zi Long | Ryuichiro Kimura | Takehito Utsuro | Tomoharu Mitsuhashi | Mikio Yamamoto
Proceedings of Machine Translation Summit XVI: Research Track

Patent NMT integrated with Large Vocabulary Phrase Translation by SMT at WAT 2017
Zi Long | Ryuichiro Kimura | Takehito Utsuro | Tomoharu Mitsuhashi | Mikio Yamamoto
Proceedings of the 4th Workshop on Asian Translation (WAT2017)

Neural machine translation (NMT) cannot handle a large vocabulary because training and decoding complexity increase in proportion to the number of target words. This problem becomes even more serious when translating patent documents, which contain many technical terms that are observed only infrequently. Long et al. (2017) proposed selecting phrases that contain out-of-vocabulary words using the statistical measure of branching entropy. The selected phrases are replaced with special tokens during training and post-translated using the phrase translation table of an SMT system. In this paper, we apply the method of Long et al. (2017) to the WAT 2017 Japanese-Chinese and Japanese-English patent datasets. Evaluations on Japanese-to-Chinese, Chinese-to-Japanese, Japanese-to-English, and English-to-Japanese patent sentence translation confirm the effectiveness of the phrases selected with branching entropy: the NMT model of Long et al. (2017) achieves a substantial improvement over a baseline NMT model that does not use the proposed technique.
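
As a rough illustration of the pipeline described above, the sketch below shows only the token-replacement and post-translation steps: selected out-of-vocabulary phrases are swapped for placeholder tokens before NMT training and decoding, then restored afterwards from an SMT phrase table. The function names, placeholder format, and toy phrase table are assumptions for illustration; the branching-entropy phrase selection itself is omitted.

    # Illustrative sketch of phrase replacement and SMT post-translation.
    # Names and the toy phrase table are assumptions, not the actual pipeline.

    def replace_phrases(sentence_tokens, oov_phrases):
        """Replace selected out-of-vocabulary phrases with placeholder tokens."""
        mapping, tokens = {}, list(sentence_tokens)
        for i, phrase in enumerate(oov_phrases):
            placeholder = f"<TT_{i}>"            # one placeholder per technical term
            joined, phrase_str = " ".join(tokens), " ".join(phrase)
            if phrase_str in joined:
                joined = joined.replace(phrase_str, placeholder, 1)
                mapping[placeholder] = phrase
                tokens = joined.split()
        return tokens, mapping

    def post_translate(nmt_output_tokens, mapping, smt_phrase_table):
        """Replace each placeholder in the NMT output with its SMT phrase translation."""
        out = []
        for tok in nmt_output_tokens:
            if tok in mapping:
                source_phrase = " ".join(mapping[tok])
                out.extend(smt_phrase_table.get(source_phrase, mapping[tok]))
            else:
                out.append(tok)
        return out

    # Toy usage with a hypothetical English-to-Japanese phrase table:
    src = "the photoresist stripping apparatus removes residue".split()
    tokens, mapping = replace_phrases(src, [["photoresist", "stripping", "apparatus"]])
    # An NMT model would translate `tokens`; here we reuse them to show restoration.
    table = {"photoresist stripping apparatus": ["フォトレジスト", "剥離", "装置"]}
    print(post_translate(tokens, mapping, table))
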