%0 Conference Proceedings
%T Residual Stacking of RNNs for Neural Machine Translation
%A Shu, Raphael
%A Miura, Akiva
%Y Nakazawa, Toshiaki
%Y Mino, Hideya
%Y Ding, Chenchen
%Y Goto, Isao
%Y Neubig, Graham
%Y Kurohashi, Sadao
%Y Riza, Ir. Hammam
%Y Bhattacharyya, Pushpak
%S Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
%D 2016
%8 December
%I The COLING 2016 Organizing Committee
%C Osaka, Japan
%F shu-miura-2016-residual
%X To enhance Neural Machine Translation models, several obvious approaches can be considered, such as enlarging the hidden size of the recurrent layers or stacking multiple layers of RNN. Surprisingly, we observe that using naively stacked RNNs in the decoder slows down training and degrades performance. In this paper, we demonstrate that applying residual connections along the depth of stacked RNNs, which we refer to as residual stacking, helps optimization. In empirical evaluation, residual stacking of decoder RNNs gives superior results compared to other methods of enhancing the model under a fixed parameter budget. Our submitted systems in WAT2016 are based on an NMT model ensemble with residual stacking in the decoder. To further improve performance, we also attempt various methods of system combination in our experiments.
%U https://aclanthology.org/W16-4623
%P 223-229