%0 Conference Proceedings
%T Understanding and Improving Hidden Representations for Neural Machine Translation
%A Li, Guanlin
%A Liu, Lemao
%A Li, Xintong
%A Zhu, Conghui
%A Zhao, Tiejun
%A Shi, Shuming
%Y Burstein, Jill
%Y Doran, Christy
%Y Solorio, Thamar
%S Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
%D 2019
%8 June
%I Association for Computational Linguistics
%C Minneapolis, Minnesota
%F li-etal-2019-understanding-improving
%X Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations; however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all tree-induced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate that the proposed methods lead only to small extra overheads in training and no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.
%R 10.18653/v1/N19-1046
%U https://aclanthology.org/N19-1046
%U https://doi.org/10.18653/v1/N19-1046
%P 466-477