James Bradbury
2019
A High-Quality Multilingual Dataset for Structured Documentation Translation
Kazuma Hashimoto | Raffaella Buschiazzo | James Bradbury | Teresa Marshall | Richard Socher | Caiming Xiong
Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)
This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss trade-offs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing.
2017
Towards Neural Machine Translation with Latent Tree Attention
James Bradbury | Richard Socher
Proceedings of the 2nd Workshop on Structured Prediction for Natural Language Processing
Building models that take advantage of the hierarchical structure of language without a priori annotation is a longstanding goal in natural language processing. We introduce such a model for the task of machine translation, pairing a recurrent neural network grammar encoder with a novel attentional RNNG decoder and applying policy gradient reinforcement learning to induce unsupervised tree structures on both the source and target. When trained on character-level datasets with no explicit segmentation or parse annotation, the model learns a plausible segmentation and shallow parse, obtaining performance close to an attentional baseline.
2016
MetaMind Neural Machine Translation System for WMT 2016
James Bradbury | Richard Socher
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers