PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents

Ryo Fujii, Masato Mita, Kaori Abe, Kazuaki Hanawa, Makoto Morishita, Jun Suzuki, Kentaro Inui


Abstract
Neural Machine Translation (NMT) has shown remarkable improvement in translation quality on clean input, such as text from the news domain. However, existing studies suggest that NMT still struggles with input containing considerable noise, such as User-Generated Contents (UGC) on the Internet. To make better use of NMT for cross-cultural communication, one of the most promising directions is to develop a model that correctly handles these expressions. Although the importance of this problem has been recognized, it remains unclear what creates the large performance gap between the translation of clean input and that of UGC. To answer this question, we present a new dataset, PheMT, for evaluating the robustness of MT systems against specific linguistic phenomena in Japanese-English translation. Our experiments with the created dataset revealed that not only our in-house models but even widely used off-the-shelf systems are greatly disturbed by the presence of certain phenomena.
Anthology ID:
2020.coling-main.521
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5929–5943
URL:
https://aclanthology.org/2020.coling-main.521
DOI:
10.18653/v1/2020.coling-main.521
Cite (ACL):
Ryo Fujii, Masato Mita, Kaori Abe, Kazuaki Hanawa, Makoto Morishita, Jun Suzuki, and Kentaro Inui. 2020. PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5929–5943, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents (Fujii et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.521.pdf
Code
cl-tohoku/PheMT
Data
PheMT, MTNT