Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation

Yuto Kuroda, Tomoyuki Kajiwara, Yuki Arase, Takashi Ninomiya


Abstract
We propose a method to distill language-agnostic meaning embeddings from multilingual sentence encoders for unsupervised quality estimation of machine translation. Our method encourages the meaning embeddings to focus on semantics through adversarial training that attempts to eliminate language-specific information. Experimental results on unsupervised quality estimation show that our method achieves higher correlations with human evaluations.
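The abstract describes the adversarial objective only at a high level. As a rough illustration, the sketch below shows one common way to implement this kind of disentanglement in PyTorch: a gradient reversal layer lets a language classifier (the adversary) learn to identify the language of a sentence while the meaning projection is simultaneously pushed to remove language cues. All module names, dimensions, and the loss combination here are assumptions for illustration, not the authors' implementation; the paper itself may use a different adversarial scheme (e.g., alternating min/max updates).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient: the projector is trained to *fool* the
        # language classifier, i.e., to strip language-specific information.
        return -ctx.lambd * grad_output, None


class MeaningDisentangler(nn.Module):
    """Hypothetical module: projects multilingual sentence embeddings into a
    meaning space and attaches an adversarial language classifier."""

    def __init__(self, enc_dim=768, meaning_dim=512, num_languages=2, lambd=1.0):
        super().__init__()
        self.meaning_proj = nn.Linear(enc_dim, meaning_dim)   # meaning embedding
        self.lang_clf = nn.Linear(meaning_dim, num_languages)  # the adversary
        self.lambd = lambd

    def forward(self, sent_emb):
        meaning = self.meaning_proj(sent_emb)
        lang_logits = self.lang_clf(GradReverse.apply(meaning, self.lambd))
        return meaning, lang_logits


def training_step(model, src_emb, tgt_emb, src_lang, tgt_lang):
    """One step on a parallel sentence pair (illustrative loss combination)."""
    m_src, logits_src = model(src_emb)
    m_tgt, logits_tgt = model(tgt_emb)
    # Meaning loss: parallel sentences should get similar meaning embeddings.
    sim_loss = 1.0 - F.cosine_similarity(m_src, m_tgt).mean()
    # Adversarial loss: gradient reversal turns classifier training into a
    # language-removal objective for the meaning projection.
    adv_loss = F.cross_entropy(logits_src, src_lang) + F.cross_entropy(logits_tgt, tgt_lang)
    return sim_loss + adv_loss
```

Under this setup, one natural unsupervised quality estimation score, consistent with the abstract, is the cosine similarity between the meaning embeddings of the source sentence and the machine-translated output.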
Anthology ID: 2022.coling-1.465
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Editors: Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 5240–5245
URL: https://aclanthology.org/2022.coling-1.465
Cite (ACL):
Yuto Kuroda, Tomoyuki Kajiwara, Yuki Arase, and Takashi Ninomiya. 2022. Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5240–5245, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation (Kuroda et al., COLING 2022)
PDF: https://aclanthology.org/2022.coling-1.465.pdf