Yujin Baek
2023
Towards Accurate Translation via Semantically Appropriate Application of Lexical Constraints
Yujin Baek | Koanho Lee | Dayeon Ki | Cheonbok Park | Hyoung-Gyu Lee | Jaegul Choo
Findings of the Association for Computational Linguistics: ACL 2023
Lexically-constrained NMT (LNMT) aims to incorporate user-provided terminology into translations. Despite its practical advantages, existing work has not evaluated LNMT models under challenging real-world conditions. In this paper, we focus on two important but understudied issues in the current evaluation of LNMT: a model must cope with lexical constraints that are “homographs” or are “unseen” during training. To this end, we first design a homograph disambiguation module to differentiate the meanings of homographs. Moreover, we propose PLUMCOT, which integrates contextually rich information about unseen lexical constraints from pre-trained language models and strengthens the copy mechanism of the pointer network via direct supervision of a copying score. We also release HOLLY, an evaluation benchmark for assessing a model’s ability to cope with “homographic” and “unseen” lexical constraints. Experiments on HOLLY and the previous test setup show the effectiveness of our method; the effect of PLUMCOT is particularly pronounced on “unseen” constraints. Our dataset is available at https://github.com/papago-lab/HOLLY-benchmark.
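To make the copy-score supervision concrete, the sketch below computes a pointer-network-style copy gate and applies a binary cross-entropy loss that pushes the gate toward 1 on constraint tokens. This is a minimal PyTorch sketch under our own assumptions; the `copy_gate_loss` function, its inputs, and the gating form are illustrative, not the PLUMCOT implementation.

```python
import torch
import torch.nn.functional as F

def copy_gate_loss(hidden, context, gate_proj, copy_labels):
    """Directly supervise a pointer-network copy gate (illustrative).

    hidden:      (batch, d) decoder hidden states
    context:     (batch, d) attention context vectors
    gate_proj:   torch.nn.Linear(2 * d, 1) producing the copy logit
    copy_labels: (batch,) 1.0 where the target token should be copied
                 from a lexical constraint, 0.0 otherwise
    """
    copy_score = torch.sigmoid(
        gate_proj(torch.cat([hidden, context], dim=-1))
    ).squeeze(-1)
    # Direct supervision: binary cross-entropy pulls the copy score
    # toward 1 on constraint tokens and toward 0 elsewhere.
    return F.binary_cross_entropy(copy_score, copy_labels)

# Toy usage with random tensors.
d = 8
gate = torch.nn.Linear(2 * d, 1)
h, c = torch.randn(4, d), torch.randn(4, d)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(copy_gate_loss(h, c, gate, labels))
```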
Towards Formality-Aware Neural Machine Translation by Leveraging Context Information
Dohee Kim | Yujin Baek | Soyoung Yang | Jaegul Choo
Findings of the Association for Computational Linguistics: EMNLP 2023
Formality is one of the most important linguistic properties determining the naturalness of a translation. Although the target-side context contains formality-related tokens, their sparsity within the context makes it difficult for context-aware neural machine translation (NMT) models to properly discern them. In this paper, we introduce a novel training method that explicitly informs the NMT model by pinpointing key informative tokens with a formality classifier. Given a target context, the formality classifier guides the model to concentrate on the formality-related tokens within the context. Additionally, we modify the standard cross-entropy loss to emphasize the formality-related tokens identified by the classifier. Experimental results show that our approaches not only improve overall translation quality but also reflect the appropriate formality from the target context.
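A minimal sketch of the loss modification, assuming a simple per-token up-weighting scheme: the `alpha` weight and the `formality_mask` produced by the classifier are illustrative assumptions, not the paper’s exact formulation.

```python
import torch
import torch.nn.functional as F

def formality_weighted_ce(logits, targets, formality_mask, alpha=2.0):
    """Cross-entropy that up-weights classifier-flagged formality tokens.

    logits:         (batch, seq, vocab) decoder output scores
    targets:        (batch, seq) gold token ids
    formality_mask: (batch, seq) 1 where the formality classifier marks
                    a token as formality-related, 0 elsewhere
    alpha:          extra weight applied to formality-related tokens
    """
    # Per-token cross-entropy; F.cross_entropy expects (batch, vocab, seq).
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    weights = 1.0 + (alpha - 1.0) * formality_mask.float()
    return (weights * ce).mean()

# Toy usage: batch of 2 sequences of length 5 over a 10-token vocabulary.
logits = torch.randn(2, 5, 10)
targets = torch.randint(0, 10, (2, 5))
mask = torch.zeros(2, 5)
mask[:, 1] = 1  # pretend the second token is formality-related
print(formality_weighted_ce(logits, targets, mask))
```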
2020
PATQUEST: Papago Translation Quality Estimation
Yujin Baek | Zae Myung Kim | Jihyung Moon | Hyunjoong Kim | Eunjeong Park
Proceedings of the Fifth Conference on Machine Translation
This paper describes the system submitted by the Papago team for the quality estimation task at WMT 2020. We propose two key strategies for quality estimation: (1) a task-specific pretraining scheme and (2) task-specific data augmentation. The former focuses on devising learning signals for pretraining that are closely related to the downstream task. We also present data augmentation techniques that simulate the varying levels of errors the downstream dataset may contain. Thus, our PATQUEST models are exposed to erroneous translations in both task-specific pretraining and finetuning, effectively enhancing their generalization capability. Our submitted models achieve significant improvement over the baselines for Task 1 (Sentence-Level Direct Assessment; EN-DE only) and Task 3 (Document-Level Score).
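To make the augmentation idea concrete, here is a toy sketch that injects omission and substitution errors into a translation at a controllable rate; the specific corruption operations and the `error_rate` knob are illustrative assumptions, not the augmentation pipeline used for PATQUEST.

```python
import random

def corrupt_translation(tokens, error_rate=0.2, seed=None):
    """Simulate translation errors by dropping or substituting tokens.

    tokens:     list of target-side tokens
    error_rate: fraction of tokens to corrupt (half dropped, half replaced)
    """
    rng = random.Random(seed)
    corrupted = []
    for tok in tokens:
        r = rng.random()
        if r < error_rate / 2:
            continue                              # omission error: drop the token
        elif r < error_rate:
            corrupted.append(rng.choice(tokens))  # substitution error
        else:
            corrupted.append(tok)
    return corrupted

# Toy usage: a heavier corruption rate simulates a lower-quality translation.
sentence = "the model translates this sentence into german".split()
print(corrupt_translation(sentence, error_rate=0.4, seed=0))
```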
Co-authors
- Jaegul Choo 2
- Koanho Lee 1
- Dayeon Ki 1
- Cheonbok Park 1
- Hyoung-Gyu Lee 1
- Dohee Kim 1
- Soyoung Yang 1
- Zae Myung Kim 1
- Jihyung Moon 1
- Hyunjoong Kim 1
- Eunjeong Park 1