Dohee Kim
2023
Towards Formality-Aware Neural Machine Translation by Leveraging Context Information
Dohee Kim | Yujin Baek | Soyoung Yang | Jaegul Choo
Findings of the Association for Computational Linguistics: EMNLP 2023
Formality is one of the most important linguistic properties determining the naturalness of a translation. Although the target-side context contains formality-related tokens, their sparsity within the context makes it difficult for context-aware neural machine translation (NMT) models to discern them properly. In this paper, we introduce a novel training method that explicitly informs the NMT model by pinpointing key informative tokens with a formality classifier. Given a target context, the formality classifier guides the model to concentrate on the formality-related tokens within that context. Additionally, we modify the standard cross-entropy loss to emphasize the formality-related tokens identified by the classifier. Experimental results show that our approaches not only improve overall translation quality but also reflect the appropriate formality from the target context.
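The abstract does not spell out the exact form of the modified loss, so the following is only a minimal sketch of one plausible reading: a token-level cross-entropy that up-weights tokens flagged by a formality classifier. The `formality_mask` input and the `alpha` weight are hypothetical names, not the authors' notation.

```python
import torch
import torch.nn.functional as F

def formality_weighted_ce(logits, targets, formality_mask, alpha=2.0):
    """Cross-entropy that up-weights formality-related tokens.

    logits:         (batch, seq_len, vocab) decoder outputs
    targets:        (batch, seq_len) gold target token ids
    formality_mask: (batch, seq_len) float mask, 1.0 where the formality
                    classifier flagged a token, 0.0 elsewhere (hypothetical)
    alpha:          extra weight on flagged tokens (assumed value)
    """
    # Per-token negative log-likelihood, unreduced so it can be reweighted.
    # (Padding handling is omitted for brevity.)
    nll = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")

    # Scale the loss on flagged tokens by (1 + alpha), leave the rest as-is.
    weights = 1.0 + alpha * formality_mask
    return (weights * nll).mean()

# Toy usage: batch of 2 sequences, length 5, vocab 100 (random data).
logits = torch.randn(2, 5, 100)
targets = torch.randint(0, 100, (2, 5))
mask = torch.zeros(2, 5)
mask[0, 2] = 1.0  # one token flagged as formality-related
loss = formality_weighted_ce(logits, targets, mask)
```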
AniEE: A Dataset of Animal Experimental Literature for Event Extraction
Dohee Kim | Ra Yoo | Soyoung Yang | Hee Yang | Jaegul Choo
Findings of the Association for Computational Linguistics: EMNLP 2023
Event extraction (EE), a crucial information extraction (IE) task, aims to identify event triggers and their associated arguments in unstructured text and to classify them into pre-defined types and roles. In the biomedical domain, EE is widely used to extract complex structures representing biological events from the literature. Because of the complicated semantics and the specialized domain knowledge required, constructing biomedical event extraction datasets is challenging. Additionally, most existing biomedical EE datasets focus primarily on cell experiments or on overall experimental procedures. We therefore introduce AniEE, an event extraction dataset concentrated on the animal experiment stage. We establish a novel entity and event scheme customized to animal experiments in collaboration with domain experts. We then create a high-quality, expert-annotated dataset containing discontinuous entities and nested events, and we benchmark recent state-of-the-art NER and EE models on it.
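To make the task structure concrete, here is a hypothetical sketch of the kind of record an EE dataset annotates, including a nested event (an event filling another event's argument role). The schema, field names, and example sentence are illustrative only, not AniEE's actual annotation format.

```python
# Illustrative only: field names and types are invented, not AniEE's schema.
example = {
    "text": "Mice were injected with streptozotocin to induce diabetes.",
    "entities": [
        # A discontinuous entity would list more than one span here.
        {"id": "E1", "type": "Animal", "spans": [(0, 4)], "text": "Mice"},
        {"id": "E2", "type": "Chemical", "spans": [(24, 38)], "text": "streptozotocin"},
    ],
    "events": [
        {   # Trigger and arguments classified into pre-defined types/roles.
            "id": "EV1",
            "type": "Injection",
            "trigger": {"span": (10, 18), "text": "injected"},
            "arguments": [
                {"role": "Subject", "ref": "E1"},
                {"role": "Substance", "ref": "E2"},
            ],
        },
        {   # Nested event: EV1 itself fills the Cause role of EV2.
            "id": "EV2",
            "type": "Induction",
            "trigger": {"span": (42, 48), "text": "induce"},
            "arguments": [{"role": "Cause", "ref": "EV1"}],
        },
    ],
}

# Spans are (start, end) character offsets into "text".
assert example["text"][10:18] == "injected"
```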
2022
Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model
Hojun Cho | Dohee Kim | Seungwoo Ryu | ChaeHun Park | Hyungjong Noh | Jeong-in Hwang | Minseok Choi | Edward Choi | Jaegul Choo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Style control, content preservation, and fluency determine the quality of text style transfer models. To train on a nonparallel corpus, several existing approaches aim to deceive the style discriminator with an adversarial loss. However, adversarial training significantly degrades fluency relative to the other two metrics. In this work, we explain this phenomenon with an energy-based interpretation and leverage a pretrained language model to improve fluency. Specifically, we propose a novel approach that applies the pretrained language model to the text style transfer framework by restructuring both the discriminator and the model itself, allowing the generator and the discriminator to take advantage of the power of the pretrained model. We evaluate our model on three public benchmarks, GYAFC, Amazon, and Yelp, and achieve state-of-the-art performance on the overall metrics.
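As a rough illustration of the energy-based view of fluency (a sketch under assumptions, not the paper's actual training objective), a pretrained language model's negative log-likelihood can serve as an energy function: lower energy corresponds to more fluent text. The GPT-2 checkpoint below is an assumed stand-in for the pretrained model.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def lm_energy(sentence: str) -> float:
    """Negative log-likelihood of a sentence under the LM, used as energy."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    # With labels=ids, the model returns the mean cross-entropy over the
    # (seq_len - 1) next-token predictions; rescale to a total NLL.
    mean_nll = lm(ids, labels=ids).loss
    return mean_nll.item() * (ids.size(1) - 1)

# A fluent sentence should receive lower energy than a shuffled one.
print(lm_energy("How are you doing today?"))
print(lm_energy("you doing are today How?"))
```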