Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction

Pengfei Yu, Jonathan May, Heng Ji


Abstract
Recent research in cross-lingual learning has found that combining large-scale pretrained multilingual language models with machine translation can yield good performance. We explore this idea for cross-lingual event extraction with a new model architecture that jointly encodes a source-language input sentence with its translation into the target language during training, and takes a target-language sentence with its translation back into the source language as input during evaluation. However, we observe a significant representational gap between the native source-language texts seen during training and the texts translated into the source language during evaluation, as well as between the texts translated into the target language during training and the native target-language texts seen during evaluation. This representational gap undermines the effectiveness of cross-lingual transfer learning for event extraction with machine-translated data. To mitigate this problem, we propose an adversarial training framework that encourages the language model to produce more similar representations for translated text and native text. Specifically, we train the language model so that its hidden representations can fool a jointly trained discriminator that distinguishes translated texts’ representations from native texts’ representations. We conduct experiments on cross-lingual event extraction across three languages. Results demonstrate that our proposed adversarial training can effectively incorporate machine translation to improve event extraction, while simply adding machine-translated data yields unstable performance due to the representational gap.
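The adversarial objective described above can be illustrated with a toy sketch (all names, dimensions, and hyperparameters here are hypothetical illustrations, not taken from the paper): a jointly trained logistic-regression discriminator tries to tell translated-text representations from native-text ones, while the encoder is updated with the reversed gradient so that the two distributions become harder to distinguish.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "representations": native vs. translated text embeddings drawn from two
# shifted Gaussians, standing in for the multilingual LM's hidden states.
dim = 8
native = rng.normal(0.0, 1.0, size=(64, dim))      # native-text inputs
translated = rng.normal(0.7, 1.0, size=(64, dim))  # translated-text inputs

# Encoder: a single linear map applied to both sources (a stand-in for
# fine-tuning the language model's representation function).
W = np.eye(dim)

# Discriminator: logistic regression predicting 1 = translated, 0 = native.
w = rng.normal(0.0, 0.1, size=dim)
b = 0.0

lr_disc, lr_enc = 0.5, 0.1

for step in range(200):
    # Encode both sources and label them for the discriminator.
    X = np.vstack([native @ W, translated @ W])
    y = np.concatenate([np.zeros(len(native)), np.ones(len(translated))])
    p = sigmoid(X @ w + b)

    # Discriminator step: descend the cross-entropy loss w.r.t. (w, b),
    # i.e. get better at telling translated from native representations.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= lr_disc * grad_w
    b -= lr_disc * grad_b

    # Encoder step: the gradient of the discriminator loss w.r.t. W is
    # reversed (W is moved to *increase* that loss), so the encoder learns
    # representations that fool the discriminator.
    dX = np.outer(p - y, w) / len(y)                 # dLoss/dX
    grad_W = np.vstack([native, translated]).T @ dX  # dLoss/dW
    W += lr_enc * grad_W                             # reversed sign

# After adversarial training, the discriminator should be closer to chance
# on the encoded representations.
p_final = sigmoid(np.vstack([native @ W, translated @ W]) @ w + b)
labels = np.concatenate([np.zeros(len(native)), np.ones(len(translated))])
acc = float(np.mean((p_final > 0.5) == labels))
print(f"discriminator accuracy after adversarial training: {acc:.2f}")
```

The reversed-sign encoder update plays the same role as a gradient reversal layer in standard adversarial domain-adaptation setups; the paper's actual framework operates on a pretrained multilingual language model rather than this toy linear encoder.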
Anthology ID:
2023.findings-eacl.57
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
754–769
URL:
https://aclanthology.org/2023.findings-eacl.57
DOI:
10.18653/v1/2023.findings-eacl.57
Cite (ACL):
Pengfei Yu, Jonathan May, and Heng Ji. 2023. Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction. In Findings of the Association for Computational Linguistics: EACL 2023, pages 754–769, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction (Yu et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.57.pdf
Video:
https://aclanthology.org/2023.findings-eacl.57.mp4