Lianzhang Lou
2024
DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms
Andong Chen | Lianzhang Lou | Kehai Chen | Xuefeng Bai | Yang Xiang | Muyun Yang | Tiejun Zhao | Min Zhang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is to guide LLMs to generate translations with human-like feedback. However, existing self-reflection methods lack effective feedback information, limiting translation performance. To address this, we introduce the DUAL-REFLECT framework, which leverages the dual learning of translation tasks to provide effective feedback, thereby enhancing the model's self-reflective abilities and improving translation performance. Applying this method across various translation tasks has proven its effectiveness in improving translation accuracy and eliminating ambiguities, especially in translation tasks with low-resource language pairs.
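A minimal sketch of the kind of dual-learning feedback loop the abstract describes: translate forward, back-translate via the dual task, compare the back-translation with the source to extract feedback, and refine. The `llm` callable, the prompt wording, and the fixed round count are illustrative assumptions, not the paper's actual prompts or stopping criteria.

```python
"""Illustrative dual-learning reflection loop for translation.

Assumptions (not from the paper): `llm` is any text-in/text-out chat
model supplied by the caller; prompts and round limit are placeholders.
"""

from typing import Callable

LLM = Callable[[str], str]  # assumed interface: prompt string -> response string


def dual_reflect_translate(
    llm: LLM,
    source: str,
    src_lang: str = "Chinese",
    tgt_lang: str = "English",
    max_rounds: int = 2,
) -> str:
    # 1. Draft a forward translation.
    draft = llm(f"Translate the following {src_lang} text into {tgt_lang}:\n{source}")

    for _ in range(max_rounds):
        # 2. Dual task: back-translate the draft into the source language.
        back = llm(f"Translate the following {tgt_lang} text into {src_lang}:\n{draft}")

        # 3. Compare the back-translation with the original source; the
        #    divergence is the dual-learning feedback signal.
        feedback = llm(
            "Compare the original text with its back-translation and list any "
            "meaning differences. Reply 'NONE' if they match.\n"
            f"Original: {source}\nBack-translation: {back}"
        )
        if feedback.strip().upper() == "NONE":
            break  # back-translation preserves the meaning; stop refining

        # 4. Reflect: revise the draft using the extracted feedback.
        draft = llm(
            f"Revise this {tgt_lang} translation so the listed issues are "
            f"resolved.\nSource ({src_lang}): {source}\n"
            f"Translation: {draft}\nIssues: {feedback}"
        )
    return draft
```

The appeal of this design is that back-translation yields a reference-free error signal: no gold translation is needed to tell the model concretely what meaning was lost.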
2023
CCEval: A Representative Evaluation Benchmark for the Chinese-centric Multilingual Machine Translation
Lianzhang Lou | Xi Yin | Yutao Xie | Yang Xiang
Findings of the Association for Computational Linguistics: EMNLP 2023
The Chinese-centric Multilingual Machine Translation (MMT) has gained more importance recently due to increasing demands from international business development and cross-cultural exchanges. However, an important factor that limits the progress of this area is the lack of highly representative and high-quality evaluation benchmarks. To fill this gap, we propose CCEval, an impartial and representative Chinese-centric MMT evaluation dataset. This benchmark dataset consists of 2,500 Chinese sentences that we meticulously selected and processed, and covers more diverse linguistic features than other MMT evaluation benchmarks. These sentences have been translated into 11 languages of various resource levels by professional translators via a rigorously controlled process pipeline to ensure their high quality. We conduct experiments to demonstrate our sampling methodology's effectiveness in constructing evaluation datasets strongly correlated with human evaluations. The resulting dataset enables better assessments of the Chinese-centric MMT quality. Our CCEval benchmark dataset is available at https://bright.pcl.ac.cn/en/offlineTasks.
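One way such a representative subset could be selected is greedy feature-coverage sampling: repeatedly pick the sentence that adds the most unseen linguistic features. The feature extractor and objective below are hypothetical stand-ins for illustration only; the paper's actual sampling methodology is not reproduced here.

```python
"""Hypothetical diversity-driven sentence selection for a benchmark.

Assumptions: the toy `features` proxy and the greedy coverage objective
are illustrative, not the CCEval construction procedure.
"""

from typing import Iterable


def features(sentence: str) -> set[str]:
    # Toy linguistic-feature proxy: length bucket plus character classes.
    # A real pipeline would use POS tags, syntax, domain labels, etc.
    feats = {f"len:{min(len(sentence) // 10, 9)}"}
    if any(ch.isdigit() for ch in sentence):
        feats.add("has_digit")
    if any(ch in "，。？！" for ch in sentence):
        feats.add("cjk_punct")
    return feats


def select_representative(pool: Iterable[str], k: int = 2500) -> list[str]:
    """Greedily pick up to k sentences that maximize feature coverage."""
    candidates = list(pool)
    covered: set[str] = set()
    chosen: list[str] = []
    while candidates and len(chosen) < k:
        # Pick the sentence contributing the most features not yet covered.
        best = max(candidates, key=lambda s: len(features(s) - covered))
        gain = features(best) - covered
        if not gain:  # nothing new to cover; fill remainder arbitrarily
            chosen.extend(candidates[: k - len(chosen)])
            break
        covered |= gain
        chosen.append(best)
        candidates.remove(best)
    return chosen
```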
Co-authors
- Yang Xiang 2
- Xi Yin 1
- Yutao Xie 1
- Andong Chen 1
- Kehai Chen 1