Diyang Chen


2025

This paper presents our solution for SemEval-2025 Task 2 on entity-aware machine translation. We propose a parameter-efficient adaptation framework using Low-Rank Adaptation (LoRA) to fine-tune the Qwen2.5-72B model, enabling effective knowledge transfer while preserving generalization capabilities. To address data scarcity and entity ambiguity, we design a Wiki-driven augmentation pipeline that leverages Wikidata’s multilingual entity mappings to generate synthetic training pairs. Our system achieves state-of-the-art performance across 10 languages, securing first place in the competition. Experimental results demonstrate significant improvements in both translation quality (COMET) and entity accuracy (M-ETA).
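The Wiki-driven augmentation described above can be illustrated with a minimal sketch. The entity table below is a hard-coded stand-in for a real Wikidata lookup, and the function name and templates are hypothetical, not taken from the paper's actual pipeline:

```python
# Stand-in for Wikidata's multilingual entity mappings: QID -> {lang: label}.
# A real pipeline would query Wikidata; these entries are illustrative only.
ENTITY_LABELS = {
    "Q90": {"en": "Paris", "it": "Parigi"},
    "Q64": {"en": "Berlin", "it": "Berlino"},
}

def make_synthetic_pair(template_src, template_tgt, qid, tgt_lang):
    """Fill aligned source/target templates with an entity's labels,
    producing one synthetic training pair; returns None if either
    language lacks a label for the entity."""
    labels = ENTITY_LABELS.get(qid, {})
    if "en" not in labels or tgt_lang not in labels:
        return None
    return (template_src.format(entity=labels["en"]),
            template_tgt.format(entity=labels[tgt_lang]))

pair = make_synthetic_pair("I visited {entity} last year.",
                           "Ho visitato {entity} l'anno scorso.",
                           "Q90", "it")
# pair == ("I visited Paris last year.", "Ho visitato Parigi l'anno scorso.")
```

Iterating such templates over many QIDs yields entity-grounded parallel data for the target languages, which is the intuition behind using Wikidata's cross-lingual labels to combat entity ambiguity.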