Daniel Bardanca
2026
Improving Machine Translation of Idioms: A Spanish–Galician Parallel Dataset and Synthetic Augmentation Approach
Lúa Santamaría Montesinos | Saúl Buján | Daniel Bardanca | Pablo Gamallo
Proceedings of the 17th International Conference on Computational Processing of Portuguese (PROPOR 2026) - Vol. 1
Idiomatic expressions are a well-known challenge for neural machine translation, affecting both traditional sequence-to-sequence models and large language models (LLMs). This paper presents a systematic approach to improving idiom translation between Spanish and Galician. First, we build a high-quality parallel dataset of idioms manually aligned across both languages. Then, we automatically extend this dataset into a large synthetic parallel corpus using LLMs, following a strategy that prioritizes the idioms observed most frequently in authentic corpora. This augmented dataset is used to retrain a seq2seq translation model. We evaluate the resulting system against both the baseline model trained without idiom data and state-of-the-art LLM-based translators such as SalamandraTA. Results show that idiom translation improves significantly after training, alongside a slight boost in the model's overall performance.
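The abstract's frequency-prioritized strategy can be sketched as a simple ranking step: count how often each idiom from the manually aligned dataset appears in an authentic corpus, and augment the most frequent ones first. The function name, the toy idiom list, and the exact-substring matching are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def rank_idioms_by_frequency(idioms, corpus_sentences):
    """Rank idioms by how often they occur (as exact lowercase
    substrings) in an authentic monolingual corpus, so the most
    frequent idioms are targeted for synthetic augmentation first."""
    counts = Counter()
    for sentence in corpus_sentences:
        lowered = sentence.lower()
        for idiom in idioms:
            if idiom.lower() in lowered:
                counts[idiom] += 1
    # Most frequent first; idioms never seen in the corpus are omitted.
    return [idiom for idiom, _ in counts.most_common()]

# Toy example with three Spanish idioms.
idioms = ["tomar el pelo", "estar en las nubes", "meter la pata"]
corpus = [
    "Deja de tomar el pelo a tu hermano.",
    "Le encanta estar en las nubes.",
    "Suele estar en las nubes durante la clase.",
    "Acabo de meter la pata otra vez.",
    "No quiero meter la pata.",
    "Volvió a meter la pata.",
]
ranking = rank_idioms_by_frequency(idioms, corpus)
# → ["meter la pata", "estar en las nubes", "tomar el pelo"]
```

A real pipeline would need lemmatized or inflection-aware matching (e.g. "tomes el pelo" would not match the citation form here); exact substring counting is only the simplest possible proxy.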
2024
Training and Fine-Tuning NMT Models for Low-Resource Languages Using Apertium-Based Synthetic Corpora
Aleix Sant | Daniel Bardanca | José Ramom Pichel Campos | Francesca De Luca Fornaciari | Carlos Escolano | Javier Garcia Gilabert | Pablo Gamallo | Audrey Mash | Xixian Liao | Maite Melero
Proceedings of the Ninth Conference on Machine Translation
In this paper, we present the two strategies employed for the WMT24 Shared Task on Translation into Low-Resource Languages of Spain. We participated in the Spanish-to-Aragonese, Spanish-to-Aranese, and Spanish-to-Asturian directions, developing neural-based translation systems and moving away from rule-based approaches for these language pairs. Two distinct strategies were employed to create these models. The first involved a thorough cleaning and curation of the limited provided data, followed by fine-tuning the multilingual NLLB-200-600M model (Constrained Submission). The second involved training a transformer from scratch on a vast amount of synthetic data (Open Submission). Both approaches relied on generated synthetic data and achieved high ChrF and BLEU scores. However, given the characteristics of the task, the Constrained Submission surpassed the baselines across all three translation directions, whereas the Open Submission scored slightly below the strongest baseline.
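The "thorough cleaning and curation" step for a small parallel corpus typically combines deduplication with length and length-ratio filtering. The sketch below is a generic illustration under those assumptions; the thresholds and the function name are hypothetical, not taken from the paper.

```python
def clean_parallel_corpus(pairs, max_ratio=2.0, min_len=1, max_len=200):
    """Filter (source, target) sentence pairs: drop empty sides,
    over- or under-length sentences, pairs whose word-length ratio
    is implausibly skewed, and exact duplicates."""
    seen = set()
    cleaned = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # one side is empty
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if not (min_len <= n_src <= max_len and min_len <= n_tgt <= max_len):
            continue  # outside acceptable length range
        if max(n_src, n_tgt) / max(min(n_src, n_tgt), 1) > max_ratio:
            continue  # likely misaligned: lengths too different
        if (src, tgt) in seen:
            continue  # exact duplicate pair
        seen.add((src, tgt))
        cleaned.append((src, tgt))
    return cleaned

# Toy example: duplicate, empty side, and skewed-ratio pairs are removed.
pairs = [
    ("Hola mundo", "Hola mundu"),
    ("Hola mundo", "Hola mundu"),                      # duplicate
    ("", "vacío"),                                     # empty source
    ("una frase muy muy muy larga de prueba", "corta"),  # ratio 8:1
]
cleaned = clean_parallel_corpus(pairs)
# → [("Hola mundo", "Hola mundu")]
```

Heuristic filters like these are standard before fine-tuning a pretrained multilingual model such as NLLB-200-600M, where a small amount of noisy data can measurably hurt the result.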