Javier García Gilabert

Also published as: Javier Garcia Gilabert


2024

ReSeTOX: Re-learning attention weights for toxicity mitigation in machine translation
Javier García Gilabert | Carlos Escolano | Marta Costa-jussà
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)

Our proposed method, ReSeTOX (REdo SEarch if TOXic), addresses the issue of Neural Machine Translation (NMT) generating translation outputs that contain toxic words not present in the input. The objective is to mitigate the introduction of toxic language without the need for retraining. When added toxicity is identified during the inference process, ReSeTOX dynamically adjusts the key-value self-attention weights and re-evaluates the beam search hypotheses. Experimental results demonstrate that ReSeTOX achieves a remarkable 57% reduction in added toxicity while maintaining an average translation quality of 99.5% across 164 languages. Our code is available at: https://github.com
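The core mechanism is inference-time re-learning: when a decoded hypothesis contains toxicity absent from the source, the attention weights are nudged by a gradient step and the search is redone. Below is a minimal, self-contained PyTorch sketch of that loop on a toy decoder; the model, the toxic-token list, and the greedy search (standing in for beam search) are illustrative stand-ins, not the authors' implementation.

import torch
import torch.nn as nn

TOXIC_IDS = {7, 13}  # hypothetical ids of tokens flagged as toxic

class TinyDecoder(nn.Module):
    """A toy self-attention decoder standing in for a full NMT model."""
    def __init__(self, vocab=32, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, ids):
        x = self.embed(ids)
        h, _ = self.attn(x, x, x)   # self-attention over the prefix
        return self.out(h[:, -1])   # next-token logits

def toxicity_loss(logits):
    # Differentiable proxy: probability mass placed on toxic tokens.
    return logits.softmax(-1)[:, sorted(TOXIC_IDS)].sum()

def redo_search_if_toxic(model, prefix, steps=5, max_redos=3):
    # Only the attention projections are updated at inference time,
    # mirroring the "re-learning attention weights" idea.
    opt = torch.optim.SGD([model.attn.in_proj_weight], lr=0.1)
    for _ in range(max_redos):
        ids = prefix.clone()
        with torch.no_grad():       # greedy search (beam search in the paper)
            for _ in range(steps):
                ids = torch.cat([ids, model(ids).argmax(-1, keepdim=True)], dim=1)
        if not set(ids[0].tolist()) & TOXIC_IDS:
            return ids              # no added toxicity: accept the hypothesis
        opt.zero_grad()             # toxic: nudge attention weights, then redo
        toxicity_loss(model(ids[:, :-1])).backward()
        opt.step()
    return ids

print(redo_search_if_toxic(TinyDecoder(), torch.tensor([[1, 2, 3]])))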

BSC Submission to the AmericasNLP 2024 Shared Task
Javier Garcia Gilabert | Aleix Sant | Carlos Escolano | Francesca De Luca Fornaciari | Audrey Mash | Maite Melero
Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)

This paper describes the BSC’s submission to the AmericasNLP 2024 Shared Task. We participated in the Spanish to Quechua and Spanish to Guarani tasks. We show that by using LoRA adapters we can match the performance of full-parameter fine-tuning while training only 14.2% of the total number of parameters. Our systems achieved the highest ChrF++ scores and ranked first in both directions in the final results, outperforming strong baseline systems on the provided development and test datasets.
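As a rough illustration of this parameter-efficient setup, the following sketch attaches LoRA adapters to a multilingual seq2seq model with the PEFT library. The base checkpoint, rank, and target modules here are assumptions chosen for illustration; the submission's exact configuration (and its 14.2% trainable-parameter ratio) is not reproduced.

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the submission's actual model may differ.
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

# Illustrative LoRA hyperparameters: low-rank adapters on the attention
# query/value projections, with the base weights left frozen.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
# The wrapped model can then be fine-tuned with a standard Seq2SeqTrainer.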

Training and Fine-Tuning NMT Models for Low-Resource Languages Using Apertium-Based Synthetic Corpora
Aleix Sant | Daniel Bardanca | José Ramom Pichel Campos | Francesca De Luca Fornaciari | Carlos Escolano | Javier Garcia Gilabert | Pablo Gamallo | Audrey Mash | Xixian Liao | Maite Melero
Proceedings of the Ninth Conference on Machine Translation

In this paper, we present the two strategies employed for the WMT24 Shared Task on Translation into Low-Resource Languages of Spain. We participated in the language pairs of Spanish-to-Aragonese, Spanish-to-Aranese, and Spanish-to-Asturian, developing neural-based translation systems and moving away from rule-based approaches for these language directions. To create these models, two distinct strategies were employed. The first strategy involved a thorough cleaning process and curation of the limited provided data, followed by fine-tuning the multilingual NLLB-200-600M model (Constrained Submission). The other strategy involved training a transformer from scratch using a vast amount of synthetic data (Open Submission). Both approaches relied on generated synthetic data and resulted in high ChrF and BLEU scores. However, given the characteristics of the task, the strategy used in the Constrained Submission resulted in higher scores that surpassed the baselines across the three translation directions, whereas the strategy employed in the Open Submission yielded slightly lower scores than the highest baseline.
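One way to realize the Open Submission's "vast amount of synthetic data" from a rule-based system is to pipe monolingual Spanish text through an Apertium pair. A minimal sketch follows; it assumes the apertium CLI and the Spanish-Aragonese pair are installed locally (mode name spa-arg), which may differ from the authors' actual pipeline.

import subprocess

def apertium_translate(lines, mode="spa-arg"):
    """Translate a batch of lines with the Apertium CLI in one call."""
    result = subprocess.run(
        ["apertium", mode],
        input="\n".join(lines),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.splitlines()

# Toy monolingual source side; in practice this would be a large corpus.
spanish = ["El gato duerme en la cocina.", "Mañana lloverá en el valle."]
for src, tgt in zip(spanish, apertium_translate(spanish)):
    print(f"{src}\t{tgt}")  # emit tab-separated synthetic parallel pairs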