Javier García Gilabert
Also published as: Javier Garcia Gilabert
2024
ReSeTOX: Re-learning attention weights for toxicity mitigation in machine translation
Javier García Gilabert | Carlos Escolano | Marta Costa-jussà
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
Our proposed method, RESETOX (REdo SEarch if TOXic), addresses the issue of Neural Machine Translation (NMT) generating translation outputs that contain toxic words not present in the input. The objective is to mitigate the introduction of toxic language without the need for re-training. In the case of identified added toxicity during the inference process, RESETOX dynamically adjusts the key-value self-attention weights and re-evaluates the beam search hypotheses. Experimental results demonstrate that RESETOX achieves a remarkable 57% reduction in added toxicity while maintaining an average translation quality of 99.5% across 164 languages. Our code is available at: https://github.com
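The abstract describes an inference-time loop: decode, check for added toxicity, adjust the attention weights, and re-run the search. Below is a minimal control-flow sketch of that idea in Python; the names `decode_fn`, `toxicity_fn`, and `adjust_attention_fn` are hypothetical stand-ins, not the authors' implementation, and the stopping criterion is an assumption for illustration.

```python
# Hedged sketch of the ReSeTOX-style "redo search if toxic" loop.
# The concrete decoding, toxicity detection, and attention-update steps
# are supplied as callables because they are model-specific.
from typing import Callable


def resetox_decode(
    source: str,
    decode_fn: Callable[[str], str],          # beam search with the current attention weights
    toxicity_fn: Callable[[str, str], bool],  # True if the hypothesis adds toxicity absent from the source
    adjust_attention_fn: Callable[[], None],  # one update step on the key-value self-attention weights
    max_iterations: int = 5,                  # assumed cap on re-search attempts
) -> str:
    """Re-run the beam search after nudging the attention weights whenever
    added toxicity is detected; stop when the output is clean or the cap is hit."""
    hypothesis = decode_fn(source)
    for _ in range(max_iterations):
        if not toxicity_fn(source, hypothesis):
            return hypothesis            # no added toxicity: keep the translation
        adjust_attention_fn()            # re-learn attention weights at inference time
        hypothesis = decode_fn(source)   # re-evaluate the beam search hypotheses
    return hypothesis                    # fall back to the last hypothesis


if __name__ == "__main__":
    # Toy usage with dummy components, purely to show the loop structure.
    state = {"adjusted": False}

    def dummy_decode(src: str) -> str:
        return "clean translation" if state["adjusted"] else "toxic translation"

    def dummy_toxicity(src: str, hyp: str) -> bool:
        return "toxic" in hyp

    def dummy_adjust() -> None:
        state["adjusted"] = True

    print(resetox_decode("example source", dummy_decode, dummy_toxicity, dummy_adjust))
```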
BSC Submission to the AmericasNLP 2024 Shared Task
Javier Garcia Gilabert | Aleix Sant | Carlos Escolano | Francesca De Luca Fornaciari | Audrey Mash | Maite Melero
Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)
This paper describes the BSC’s submission to the AmericasNLP 2024 Shared Task. We participated in the Spanish to Quechua and Spanish to Guarani tasks. In this paper we show that by using LoRA adapters we can achieve performance similar to full-parameter fine-tuning while training only 14.2% of the total number of parameters. Our systems achieved the highest ChrF++ scores and ranked first in both directions in the final results, outperforming strong baseline systems on the provided development and test datasets.
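To make the "train only a small fraction of the parameters" point concrete, here is a hedged sketch of attaching LoRA adapters to a sequence-to-sequence translation model with the Hugging Face peft library. The base model checkpoint, adapter rank, and target modules below are illustrative assumptions, not the values used in the submission.

```python
# Minimal LoRA adapter setup for seq2seq fine-tuning (sketch, assumed hyperparameters).
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Assumed multilingual base model; the paper's actual base model may differ.
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                  # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction vs. total parameters
# From here, the adapted model can be trained with a standard seq2seq training loop
# on the Spanish–Quechua or Spanish–Guarani parallel data.
```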
Co-authors
- Carlos Escolano 2
- Marta Costa-jussà 1
- Aleix Sant 1
- Francesca De Luca Fornaciari 1
- Audrey Mash 1