FTFT: Efficient and Robust Fine-Tuning by Transferring Training Dynamics

Yupei Du, Albert Gatt, Dong Nguyen


Abstract
Despite the massive success of fine-tuning Pre-trained Language Models (PLMs), they remain susceptible to out-of-distribution inputs. Dataset cartography is a simple yet effective dual-model approach that improves the robustness of fine-tuned PLMs. It involves fine-tuning a model on the original training set (i.e., the reference model), selecting a subset of important training instances based on the training dynamics of the reference model, and fine-tuning again only on these selected examples (i.e., the main model). However, this approach requires fine-tuning the same model twice, which is computationally expensive for large PLMs. In this paper, we show that (1) training dynamics are highly transferable across model sizes and pre-training methods, and that (2) fine-tuning main models using these selected training instances achieves higher training efficiency than empirical risk minimization (ERM). Building on these observations, we propose a novel fine-tuning approach: Fine-Tuning by transFerring Training dynamics (FTFT). Compared with dataset cartography, FTFT uses more efficient reference models and aggressive early stopping. FTFT achieves robustness improvements over ERM while lowering the training cost by up to ~50%.
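To make the dual-model pipeline concrete, below is a minimal sketch of the instance-selection step, assuming training dynamics are summarized by the reference model's per-epoch predicted probability of the gold label (as in dataset cartography) and that the main model is then fine-tuned only on the retained subset. The function names, the selection fraction, and the `collect_gold_probs` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def training_dynamics(gold_probs: np.ndarray):
    """gold_probs: shape (num_epochs, num_examples), the reference model's
    predicted probability of the gold label for each example at each epoch."""
    confidence = gold_probs.mean(axis=0)   # mean gold-label probability
    variability = gold_probs.std(axis=0)   # spread of that probability across epochs
    return confidence, variability

def select_ambiguous(variability: np.ndarray, fraction: float = 0.33):
    """Keep the most 'ambiguous' examples (highest variability), the subset
    dataset cartography typically uses to fine-tune the main model."""
    n_keep = int(len(variability) * fraction)
    return np.argsort(-variability)[:n_keep]  # indices of retained training instances

# Illustrative usage under FTFT: collect dynamics with a small, efficient
# reference model, then fine-tune the larger main model on the selection only.
# gold_probs = collect_gold_probs(reference_model, train_set)  # hypothetical helper
# confidence, variability = training_dynamics(gold_probs)
# keep_idx = select_ambiguous(variability)
```

The key point exploited by FTFT is that these selected indices transfer: a cheaper reference model can produce the subset on which a larger main model is then fine-tuned.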
Anthology ID: 2025.coling-main.86
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1294–1308
URL: https://aclanthology.org/2025.coling-main.86/
Cite (ACL): Yupei Du, Albert Gatt, and Dong Nguyen. 2025. FTFT: Efficient and Robust Fine-Tuning by Transferring Training Dynamics. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1294–1308, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): FTFT: Efficient and Robust Fine-Tuning by Transferring Training Dynamics (Du et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.86.pdf