On Enhancing Fine-Tuning for Pre-trained Language Models

Abir Betka, Zeyd Ferhat, Riyadh Barka, Selma Boutiba, Zineddine Kahhoul, Tiar Lakhdar, Ahmed Abdelali, Habiba Dahmani


Abstract
The remarkable capabilities of Natural Language Models to grasp language subtleties have paved the way for their widespread adoption in diverse fields. However, adapting them for specific tasks requires the time-consuming process of fine-tuning, which consumes significant computational power and energy. Therefore, optimizing the fine-tuning time is advantageous. In this study, we propose an alternative approach that limits parameter manipulation to select layers. Our exploration led to identifying layers that offer the best trade-off between time optimization and performance preservation. We further validated this approach on multiple downstream tasks, and the results demonstrated its potential to reduce fine-tuning time by up to 50% while maintaining performance within a negligible deviation of less than 5%. This research showcases a promising technique for significantly improving fine-tuning efficiency without compromising task- or domain-specific learning capabilities.
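As a rough illustration of the idea described in the abstract (restricting parameter updates to a subset of layers), the sketch below freezes all parameters of a pre-trained Transformer except a chosen set of encoder layers and the task head. The model name, layer indices, and task setup are assumptions for illustration only and are not taken from the paper.

    # Illustrative sketch of selective-layer fine-tuning with Hugging Face
    # Transformers. The model choice and layer selection here are
    # hypothetical; the paper's exact configuration may differ.
    from transformers import AutoModelForSequenceClassification

    MODEL_NAME = "bert-base-uncased"   # assumed model for illustration
    TRAINABLE_LAYERS = {10, 11}        # assumed: update only the last two encoder layers

    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Freeze every parameter first.
    for param in model.parameters():
        param.requires_grad = False

    # Unfreeze only the selected encoder layers and the classification head.
    for i in TRAINABLE_LAYERS:
        for param in model.bert.encoder.layer[i].parameters():
            param.requires_grad = True
    for param in model.classifier.parameters():
        param.requires_grad = True

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"Trainable parameters: {trainable:,} / {total:,}")

In such a setup, the optimizer would be built only over the parameters that still require gradients (e.g. filtering on param.requires_grad), so gradient computation and weight updates are limited to the selected layers, which is where the fine-tuning time savings would come from.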
Anthology ID:
2023.arabicnlp-1.33
Volume:
Proceedings of ArabicNLP 2023
Month:
December
Year:
2023
Address:
Singapore (Hybrid)
Editors:
Hassan Sawaf, Samhaa El-Beltagy, Wajdi Zaghouani, Walid Magdy, Ahmed Abdelali, Nadi Tomeh, Ibrahim Abu Farha, Nizar Habash, Salam Khalifa, Amr Keleg, Hatem Haddad, Imed Zitouni, Khalil Mrini, Rawan Almatham
Venues:
ArabicNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
405–410
URL:
https://aclanthology.org/2023.arabicnlp-1.33
DOI:
10.18653/v1/2023.arabicnlp-1.33
Cite (ACL):
Abir Betka, Zeyd Ferhat, Riyadh Barka, Selma Boutiba, Zineddine Kahhoul, Tiar Lakhdar, Ahmed Abdelali, and Habiba Dahmani. 2023. On Enhancing Fine-Tuning for Pre-trained Language Models. In Proceedings of ArabicNLP 2023, pages 405–410, Singapore (Hybrid). Association for Computational Linguistics.
Cite (Informal):
On Enhancing Fine-Tuning for Pre-trained Language Models (Betka et al., ArabicNLP-WS 2023)
PDF:
https://aclanthology.org/2023.arabicnlp-1.33.pdf