Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models

Alejo Lopez-Avila, Víctor Suárez-Paniagua


Abstract
Recently, using large pre-trained Transformer models for transfer learning has become one of the flagship trends in the Natural Language Processing (NLP) community, giving rise to various approaches such as prompt-based learning, adapters, and combinations with unsupervised methods, among many others. In this work, we propose a 3-Phase technique to adjust a base model for a classification task. First, we adapt the model to the data distribution by performing further training with a Denoising Autoencoder (DAE). Second, we adjust the representation space of the output to the corresponding classes by clustering through a Contrastive Learning (CL) method. In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to correct for unbalanced datasets. Third, we apply fine-tuning to delimit the predefined categories. These phases provide relevant and complementary knowledge to the model for learning the final task. We supply extensive experimental results on several datasets to support these claims. Moreover, we include an ablation study and compare the proposed method against other ways of combining these techniques.
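The Phase-2 objective described above is a supervised contrastive (SupCon) loss, which pulls same-class embeddings together and pushes different classes apart. The snippet below is a minimal NumPy sketch of that loss, not the authors' implementation; the function name and the temperature value are illustrative assumptions.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over one batch (sketch, not the paper's code).

    embeddings: (n, d) array of sentence representations
    labels:     length-n sequence of class ids
    """
    labels = np.asarray(labels)
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)

    # exclude self-similarity from the softmax denominator
    other = ~np.eye(n, dtype=bool)
    sim_max = sim.max(axis=1, keepdims=True)          # numerical stability
    exp_sim = np.exp(sim - sim_max) * other
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # positives: same label, different sample
    pos_mask = (labels[:, None] == labels[None, :]) & other
    pos_counts = pos_mask.sum(axis=1)

    # average negative log-probability over each anchor's positives
    valid = pos_counts > 0
    loss = -(log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

As a sanity check, a batch whose labels match its embedding clusters should score a lower loss than the same batch with mismatched labels, which is the clustering effect Phase 2 relies on before fine-tuning.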
Anthology ID:
2023.emnlp-main.124
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2021–2032
URL:
https://aclanthology.org/2023.emnlp-main.124
DOI:
10.18653/v1/2023.emnlp-main.124
Cite (ACL):
Alejo Lopez-Avila and Víctor Suárez-Paniagua. 2023. Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2021–2032, Singapore. Association for Computational Linguistics.
Cite (Informal):
Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models (Lopez-Avila & Suárez-Paniagua, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.124.pdf
Video:
https://aclanthology.org/2023.emnlp-main.124.mp4