Complexifying BERT Using LoRA Adapters

Fabio Tamburini


Abstract
This paper presents the first results of a pilot study on transforming a real-valued pre-trained transformer encoder into a complex-valued one. Following recent findings about pre-training with LoRA, the main idea is to employ complex-valued LoRA adapters to do the trick and to continue the pre-training of a given Italian model in order to set up the adapters. After pre-training, the proposed complex-valued model was evaluated on a standardised benchmark for Italian natural-language understanding, obtaining very encouraging results.
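The abstract does not detail how the complex-valued adapters are attached to the real-valued pre-trained weights. As a rough illustration only, the sketch below shows one plausible way to wrap a frozen real nn.Linear with a complex low-rank (LoRA-style) update in PyTorch; the module name, rank, alpha, the initialisation, and the way the real and complex parts are combined are assumptions for this sketch, not the paper's actual formulation.

```python
import torch
import torch.nn as nn


class ComplexLoRALinear(nn.Module):
    """Hypothetical sketch: a frozen real-valued nn.Linear augmented with a
    complex-valued low-rank update, one possible reading of "complexifying"
    a pre-trained layer via LoRA adapters."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained real weights frozen

        in_f, out_f = base.in_features, base.out_features
        # Complex low-rank factors; B starts at zero so the adapter initially
        # leaves the pre-trained behaviour unchanged (illustrative choice).
        self.lora_A = nn.Parameter(
            torch.randn(rank, in_f, dtype=torch.cfloat) * 0.01)
        self.lora_B = nn.Parameter(
            torch.zeros(out_f, rank, dtype=torch.cfloat))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen real path.
        base_out = self.base(x)
        # Promote the input to complex and apply the low-rank complex update.
        x_c = x.to(torch.cfloat)
        lora_out = (x_c @ self.lora_A.T) @ self.lora_B.T * self.scaling
        # Combine: treat the frozen output as the real part and add the
        # complex correction (one of several possible combination schemes).
        return base_out.to(torch.cfloat) + lora_out
```

In such a setup, replacing the attention and feed-forward projections of the real-valued encoder with modules of this kind and continuing pre-training would update only the complex adapter parameters, which is consistent with the adapter-based strategy the abstract describes.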
Anthology ID:
2024.clicit-1.102
Volume:
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Month:
December
Year:
2024
Address:
Pisa, Italy
Editors:
Felice Dell'Orletta, Alessandro Lenci, Simonetta Montemagni, Rachele Sprugnoli
Venue:
CLiC-it
Publisher:
CEUR Workshop Proceedings
Pages:
948–954
URL:
https://aclanthology.org/2024.clicit-1.102/
Cite (ACL):
Fabio Tamburini. 2024. Complexifying BERT Using LoRA Adapters. In Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024), pages 948–954, Pisa, Italy. CEUR Workshop Proceedings.
Cite (Informal):
Complexifying BERT Using LoRA Adapters (Tamburini, CLiC-it 2024)
PDF:
https://aclanthology.org/2024.clicit-1.102.pdf