Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers

Jason Phang, Haokun Liu, Samuel R. Bowman

Abstract
Despite the success of fine-tuning pretrained language encoders like BERT for downstream natural language understanding (NLU) tasks, it is still poorly understood how neural networks change after fine-tuning. In this work, we use centered kernel alignment (CKA), a method for comparing learned representations, to measure the similarity of representations in task-tuned models across layers. In experiments across twelve NLU tasks, we discover a consistent block diagonal structure in the similarity of representations within fine-tuned RoBERTa and ALBERT models, with strong similarity within clusters of earlier and later layers, but not between them. The similarity of later layer representations implies that later layers only marginally contribute to task performance, and we verify in experiments that the top few layers of fine-tuned Transformers can be discarded without hurting performance, even with no further tuning.
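As a concrete illustration of the similarity measure the paper relies on, below is a minimal sketch of linear CKA (Kornblith et al., 2019) in Python. The function name, toy shapes, and random inputs are illustrative assumptions and are not taken from the authors' code.

```python
# Minimal sketch of linear CKA between two representation matrices.
# Shapes and inputs below are toy stand-ins, not the paper's data.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between matrices of shape (n_examples, dim_x) and (n_examples, dim_y)."""
    # Center each feature column; CKA assumes mean-centered representations.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

# Toy example: two hidden-state matrices (512 examples, hidden size 768).
rng = np.random.default_rng(0)
layer_a = rng.standard_normal((512, 768))
layer_b = layer_a + 0.1 * rng.standard_normal((512, 768))
print(linear_cka(layer_a, layer_b))  # near 1.0 for highly similar representations
```

In the paper's setting, the two inputs would be the hidden states from two layers of a fine-tuned RoBERTa or ALBERT model on the same batch of task examples; values near 1 between the top few layers are what motivate discarding them without further tuning.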
Anthology ID:
2021.blackboxnlp-1.42
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
529–538
URL:
https://aclanthology.org/2021.blackboxnlp-1.42
DOI:
10.18653/v1/2021.blackboxnlp-1.42
Cite (ACL):
Jason Phang, Haokun Liu, and Samuel R. Bowman. 2021. Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 529–538, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers (Phang et al., BlackboxNLP 2021)
PDF:
https://aclanthology.org/2021.blackboxnlp-1.42.pdf
Data
BoolQ, CoLA, CosmosQA, GLUE, HellaSwag, MRPC, MultiNLI, QNLI, SST, SST-2