A Meta-Learning Perspective on Transformers for Causal Language Modeling

Xinbo Wu, Lav Varshney


Abstract
The Transformer architecture has become prominent in the development of large causal language models, yet the mechanisms underlying its capabilities are not well understood. Focusing on the training process, we establish a meta-learning view of the Transformer architecture when it is trained for causal language modeling, by explicating an inner optimization process that may take place within the Transformer. From within this inner optimization, we further discover and theoretically analyze a special characteristic of the norms of learned token representations in Transformer-based causal language models. Our analysis is supported by experiments on pre-trained large language models and real-world data.
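
As a rough illustration of the kind of measurement the abstract describes, the sketch below computes per-token L2 norms of hidden states across the layers of a pre-trained causal language model. The choice of GPT-2 and the Hugging Face transformers API are assumptions made for this example; this is not the paper's exact experimental protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative sketch (not the paper's protocol): inspect the L2 norms of
# token representations at every layer of a pre-trained causal LM.
# The model choice (GPT-2) is an assumption made for this example.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "Transformers can be viewed through a meta-learning lens."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors, each of
# shape (batch_size, seq_len, hidden_dim); index 0 is the embedding output.
for layer_idx, hidden in enumerate(outputs.hidden_states):
    norms = hidden.squeeze(0).norm(dim=-1)  # L2 norm of each token vector
    print(f"layer {layer_idx:2d}: mean token norm = {norms.mean().item():.2f}")
```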
Anthology ID: 2024.findings-acl.922
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15612–15622
URL: https://aclanthology.org/2024.findings-acl.922
Cite (ACL): Xinbo Wu and Lav Varshney. 2024. A Meta-Learning Perspective on Transformers for Causal Language Modeling. In Findings of the Association for Computational Linguistics ACL 2024, pages 15612–15622, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): A Meta-Learning Perspective on Transformers for Causal Language Modeling (Wu & Varshney, Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.922.pdf