Identifying Semantic Induction Heads to Understand In-Context Learning

Jie Ren, Qipeng Guo, Hang Yan, Dongrui Liu, Quanshi Zhang, Xipeng Qiu, Dahua Lin


Abstract
Although large language models (LLMs) have demonstrated remarkable performance, the lack of transparency in their inference logic raises concerns about their trustworthiness. To gain a better understanding of LLMs, we conduct a detailed analysis of the operations of attention heads, aiming to better understand the in-context learning of LLMs. Specifically, we investigate whether attention heads encode two types of relationships between tokens present in natural language: the syntactic dependencies parsed from sentences and the relations within knowledge graphs. We find that certain attention heads exhibit a pattern in which, when attending to subject tokens, they recall the corresponding object tokens and increase the output logits of those object tokens. More crucially, the formation of such semantic induction heads correlates closely with the emergence of the in-context learning ability of language models. The study of semantic induction heads advances our understanding of the intricate operations of attention heads in transformers and provides new insights into the in-context learning of LLMs.
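The mechanism described above, an attention head that raises the output logit of an object token when it attends to the corresponding subject token, can be probed with a simple causal test. The sketch below is not the authors' code; the model, the layer and head indices, and the subject/object pair are illustrative assumptions. It zero-ablates one head of a GPT-2-style causal LM and measures how the object token's logit changes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any HF causal LM with GPT-2-style blocks
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Paris is the capital of"       # subject: "Paris"; expected object: "France"
obj_id = tok(" France")["input_ids"][0]  # token id of the object
inputs = tok(prompt, return_tensors="pt")

def object_logit() -> float:
    """Logit assigned to the object token at the final position."""
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, -1, obj_id].item()

baseline = object_logit()

layer, head = 9, 6                       # assumption: arbitrary head chosen for the demo
n_head = model.config.n_head
head_dim = model.config.n_embd // n_head

def ablate_head(module, args):
    # The input to the attention output projection (c_proj) is the
    # concatenation of per-head outputs; zero the slice for `head`.
    x = args[0].clone()
    x[..., head * head_dim:(head + 1) * head_dim] = 0.0
    return (x,)

handle = model.transformer.h[layer].attn.c_proj.register_forward_pre_hook(ablate_head)
ablated = object_logit()
handle.remove()

print(f"object logit {baseline:.3f} -> {ablated:.3f} (delta {ablated - baseline:+.3f})")
# A clearly negative delta suggests this head helps recall the object token;
# sweeping over all (layer, head) pairs ranks candidate semantic induction heads.
```

A fuller probe, matching the pattern the abstract describes, would also check that the head's attention weight from the final position falls on the subject token, not only that ablating it lowers the object-token logit.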
Anthology ID:
2024.findings-acl.412
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6916–6932
URL:
https://aclanthology.org/2024.findings-acl.412
Cite (ACL):
Jie Ren, Qipeng Guo, Hang Yan, Dongrui Liu, Quanshi Zhang, Xipeng Qiu, and Dahua Lin. 2024. Identifying Semantic Induction Heads to Understand In-Context Learning. In Findings of the Association for Computational Linguistics ACL 2024, pages 6916–6932, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Identifying Semantic Induction Heads to Understand In-Context Learning (Ren et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.412.pdf