Plausibility Processing in Transformer Language Models: Focusing on the Role of Attention Heads in GPT

Soo Ryu


Abstract
The goal of this paper is to explore how Transformer language models process semantic knowledge, especially regarding the plausibility of noun-verb relations. First, I demonstrate that GPT2 exhibits a higher degree of similarity with humans in plausibility processing than other Transformer language models. Next, I delve into how knowledge of plausibility is encoded within the attention heads of GPT2 and how these heads causally contribute to GPT2’s plausibility processing ability. Through several experiments, I found that: i) GPT2 has a number of attention heads that detect plausible noun-verb relationships; ii) these heads collectively contribute to the Transformer’s ability to process plausibility, albeit to varying degrees; and iii) an attention head’s individual performance in detecting plausibility does not necessarily correlate with how much it contributes to GPT2’s plausibility processing ability.
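As an illustration only (not the paper's actual method or stimuli), the following minimal sketch shows how per-head attention between a verb and its object noun can be read out of GPT2 with the Hugging Face transformers library; the example sentence, token positions, and head-selection heuristic are placeholders chosen for this sketch.

```python
# Illustrative sketch (not the paper's code): reading per-head attention
# from GPT-2 for a verb and its object noun, using Hugging Face transformers.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

# Placeholder sentence with a plausible noun-verb relation; "saw" and "dog"
# are chosen because each maps to a single GPT-2 token.
sentence = "The man saw the dog"
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

verb_idx = tokens.index("Ġsaw")   # position of the verb
noun_idx = tokens.index("Ġdog")   # position of the object noun

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
for layer, attn in enumerate(outputs.attentions):
    # Attention that the noun pays back to the verb, for every head.
    per_head = attn[0, :, noun_idx, verb_idx]
    top_head = int(per_head.argmax())
    print(f"layer {layer:2d}: head {top_head:2d} attends most "
          f"to the verb (weight {per_head[top_head].item():.3f})")
```

The paper's causal analysis of how much each head contributes to plausibility processing is not reproduced here; this sketch only shows how raw per-head attention weights can be inspected.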
Anthology ID:
2023.findings-emnlp.27
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
356–369
URL:
https://aclanthology.org/2023.findings-emnlp.27
DOI:
10.18653/v1/2023.findings-emnlp.27
Cite (ACL):
Soo Ryu. 2023. Plausibility Processing in Transformer Language Models: Focusing on the Role of Attention Heads in GPT. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 356–369, Singapore. Association for Computational Linguistics.
Cite (Informal):
Plausibility Processing in Transformer Language Models: Focusing on the Role of Attention Heads in GPT (Ryu, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.27.pdf