The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models

Adithya Bhaskar, Dan Friedman, Danqi Chen


Abstract
Prior work has found that pretrained language models (LMs) fine-tuned with different random seeds can achieve similar in-domain performance but generalize differently on tests of syntactic generalization. In this work, we show that, even within a single model, we can find multiple subnetworks that perform similarly in-domain but generalize vastly differently. To better understand these phenomena, we investigate whether they can be understood in terms of "competing subnetworks": the model initially represents a variety of distinct algorithms, corresponding to different subnetworks, and generalization occurs when it ultimately converges to one. This explanation has been used to account for generalization in simple algorithmic tasks ("grokking"). Instead of finding competing subnetworks, we find that all subnetworks—whether they generalize or not—share a set of attention heads, which we refer to as the _heuristic core_. Further analysis suggests that these attention heads emerge early in training and compute shallow, non-generalizing features. The model learns to generalize by incorporating additional attention heads, which depend on the outputs of the "heuristic" heads to compute higher-level features. Overall, our results offer a more detailed picture of the mechanisms for syntactic generalization in pretrained LMs.
Anthology ID:
2024.acl-long.774
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14351–14368
URL:
https://aclanthology.org/2024.acl-long.774
Cite (ACL):
Adithya Bhaskar, Dan Friedman, and Danqi Chen. 2024. The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14351–14368, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models (Bhaskar et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.774.pdf