Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions

Byung-Doh Oh, William Schuler


Abstract
While there is much recent interest in studying why Transformer-based large language models make predictions the way they do, the complex computations performed within each layer have made their behavior somewhat opaque. To mitigate this opacity, this work presents a linear decomposition of final hidden states from autoregressive language models based on each initial input token, which is exact for virtually all contemporary Transformer architectures. This decomposition allows the definition of probability distributions that ablate the contribution of specific input tokens, which can be used to analyze their influence on model probabilities over a sequence of upcoming words with only one forward pass from the model. Using the change in next-word probability as a measure of importance, this work first examines which context words make the biggest contribution to language model predictions. Regression experiments suggest that Transformer-based language models rely primarily on collocational associations, followed by linguistic factors such as syntactic dependencies and coreference relationships in making next-word predictions. Additionally, analyses using these measures to predict syntactic dependencies and coreferent mention spans show that collocational association and repetitions of the same token largely explain the language models’ predictions on these tasks.
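The ablation idea described in the abstract can be sketched in a few lines, assuming the decomposition yields one contribution vector per input token that, together with a bias term, sums exactly to the final hidden state at the current position. Under that assumption, an ablated next-word distribution is obtained by subtracting a token's contribution before applying the unembedding matrix and softmax, and a token's importance is the change in the probability of the observed next word. The names below (`contributions`, `bias`, `W_out`) and the exact form of the bias term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the vocabulary dimension.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def ablated_distributions(contributions, bias, W_out):
    """Hypothetical sketch of token-wise ablation.

    contributions: (T, d) array; row j is the (assumed) linear contribution
        of input token j to the final hidden state at the current position.
    bias: (d,) residual term, so that h = contributions.sum(0) + bias.
    W_out: (V, d) unembedding (output projection) matrix.
    Returns the full next-word distribution and one ablated
    distribution per input token.
    """
    h = contributions.sum(axis=0) + bias            # reconstruct final hidden state
    p_full = softmax(W_out @ h)                     # ordinary next-word distribution
    p_ablated = np.stack([
        softmax(W_out @ (h - contributions[j]))     # remove token j's contribution
        for j in range(contributions.shape[0])
    ])
    return p_full, p_ablated

def importance(p_full, p_ablated, next_token_id):
    # Importance of each context token = drop in the probability assigned
    # to the actually observed next word when that token is ablated.
    return p_full[next_token_id] - p_ablated[:, next_token_id]
```

Because the contribution vectors are collected while the hidden state is computed, the ablated distributions follow from subtraction alone, which is consistent with the abstract's claim that only one forward pass from the model is needed.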
Anthology ID:
2023.acl-long.562
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
10105–10117
URL:
https://aclanthology.org/2023.acl-long.562
DOI:
10.18653/v1/2023.acl-long.562
Cite (ACL):
Byung-Doh Oh and William Schuler. 2023. Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10105–10117, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions (Oh & Schuler, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.562.pdf
Video:
https://aclanthology.org/2023.acl-long.562.mp4