Understanding Token Probability Encoding in Output Embeddings
Hakaze Cho | Yoshihiro Sakai | Kenshiro Tanaka | Mariko Kato | Naoya Inoue
Proceedings of the 31st International Conference on Computational Linguistics
In this paper, we investigate how output token probability information is encoded in the output embeddings of language models. We find an approximate, common log-linear encoding of output token probabilities within the output embedding vectors and empirically demonstrate that it is both accurate and sparse. As a causal examination, we steer this encoding in the output embedding to modify the output probability distribution accurately. Moreover, the sparsity of the probability encoding suggests that a large number of output embedding dimensions do not contribute to causal language modeling. We therefore attempt to delete these output-unrelated dimensions and find that more than 30% of the dimensions can be removed without significant change in the output distribution or in sequence generation. Additionally, in the pre-training dynamics of language models, we find that the output embeddings capture corpus token frequency information in early steps, even before parameter convergence clearly begins.
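To make the log-linear encoding claim concrete, the sketch below probes whether a sparse linear function of the output embedding (the final hidden state at the last position) predicts the log-probability of a chosen output token. This is a minimal, illustrative sketch, not the authors' code: the model (`gpt2`), the handful of probe contexts, the probed token, and the use of Lasso to encourage sparsity are all assumptions made for the example.

```python
# Hypothetical probe for a sparse log-linear encoding of output token
# probabilities in output embeddings; a toy sketch, not the paper's method.
import torch
from sklearn.linear_model import Lasso
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A few toy contexts; a real probe would use many more samples.
texts = ["The quick brown fox jumps over", "Language models encode",
         "Tokyo is the capital of", "She opened the door and saw"]
probe_token_id = tok.encode(" the")[0]  # probe the log-probability of one token

feats, targets = [], []
with torch.no_grad():
    for t in texts:
        ids = tok(t, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        h = out.hidden_states[-1][0, -1]  # output embedding at the last position
        logp = torch.log_softmax(out.logits[0, -1], dim=-1)[probe_token_id]
        feats.append(h.numpy())
        targets.append(logp.item())

# Sparse linear fit: if the encoding is log-linear and sparse, few embedding
# dimensions should carry most of the predictive weight for log p.
probe = Lasso(alpha=1e-3).fit(feats, targets)
nonzero = int((probe.coef_ != 0).sum())
print(f"non-zero dimensions: {nonzero} / {len(probe.coef_)}")
```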