Insights into LLM Long-Context Failures: When Transformers Know but Don’t Tell
Muhan Gao | TaiMing Lu | Kuai Yu | Adam Byerly | Daniel Khashabi
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) exhibit positional bias, struggling to utilize information from the middle or end of long contexts. Our study explores LLMs’ long-context reasoning by probing their hidden representations. We find that while LLMs encode the position of target information, they often fail to leverage this in generating accurate responses. This reveals a disconnect between information retrieval and utilization, a “know but don’t tell” phenomenon. We further analyze the relationship between extraction time and final accuracy, offering insights into the underlying mechanics of transformer models.
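The probing approach mentioned above can be illustrated with a minimal sketch: train a linear classifier to predict the position of the target information from a hidden representation. The snippet below uses synthetic "hidden states" in which position is linearly encoded by construction (an assumption for illustration only; the paper's probes read actual transformer activations, and all dimensions and names here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for transformer hidden states: assume the position of
# the target information is linearly encoded (illustrative assumption).
n_samples, hidden_dim, n_positions = 600, 64, 4
positions = rng.integers(0, n_positions, size=n_samples)
# Each position gets its own random direction in hidden space, plus noise.
directions = rng.normal(size=(n_positions, hidden_dim))
hidden = directions[positions] + 0.5 * rng.normal(size=(n_samples, hidden_dim))

# Linear probe: multinomial logistic regression fit by gradient descent.
W = np.zeros((hidden_dim, n_positions))
onehot = np.eye(n_positions)[positions]
for _ in range(300):
    logits = hidden @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    W -= 0.1 * hidden.T @ (probs - onehot) / n_samples

# High probe accuracy means the position is recoverable from the
# representation, even if the model's generated answer ignores it.
accuracy = (np.argmax(hidden @ W, axis=1) == positions).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

In this framing, the "know but don't tell" gap is the difference between such a probe's accuracy (what the representation encodes) and the model's end-task accuracy (what it actually uses).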