Systematic Evaluation of Long-Context LLMs on Financial Concepts

Lavanya Gupta, Saket Sharma, Yiyun Zhao


Abstract
Long-context large language models (LC LLMs) promise to increase the reliability of LLMs in real-world tasks that require processing and understanding long input documents. However, the ability of LC LLMs to reliably utilize their growing context windows remains under investigation. In this work, we create a real-world financial news dataset and use it to evaluate the performance of the state-of-the-art GPT-4 suite of LC LLMs on a series of progressively challenging tasks, as a function of factors such as context length, task difficulty, and position of key information. Our findings indicate that LC LLMs exhibit brittleness at longer context lengths even on simple tasks, with performance deteriorating sharply as task complexity increases. At longer context lengths, these state-of-the-art models suffer catastrophic failures in instruction following, resulting in degenerate outputs. Our prompt ablations also reveal continued sensitivity both to the placement of the task instruction in the context window and to minor markdown formatting. Finally, we advocate for more rigorous evaluation of LC LLMs, employing holistic metrics such as F1 (rather than recall) and reporting confidence intervals, thereby ensuring robust and conclusive findings.
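The abstract's closing recommendation, preferring F1 over recall and reporting confidence intervals, can be illustrated with a minimal sketch. The function names and the toy counts below are hypothetical, not taken from the paper; the point is that a model which over-generates answers can achieve perfect recall while F1 exposes its false positives, and that a percentile bootstrap gives a simple interval around a mean score.

```python
import random

def recall(tp, fn):
    """Recall: fraction of gold items the model retrieved."""
    return tp / (tp + fn)

def f1(tp, fp, fn):
    """F1: harmonic mean of precision and recall, penalizing false positives."""
    precision = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * precision * rec / (precision + rec)

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of per-example scores."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# A model that emits 40 predictions, 10 correct, against 10 gold items:
# recall is perfect (1.0), but F1 drops to 0.4 because of 30 false positives.
print(recall(tp=10, fn=0))        # 1.0
print(f1(tp=10, fp=30, fn=0))     # 0.4
print(bootstrap_ci([0.8, 0.9, 1.0, 0.7, 0.85]))
```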
Anthology ID:
2024.emnlp-industry.88
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Month:
November
Year:
2024
Address:
Miami, Florida, US
Editors:
Franck Dernoncourt, Daniel Preoţiuc-Pietro, Anastasia Shimorina
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1163–1175
URL:
https://aclanthology.org/2024.emnlp-industry.88
Cite (ACL):
Lavanya Gupta, Saket Sharma, and Yiyun Zhao. 2024. Systematic Evaluation of Long-Context LLMs on Financial Concepts. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 1163–1175, Miami, Florida, US. Association for Computational Linguistics.
Cite (Informal):
Systematic Evaluation of Long-Context LLMs on Financial Concepts (Gupta et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-industry.88.pdf