LEGOBench: Scientific Leaderboard Generation Benchmark

Shruti Singh, Shoaib Alam, Husain Malwat, Mayank Singh


Abstract
The ever-increasing volume of paper submissions makes it difficult to stay informed about the latest state-of-the-art research. To address this challenge, we introduce LEGOBench, a benchmark for evaluating systems that generate scientific leaderboards. LEGOBench is curated from 22 years of preprint submission data on arXiv and more than 11k machine learning leaderboards on the PapersWithCode portal. We present one language-model-based and four graph-based leaderboard generation task configurations. We evaluate popular encoder-only scientific language models as well as decoder-only large language models across these task configurations. State-of-the-art models show significant performance gaps in automatic leaderboard generation on LEGOBench. The code is available on GitHub and the dataset is hosted on OSF.
Anthology ID:
2024.findings-emnlp.855
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14598–14613
URL:
https://aclanthology.org/2024.findings-emnlp.855
Cite (ACL):
Shruti Singh, Shoaib Alam, Husain Malwat, and Mayank Singh. 2024. LEGOBench: Scientific Leaderboard Generation Benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14598–14613, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
LEGOBench: Scientific Leaderboard Generation Benchmark (Singh et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.855.pdf