HumMusQA: A Human-written Music Understanding QA Benchmark Dataset

Benno Weck, Pablo Puentes, Andrea Poltronieri, Satyajeet Prabhu, Dmitry Bogdanov


Abstract
The evaluation of music understanding in Large Audio-Language Models (LALMs) requires a rigorously defined benchmark that truly tests whether models can perceive and interpret music, a standard that current data methodologies frequently fail to meet. This paper introduces a carefully structured approach to music evaluation, proposing a new dataset of 320 hand-written questions curated and validated by experts with musical training, and arguing that such focused, manual curation is superior for probing complex audio comprehension. To demonstrate the use of the dataset, we benchmark six state-of-the-art LALMs and additionally test their robustness to uni-modal shortcuts.
Anthology ID:
2026.nlp4musa-1.9
Volume:
Proceedings of the 4th Workshop on NLP for Music and Audio (NLP4MusA 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Elena V. Epure, Sergio Oramas, SeungHeon Doh, Pedro Ramoneda, Anna Kruspe, Mohamed Sordo
Venues:
NLP4MusA | WS
Publisher:
Association for Computational Linguistics
Pages:
58–67
URL:
https://aclanthology.org/2026.nlp4musa-1.9/
Cite (ACL):
Benno Weck, Pablo Puentes, Andrea Poltronieri, Satyajeet Prabhu, and Dmitry Bogdanov. 2026. HumMusQA: A Human-written Music Understanding QA Benchmark Dataset. In Proceedings of the 4th Workshop on NLP for Music and Audio (NLP4MusA 2026), pages 58–67, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
HumMusQA: A Human-written Music Understanding QA Benchmark Dataset (Weck et al., NLP4MusA 2026)
PDF:
https://aclanthology.org/2026.nlp4musa-1.9.pdf