Xavier Serra


2024

The Role of Large Language Models in Musicology: Are We Ready to Trust the Machines?
Pedro Ramoneda | Emilia Parada-Cabaleiro | Benno Weck | Xavier Serra
Proceedings of the 3rd Workshop on NLP for Music and Audio (NLP4MusA)

In this work, we explore the use and reliability of Large Language Models (LLMs) in musicology. Through a discussion with experts and students, we assess the current acceptance of, and concerns about, this now ubiquitous technology. We then go one step further, proposing a semi-automatic method to create an initial benchmark using retrieval-augmented generation models and multiple-choice question generation, validated by human experts. Our evaluation on 400 human-validated questions shows that current vanilla LLMs are less reliable than retrieval-augmented generation from music dictionaries. This paper suggests that realizing the potential of LLMs in musicology requires musicology-driven research that can specialize LLMs by incorporating accurate and reliable domain knowledge.
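A minimal sketch of the kind of evaluation loop the abstract describes: scoring a model on human-validated multiple-choice questions, with an optional retrieval step that prepends dictionary entries (the retrieval-augmented setting). All names here (Question, answer_question, retrieve_entries) are illustrative placeholders, not the paper's actual code or data.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Question:
    prompt: str          # question text
    choices: List[str]   # multiple-choice options, e.g. ["A) ...", "B) ..."]
    answer_idx: int      # index of the human-validated correct choice

def accuracy(
    questions: List[Question],
    answer_question: Callable[[str], int],                    # model: prompt -> chosen index
    retrieve_entries: Optional[Callable[[str], str]] = None,  # dictionary lookup (RAG setting)
) -> float:
    """Fraction of questions answered correctly, with or without retrieval."""
    correct = 0
    for q in questions:
        context = (retrieve_entries(q.prompt) + "\n") if retrieve_entries else ""
        full_prompt = context + q.prompt + "\n" + "\n".join(q.choices)
        if answer_question(full_prompt) == q.answer_idx:
            correct += 1
    return correct / len(questions)

Comparing accuracy(questions, model) against accuracy(questions, model, retrieve_entries=dictionary_lookup) mirrors the vanilla-LLM versus retrieval-augmented comparison reported on the 400-question benchmark.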

2016

ELMD: An Automatically Generated Entity Linking Gold Standard Dataset in the Music Domain
Sergio Oramas | Luis Espinosa Anke | Mohamed Sordo | Horacio Saggion | Xavier Serra
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this paper we present a gold standard dataset for Entity Linking (EL) in the music domain. It contains thousands of musical named entities, such as Artist, Song, or Record Label, which have been automatically annotated on a set of artist biographies from the music website and social network Last.fm. The annotation process relies on the analysis of the hyperlinks present in the source texts and on a voting-based algorithm for EL, which considers, for each entity mention in the text, the degree of agreement across three state-of-the-art EL systems. Manual evaluation shows that EL precision is at least 94%, and, thanks to the tunable nature of the method, annotations can be derived that favour either higher precision or higher recall, as desired. We make the annotated dataset available along with the evaluation data and the code.
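A minimal sketch of the voting scheme the abstract describes, under the assumption that each of the three EL systems proposes one target entity per mention and an annotation is kept only when enough systems agree. Raising the agreement threshold favours precision; lowering it favours recall. The function and entity names are illustrative, not the released code.

from collections import Counter
from typing import List, Optional

def vote_entity(
    candidates: List[Optional[str]],  # one proposed entity per EL system (None = no link)
    min_votes: int = 2,               # agreement threshold (1..3 for three systems)
) -> Optional[str]:
    """Return the entity proposed by at least `min_votes` systems, else None."""
    counts = Counter(c for c in candidates if c is not None)
    if not counts:
        return None
    entity, votes = counts.most_common(1)[0]
    return entity if votes >= min_votes else None

# Example: two of the three systems agree on the same entity, so it is kept.
print(vote_entity(["dbpedia:The_Beatles", "dbpedia:The_Beatles", None]))  # dbpedia:The_Beatles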