GLADIS: A General and Large Acronym Disambiguation Benchmark

Lihu Chen, Gaël Varoquaux, Fabian M. Suchanek


Abstract
Acronym Disambiguation (AD) is crucial for natural language understanding on various sources, including biomedical reports, scientific papers, and search engine queries. However, existing acronym disambiguation benchmarks and tools are limited to specific domains, and the size of prior benchmarks is rather small. To accelerate research on acronym disambiguation, we construct a new benchmark with three components: (1) a much larger acronym dictionary with 1.5M acronyms and 6.4M long forms; (2) a pre-training corpus with 160 million sentences; (3) three datasets that cover the general, scientific, and biomedical domains. We then pre-train a language model, AcroBERT, on our constructed corpus for general acronym disambiguation, and show the challenges and value of our new benchmark.
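To illustrate the task the paper addresses, the following is a minimal, hypothetical sketch of acronym disambiguation framed as ranking candidate long forms against the sentence containing the acronym. It uses "bert-base-uncased" as a stand-in encoder (a model such as AcroBERT would be substituted where available), and the mean-pooled cosine-similarity scoring is an illustrative assumption, not the method described in the paper.

import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder encoder; AcroBERT or another domain model could be swapped in here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    # Mean-pool the last hidden states over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Disambiguate "AD" in context by ranking candidate long forms (hypothetical example).
context = "The patient received an AD diagnosis after cognitive testing."
candidates = ["Alzheimer's disease", "acronym disambiguation", "anno Domini"]

ctx_vec = embed([context])
cand_vecs = embed(candidates)
scores = torch.nn.functional.cosine_similarity(ctx_vec, cand_vecs)
print(candidates[int(scores.argmax())], scores.tolist())

In practice, the candidate long forms would come from an acronym dictionary such as the one released with GLADIS, and the ranking model would be trained or pre-trained for the disambiguation objective rather than used off the shelf.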
Anthology ID:
2023.eacl-main.152
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2073–2088
URL:
https://aclanthology.org/2023.eacl-main.152
DOI:
10.18653/v1/2023.eacl-main.152
Cite (ACL):
Lihu Chen, Gaël Varoquaux, and Fabian M. Suchanek. 2023. GLADIS: A General and Large Acronym Disambiguation Benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2073–2088, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
GLADIS: A General and Large Acronym Disambiguation Benchmark (Chen et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.152.pdf
Video:
https://aclanthology.org/2023.eacl-main.152.mp4