Ábel Elekes
Also published as: Abel Elekes
2025
Unequal Scientific Recognition in the Age of LLMs
Yixuan Liu | Abel Elekes | Jianglin Lu | Rodrigo Dorantes-Gilardi | Albert-Laszlo Barabasi
Findings of the Association for Computational Linguistics: EMNLP 2025
Large language models (LLMs) are reshaping how scientific knowledge is accessed and represented. This study evaluates the extent to which popular and frontier LLMs, including GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro, recognize scientists, benchmarking their outputs against OpenAlex and Wikipedia. Using a dataset of 100,000 physicists from OpenAlex to evaluate LLM recognition, we uncover substantial disparities: LLMs exhibit selective and inconsistent recognition patterns. Recognition correlates strongly with scholarly impact such as citations, and remains uneven across gender and geography. Women researchers and researchers from Africa, Asia, and Latin America are significantly underrecognized. We further examine the role of training data provenance, identifying Wikipedia as a potential source that contributes to recognition gaps. Our findings highlight how LLMs can reflect, and potentially amplify, existing disparities in science, underscoring the need for more transparent and inclusive knowledge systems.
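For intuition, a minimal sketch of this kind of recognition probe (not the paper's actual protocol): fetch an author record from the OpenAlex API, then ask an LLM whether it recognizes the scientist. The prompt wording, recognition criterion, and model settings below are illustrative assumptions.

```python
# Hypothetical sketch: look up a physicist in OpenAlex, then ask an LLM whether
# it recognizes them. Prompt text and decision rule are illustrative assumptions,
# not the study's benchmarking protocol.
import requests
from openai import OpenAI

def fetch_author(name: str):
    """Return the top OpenAlex author record matching `name`, or None."""
    resp = requests.get(
        "https://api.openalex.org/authors",
        params={"search": name, "per-page": 1},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return results[0] if results else None

def llm_recognizes(name: str, client: OpenAI, model: str = "gpt-4o") -> str:
    """Ask the model whether it can identify the scientist (illustrative prompt)."""
    reply = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{
            "role": "user",
            "content": f"Do you know the physicist {name}? "
                       "Answer 'yes' or 'no', then briefly justify.",
        }],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    author = fetch_author("Albert-Laszlo Barabasi")
    if author:
        print(author["display_name"], author["cited_by_count"], "citations")
        print(llm_recognizes(author["display_name"], OpenAI()))
```

Repeating such probes over many authors, and comparing the yes/no outcomes against OpenAlex citation counts and Wikipedia coverage, is the general shape of the benchmarking the abstract describes.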
2018
Resources to Examine the Quality of Word Embedding Models Trained on n-Gram Data
Ábel Elekes | Adrian Englhardt | Martin Schäler | Klemens Böhm
Proceedings of the 22nd Conference on Computational Natural Language Learning
Word embeddings are powerful tools that facilitate better analysis of natural language. However, their quality depends strongly on the resource used for training. Various approaches rely on n-gram corpora, such as the Google n-gram corpus. However, n-gram corpora only offer a small window into the full text – 5 words for the Google corpus at best. This raises the concern of whether the extracted word semantics are of high quality. In this paper, we address this concern with two contributions. First, we provide a resource containing 120 word-embedding models – one of the largest collections of embedding models. Furthermore, the resource contains the n-gram versions of all corpora used, as well as our scripts for corpus generation, model generation and evaluation. Second, we define a set of meaningful experiments that allow evaluating the aforementioned quality differences. We conduct these experiments using our resource to show its usage and significance. The evaluation results confirm that one can generally expect high quality for n-grams with n > 3.
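For intuition, a minimal sketch of training an embedding model on n-gram data with gensim (not the authors' released scripts): each training sample is a single n-gram, so the effective context window is capped at n − 1. The corpus file, file format, and hyperparameters below are placeholders.

```python
# Minimal sketch: train word2vec on 5-gram "sentences" and run a standard
# intrinsic evaluation. Corpus path, file format, and hyperparameters are
# illustrative assumptions, not the settings behind the 120 released models.
from gensim.models import Word2Vec
from gensim.test.utils import datapath

def load_ngrams(path: str, n: int = 5):
    """Yield n-grams as token lists, one n-gram per line (hypothetical format)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.split()
            if len(tokens) == n:
                yield tokens

ngrams = list(load_ngrams("google_5grams_sample.txt", n=5))  # placeholder corpus

model = Word2Vec(
    sentences=ngrams,
    vector_size=300,   # embedding dimensionality
    window=4,          # at most n - 1 context words are visible in a 5-gram
    min_count=5,
    workers=4,
)

# One common intrinsic check: correlation with human word-similarity judgments.
pearson, spearman, oov = model.wv.evaluate_word_pairs(datapath("wordsim353.tsv"))
print("pearson:", pearson, "spearman:", spearman, "oov%:", oov)
```

Running the same evaluation for models trained with different n illustrates the paper's question of how much the truncated n-gram context degrades embedding quality.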