Alicia Picazo-Izquierdo


2025

Proceedings of the First Workshop on Advancing NLP for Low-Resource Languages
Ernesto Luis Estevanell-Valladares | Alicia Picazo-Izquierdo | Tharindu Ranasinghe | Besik Mikaberidze | Simon Ostermann | Daniil Gurgurov | Philipp Mueller | Claudia Borg | Marián Šimko
Proceedings of the First Workshop on Advancing NLP for Low-Resource Languages

Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models
Alicia Picazo-Izquierdo | Ernesto Luis Estevanell-Valladares | Ruslan Mitkov | Rafael Muñoz Guillena | Raúl García Cerdá
Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models

Detection of AI-generated Content in Scientific Abstracts
Ernesto Luis Estevanell-Valladares | Alicia Picazo-Izquierdo | Ruslan Mitkov
Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models

The growing use of generative AI in academic writing raises urgent questions about authorship and the integrity of scientific communication. This study addresses the detection of AI-generated scientific abstracts by constructing a temporally anchored dataset of abstract pairs: for each work published before 2021, the original human-written abstract and a synthetic counterpart generated with GPT-4.1. We evaluate three approaches to authorship classification: zero-shot large language models (LLMs), fine-tuned encoder-based transformers, and traditional machine learning classifiers. Results show that the LLMs perform near chance level, while a LoRA-fine-tuned DistilBERT and a Passive-Aggressive classifier achieve near-perfect performance. These findings suggest that shallow lexical or stylistic patterns still differentiate human and AI writing, and that supervised learning is key to capturing these signals.
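To make the supervised baseline concrete, the sketch below shows one way a Passive-Aggressive classifier over TF-IDF features could be set up for this task with scikit-learn. It is a minimal illustration, not the authors' released code: the toy abstracts, labels, feature choices, and hyperparameters are all assumptions introduced here for demonstration only.

```python
# Minimal sketch (not the paper's actual pipeline): TF-IDF features feeding
# a linear Passive-Aggressive classifier to separate human-written (0) from
# AI-generated (1) abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; the study's dataset pairs pre-2021 human abstracts
# with GPT-4.1 generations, which is not reproduced here.
abstracts = [
    "We study coreference resolution in low-resource settings ...",
    "Results on three benchmarks show modest but consistent gains ...",
    "This paper presents a comprehensive framework that leverages cutting-edge ...",
    "In this work, we delve into a novel paradigm that seamlessly integrates ...",
]
labels = [0, 0, 1, 1]

# Word and bigram TF-IDF features plus an online linear classifier; the
# actual study may use different features and tuning.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    PassiveAggressiveClassifier(max_iter=1000, random_state=0),
)
clf.fit(abstracts, labels)

print(clf.predict(["We introduce a holistic, transformative approach that ..."]))
```

A shallow pipeline like this only captures surface lexical cues, which is consistent with the abstract's claim that such patterns are still sufficient to separate human and AI writing.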