Ginevra Martinelli
2026
Tracking Autism Stigma in Italian Newspapers: A Longitudinal Analysis of Media Discourse (2016–2025)
Ginevra Martinelli | Chiara Barattieri di San Pietro | Daniela Ovadia | Marta Bosia | Valentina Bambini
Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026)
Public awareness of Autism Spectrum Disorder (ASD) has grown in recent years, yet stigma surrounding this condition persists. Building on prior research showing increasingly positive portrayals of ASD, this study examines recent longitudinal trends in ASD-related stigma in Italian newspapers, and how these trends were affected by a major event, the COVID-19 pandemic. We analyzed nearly 3,000 articles published between 2016 and 2025 using an innovative multi-layered Natural Language Processing (NLP) framework that captures multiple dimensions of stigma: discriminatory language, emotional framings indicative of prejudice, stereotypes, and the thematic contexts in which ASD-related stigma appears. Overall, results indicate low levels of overt stigma and a gradual shift toward more positive portrayals, with only temporary disruptions during the pandemic. Some stereotypes remain, however, highlighting the need for continued attention to ASD representation in the media.
2024
Exploring Neural Topic Modeling on a Classical Latin Corpus
Ginevra Martinelli | Paola Impicciché | Elisabetta Fersini | Francesco Mambrini | Marco Passarotti
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The wide availability of machine-readable textual resources for Classical Latin has made it possible to study Latin literature through methods and tools that support distant reading. This paper describes a series of experiments testing whether the thematic distribution of the Classical Latin corpus Opera Latina can be investigated by means of topic modeling. To this end, we train, optimize, and compare two neural models, Product-of-Experts LDA (ProdLDA) and the Embedded Topic Model (ETM), suitably adapted to handle textual data from a Classical Latin corpus, and evaluate which performs better both on topic diversity and topic coherence metrics and in terms of human judgment. Our results show that the topics extracted by the neural models are coherent and interpretable, and that they are meaningful from the perspective of a Latin scholar. The source code of the proposed model is available at https://github.com/MIND-Lab/LatinProdLDA.