Lucia Passaro
2024
Women’s Professions and Targeted Misogyny Online
Alessio Cascione | Aldo Cerulli | Marta Marchiori Manerba | Lucia Passaro
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
With the increasing popularity of social media platforms, the dissemination of misogynistic content has become more prevalent and challenging to address. In this paper, we investigate the phenomenon of online misogyny on Twitter through the lens of hurtfulness, qualifying its different manifestations according to the profession of the targets of misogynistic attacks. By leveraging manual annotation and a BERTweet model trained for fine-grained misogyny identification, we find that specific types of misogynistic speech are more intensely directed at particular professions: derailing discourse predominantly targets authors and cultural figures, while dominance-oriented speech and sexual harassment are mainly directed at politicians and athletes. Additionally, we use the HurtLex lexicon and ItEM to assign hurtfulness scores to tweets based on different hate speech categories. Our analysis reveals that these scores align with the profession-based distribution of misogynistic speech, highlighting the targeted nature of such attacks.
VeryfIT - Benchmark of Fact-Checked Claims for Italian: A CALAMITA Challenge
Jacopo Gili | Viviana Patti | Lucia Passaro | Tommaso Caselli
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Achieving factual accuracy is a known open issue for language models. Their design, centered on interactive user exchanges and the extensive use of “spontaneous” training data, has made them highly adept at conversational tasks but not fully reliable in terms of factual correctness. VeryfIT addresses this issue by evaluating the in-memory factual knowledge of language models on data written by professional fact-checkers, posing each statement as a true-or-false question. The topics of the statements vary, but most fall in specific domains related to the Italian government, policies, and social issues. The task presents several challenges: extracting statements from segments of speeches, determining appropriate contextual relevance both temporally and factually, and ultimately verifying the accuracy of the statements.
2022
Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset
Marta Marchiori Manerba | Riccardo Guidotti | Lucia Passaro | Salvatore Ruggieri
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning. Recently, a perspectivist trend has emerged in the NLP community, focusing on the inadequacy of previous aggregation schemes, which presuppose the existence of a single ground truth. This assumption is particularly problematic for sensitive tasks involving subjective human judgments, such as toxicity detection. To address these issues, we propose a preliminary approach for discovering bias among human raters by exploring individual ratings for specific sensitive topics annotated in the texts. Our analysis focuses on the Jigsaw dataset, a collection of comments aimed at challenging online toxicity identification.