Gloria Origgi
2020
An Annotated Corpus for Sexism Detection in French Tweets
Patricia Chiril | Véronique Moriceau | Farah Benamara | Alda Mari | Gloria Origgi | Marlène Coulomb-Gully
Proceedings of the Twelfth Language Resources and Evaluation Conference
Social media networks have become a space where users freely share their opinions and sentiments, which can lead to the widespread circulation of hateful or abusive messages that must be moderated. This paper presents the first French corpus annotated for sexism detection, composed of about 12,000 tweets. In a context of offensive content mediation on social media now regulated by European laws, we think it is important not only to detect sexist content automatically, but also to identify whether a message with sexist content is genuinely sexist (i.e. addressed to a woman or describing a woman or women in general) or is a report of sexism experienced by a woman. This distinction is the novelty of our annotation scheme. We also propose preliminary results for sexism detection obtained with a deep learning approach. Our experiments show encouraging results.
He said “who’s gonna take care of your children when you are at ACL?”: Reported Sexist Acts are Not Sexist
Patricia Chiril | Véronique Moriceau | Farah Benamara | Alda Mari | Gloria Origgi | Marlène Coulomb-Gully
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to detect sexist content automatically, but also to identify whether a message with sexist content is genuinely sexist or is a report of sexism experienced by a woman. We propose: (1) a new characterization of sexist content inspired by speech act theory and discourse analysis studies, (2) the first French dataset annotated for sexism detection, and (3) a set of deep learning experiments trained on a combination of several vectorial representations of tweets (word embeddings, linguistic features, and various generalization strategies). Our results are encouraging and constitute a first step towards offensive content moderation.
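The abstract above mentions classifiers built on a combination of tweet representations (word embeddings plus linguistic features). As a rough illustration only, the sketch below shows one common way such a fusion can be wired in PyTorch: the class name, feature choices, and dimensions are assumptions for the example and do not reproduce the authors' actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: fuse a mean-pooled word-embedding representation of a
# tweet with a small vector of handcrafted linguistic features, then classify
# with a feed-forward head. Names and dimensions are hypothetical.

class TweetClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, n_ling_features=4, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim + n_ling_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, token_ids, ling_features):
        # token_ids: (batch, seq_len); ling_features: (batch, n_ling_features)
        mask = (token_ids != 0).unsqueeze(-1).float()                 # ignore padding tokens
        embedded = self.embedding(token_ids) * mask                   # (batch, seq_len, embed_dim)
        pooled = embedded.sum(dim=1) / mask.sum(dim=1).clamp(min=1)   # mean over real tokens
        combined = torch.cat([pooled, ling_features], dim=-1)         # fuse both representations
        return self.classifier(combined)                              # class logits

# Toy usage: a batch of two padded tweets and their linguistic feature vectors.
model = TweetClassifier(vocab_size=5000)
token_ids = torch.tensor([[12, 48, 301, 0, 0], [7, 9, 15, 22, 3]])
ling_features = torch.rand(2, 4)  # e.g. counts of gendered terms, hashtags, mentions, emojis
logits = model(token_ids, ling_features)
print(logits.shape)  # torch.Size([2, 2])
```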