Ignacio Castro


2023

Lon-eå at SemEval-2023 Task 11: A Comparison of Activation Functions for Soft and Hard Label Prediction
Peyman Hosseini | Mehran Hosseini | Sana Al-azzawi | Marcus Liwicki | Ignacio Castro | Matthew Purver
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

We study the influence of different activation functions in the output layer of pre-trained transformer models for soft and hard label prediction in the learning-with-disagreement task. In this task, the goal is to quantify the amount of disagreement by predicting soft labels. To predict the soft labels, we use BERT-based preprocessors and encoders and vary the activation function used in the output layer, while keeping other parameters constant. The soft labels are then used for hard label prediction. The activation functions considered are sigmoid, a step function that is added to the model post-training, and a sinusoidal activation function, which is introduced for the first time in this paper.
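As an illustration of the kind of output head being compared, here is a minimal PyTorch sketch of a classification head with a swappable output activation (sigmoid, sinusoid, or a post-training step). The dimensions, the 0.5 threshold and the exact sinusoid mapping are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class SoftLabelHead(nn.Module):
    """Output head with a configurable activation for soft-label prediction."""

    def __init__(self, hidden_size: int, num_labels: int, activation: str = "sigmoid"):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_labels)
        self.activation = activation

    def forward(self, encoder_output: torch.Tensor) -> torch.Tensor:
        logits = self.linear(encoder_output)
        if self.activation == "sigmoid":
            return torch.sigmoid(logits)
        if self.activation == "sin":
            # Map the sinusoid into [0, 1] so it can be read as a soft label.
            return 0.5 * (torch.sin(logits) + 1.0)
        if self.activation == "step":
            # Step function applied post-training: directly emits hard labels.
            return (logits > 0).float()
        raise ValueError(f"unknown activation: {self.activation}")

# Toy usage: a random vector stands in for a BERT [CLS] encoding.
head = SoftLabelHead(hidden_size=768, num_labels=1, activation="sin")
soft_labels = head(torch.randn(4, 768))      # shape: (batch, num_labels)
hard_labels = (soft_labels >= 0.5).long()    # hard labels derived from soft ones
print(soft_labels.shape, hard_labels.shape)
```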

Tracing Linguistic Markers of Influence in a Large Online Organisation
Prashant Khare | Ravi Shekhar | Mladen Karan | Stephen McQuistin | Colin Perkins | Ignacio Castro | Gareth Tyson | Patrick Healey | Matthew Purver
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Social science and psycholinguistic research have shown that power and status affect how people use language in a range of domains. Here, we investigate a similar question in a large, distributed, consensus-driven community with little traditional power hierarchy: the Internet Engineering Task Force (IETF), a collaborative organisation that designs internet standards. Our analysis, based on lexical categories (LIWC) and BERT, shows that participants’ levels of influence can be predicted from their email text, and identifies key linguistic differences (e.g., certain LIWC categories, such as “WE”, are positively correlated with high influence). We also identify differences in language use for the same person before and after becoming influential.
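A minimal sketch of the lexical-category side of this kind of analysis: count words from a few hand-picked categories (stand-ins for LIWC categories such as “WE”) in each email and fit a linear classifier to predict an influence label. The word lists, example emails and labels below are invented purely for illustration; they are not the paper's features or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for LIWC categories.
CATEGORIES = {
    "WE": {"we", "us", "our"},
    "I": {"i", "me", "my"},
}

def category_rates(email: str) -> list[float]:
    """Return the per-category word rate for a single email."""
    tokens = email.lower().split()
    return [sum(t in words for t in tokens) / max(len(tokens), 1)
            for words in CATEGORIES.values()]

emails = [
    "We should merge our drafts before the meeting",
    "I think my version is fine as it is",
]
influence = [1, 0]  # 1 = high influence, 0 = low influence (toy labels)

X = np.array([category_rates(e) for e in emails])
clf = LogisticRegression().fit(X, influence)
print(clf.coef_)  # a positive "WE" weight would mirror the reported correlation
```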

LEDA: a Large-Organization Email-Based Decision-Dialogue-Act Analysis Dataset
Mladen Karan | Prashant Khare | Ravi Shekhar | Stephen McQuistin | Ignacio Castro | Gareth Tyson | Colin Perkins | Patrick Healey | Matthew Purver
Findings of the Association for Computational Linguistics: ACL 2023

Collaboration increasingly happens online. This is especially true for large groups working on global tasks, with collaborators spread around the world. The size and distributed nature of such groups make decision-making challenging. This paper proposes a set of dialog acts for the study of decision-making mechanisms in such groups, and provides a new annotated dataset based on real-world data from the public mail archives of one such organisation: the Internet Engineering Task Force (IETF). We provide an initial data analysis showing that this dataset can be used to better understand decision-making in such organisations. Finally, we experiment with a preliminary transformer-based dialog act tagging model.
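A minimal sketch of a transformer-based dialog act tagger of the kind mentioned above, assuming a generic BERT checkpoint and an invented label set; the paper's actual dialog act inventory, training data and model details may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set for illustration only.
DIALOG_ACTS = ["proposal", "agreement", "disagreement", "question", "other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(DIALOG_ACTS)
)

utterance = "I suggest we adopt the draft as the working group document."
inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted = DIALOG_ACTS[logits.argmax(dim=-1).item()]
print(predicted)  # the classifier head is untrained here, so this is random
```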

2022

MMVAE at SemEval-2022 Task 5: A Multi-modal Multi-task VAE on Misogynous Meme Detection
Yimeng Gu | Ignacio Castro | Gareth Tyson
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

Memes have become common in day-to-day communication on social media platforms. They appear amusing, evocative and attractive to audiences. However, some memes containing malicious content can be harmful to the targeted group and arouse public anger in the long run. In this paper, we study misogynous meme detection, a shared task in SemEval 2022 - Multimedia Automatic Misogyny Identification (MAMI). The challenge of misogynous meme detection is to co-represent multi-modal features. To tackle this challenge, we propose a Multi-modal Multi-task Variational AutoEncoder (MMVAE) to learn an effective co-representation of visual and textual features in the latent space, and to determine whether a meme contains misogynous information and identify its fine-grained categories. Our model achieves F1 scores of 0.723 on sub-task A and 0.634 on sub-task B. We carry out comprehensive experiments on our model’s architecture and show that our approach significantly outperforms several strong uni-modal and multi-modal approaches. Our code is released on GitHub.
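A minimal PyTorch sketch of a multi-modal, multi-task VAE in the spirit described above: visual and textual feature vectors are fused into a shared latent code, which is decoded for reconstruction and also fed to two heads (misogynous yes/no for sub-task A, fine-grained categories for sub-task B). The dimensions, fusion by concatenation and head design are assumptions for illustration, not the MMVAE paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiModalVAE(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, latent_dim=64, n_fine=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim + txt_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Linear(latent_dim, img_dim + txt_dim)
        self.misogyny_head = nn.Linear(latent_dim, 1)   # sub-task A
        self.fine_head = nn.Linear(latent_dim, n_fine)  # sub-task B

    def forward(self, img_feat, txt_feat):
        h = self.encoder(torch.cat([img_feat, txt_feat], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample the shared latent co-representation.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        return recon, torch.sigmoid(self.misogyny_head(z)), torch.sigmoid(self.fine_head(z)), mu, logvar

model = MultiModalVAE()
img, txt = torch.randn(2, 512), torch.randn(2, 768)  # stand-in meme features
recon, p_misogynous, p_fine, mu, logvar = model(img, txt)
print(p_misogynous.shape, p_fine.shape)  # (2, 1), (2, 4)
```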

2021

Racist or Sexist Meme? Classifying Memes beyond Hateful
Haris Bin Zia | Ignacio Castro | Gareth Tyson
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)

Memes are combinations of text and images that are often humorous in nature. However, that may not always be the case: certain combinations of text and images can depict hate, and these are referred to as hateful memes. This work presents a multimodal pipeline that takes both visual and textual features of memes into account to (1) identify the protected category (e.g., race, sex) that has been attacked; and (2) detect the type of attack (e.g., contempt, slurs). Our pipeline uses state-of-the-art pre-trained visual and textual representations, followed by a simple logistic regression classifier. We apply our pipeline to the Hateful Memes Challenge dataset with additional newly created fine-grained labels for the protected category and type of attack. Our best model achieves an AUROC of 0.96 for identifying the protected category and 0.97 for detecting the type of attack. We release our code at https://github.com/harisbinzia/HatefulMemes
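A minimal sketch of the overall pipeline shape described above: embeddings from pre-trained visual and textual encoders are concatenated and passed to a logistic regression classifier. The random vectors below stand in for real meme embeddings and the labels are invented; the actual implementation is in the linked repository.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_memes, img_dim, txt_dim = 8, 512, 512

image_emb = rng.normal(size=(n_memes, img_dim))  # e.g. from a pre-trained vision encoder
text_emb = rng.normal(size=(n_memes, txt_dim))   # e.g. from a pre-trained text encoder
features = np.concatenate([image_emb, text_emb], axis=1)

# Toy protected-category labels (0=race, 1=sex, ... purely illustrative).
protected_category = np.array([0, 1, 2, 3, 0, 1, 2, 3])

clf = LogisticRegression(max_iter=1000).fit(features, protected_category)
print(clf.predict(features[:2]))
```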