Xutao Yang


2025

This paper describes the system implemented by the EMO-NLP team for Track A of SemEval-2025 Task 11: Bridging the Gap in Text-Based Emotion Detection. The task provides multiple datasets covering 28 languages, most of them low-resource, for multi-label emotion detection. We propose a multilingual multi-label emotion detection system called XLMCNN. To handle text in many languages, we use the pre-trained model XLM-RoBERTa-large to obtain embeddings for the input text. We then apply a two-dimensional convolution to these embeddings to extract text features, improving the accuracy of multi-label emotion detection. Additionally, we assign weights to the different emotion labels to mitigate the effect of their uneven distribution. We focus on nine languages; our system performs best on Amharic, ranking 21st out of 45 teams.
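The label-weighting idea mentioned above can be illustrated with a weighted binary cross-entropy loss, where rare emotion labels receive a larger positive-class weight. This is only a minimal sketch of the general technique; the abstract does not specify the exact loss or weights used, so the function name and the weighting scheme here are assumptions.

```python
import math

def weighted_bce(probs, targets, pos_weights):
    """Per-label weighted binary cross-entropy for multi-label emotion
    detection. `pos_weights` up-weights rare labels to counter uneven
    label distribution (the weighting scheme is illustrative, not the
    paper's exact formulation)."""
    total = 0.0
    for p, t, w in zip(probs, targets, pos_weights):
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical stability
        # positive term is scaled by the per-label weight w
        total += -(w * t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(probs)

# A rare emotion (weight 3.0) contributes more loss than a common one
# (weight 1.0) for the same prediction error.
common = weighted_bce([0.6], [1.0], [1.0])
rare = weighted_bce([0.6], [1.0], [3.0])
```

With this loss, under-predicting a rare emotion is penalized more heavily, which pushes the classifier away from always predicting the majority labels.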

2018

An argument is divided into two parts: the claim and the reason. To reach a clearer conclusion, some additional explanation is required; in this task, such explanations are called warrants. This paper introduces a bidirectional long short-term memory (Bi-LSTM) network with an attention model to select the correct warrant from two candidates to explain an argument. We frame the problem as question answering: for each warrant, the model produces the probability that it is correct, and the system selects the warrant with the higher probability as the answer. Ensemble learning is used to improve the model's performance. Among all participants, we ranked 15th on the test results.
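The selection and ensembling steps described above can be sketched as follows: each model scores both warrants, the per-warrant probabilities are averaged across the ensemble, and the warrant with the higher mean probability wins. The function names and the use of simple probability averaging are assumptions for illustration; the abstract does not specify the exact ensembling method.

```python
def select_warrant(prob_w0, prob_w1):
    """Return the index (0 or 1) of the warrant with the higher
    predicted correctness probability."""
    return 0 if prob_w0 >= prob_w1 else 1

def ensemble_select(model_probs):
    """Average per-warrant probabilities over an ensemble of models,
    then pick the warrant with the higher mean probability.
    `model_probs` is a list of (prob_w0, prob_w1) pairs, one per model.
    (Averaging is an assumed ensembling scheme, not necessarily the
    paper's exact method.)"""
    avg_w0 = sum(p[0] for p in model_probs) / len(model_probs)
    avg_w1 = sum(p[1] for p in model_probs) / len(model_probs)
    return select_warrant(avg_w0, avg_w1)
```

Averaging probabilities rather than hard votes lets a confident model outvote two weakly opposed ones, which is one common reason ensembles of this kind improve over any single Bi-LSTM run.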