Qinyu Que


2021

Simon @ LT-EDI-EACL2021: Detecting Hope Speech with BERT
Qinyu Que
Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion

In today’s society, the rapid development of communication technology allows us to communicate with people from all over the world. In this communication, people treat one another differently. Some are accustomed to expressing their views with offensive and sarcastic language; such words hurt others and bring them down. Others are accustomed to sharing happiness and offering encouragement, bringing joy and hope through their words. Both kinds of language are everywhere on social media platforms, and making the online world a better place requires dealing with both, so identifying offensive language and hope speech is an essential task. Many shared tasks have targeted offensive language; the Shared Task on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-EACL 2021 takes a different perspective and contributes to society by identifying the language of hope. XLM-Roberta is an excellent multilingual model, and our team used a fine-tuned XLM-Roberta model to accomplish this task.
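As a rough illustration of what such a fine-tuned setup can look like, the sketch below uses the Hugging Face Transformers library to fine-tune xlm-roberta-base as a binary hope-speech classifier. The optimizer, learning rate, label set, and toy examples are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch of fine-tuning XLM-RoBERTa for hope-speech classification
# with Hugging Face Transformers. Hyperparameters, labels, and example
# sentences are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # hope vs. non-hope (assumed label set)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["You can do it, keep going!", "This is hopeless nonsense."]
labels = torch.tensor([1, 0])  # 1 = hope speech, 0 = not (toy examples)

model.train()
batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```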

Simon @ DravidianLangTech-EACL2021: Detecting Offensive Content in Kannada Language
Qinyu Que
Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages

This article introduces our system for the shared task of Offensive Language Identification in Dravidian Languages at EACL 2021. Information technology is developing at high speed, and people are used to expressing their views and opinions on social media, which leads to a large amount of offensive language online. As people become more dependent on social media, detecting offensive language becomes increasingly necessary. The shared task covers three languages: Tamil, Malayalam, and Kannada; our team took part in the Kannada task. We build on the pre-trained XLM-Roberta model, but its ability to capture sentence-level information did not satisfy us, so we made some adjustments to the model’s output. In this paper, we describe the models and experiments used to accomplish the Kannada task.
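The abstract does not specify what the adjustment to the model's output was. One plausible variant, sketched below purely as an assumption, replaces the default <s>-token representation with mean pooling over all token states before the classification layer; the checkpoint and label count are also assumptions.

```python
# Hypothetical sketch of one way to adjust XLM-RoBERTa's output for sentence
# classification: mean-pool all token hidden states instead of using only the
# <s> token. This pooling choice is an assumption, not the paper's method.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class XLMRMeanPoolClassifier(nn.Module):
    def __init__(self, num_labels: int = 6):  # assumed number of offense classes
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Average only over real tokens, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
batch = tokenizer(["ಒಂದು ಉದಾಹರಣೆ ವಾಕ್ಯ"],  # placeholder Kannada sentence
                  return_tensors="pt", padding=True, truncation=True)
logits = XLMRMeanPoolClassifier()(batch["input_ids"], batch["attention_mask"])
```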

Simon @ DravidianLangTech-EACL2021: Meme Classification for Tamil with BERT
Qinyu Que
Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages

In this paper, we introduce the system our team submitted for the meme classification task for Tamil. Social media has become an important platform for people to communicate: we use it to share information about ourselves and to express our views on things. A unique form of emotional expression has gradually developed on social media, the meme. Memes are often ironic, which gives them a distinctive sense of humor. But social media does not carry only positive content; there is also a great deal of offensive material, and the meme’s unique form of expression makes it a frequent vehicle for posting such content. Detecting offensive content in memes is therefore urgent. Our team uses natural language processing methods to classify offensive meme content, combining the BERT model with a CNN to improve its ability to capture sentence information. Our F1-score on the official test set is 0.49, and our method ranks 5th.
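A minimal sketch of the BERT-plus-CNN idea described above, assuming a multilingual BERT checkpoint and illustrative filter sizes and label count (the paper's exact architecture and hyperparameters are not given here):

```python
# Rough sketch of combining a BERT-style encoder with a CNN over its token
# representations. The checkpoint, kernel sizes, filter count, and number of
# labels are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertCNNClassifier(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_labels=2, num_filters=128, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes)
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        x = states.transpose(1, 2)
        # Convolve with several window sizes and max-pool each feature map.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=1))
```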