%0 Conference Proceedings %T GUTS at SemEval-2022 Task 4: Adversarial Training and Balancing Methods for Patronizing and Condescending Language Detection %A Lu, Junyu %A Zhang, Hao %A Zhang, Tongyue %A Wang, Hongbo %A Zhu, Haohao %A Xu, Bo %A Lin, Hongfei %Y Emerson, Guy %Y Schluter, Natalie %Y Stanovsky, Gabriel %Y Kumar, Ritesh %Y Palmer, Alexis %Y Schneider, Nathan %Y Singh, Siddharth %Y Ratan, Shyam %S Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) %D 2022 %8 July %I Association for Computational Linguistics %C Seattle, United States %F lu-etal-2022-guts %X Patronizing and Condescending Language (PCL) towards vulnerable communities in general media has been shown to have potentially harmful effects. Due to its subtlety and the good intentions behind its use, the audience is often unaware of the language’s toxicity. In this paper, we present our method for SemEval-2022 Task 4, “Patronizing and Condescending Language Detection”. In Subtask A, a binary classification task, we introduce adversarial training based on the Fast Gradient Method (FGM) and employ a pre-trained model in a unified architecture. For Subtask B, framed as a multi-label classification problem, we utilize various improved multi-label cross-entropy loss functions and analyze the performance of our method. In the final evaluation, our system achieved official rankings of 17/79 and 16/49 on Subtask A and Subtask B, respectively. In addition, we explore the relationship between PCL and the emotional polarity and intensity it contains. %R 10.18653/v1/2022.semeval-1.58 %U https://aclanthology.org/2022.semeval-1.58 %U https://doi.org/10.18653/v1/2022.semeval-1.58 %P 432-437