Sohyun Park
Also published as: SoHyun Park
2024
Label-aware Hard Negative Sampling Strategies with Momentum Contrastive Learning for Implicit Hate Speech Detection
Jaehoon Kim | Seungwan Jin | Sohyun Park | Someen Park | Kyungsik Han
Findings of the Association for Computational Linguistics: ACL 2024
Detecting implicit hate speech that is not directly hateful remains a challenge. Recent research has attempted to detect implicit hate speech by applying contrastive learning to pre-trained language models such as BERT and RoBERTa, but the proposed models still do not show a significant advantage over cross-entropy loss-based learning. We found that contrastive learning based on randomly sampled batch data does not encourage the model to learn from hard negative samples. In this work, we propose Label-aware Hard Negative sampling strategies (LAHN) that encourage the model to learn detailed features from hard negative samples, rather than from the naive negative samples in a random batch, using momentum-integrated contrastive learning. LAHN outperforms existing models for implicit hate speech detection in both in-dataset and cross-dataset evaluations. The code is available at https://github.com/Hanyang-HCC-Lab/LAHN
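The core idea in the abstract — selecting negatives that share no label with the anchor but are maximally similar to it, drawn from a momentum-encoder queue — could be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name, the queue layout, and the cosine-similarity choice are all assumptions.

```python
import numpy as np

def select_hard_negatives(anchor, anchor_label, queue, queue_labels, k):
    """Hypothetical sketch of label-aware hard negative selection.

    anchor:       (d,) embedding of the current example
    queue:        (n, d) embeddings from a momentum-encoder queue
    queue_labels: (n,) labels of the queue entries
    Returns indices of the k different-label entries most similar
    to the anchor (the "hard" negatives).
    """
    # Cosine similarity between the anchor and every queue entry.
    a = anchor / np.linalg.norm(anchor)
    q = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    sims = q @ a
    # Label-aware: only entries with a different label are negatives.
    neg_idx = np.where(queue_labels != anchor_label)[0]
    # Hardest negatives = highest-similarity different-label entries.
    order = neg_idx[np.argsort(-sims[neg_idx])]
    return order[:k]
```

In a full training loop these indices would pick the queue entries that contribute to the contrastive denominator, so the loss focuses on negatives the model currently confuses with the anchor.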
2016
UNBNLP at SemEval-2016 Task 1: Semantic Textual Similarity: A Unified Framework for Semantic Processing and Evaluation
Milton King | Waseem Gharbieh | SoHyun Park | Paul Cook
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
Classifying Out-of-vocabulary Terms in a Domain-Specific Social Media Corpus
SoHyun Park | Afsaneh Fazly | Annie Lee | Brandon Seibel | Wenjie Zi | Paul Cook
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
In this paper we consider the problem of out-of-vocabulary term classification in web forum text from the automotive domain. We develop a set of nine domain- and application-specific categories for out-of-vocabulary terms. We then propose a supervised approach to classify out-of-vocabulary terms according to these categories, drawing on features based on word embeddings, and linguistic knowledge of common properties of out-of-vocabulary terms. We show that the features based on word embeddings are particularly informative for this task. The categories that we predict could serve as a preliminary, automatically-generated source of lexical knowledge about out-of-vocabulary terms. Furthermore, we show that this approach can be adapted to give a semi-automated method for identifying out-of-vocabulary terms of a particular category, automotive named entities, that is of particular interest to us.
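The feature design the abstract describes — a word embedding combined with features capturing common surface properties of out-of-vocabulary terms — might look roughly like this. A minimal sketch only: the function name, embedding dimension, and the specific surface features are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def oov_features(term, embeddings, dim=50):
    """Hypothetical feature vector for an OOV term: its word embedding
    (zeros if the term has no vector) concatenated with a few simple
    surface features of the kind common among OOV terms."""
    emb = embeddings.get(term, np.zeros(dim))
    surface = np.array([
        float(any(c.isdigit() for c in term)),  # contains a digit
        float(term.isupper()),                  # all-caps (acronym-like)
        float("-" in term),                     # hyphenated
        len(term) / 20.0,                       # normalized length
    ])
    return np.concatenate([emb, surface])
```

Vectors like these would then be fed to an ordinary supervised classifier, one per OOV category, which matches the abstract's finding that the embedding portion carries most of the signal.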