Jeong-Won Cha

Also published as: Jeongwon Cha


2019

Detecting context abusiveness using hierarchical deep learning
Ju-Hyoung Lee | Jun-U Park | Jeong-Won Cha | Yo-Sub Han
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda

Abusive text is a serious problem in social media, and it causes many issues among users as the number of users and the volume of content increase. There have been several attempts to detect or prevent abusive text effectively. One simple yet effective approach is to use an abusive lexicon and determine whether an abusive word occurs in the text. This approach works well even when an abusive word is obfuscated. On the other hand, it remains a challenging problem to determine abusiveness in a text that contains no explicit abusive words. In particular, it is hard to identify sarcasm or offensiveness from context alone, without any abusive words. We tackle this problem using an ensemble deep learning model. Our model consists of two parts that extract local features and global features, both of which are crucial for identifying implicit abusiveness at the context level. We evaluate our model on three benchmark datasets. Our model outperforms all previous models for detecting abusiveness in text without abusive words. Furthermore, we combine our model with an abusive lexicon method. The experimental results show that our model performs at least 4% better than previous approaches for identifying text abusiveness, both with and without abusive words.

2017

Building a Better Bitext for Structurally Different Languages through Self-training
Jungyeul Park | Loïc Dugast | Jeen-Pyo Hong | Chang-Uk Shin | Jeong-Won Cha
Proceedings of the First Workshop on Curation and Applications of Parallel and Comparable Corpora

We propose a novel method to bootstrap the construction of parallel corpora for new pairs of structurally different languages. We do so by combining the use of a pivot language with self-training. The pivot language enables existing translation models to bootstrap the alignment, and the self-training procedure yields better alignment at both the document and sentence levels. We also propose several evaluation methods for the resulting alignment.

2016

Korean Language Resources for Everyone
Jungyeul Park | Jeen-Pyo Hong | Jeong-Won Cha
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers

2007

A Joint Statistical Model for Simultaneous Word Spacing and Spelling Error Correction for Korean
Hyungjong Noh | Jeong-Won Cha | Gary Geunbae Lee
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

2006

MMR-based Active Machine Learning for Bio Named Entity Recognition
Seokhwan Kim | Yu Song | Kyungduk Kim | Jeong-Won Cha | Gary Geunbae Lee
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

2002

Syllable-Pattern-Based Unknown-Morpheme Segmentation and Estimation for Hybrid Part-of-Speech Tagging of Korean
Gary Geunbae Lee | Jeongwon Cha | Jong-Hyeok Lee
Computational Linguistics, Volume 28, Number 1, March 2002

2000

POSCAT: A Morpheme-based Speech Corpus Annotation Tool
Byeongchang Kim | Jin-seok Lee | Jeongwon Cha | Geunbae Lee
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

Structural disambiguation of morpho-syntactic categorial parsing for Korean
Jeongwon Cha | Geunbae Lee
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics

1998

Generalized unknown morpheme guessing for hybrid POS tagging of Korean
Jeongwon Cha | Geunbae Lee | Jong-Hyeok Lee
Sixth Workshop on Very Large Corpora