2023
Deep Model Compression Also Helps Models Capture Ambiguity
Hancheol Park | Jong Park
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Natural language understanding (NLU) tasks contain a non-trivial number of ambiguous samples whose labels are debatable among annotators. NLU models should thus account for such ambiguity, but they approximate human opinion distributions quite poorly and tend to produce over-confident predictions. To address this problem, we must consider how to capture exactly the degree of relationship between each sample and its candidate classes. In this work, we propose a novel method based on deep model compression and show how such relationships can be accounted for. We observe that more reasonably represented relationships can be discovered in the lower layers and that validation accuracies converge at these layers, which naturally leads to layer pruning. We also observe that distilling the relationship knowledge from a lower layer helps models produce better distributions. Experimental results demonstrate that our method substantially improves the quantification of ambiguity without gold distribution labels. As positive side effects, our method significantly reduces model size and improves latency, both attractive properties for NLU products.
Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for Tigrinya
Fitsum Gaim | Wonsuk Yang | Hancheol Park | Jong Park
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Question-Answering (QA) has seen significant advances recently, achieving near human-level performance on some benchmarks. However, these advances have focused on high-resourced languages such as English, while the task remains unexplored for most other languages, mainly due to the lack of annotated datasets. This work presents a native QA dataset for an East African language, Tigrinya. The dataset contains 10.6K question-answer pairs spanning 572 paragraphs extracted from 290 news articles on various topics. We discuss the dataset construction method, which is applicable to building similar resources for related languages. We present comprehensive experiments and analyses of several resource-efficient approaches to QA, including monolingual, cross-lingual, and multilingual setups, along with comparisons against machine-translated silver data. Our strong baseline models reach an F1 score of 76%, while the estimated human performance is 92%, indicating that the benchmark presents a good challenge for future work. We make the dataset, models, and leaderboard publicly available.
2015
Measuring Popularity of Machine-Generated Sentences Using Term Count, Document Frequency, and Dependency Language Model
Jong Myoung Kim | Hancheol Park | Young-Seob Jeong | Ho-Jin Choi | Gahgene Gweon | Jeong Hur
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters
2014
Sentential Paraphrase Generation for Agglutinative Languages Using SVM with a String Kernel
Hancheol Park | Gahgene Gweon | Ho-Jin Choi | Jeong Heo | Pum-Mo Ryu
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing