2021
Self-Training using Rules of Grammar for Few-Shot NLU
Joonghyuk Hahn

Hyunjoon Cheon

Kyuyeol Han

Cheongjae Lee

Junseok Kim

YoSub Han
Findings of the Association for Computational Linguistics: EMNLP 2021
We tackle the problem of self-training networks for NLU in low-resource environments: few labeled data and lots of unlabeled data. Self-training is effective because it increases the amount of training data during training. Yet it becomes less effective in low-resource settings due to unreliable labels predicted by the teacher model on unlabeled data. Rules of grammar, which describe the grammatical structure of data, have been used in NLU for better explainability. We propose to use rules of grammar in self-training as a more reliable pseudo-labeling mechanism, especially when there are few labeled data. We design an effective algorithm that constructs and expands rules of grammar without human involvement. We then integrate the constructed rules into self-training as a pseudo-labeling mechanism. There are two possible scenarios regarding the data distribution: it is either unknown or known prior to training. We empirically demonstrate that our approach substantially outperforms state-of-the-art methods on three benchmark datasets in both scenarios.
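The pseudo-labeling idea in this abstract can be illustrated with a toy sketch. The hand-written regex patterns and intent names below are purely illustrative stand-ins for the automatically constructed grammar rules described in the paper; the point is that a rule assigns a label only when it matches unambiguously, which is more conservative than a teacher model's prediction in the low-resource setting.

```python
import re

# Hypothetical rules: in the paper these are constructed and expanded
# automatically; here two hand-written patterns stand in for them.
RULES = {
    "book_flight": re.compile(r"\b(book|reserve)\b.*\bflight\b"),
    "play_music": re.compile(r"\b(play|listen to)\b.*\b(song|music)\b"),
}

def pseudo_label(utterance):
    """Return an intent label if exactly one rule matches, else None.

    Requiring a unique match keeps the pseudo-labels conservative,
    which is the motivation for using rules instead of possibly
    unreliable teacher predictions on unlabeled data.
    """
    matches = [label for label, pattern in RULES.items()
               if pattern.search(utterance.lower())]
    return matches[0] if len(matches) == 1 else None

unlabeled = ["Please book a flight to Seoul", "play my favorite song", "hello"]
labeled = [(u, pseudo_label(u)) for u in unlabeled]  # unmatched stay unlabeled
```

Utterances that no rule (or more than one rule) matches simply remain unlabeled and can be left to later self-training rounds.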
MultiFix: Learning to Repair Multiple Errors by Optimal Alignment Learning
HyeonTae Seo

YoSub Han

SangKi Ko
Findings of the Association for Computational Linguistics: EMNLP 2021
We consider the problem of learning to repair erroneous C programs by learning optimal alignments with correct programs. Since previous approaches fix a single error per line, they must iterate the fixing process until no errors remain. In this work, we propose a novel sequence-to-sequence learning framework for fixing multiple program errors at a time. We introduce an edit-distance-based data labeling approach for program error correction: instead of labeling a program repair example by pairing an erroneous program with a line fix, we label the example by pairing the erroneous program with an optimal alignment to the corresponding correct program, produced by the edit-distance computation. We evaluate our approach on a publicly available dataset (the DeepFix dataset) consisting of erroneous C programs submitted by novice programming students. On a set of 6,975 erroneous C programs from the DeepFix dataset, our approach achieves the state-of-the-art full repair rate (without extra data such as compiler error messages or additional source code for pretraining).
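The edit-distance alignment labeling can be sketched with standard Levenshtein dynamic programming. This is a generic sketch of optimal alignment over token sequences, not the paper's exact labeling scheme; the token lists below are illustrative.

```python
def optimal_alignment(src, tgt):
    """Align two token sequences by minimal edit distance (Levenshtein DP).

    Returns a list of (src_token_or_None, tgt_token_or_None) pairs:
    a pair with differing sides is a replacement, (None, t) an insertion,
    (s, None) a deletion.  Labeling a buggy program with its alignment to
    the fixed program lets a seq2seq model learn all edits in one pass.
    """
    n, m = len(src), len(tgt)
    # dist[i][j] = edit distance between src[:i] and tgt[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # delete
                             dist[i][j - 1] + 1,         # insert
                             dist[i - 1][j - 1] + cost)  # keep/replace
    # Backtrace from the bottom-right corner to recover one optimal alignment.
    align, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and dist[i][j] ==
                dist[i - 1][j - 1] + (0 if src[i - 1] == tgt[j - 1] else 1)):
            align.append((src[i - 1], tgt[j - 1])); i -= 1; j -= 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            align.append((src[i - 1], None)); i -= 1
        else:
            align.append((None, tgt[j - 1])); j -= 1
    return align[::-1]
```

For example, aligning `["a", "b"]` with `["a", "c", "b"]` yields one insertion and two matched tokens, encoding the whole repair as a single target sequence.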
2019
Online Infix Probability Computation for Probabilistic Finite Automata
Marco Cognetta

YoSub Han

Soon Chan Kwon
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Probabilistic finite automata (PFAs) are a common statistical language model in natural language and speech processing. A typical task for PFAs is to compute the probability of all strings that match a query pattern. An important special case of this problem is computing the probability of a string appearing as a prefix, suffix, or infix. These problems arise in many natural language processing tasks such as word prediction and text error correction. Recently, we gave the first incremental algorithm to efficiently compute the infix probabilities of each prefix of a string (Cognetta et al., 2018). We develop an asymptotic improvement of that algorithm and solve the open problem of computing the infix probabilities of PFAs from streaming data, which is crucial when processing queries online and is the ultimate goal of the incremental approach.
SoftRegex: Generating Regex from Natural Language Descriptions using Softened Regex Equivalence
JunU Park

SangKi Ko

Marco Cognetta

YoSub Han
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
We continue the study of generating semantically correct regular expressions from natural language (NL) descriptions. The current state-of-the-art model, SemRegex, produces regular expressions from NL via reinforcement learning with a reward based on the semantic (rather than syntactic) equivalence of two regular expressions. Since the regular expression equivalence problem is PSPACE-complete, we introduce the EQ_Reg model, which computes the similarity of two regular expressions using deep neural networks. Our EQ_Reg model essentially softens the equivalence of two regular expressions when used as a reward function. We then propose a new regex generation model, SoftRegex, built on the EQ_Reg model, and empirically demonstrate that SoftRegex substantially reduces training time (by a factor of at least 3.6) and produces state-of-the-art results on three benchmark datasets.
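To make the "softened equivalence" idea concrete: since exact regex equivalence is PSPACE-complete, one can approximate it with a soft score in [0, 1]. The sampling-based agreement rate below is only a crude stand-in for the learned EQ_Reg similarity model in the abstract (which uses deep neural networks), but it shows the shape of a soft reward function over regex pairs.

```python
import random
import re

def soft_equivalence(pattern_a, pattern_b, trials=2000, max_len=6, seed=0):
    """Estimate how often two regexes agree on random strings over {0, 1}.

    NOT the paper's EQ_Reg model -- just a toy Monte Carlo proxy: sample
    short strings, check whether both patterns accept or both reject,
    and return the agreement rate as a soft score in [0, 1].
    """
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        s = "".join(rng.choice("01") for _ in range(rng.randint(0, max_len)))
        if bool(re.fullmatch(pattern_a, s)) == bool(re.fullmatch(pattern_b, s)):
            agree += 1
    return agree / trials
```

Because the score is continuous rather than 0/1, it can serve as a dense reward during training, which is exactly the role EQ_Reg plays for SoftRegex.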
Detecting context abusiveness using hierarchical deep learning
JuHyoung Lee

JunU Park

JeongWon Cha

YoSub Han
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda
Abusive text is a serious problem in social media and causes many issues among users as the number of users and the content volume increase. There have been several attempts to detect or prevent abusive text effectively. One simple yet effective approach is to use an abusive lexicon and determine whether an abusive word occurs in the text. This approach works well even when an abusive word is obfuscated. On the other hand, it remains challenging to determine abusiveness in a text that has no explicit abusive words; in particular, it is hard to identify sarcasm or offensiveness in context without any abusive words. We tackle this problem using an ensemble deep learning model. Our model consists of two parts that extract local features and global features, which are crucial for identifying implicit abusiveness at the context level. We evaluate our model on three benchmark datasets. Our model outperforms all previous models at detecting abusiveness in text without abusive words. Furthermore, we combine our model with an abusive lexicon method. The experimental results show that our model performs at least 4% better than the previous approaches at identifying text abusiveness both with and without abusive words.
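The lexicon baseline mentioned in the abstract, including its robustness to obfuscation, can be sketched in a few lines. The lexicon and the character-substitution table below are illustrative assumptions, not the paper's resources.

```python
# Tiny illustrative lexicon and a de-obfuscation map for common
# character substitutions (1 -> i, 0 -> o, @ -> a, $ -> s, 3 -> e).
LEXICON = {"idiot", "stupid"}
DEOBFUSCATE = str.maketrans({"1": "i", "0": "o", "@": "a", "$": "s", "3": "e"})

def contains_abusive_word(text):
    """Return True if any token, after de-obfuscation, is in the lexicon."""
    for token in text.lower().split():
        cleaned = token.strip(".,!?").translate(DEOBFUSCATE)
        if cleaned in LEXICON:
            return True
    return False
```

This catches obfuscated forms like "stup1d" but, as the abstract notes, it says nothing about sarcasm or implicit offensiveness, which is where the ensemble deep learning model comes in.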
2018
Incremental Computation of Infix Probabilities for Probabilistic Finite Automata
Marco Cognetta

YoSub Han

Soon Chan Kwon
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
In natural language processing, a common task is to compute the probability of a phrase appearing in a document or to calculate the probability of all phrases matching a given pattern. For instance, one computes affix (prefix, suffix, infix, etc.) probabilities of a string or a set of strings with respect to a probability distribution of patterns. The problem of computing infix probabilities of strings when the pattern distribution is given by a probabilistic context-free grammar or by a probabilistic finite automaton is already solved, yet it was open to compute the infix probabilities in an incremental manner. The incremental computation is crucial when a new query is built from a previous query. We tackle this problem and suggest a method that computes infix probabilities incrementally for probabilistic finite automata by representing all the probabilities of matching strings as a series of transition-matrix calculations. We show that the proposed approach is theoretically faster than the previous method and, using real-world data, demonstrate that our approach has vastly better performance in practice.
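The transition-matrix view underlying this line of work can be illustrated with a toy PFA. In a PFA, the probability of generating a string a1...an is the initial distribution times the product of per-symbol transition matrices times the final-probability vector, so keeping the running row vector gives each prefix's probability incrementally, one matrix-vector product per symbol. Note this toy computes plain string probabilities of prefixes; the paper's infix computation sums over all strings containing the query, which this sketch does not attempt. The automaton below is a made-up example.

```python
def matvec(row, mat):
    """Multiply a row vector by a matrix (plain-list linear algebra)."""
    return [sum(row[i] * mat[i][j] for i in range(len(row)))
            for j in range(len(mat[0]))]

def prefix_probabilities(init, trans, final, word):
    """Yield P(a1..ak) for every prefix of `word` under a PFA.

    init:  initial distribution over states
    trans: dict mapping each symbol to its transition matrix
    final: final-state probability vector

    The running row vector init * M(a1) * ... * M(ak) is updated in
    place, so each prefix costs one matrix-vector product instead of
    recomputing the whole product from scratch.
    """
    row = list(init)
    for symbol in word:
        row = matvec(row, trans[symbol])
        yield sum(r * f for r, f in zip(row, final))

# Illustrative 2-state PFA over {a, b}: from state 0, 'a' self-loops
# with prob 0.5 and 'b' moves to the accepting state 1 with prob 0.5.
init = [1.0, 0.0]
trans = {"a": [[0.5, 0.0], [0.0, 0.0]],
         "b": [[0.0, 0.5], [0.0, 0.0]]}
final = [0.0, 1.0]
probs = list(prefix_probabilities(init, trans, final, "aab"))  # per-prefix
```

Here only the full string "aab" ends in the accepting state, with probability 0.5 * 0.5 * 0.5 = 0.125; the shorter prefixes have probability 0.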