%0 Conference Proceedings
%T Planning and Generating Natural and Diverse Disfluent Texts as Augmentation for Disfluency Detection
%A Yang, Jingfeng
%A Yang, Diyi
%A Ma, Zhaoran
%Y Webber, Bonnie
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F yang-etal-2020-planning
%X Existing approaches to disfluency detection heavily depend on human-annotated data. A number of data augmentation methods have been proposed to alleviate this dependence on labeled data. However, current augmentation approaches such as random insertion or repetition fail to resemble the training corpus well and usually result in unnatural and limited types of disfluencies. In this work, we propose a simple Planner-Generator based disfluency generation model to generate natural and diverse disfluent texts as augmented data, where the Planner decides where to insert disfluent segments and the Generator follows this prediction to generate the corresponding disfluent segments. We further utilize this augmented data for pretraining and leverage it for the task of disfluency detection. Experiments demonstrate that our two-stage disfluency generation model outperforms existing baselines; the generated disfluent sentences significantly aid the task of disfluency detection and lead to state-of-the-art performance on the Switchboard corpus.
%R 10.18653/v1/2020.emnlp-main.113
%U https://aclanthology.org/2020.emnlp-main.113
%U https://doi.org/10.18653/v1/2020.emnlp-main.113
%P 1450-1460