Wanzheng Zhu
2022
“Slow Service” ↛ “Great Food”: Enhancing Content Preservation in Unsupervised Text Style Transfer
Wanzheng Zhu | Suma Bhat
Proceedings of the 15th International Conference on Natural Language Generation
2021
Generate, Prune, Select: A Pipeline for Counterspeech Generation against Online Hate Speech
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Euphemistic Phrase Detection by Masked Language Model
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: EMNLP 2021
Fringe groups and organizations have long used euphemisms—ordinary-sounding and innocent-looking words with a secret meaning—to conceal what they are discussing. For instance, drug dealers often use “pot” for marijuana and “avocado” for heroin. From a social media content moderation perspective, recent advances in NLP have enabled the automatic detection of such single-word euphemisms, yet no existing work can automatically detect multi-word euphemisms such as “blue dream” (marijuana) and “black tar” (heroin). Our paper tackles the problem of euphemistic phrase detection without human effort for, as far as we are aware, the first time. We first perform phrase mining on a raw text corpus (e.g., social media posts) to extract quality phrases. Then, we use word embedding similarities to select a set of euphemistic phrase candidates. Finally, we rank those candidates with a masked language model—SpanBERT. Compared to strong baselines, our algorithm achieves 20-50% higher detection accuracy on euphemistic phrases.
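To make the candidate-selection step (the second step above) concrete, here is a minimal sketch, not the paper's released implementation: it assumes phrase embeddings have already been learned (e.g., by training word2vec on a corpus where mined phrases are joined with underscores), and the seed terms, candidate phrases, and vectors are toy placeholders chosen only for illustration.

import numpy as np

# Hypothetical pre-trained embeddings for seed drug names and mined phrases.
embeddings = {
    "marijuana":     np.array([0.9, 0.1, 0.0]),
    "heroin":        np.array([0.1, 0.9, 0.0]),
    "blue_dream":    np.array([0.8, 0.2, 0.1]),
    "black_tar":     np.array([0.2, 0.8, 0.1]),
    "free_shipping": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_candidates(seeds, phrases, k=2):
    """Rank mined phrases by their maximum cosine similarity to any seed term."""
    scored = []
    for p in phrases:
        score = max(cosine(embeddings[p], embeddings[s]) for s in seeds)
        scored.append((p, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

seeds = ["marijuana", "heroin"]
phrases = ["blue_dream", "black_tar", "free_shipping"]
print(select_candidates(seeds, phrases))
# In the paper, the top candidates would then be re-ranked with a masked
# language model (SpanBERT) before being reported as euphemistic phrases.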
2020
GRUEN for Evaluating Linguistic Quality of Generated Text
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: EMNLP 2020
Automatic evaluation metrics are indispensable for evaluating generated text. To date, these metrics have focused almost exclusively on the content selection aspect of the system output, ignoring the linguistic quality aspect altogether. We bridge this gap by proposing GRUEN for evaluating Grammaticality, non-Redundancy, focUs, structure and coherENce of generated text. GRUEN utilizes a BERT-based model and a class of syntactic, semantic, and contextual features to examine the system output. Unlike most existing evaluation metrics that require human references as input, GRUEN is reference-less and requires only the system output. Moreover, it has the advantage of being unsupervised, deterministic, and adaptable to various tasks. Experiments on seven datasets over four language generation tasks show that the proposed metric correlates highly with human judgments.
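As a rough illustration of how a reference-less, BERT-based quality signal can be computed (this is a generic stand-in, not the authors' GRUEN implementation or its grammaticality component), the sketch below scores a sentence by its BERT pseudo-log-likelihood: each token is masked in turn and the masked language model's log-probability of the original token is averaged. The "bert-base-uncased" checkpoint and the example sentence are assumptions for the sketch.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Average log-probability of each token when it is masked out."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total, count = 0.0, 0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[input_ids[i]].item()
        count += 1
    return total / max(count, 1)

# Higher (less negative) scores indicate text the masked LM finds more fluent.
print(pseudo_log_likelihood("The generated summary is fluent and coherent."))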