Minbeom Kim
2024
LifeTox: Unveiling Implicit Toxicity in Life Advice
Minbeom Kim | Jahyun Koo | Hwanhee Lee | Joonsuk Park | Hwaran Lee | Kyomin Jung
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce LifeTox, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, LifeTox comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on LifeTox matches or surpasses the zero-shot performance of large language models on toxicity classification tasks. These results underscore the efficacy of LifeTox in addressing the complex challenges inherent in implicit toxicity. We open-sourced the dataset and the LifeTox moderator family: 350M, 7B, and 13B.
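As a rough illustration of how a released moderator of this kind might be applied, the sketch below scores a (question, advice) pair with a HuggingFace sequence-classification checkpoint. The repository id, the two-label (safe/unsafe) scheme, and the label order are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of querying a LifeTox-style moderator for implicit toxicity.
# Assumptions: the moderator is a sequence-classification checkpoint with two
# labels (safe/unsafe); the repository id below is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-org/lifetox-moderator-350m"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def toxicity_scores(question: str, advice: str) -> dict:
    """Score one (question, advice) pair; label order is assumed."""
    inputs = tokenizer(question, advice, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    return {"safe": probs[0].item(), "unsafe": probs[1].item()}

print(toxicity_scores(
    "How do I get my roommate to stop borrowing my things?",
    "Just go through their stuff and take whatever you want back.",
))
```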
2023
Critic-Guided Decoding for Controlled Text Generation
Minbeom Kim | Hwanhee Lee | Kang Min Yoo | Joonsuk Park | Hwaran Lee | Kyomin Jung
Findings of the Association for Computational Linguistics: ACL 2023
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieving a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework and train an LM-steering critic from reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using a critic to improve training efficiency and stability. Evaluation of our method on three controlled generation tasks (topic control, sentiment control, and detoxification) shows that our approach generates more coherent and better-controlled text than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
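The core mechanism described here (a frozen LM whose next-token distribution is reweighted by a critic) can be sketched in a few lines. The sketch below uses GPT-2 and a placeholder critic that returns zeros; the scoring rule (log-probability plus a critic value scaled by beta over the top-k candidates) is an assumed simplification for illustration, not the authors' implementation, and a real critic would be trained from reward models as in the paper's actor-critic setup.

```python
# Minimal sketch of critic-weighted decoding: the base LM stays frozen and
# its next-token distribution is reweighted by a critic's value estimates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def critic_value(candidate_ids: torch.Tensor) -> torch.Tensor:
    """Placeholder critic: one value per candidate continuation.
    A trained critic would estimate the expected reward of each one."""
    return torch.zeros(candidate_ids.shape[0])

@torch.no_grad()
def generate(prompt: str, steps: int = 20, top_k: int = 50, beta: float = 1.0) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(steps):
        logits = lm(ids).logits[0, -1]  # frozen LM's next-token logits
        topk = torch.topk(logits, top_k)
        # Candidate sequences: the current prefix extended by each top-k token.
        cands = torch.cat([ids.repeat(top_k, 1), topk.indices.unsqueeze(1)], dim=1)
        # Reweight log p(token) with the critic's value, then renormalize.
        scores = torch.log_softmax(topk.values, dim=-1) + beta * critic_value(cands)
        probs = torch.softmax(scores, dim=-1)
        next_tok = topk.indices[torch.multinomial(probs, 1)]
        ids = torch.cat([ids, next_tok.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate("The movie was"))
```

With the zero-valued placeholder this reduces to plain top-k sampling; plugging in a trained critic biases decoding toward high-reward continuations without updating the LM's weights.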
Co-authors
- Hwanhee Lee 2
- Joonsuk Park 2
- Hwaran Lee 2
- Kyomin Jung 2
- Kang Min Yoo 1
- Jahyun Koo 1