GrounDial: Human-norm Grounded Safe Dialog Response Generation

Siwon Kim, Shuyang Dai, Mohammad Kachuee, Shayan Ray, Tara Taghavi, Sungroh Yoon


Abstract
Current conversational AI systems based on large language models (LLMs) are known to generate unsafe responses that agree with offensive user input or contain toxic content. Previous research aimed to alleviate this toxicity by fine-tuning LLMs on manually annotated safe dialogue histories. However, the dependency on additional tuning incurs substantial cost. To remove this dependency, we propose GrounDial, which achieves response safety by grounding responses in commonsense social rules without fine-tuning. GrounDial's hybrid approach of in-context learning and human-norm-guided decoding produces quantitatively and qualitatively safer responses without additional data or tuning.
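The abstract's hybrid approach can be illustrated with a minimal sketch. The function names and the word-overlap scorer below are illustrative assumptions, not the authors' implementation: (1) a social rule is injected into the prompt (in-context learning), and (2) candidate responses are reranked toward the grounded norm, standing in for human-norm-guided decoding.

```python
# Hypothetical sketch of GrounDial's two components; names and scoring are
# assumptions for illustration, not the paper's actual method.

def build_grounded_prompt(history: str, norm: str) -> str:
    """In-context grounding: prepend a commonsense social rule to the dialog."""
    return f"Social rule: {norm}\nDialog:\n{history}\nSafe response:"

def norm_overlap_score(candidate: str, norm: str) -> float:
    """Toy stand-in for norm-guided decoding: fraction of the norm's words
    echoed by the candidate response."""
    norm_words = set(norm.lower().split())
    cand_words = set(candidate.lower().split())
    return len(norm_words & cand_words) / max(len(norm_words), 1)

def pick_safest(candidates: list[str], norm: str) -> str:
    """Select the candidate response that best reflects the grounded norm."""
    return max(candidates, key=lambda c: norm_overlap_score(c, norm))

if __name__ == "__main__":
    norm = "It is wrong to insult other people"
    candidates = [
        "Yeah, they totally deserve it.",
        "It is wrong to insult other people, so let's stay respectful.",
    ]
    print(pick_safest(candidates, norm))
```

In the actual method, the second step would operate at decoding time over the model's token distribution rather than reranking full candidates, but the sketch conveys the idea of steering generation toward the grounded rule.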
Anthology ID:
2024.findings-eacl.109
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1582–1588
URL:
https://aclanthology.org/2024.findings-eacl.109
Cite (ACL):
Siwon Kim, Shuyang Dai, Mohammad Kachuee, Shayan Ray, Tara Taghavi, and Sungroh Yoon. 2024. GrounDial: Human-norm Grounded Safe Dialog Response Generation. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1582–1588, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
GrounDial: Human-norm Grounded Safe Dialog Response Generation (Kim et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.109.pdf