SafeText: A Benchmark for Exploring Physical Safety in Language Models

Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang


Abstract
Understanding what constitutes safe text is an important issue in natural language processing, and it can often prevent the deployment of models deemed harmful and unsafe. One such type of safety that has been scarcely studied is commonsense physical safety, i.e., text that is not explicitly violent but requires additional commonsense knowledge to comprehend that it leads to physical harm. We create the first benchmark dataset, SafeText, comprising real-life scenarios with paired safe and physically unsafe pieces of advice. We use SafeText to empirically study commonsense physical safety across various models designed for text generation and commonsense reasoning tasks. We find that state-of-the-art large language models are susceptible to generating unsafe text and have difficulty rejecting unsafe advice. As a result, we argue for further study of safety, and for the assessment of commonsense physical safety in models before release.
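To make the paired safe/unsafe advice setup concrete, below is a minimal sketch of how such a dataset could be probed for a model's safety preference. The field names ("scenario", "safe", "unsafe"), the JSON layout, and the scoring interface are illustrative assumptions for this sketch, not the dataset's actual schema or the paper's evaluation code.

```python
# Minimal sketch: does a model prefer safe over unsafe advice?
# Field names and file layout below are hypothetical.
import json

def load_pairs(path: str):
    """Load scenarios with paired safe/unsafe advice from a JSON file,
    assumed to look like:
    [{"scenario": "...", "safe": ["..."], "unsafe": ["..."]}, ...]"""
    with open(path) as f:
        return json.load(f)

def preference_accuracy(pairs, score_fn):
    """Fraction of scenarios where the model scores its best safe advice
    higher than its best unsafe advice. `score_fn(scenario, advice)` is
    any model-specific scorer, e.g. a length-normalized log-likelihood."""
    correct = 0
    for ex in pairs:
        safe_score = max(score_fn(ex["scenario"], a) for a in ex["safe"])
        unsafe_score = max(score_fn(ex["scenario"], a) for a in ex["unsafe"])
        correct += safe_score > unsafe_score
    return correct / len(pairs)
```

With a real language model, `score_fn` would wrap a per-token log-probability computation over the advice conditioned on the scenario; the metrics reported in the paper itself may differ from this toy preference check.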
Anthology ID: 2022.emnlp-main.154
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2407–2421
URL: https://aclanthology.org/2022.emnlp-main.154
DOI: 10.18653/v1/2022.emnlp-main.154
Cite (ACL): Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, and William Yang Wang. 2022. SafeText: A Benchmark for Exploring Physical Safety in Language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2407–2421, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): SafeText: A Benchmark for Exploring Physical Safety in Language Models (Levy et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.154.pdf