Safety and Ethical Concerns of Large Language Models

Xi Zhiheng, Zheng Rui, Gui Tao
Abstract
Recent months have witnessed significant progress in the field of large language models (LLMs). Represented by ChatGPT and GPT-4, LLMs perform well in various natural language processing tasks and have been applied to many downstream applications to facilitate people's lives. However, there still exist safety and ethical concerns. Specifically, LLMs suffer from social bias, robustness problems, and poisoning issues, all of which may induce LLMs to produce harmful content. We propose this tutorial as a gentle introduction to the safety and ethical issues of LLMs.
Anthology ID:
2023.ccl-4.2
Volume:
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts)
Month:
August
Year:
2023
Address:
Harbin, China
Editors:
Maosong Sun, Bing Qin, Xipeng Qiu, Jing Jiang, Xianpei Han
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Pages:
9–16
Language:
English
URL:
https://aclanthology.org/2023.ccl-4.2
Cite (ACL):
Xi Zhiheng, Zheng Rui, and Gui Tao. 2023. Safety and Ethical Concerns of Large Language Models. In Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts), pages 9–16, Harbin, China. Chinese Information Processing Society of China.
Cite (Informal):
Safety and Ethical Concerns of Large Language Models (Zhiheng et al., CCL 2023)
PDF:
https://aclanthology.org/2023.ccl-4.2.pdf