Do LLMs Know to Respect Copyright Notice?

Jialiang Xu, Shenglan Li, Zhaozhuo Xu, Denghui Zhang


Abstract
Prior studies show that LLMs sometimes generate content that violates copyright. In this paper, we study another important yet underexplored problem: do LLMs respect copyright information in user input and behave accordingly? This question is critical, as a negative answer would imply that LLMs could become a primary facilitator and accelerator of copyright infringement. We conducted a series of experiments using a diverse set of language models, user prompts, and copyrighted materials, including books, news articles, API documentation, and movie scripts. Our study offers a conservative evaluation of the extent to which language models may infringe upon copyrights when processing user input that contains protected material. This research emphasizes the need for further investigation and the importance of ensuring that LLMs respect copyright regulations when handling user input, so as to prevent unauthorized use or reproduction of protected content. We also release a benchmark dataset that serves as a test bed for evaluating infringement behaviors by LLMs, and we stress the need for future alignment.
Anthology ID:
2024.emnlp-main.1147
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20604–20619
URL:
https://aclanthology.org/2024.emnlp-main.1147
Cite (ACL):
Jialiang Xu, Shenglan Li, Zhaozhuo Xu, and Denghui Zhang. 2024. Do LLMs Know to Respect Copyright Notice?. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20604–20619, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Do LLMs Know to Respect Copyright Notice? (Xu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1147.pdf