Citation: A Key to Building Responsible and Accountable Large Language Models

Jie Huang, Kevin Chang


Abstract
Large Language Models (LLMs) bring transformative benefits alongside unique challenges, including intellectual property (IP) and ethical concerns. This position paper explores a novel angle to mitigate these risks, drawing parallels between LLMs and established web systems. We identify “citation”—the acknowledgement or reference to a source or evidence—as a crucial yet missing component in LLMs. Incorporating citation could enhance content transparency and verifiability, thereby confronting the IP and ethical issues in the deployment of LLMs. We further propose that a comprehensive citation mechanism for LLMs should account for both non-parametric and parametric content. Despite the complexity of implementing such a citation mechanism, along with the potential pitfalls, we advocate for its development. Building on this foundation, we outline several research problems in this area, aiming to guide future explorations towards building more responsible and accountable LLMs.
Anthology ID:
2024.findings-naacl.31
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
464–473
URL:
https://aclanthology.org/2024.findings-naacl.31
Cite (ACL):
Jie Huang and Kevin Chang. 2024. Citation: A Key to Building Responsible and Accountable Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 464–473, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Citation: A Key to Building Responsible and Accountable Large Language Models (Huang & Chang, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.31.pdf
Copyright:
2024.findings-naacl.31.copyright.pdf