GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory

Wei Fan, Haoran Li, Zheye Deng, Weiqi Wang, Yangqiu Song


Abstract
Privacy issues arise prominently during the inappropriate transmission of information between entities. Existing research primarily studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns, neglecting that privacy is not an isolated, context-free concept limited to traditionally sensitive data (e.g., social security numbers), but is intertwined with intricate social contexts that complicate the identification and analysis of potential privacy violations. The advent of Large Language Models (LLMs) offers unprecedented opportunities to incorporate the nuanced scenarios outlined in privacy laws and tackle these complex privacy issues. However, the scarcity of open-source relevant case studies limits the ability of LLMs to align with specific legal statutes. To address this challenge, we introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA) to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world. Extensive experimental results demonstrate that GoldCoin markedly enhances LLMs’ capabilities in recognizing privacy risks across real court cases, surpassing the baselines on different judicial tasks.
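The abstract's central mechanism is using contextual integrity (CI) to structure statute-grounded synthetic scenarios. The sketch below is a minimal illustration only, not the authors' pipeline: it encodes the five CI parameters of an information flow (sender, recipient, information subject, information type, and transmission principle, as defined in Nissenbaum's contextual integrity theory) and fills a prompt template that an LLM could use to synthesize a HIPAA-grounded case. The ContextualIntegrityFlow class, SCENARIO_PROMPT template, and build_scenario_prompt helper are hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass


@dataclass
class ContextualIntegrityFlow:
    """The five CI parameters that characterize one information flow
    (per Nissenbaum's contextual integrity theory)."""
    sender: str
    recipient: str
    subject: str
    information_type: str
    transmission_principle: str


# Hypothetical prompt template for synthesizing a scenario grounded in a statute excerpt.
SCENARIO_PROMPT = (
    "You are given the following privacy regulation excerpt:\n{statute}\n\n"
    "Write a realistic scenario in which information about {subject} "
    "of type '{information_type}' is transmitted from {sender} to {recipient} "
    "under the principle '{transmission_principle}'. "
    "Then state whether this flow is permitted or prohibited by the regulation."
)


def build_scenario_prompt(statute_text: str, flow: ContextualIntegrityFlow) -> str:
    """Fill the template with one CI-structured flow; the returned prompt
    would be sent to an LLM to produce a synthetic training case."""
    return SCENARIO_PROMPT.format(
        statute=statute_text,
        sender=flow.sender,
        recipient=flow.recipient,
        subject=flow.subject,
        information_type=flow.information_type,
        transmission_principle=flow.transmission_principle,
    )


if __name__ == "__main__":
    # Example flow: an unauthorized disclosure that a HIPAA-grounded scenario might probe.
    flow = ContextualIntegrityFlow(
        sender="a hospital billing department",
        recipient="a third-party marketing firm",
        subject="a patient",
        information_type="diagnosis records",
        transmission_principle="without patient authorization",
    )
    print(build_scenario_prompt("HIPAA Privacy Rule excerpt ...", flow))
```

Varying the five CI parameters over statute excerpts is one way such a pipeline could enumerate both permissible and impermissible flows, yielding labeled synthetic cases for grounding an LLM in the law.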
Anthology ID:
2024.emnlp-main.195
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3321–3343
URL:
https://aclanthology.org/2024.emnlp-main.195
DOI:
10.18653/v1/2024.emnlp-main.195
Cite (ACL):
Wei Fan, Haoran Li, Zheye Deng, Weiqi Wang, and Yangqiu Song. 2024. GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3321–3343, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory (Fan et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.195.pdf
Data:
 2024.emnlp-main.195.data.zip