Generalizing Clinical De-identification Models by Privacy-safe Data Augmentation using GPT-4

Woojin Kim, Sungeun Hahm, Jaejin Lee


Abstract
De-identification (de-ID) refers to removing the association between a set of identifying data and the data subject. In clinical data management, de-ID of Protected Health Information (PHI) is critical for patient confidentiality. However, state-of-the-art de-ID models generalize poorly to new datasets. This is largely because training corpora are difficult to retain and share, and because labeling standards and patient-record formats vary across institutions. Our study addresses these issues by exploiting GPT-4 for data augmentation through one-shot and zero-shot prompts. Our approach circumvents the problem of PHI leakage and ensures privacy by redacting PHI before the text is processed. To evaluate the effectiveness of our proposal, we conduct cross-dataset testing; the results demonstrate significant improvements across three types of F1 scores.
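To make the redact-then-augment idea concrete, here is a minimal Python sketch of the kind of privacy-safe augmentation loop the abstract describes. It assumes the OpenAI Python client (openai>=1.0); the redact and augment helpers, the PHI tag set, and the prompt wording are illustrative assumptions, not the authors' actual pipeline or prompts.

# Minimal sketch of privacy-safe augmentation. The PHI tags, regumentation
# helpers, and prompt text below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def redact(note: str, phi_spans: list[tuple[int, int, str]]) -> str:
    """Replace already-annotated PHI spans with [TAG] placeholders
    locally, *before* any text is sent to the API."""
    out, prev = [], 0
    for start, end, tag in sorted(phi_spans):
        out.append(note[prev:start])
        out.append(f"[{tag}]")
        prev = end
    out.append(note[prev:])
    return "".join(out)

def augment(redacted_note: str) -> str:
    """Ask GPT-4 for a stylistic rewrite of a redacted note,
    keeping every placeholder intact (zero-shot variant)."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the clinical note in a different style. "
                        "Keep every bracketed placeholder (e.g. [NAME], "
                        "[DATE]) exactly as it appears."},
            {"role": "user", "content": redacted_note},
        ],
    )
    return resp.choices[0].message.content

note = "John Doe was admitted on 03/12/2021 with chest pain."
spans = [(0, 8, "NAME"), (25, 35, "DATE")]
print(augment(redact(note, spans)))

The essential design choice, per the abstract, is that redaction happens locally before any API call, so only placeholder-tagged text ever leaves the institution; the model's rewrites can then be re-filled with surrogate PHI to build augmented training data.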
Anthology ID:
2024.emnlp-main.1181
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21204–21218
URL:
https://aclanthology.org/2024.emnlp-main.1181
Cite (ACL):
Woojin Kim, Sungeun Hahm, and Jaejin Lee. 2024. Generalizing Clinical De-identification Models by Privacy-safe Data Augmentation using GPT-4. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21204–21218, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Generalizing Clinical De-identification Models by Privacy-safe Data Augmentation using GPT-4 (Kim et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1181.pdf
Data:
2024.emnlp-main.1181.data.zip