Watermarking PLMs on Classification Tasks by Combining Contrastive Learning with Weight Perturbation

Chenxi Gu, Xiaoqing Zheng, Jianhan Xu, Muling Wu, Cenyuan Zhang, Chengsong Huang, Hua Cai, Xuanjing Huang


Abstract
Large pre-trained language models (PLMs) have achieved remarkable success, and their expensive training costs make them highly valuable intellectual property. Consequently, model watermarking, a method developed to protect the intellectual property of neural models, has emerged as a crucial yet underexplored technique. The problem of watermarking PLMs has remained open because a PLM's parameters are updated when it is fine-tuned on downstream datasets, so embedded watermarks can easily be erased through catastrophic forgetting. This study investigates the feasibility of watermarking PLMs by embedding backdoors that can be triggered by specific inputs. We employ contrastive learning during the watermarking phase, isolating the representations of specific inputs from all others so that they are mapped to a particular label after fine-tuning. Moreover, we demonstrate that combining weight perturbation with the proposed method embeds watermarks in a flatter region of the loss landscape, thereby increasing their robustness to watermark removal. Extensive experiments on multiple datasets demonstrate that the embedded watermarks can be robustly extracted with a high success rate and without any knowledge of the downstream tasks.
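To make the abstract's two ingredients concrete, below is a minimal PyTorch sketch; it is an illustrative reconstruction under stated assumptions, not the authors' released code. It pairs a supervised contrastive loss, which pulls the representations of trigger inputs together and away from clean inputs, with a sharpness-aware (SAM-style) weight perturbation as one plausible realization of embedding the watermark in a flat region of the loss landscape. All names here (`encoder`, `model`, `loss_fn`, `rho`, the trigger/clean labels) are hypothetical.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: representations with the same label
    (e.g., 1 = trigger input, 0 = clean input) are pulled together,
    while representations with different labels are pushed apart."""
    features = F.normalize(features, dim=-1)
    sim = features @ features.t() / temperature
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-similarity
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    mean_log_prob_pos = (log_prob.masked_fill(~pos_mask, 0.0).sum(1)
                         / pos_mask.sum(1).clamp(min=1))
    return -mean_log_prob_pos.mean()

def perturb_and_restore(model, loss_fn, rho=0.05):
    """SAM-style step: take the gradient at the locally worst nearby
    weights (within an L2 ball of radius rho), then restore the weights
    so the optimizer update favors flat minima."""
    model.zero_grad()
    loss_fn().backward()                       # gradient at current weights
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps = []
    with torch.no_grad():
        for p in params:                       # climb toward the worst point
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    loss = loss_fn()
    loss.backward()                            # gradient at perturbed weights
    with torch.no_grad():
        for p, e in zip(params, eps):          # restore original weights
            p.sub_(e)
    return loss

# Hypothetical usage: `encoder` returns [CLS] representations of a batch
# mixing trigger and clean inputs; `labels` marks which is which.
#   loss_fn = lambda: supcon_loss(encoder(batch), labels)
#   perturb_and_restore(model, loss_fn)
#   optimizer.step()                           # steps with flat-seeking gradients
```

The intuition for the perturbation step is that gradients taken at the worst point in a small neighborhood favor minima whose entire neighborhood has low loss, so subsequent fine-tuning updates are less likely to dislodge the embedded watermark.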
Anthology ID: 2023.findings-emnlp.239
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3685–3694
URL: https://aclanthology.org/2023.findings-emnlp.239
DOI: 10.18653/v1/2023.findings-emnlp.239
Cite (ACL): Chenxi Gu, Xiaoqing Zheng, Jianhan Xu, Muling Wu, Cenyuan Zhang, Chengsong Huang, Hua Cai, and Xuanjing Huang. 2023. Watermarking PLMs on Classification Tasks by Combining Contrastive Learning with Weight Perturbation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3685–3694, Singapore. Association for Computational Linguistics.
Cite (Informal): Watermarking PLMs on Classification Tasks by Combining Contrastive Learning with Weight Perturbation (Gu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.239.pdf