Unsupervised Non-transferable Text Classification

Guangtao Zeng, Wei Lu


Abstract
Training a good deep learning model requires substantial data and computing resources, which makes the resulting neural model a valuable intellectual property. To prevent the neural network from being undesirably exploited, non-transferable learning has been proposed to reduce the model's generalization ability in specific target domains. However, existing approaches require labeled data for the target domain, which can be difficult to obtain. Furthermore, they lack a mechanism to later recover the model's access to the target domain. In this paper, we propose a novel unsupervised non-transferable learning method for the text classification task that does not require annotated target-domain data. We further introduce a secret key component in our approach for recovering access to the target domain, designing both an explicit and an implicit method for doing so. Extensive experiments demonstrate the effectiveness of our approach.
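The general idea behind non-transferable learning (keeping source-domain accuracy high while degrading predictions on an unlabeled target domain) can be illustrated with a minimal sketch. This is not the authors' method: it assumes a hypothetical entropy-maximization objective on unlabeled target batches, written in PyTorch with random features standing in for text encodings.

```python
# Hypothetical sketch of an unsupervised non-transferable objective.
# Not the paper's actual method: it only illustrates the general idea of
# training normally on labeled source data while pushing target-domain
# predictions toward uninformative (maximum-entropy) outputs, using no
# target-domain labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, input_dim=768, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, x):
        return self.net(x)

def non_transferable_loss(model, src_x, src_y, tgt_x, alpha=1.0):
    """Cross-entropy on the labeled source batch plus an entropy term on
    the unlabeled target batch; subtracting the entropy means minimizing
    the total loss maximizes target-domain prediction entropy."""
    src_loss = F.cross_entropy(model(src_x), src_y)
    tgt_probs = F.softmax(model(tgt_x), dim=-1)
    tgt_entropy = -(tgt_probs * tgt_probs.clamp_min(1e-8).log()).sum(-1).mean()
    return src_loss - alpha * tgt_entropy

# Toy usage: random vectors stand in for encoded text.
model = Classifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
src_x, src_y = torch.randn(32, 768), torch.randint(0, 2, (32,))
tgt_x = torch.randn(32, 768)  # unlabeled target-domain batch
loss = non_transferable_loss(model, src_x, src_y, tgt_x)
opt.zero_grad()
loss.backward()
opt.step()
```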
Anthology ID:
2022.emnlp-main.685
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10071–10084
URL:
https://aclanthology.org/2022.emnlp-main.685
DOI:
10.18653/v1/2022.emnlp-main.685
Cite (ACL):
Guangtao Zeng and Wei Lu. 2022. Unsupervised Non-transferable Text Classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10071–10084, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Non-transferable Text Classification (Zeng & Lu, EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.685.pdf