Chee Seng Chan
2024
MalayMMLU: A Multitask Benchmark for the Low-Resource Malay Language
Soon Chang Poh | Sze Jue Yang | Jeraelyn Ming Li Tan | Lawrence Leroy Tze Yao Chieng | Jia Xuan Tan | Zhenyu Yu | Foong Chee Mun | Chee Seng Chan
Findings of the Association for Computational Linguistics: EMNLP 2024
2022
An Embarrassingly Simple Approach for Intellectual Property Rights Protection on Recurrent Neural Networks
Zhi Qin Tan | Hao Shan Wong | Chee Seng Chan
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Capitalising on deep learning models to offer Natural Language Processing (NLP) solutions as part of Machine Learning as a Service (MLaaS) has generated handsome revenues. At the same time, it is known that creating these lucrative deep models is non-trivial. Therefore, protecting the intellectual property rights (IPR) of these inventions from abuse, theft and plagiarism is vital. This paper proposes a practical approach to IPR protection for recurrent neural networks (RNN) without all the bells and whistles of existing IPR solutions. In particular, we introduce the Gatekeeper concept, which mirrors the recurrent nature of the RNN architecture to embed keys. We also design the training scheme such that the protected RNN model retains its original performance if and only if a genuine key is presented. Extensive experiments show that our protection scheme is robust and effective against ambiguity and removal attacks, in both white-box and black-box protection settings, on different RNN variants. Code is available at https://github.com/zhiqin1998/RecurrentIPR.
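To make the key-gating idea concrete, here is a minimal, hypothetical sketch (not the authors' exact Gatekeeper; see the linked repository for their implementation) in which a secret key modulates the recurrent hidden state, so the model behaves as trained only when the genuine key is supplied. The class name `KeyGatedGRU`, the key shape, and the gating form are all illustrative assumptions.

```python
# Hypothetical illustration of key-gated recurrence: the hidden state is
# modulated by a gate derived from a secret key, so only the genuine key
# leaves the learned dynamics (and hence the performance) intact.
from typing import Optional

import torch
import torch.nn as nn


class KeyGatedGRU(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, key: torch.Tensor):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Project the secret key (assumed to be a fixed 1-D vector) to a
        # per-dimension gate in (0, 1).
        self.key_proj = nn.Linear(key.numel(), hidden_size)
        self.register_buffer("genuine_key", key)

    def forward(self, x: torch.Tensor, key: Optional[torch.Tensor] = None):
        # x: (seq_len, batch, input_size); a forged key distorts the gate.
        key = self.genuine_key if key is None else key
        gate = torch.sigmoid(self.key_proj(key))          # (hidden_size,)
        h = x.new_zeros(x.size(1), self.cell.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            h = self.cell(x[t], h) * gate                 # gate the recurrence
            outputs.append(h)
        return torch.stack(outputs), h


if __name__ == "__main__":
    torch.manual_seed(0)
    key = torch.randn(16)
    model = KeyGatedGRU(input_size=8, hidden_size=16, key=key)
    x = torch.randn(5, 2, 8)                              # (seq, batch, feat)
    out_genuine, _ = model(x)                             # genuine key
    out_forged, _ = model(x, key=torch.randn(16))         # forged key
    print((out_genuine - out_forged).abs().mean())        # outputs diverge
```

In this sketch, training with the genuine key makes the model's weights depend on that specific gate, so presenting any other key perturbs every recurrent step and degrades accuracy, which is the intuition behind performance being retained if and only if the genuine key is presented.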