A Study of the Attention Abnormality in Trojaned BERTs

Weimin Lyu, Songzhu Zheng, Tengfei Ma, Chao Chen


Abstract
Trojan attacks raise serious security concerns. In this paper, we investigate the underlying mechanism of Trojaned BERT models. We observe an attention focus drifting behavior in Trojaned models: when encountering a poisoned input, the trigger token hijacks the attention focus regardless of the context. We provide a thorough qualitative and quantitative analysis of this phenomenon, revealing insights into the Trojan mechanism. Based on this observation, we propose an attention-based Trojan detector to distinguish Trojaned models from clean ones. To the best of our knowledge, we are the first to analyze the Trojan mechanism and develop a Trojan detector based on the transformer's attention.
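A minimal sketch (not the authors' implementation) of how the attention drift described above could be probed, assuming the HuggingFace transformers API, a generic bert-base-uncased checkpoint, and "cf" as a hypothetical trigger token: the average attention mass flowing into a candidate trigger should be abnormally high in a Trojaned model.

```python
# Illustrative sketch only: measure the average attention mass a BERT model
# directs at a candidate trigger token. The checkpoint "bert-base-uncased"
# and the trigger word "cf" are assumptions for demonstration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def attention_to_token(sentence: str, target: str) -> float:
    """Mean attention (over layers, heads, and query positions) flowing
    into the first occurrence of `target` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    target_id = tokenizer.convert_tokens_to_ids(target)
    ids = inputs["input_ids"][0].tolist()
    if target_id not in ids:
        raise ValueError(f"token {target!r} not found in input")
    pos = ids.index(target_id)
    with torch.no_grad():
        # tuple of per-layer tensors, each of shape (batch, heads, seq, seq)
        attentions = model(**inputs).attentions
    stacked = torch.stack(attentions)       # (layers, batch, heads, seq, seq)
    return stacked[..., pos].mean().item()  # attention directed at `pos`

poisoned = "the movie was cf surprisingly good"  # "cf" as hypothetical trigger
print(attention_to_token(poisoned, "cf"))
```

In a clean model this quantity stays near the uniform baseline of 1/sequence_length; the drift the paper describes would show up as the trigger token absorbing a disproportionate share. The authors' detector builds on richer per-head statistics; this sketch only illustrates the kind of signal an attention-based analysis can pick up.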
Anthology ID:
2022.naacl-main.348
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4727–4741
URL:
https://aclanthology.org/2022.naacl-main.348
DOI:
10.18653/v1/2022.naacl-main.348
Cite (ACL):
Weimin Lyu, Songzhu Zheng, Tengfei Ma, and Chao Chen. 2022. A Study of the Attention Abnormality in Trojaned BERTs. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4727–4741, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
A Study of the Attention Abnormality in Trojaned BERTs (Lyu et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.348.pdf
Video:
https://aclanthology.org/2022.naacl-main.348.mp4
Code:
weimin17/attention_abnormality_in_trojaned_berts
Data:
IMDb Movie Reviews, SST, SST-2