AMIA: Automatic Masking and Joint Intention Analysis Makes LVLMs Robust Jailbreak Defenders

Yuqi Zhang, Yuchun Miao, Zuchao Li, Liang Ding


Abstract
We introduce AMIA, a lightweight, inference-only defense for Large Vision–Language Models (LVLMs) that (1) Automatically Masks a small set of text-irrelevant image patches to disrupt adversarial perturbations, and (2) conducts joint Intention Analysis to uncover and mitigate hidden harmful intents before response generation. Without any retraining, AMIA improves defense success rates across diverse LVLMs and jailbreak benchmarks from an average of 52.4% to 81.7%, preserves general utility with only a 2% average accuracy drop, and incurs only modest inference overhead. Ablation studies confirm that both masking and intention analysis are essential for a robust safety–utility trade-off. Our code will be released.
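
The abstract describes AMIA's two stages only at a high level. The Python sketch below is not the authors' released implementation; it merely illustrates one plausible way to combine the two steps at inference time. The patch-relevance scorer (score_fn), the masking ratio, the wording of the intention-analysis prompt, and the generate_fn stub are all assumptions introduced here for illustration.

```python
# Minimal sketch of the two AMIA stages from the abstract (not the paper's code):
# (1) mask a small set of text-irrelevant image patches, scored by an assumed
#     patch-text relevance function, and
# (2) prepend an intention-analysis instruction before response generation.
import numpy as np

def mask_text_irrelevant_patches(image, patch_scores, patch_size=14, mask_ratio=0.1):
    """Zero out the patches whose text-relevance score is lowest."""
    h, w, _ = image.shape
    rows, cols = h // patch_size, w // patch_size
    scores = patch_scores.reshape(rows, cols)
    k = max(1, int(mask_ratio * rows * cols))
    # Indices of the k least text-relevant patches (flattened order).
    flat_idx = np.argsort(scores, axis=None)[:k]
    masked = image.copy()
    for idx in flat_idx:
        r, c = divmod(int(idx), cols)
        masked[r * patch_size:(r + 1) * patch_size,
               c * patch_size:(c + 1) * patch_size, :] = 0
    return masked

# Hypothetical prompt wording; the paper's actual prompt may differ.
INTENTION_PROMPT = (
    "Before answering, analyze the joint intention of the image and the request. "
    "If the underlying intent is harmful, refuse; otherwise answer helpfully.\n"
    "Request: {question}"
)

def amia_defend(image, question, score_fn, generate_fn, mask_ratio=0.1):
    """Inference-only defense: mask low-relevance patches, then generate
    with the intention-analysis prompt prepended."""
    patch_scores = score_fn(image, question)  # e.g., CLIP-style patch-text similarities (assumed)
    safe_image = mask_text_irrelevant_patches(image, patch_scores, mask_ratio=mask_ratio)
    return generate_fn(safe_image, INTENTION_PROMPT.format(question=question))

if __name__ == "__main__":
    # Toy usage with random scores and a stub generator in place of an LVLM.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
    toy_score = lambda im, q: rng.random((224 // 14) * (224 // 14))
    toy_generate = lambda im, prompt: f"[LVLM response to: {prompt[:40]}...]"
    print(amia_defend(img, "Describe this image.", toy_score, toy_generate))
```

Because the defense operates purely at inference time, it can wrap any LVLM's generate call without retraining, which is consistent with the training-free claim in the abstract.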
Anthology ID:
2025.findings-emnlp.651
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12189–12199
URL:
https://aclanthology.org/2025.findings-emnlp.651/
Cite (ACL):
Yuqi Zhang, Yuchun Miao, Zuchao Li, and Liang Ding. 2025. AMIA: Automatic Masking and Joint Intention Analysis Makes LVLMs Robust Jailbreak Defenders. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12189–12199, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
AMIA: Automatic Masking and Joint Intention Analysis Makes LVLMs Robust Jailbreak Defenders (Zhang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.651.pdf
Checklist:
2025.findings-emnlp.651.checklist.pdf