Yuepeng Hu
2026
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Zhengyuan Jiang | Yuepeng Hu | Yuchen Yang | Yinzhi Cao | Neil Zhenqiang Gong
Findings of the Association for Computational Linguistics: EACL 2026
Text-to-image models may generate harmful content, such as pornographic images, particularly when unsafe prompts are submitted. To address this issue, safety filters are often added on top of text-to-image models, or the models themselves are aligned to reduce harmful outputs. However, these defenses remain vulnerable when an attacker strategically designs adversarial prompts to bypass the safety guardrails. In this work, we propose PromptTune, a method to jailbreak text-to-image models with safety guardrails using a fine-tuned large language model. Unlike other query-based jailbreak attacks that require repeated queries to the target model, our attack generates adversarial prompts efficiently after fine-tuning our AttackLLM. We evaluate our method on three datasets of unsafe prompts and against five safety guardrails. Our results demonstrate that our approach effectively bypasses safety guardrails, outperforms existing no-box attacks, and also facilitates other query-based attacks. Our code is available at https://github.com/zhengyuan-jiang/PromptTune.
2025
WebInject: Prompt Injection Attack to Web Agents
Xilong Wang | John Bloch | Zedian Shao | Yuepeng Hu | Shuyan Zhou | Neil Zhenqiang Gong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multi-modal large language model (MLLM)-based web agents interact with webpage environments by generating actions based on screenshots of the webpages. In this work, we propose WebInject, a prompt injection attack that manipulates the webpage environment to induce a web agent to perform an attacker-specified action. Our attack adds a perturbation to the raw pixel values of the rendered webpage. After these perturbed pixels are mapped into a screenshot, the perturbation induces the web agent to perform the attacker-specified action. We formulate the task of finding the perturbation as an optimization problem. A key challenge in solving this problem is that the mapping between raw pixel values and the screenshot is non-differentiable, making it difficult to backpropagate gradients to the perturbation. To overcome this, we train a neural network to approximate the mapping and apply projected gradient descent to solve the reformulated optimization problem. Extensive evaluation on multiple datasets shows that WebInject is highly effective and significantly outperforms baselines.
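The projected-gradient-descent step described in the abstract can be sketched in a toy setting. This is an illustrative sketch, not the paper's implementation: the learned pixel-to-screenshot surrogate is replaced by a fixed random linear map `W`, and the attacker-specified action is stood in for by a target feature vector `y_target`; all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: W plays the role of the trained surrogate
# network approximating the pixel->screenshot mapping; y_target is the
# screenshot feature vector that would trigger the target action.
d_in, d_out = 32, 16
W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)
x = rng.uniform(size=d_in)            # raw pixel values of the webpage
y_target = rng.normal(size=d_out)     # attacker-chosen target features

eps, step, iters = 0.05, 0.01, 200    # L_inf budget, step size, steps
delta = np.zeros(d_in)                # perturbation on the raw pixels

def loss(delta):
    # Squared distance between surrogate output and the target features.
    r = W @ (x + delta) - y_target
    return 0.5 * r @ r

for _ in range(iters):
    grad = W.T @ (W @ (x + delta) - y_target)  # analytic gradient of loss
    delta -= step * grad                        # gradient descent step
    delta = np.clip(delta, -eps, eps)           # project onto L_inf ball
```

In the paper's setting the gradient would come from backpropagation through the trained surrogate network rather than this closed form, but the descend-then-project structure of each iteration is the same.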