Yu-Ling Una Hsu

Also published as: Yu-Ling Hsu, Yu-Ling Una Hsu


2025

pdf bib
Jailbreaking with Universal Multi-Prompts
Yu-Ling Hsu | Hsuan Su | Shang-Tse Chen
Findings of the Association for Computational Linguistics: NAACL 2025

Large language models (LLMs) have seen rapid development in recent years, revolutionizing various applications and significantly enhancing convenience and productivity. However, alongside their impressive capabilities, ethical concerns and new types of attacks, such as jailbreaking, have emerged. While most prompting techniques focus on optimizing adversarial inputs for individual cases, resulting in higher computational costs when dealing with large datasets. Less research has addressed the more general setting of training a universal attacker that can transfer to unseen tasks. In this paper, we introduce JUMP, a prompt-based method designed to jailbreak LLMs using universal multi-prompts. We also adapt our approach for defense, which we term DUMP. Experimental results demonstrate that our method for optimizing universal multi-prompts outperforms existing techniques.

1997

pdf bib
Computational Tools and Resources for Linguistic Studies
Yu-Ling Una Hsu | Jing-Shin Chang | Keh-Yih Su
International Journal of Computational Linguistics & Chinese Language Processing, Volume 2, Number 1, February 1997: Special Issue on Computational Resources for Research in Chinese Linguistics

1995

pdf bib
A Corpus-based Two-Way Design for Parameterized MT Systems: Rationale, Architecture and Training Issues
Keh-Yih Su | Jing-Shin Chang | Yu-Ling Una Hsu
Proceedings of the Sixth Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages

pdf bib
The New Generation BehaviorTran: Design Philosophy And System Architecture
Yu-Ling Una Hsu | Keh-Yih Su
Proceedings of Rocling VIII Computational Linguistics Conference VIII

1991

pdf bib
Constructing A Phrase Structure Grammar By Incorporating Linguistic Knowledge And Statistical Log-Likelihood Ratio
Keh-Yih Su | Yu-Ling Hsu | Claire Saillard
Proceedings of Rocling IV Computational Linguistics Conference IV