2025
Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities
Chung-En Sun | Xiaodong Liu | Weiwei Yang | Tsui-Wei Weng | Hao Cheng | Aidan San | Michel Galley | Jianfeng Gao
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Recent research has shown that Large Language Models (LLMs) are vulnerable to automated jailbreak attacks, in which algorithmically crafted adversarial suffixes, appended to harmful queries, bypass safety alignment and trigger unintended responses. Current methods for generating these suffixes are computationally expensive and achieve low Attack Success Rates (ASR), especially against well-aligned models such as Llama2 and Llama3. To overcome these limitations, we introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability. Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100% ASR on various open-source LLMs. Moreover, it exhibits strong attack transferability to closed-source models, achieving 99% ASR on GPT-3.5 and 49% ASR on GPT-4, despite being optimized solely on Llama3. Beyond improving jailbreak ability, ADV-LLM provides valuable insights for future safety alignment research through its ability to generate large datasets for studying LLM safety.
2019
A Multilingual Topic Model for Learning Weighted Topic Links Across Corpora with Low Comparability
Weiwei Yang | Jordan Boyd-Graber | Philip Resnik
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Multilingual topic models (MTMs) learn topics on documents in multiple languages. Past models align topics across languages by implicitly assuming the documents in different languages are highly comparable, often a false assumption. We introduce a new model that does not rely on this assumption, particularly useful in important low-resource language scenarios. Our MTM learns weighted topic links and connects cross-lingual topics only when the dominant words defining them are similar, outperforming LDA and previous MTMs in classification tasks using documents’ topic posteriors as features. It also learns coherent topics on documents with low comparability.
2017
Adapting Topic Models using Lexical Associations with Tree Priors
Weiwei Yang | Jordan Boyd-Graber | Philip Resnik
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Models work best when they are optimized with the evaluation criteria that people care about in mind. For topic models, people often care about interpretability, which can be approximated using measures of lexical association. We integrate lexical association into topic optimization using tree priors, which provide a flexible framework that can take advantage of both first-order word associations and the higher-order associations captured by word embeddings. Tree priors improve topic interpretability without hurting extrinsic performance.
2016
A Discriminative Topic Model using Document Network Structure
Weiwei Yang | Jordan Boyd-Graber | Philip Resnik
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2015
Birds of a Feather Linked Together: A Discriminative Topic Model using Link-based Priors
Weiwei Yang | Jordan Boyd-Graber | Philip Resnik
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing