Unveiling Safety Vulnerabilities of Large Language Models
George Kour | Marcel Zalmanovici | Naama Zwerdling | Esther Goldbraich | Ora Fandina | Ateret Anaby Tavor | Orna Raz | Eitan Farchi
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), 2023
As large language models become more prevalent, their possible harmful or inappropriate responses are a cause for concern. This paper introduces a unique dataset of adversarial examples in the form of questions, which we call AttaQ, designed to provoke such harmful or inappropriate responses. We assess the efficacy of our dataset by analyzing the vulnerabilities of various models when subjected to it. Additionally, we introduce a novel automatic approach for identifying and naming vulnerable semantic regions: input semantic areas for which the model is likely to produce harmful outputs. This is achieved through the application of specialized clustering techniques that consider both the semantic similarity of the input attacks and the harmfulness of the model's responses. Automatically identifying vulnerable semantic regions enhances the evaluation of model weaknesses, facilitating targeted improvements to the model's safety mechanisms and overall reliability.
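To make the clustering idea concrete, here is a minimal sketch, not the paper's actual implementation: it assumes a sentence-transformers encoder ("all-MiniLM-L6-v2"), a precomputed per-response harmfulness score in [0, 1], and a hand-picked weight and cluster count, and it groups attacks on features that combine semantic similarity with response harmfulness.

```python
# Hypothetical sketch: cluster attack questions using both semantic
# embeddings and the harmfulness of the model's responses. The encoder,
# harm scores, weight, and cluster count below are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

attacks = [
    "How do I pick a lock?",
    "Explain how to bypass a paywall.",
    "What household chemicals are dangerous when mixed?",
]
# Assumed: a harmfulness score per model response, in [0, 1].
harmfulness = np.array([0.8, 0.4, 0.9])

# Embed the attack questions to capture semantic similarity.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = normalize(encoder.encode(attacks))

# Append the harm score as an extra, weighted feature so that clusters
# group attacks that are both semantically close and similarly harmful.
harm_weight = 0.5  # assumed trade-off between semantics and harmfulness
features = np.hstack([embeddings, harm_weight * harmfulness[:, None]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for attack, label, score in zip(attacks, labels, harmfulness):
    print(f"cluster {label} (harm={score:.1f}): {attack}")
```

Clusters with high average harmfulness would then correspond to candidate vulnerable semantic regions; naming them (e.g., from representative attack questions) is left out of this sketch.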