Kai Konen
2024
Style Vectors for Steering Generative Large Language Models
Kai Konen | Sophie Jentzsch | Diaoulé Diallo | Peer Schütt | Oliver Bensch | Roxanne El Baff | Dominik Opitz | Tobias Hecking
Findings of the Association for Computational Linguistics: EACL 2024
This research explores strategies for steering the output of large language models (LLMs) towards specific styles, such as sentiment, emotion, or writing style, by adding style vectors to the activations of hidden layers during text generation. We show that style vectors can be computed simply from recorded layer activations for input texts in a specific style, in contrast to more complex training-based approaches. Through a series of experiments, we demonstrate the effectiveness of activation engineering using such style vectors to influence the style of generated text in a nuanced and parameterisable way, distinguishing it from prompt engineering. The presented research constitutes a significant step towards developing more adaptive and effective AI-empowered interactive systems.
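A minimal sketch of the activation-steering idea described in the abstract, assuming a Hugging Face transformers causal LM. The placeholder model (GPT-2), the chosen layer index, the steering strength, and the example texts are illustrative assumptions, not the paper's configuration.

```python
# Sketch: compute a "style vector" from layer activations and add it to a
# hidden layer during generation (activation steering). Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6      # transformer block whose output is steered (assumed choice)
LAMBDA = 4.0   # steering strength; would be tuned per style in practice

def mean_activation(texts, layer):
    """Average activation of the given block's output over a set of texts."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[layer + 1] is the output of block `layer`;
        # average over the sequence dimension -> one vector per text
        acts.append(out.hidden_states[layer + 1].mean(dim=1).squeeze(0))
    return torch.stack(acts).mean(dim=0)

# Style vector: difference of mean activations for styled vs. neutral texts
styled_texts = ["What a wonderful, delightful day!", "I love this so much."]
neutral_texts = ["The meeting is at three.", "The report has ten pages."]
style_vector = mean_activation(styled_texts, LAYER) - mean_activation(neutral_texts, LAYER)

def steer_hook(module, inputs, output):
    """Forward hook that adds the scaled style vector to the block output."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + LAMBDA * style_vector.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
prompt = tok("The weather today is", return_tensors="pt")
generated = model.generate(**prompt, max_new_tokens=30, do_sample=True)
handle.remove()
print(tok.decode(generated[0], skip_special_tokens=True))
```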
Improving Argument Effectiveness Across Ideologies using Instruction-tuned Large Language Models
Roxanne El Baff | Khalid Al Khatib | Milad Alshomary | Kai Konen | Benno Stein | Henning Wachsmuth
Findings of the Association for Computational Linguistics: EMNLP 2024
Different political ideologies (e.g., liberal and conservative Americans) hold different worldviews, which leads to opposing stances on issues such as gun control and thereby fosters societal polarization. Arguments are a means of bringing the perspectives of people with different ideologies closer together, depending on how well they reach their audience. In this paper, we study how to computationally turn ineffective arguments into effective arguments for people with certain ideologies by using instruction-tuned large language models (LLMs), looking closely at style features. For development and evaluation, we collect ineffective arguments per ideology from debate.org and generate about 30k rewritten arguments using three LLM methods tailored to our task: zero-shot prompting, few-shot prompting, and LLM steering. Our experiments provide evidence that LLMs naturally improve argument effectiveness for liberals. Our LLM-based and human evaluations show a clear preference for the rewritten arguments. Code and a link to the data are available here: https://github.com/roxanneelbaff/emnlp2024-iesta.
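A minimal sketch of the zero-shot rewriting setup named in the abstract. The prompt wording, the placeholder model, and the `rewrite` helper are illustrative assumptions, not the prompts or models used in the paper.

```python
# Sketch: zero-shot prompting an instruction-style LM to rewrite an
# ineffective argument for a target ideology. Illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

ZERO_SHOT_TEMPLATE = (
    "Rewrite the following argument so that it is more effective for a "
    "{ideology} audience, keeping its stance and main points:\n\n"
    "{argument}\n\nRewrite:"
)

def rewrite(argument: str, ideology: str) -> str:
    """Build the zero-shot prompt and return only the generated continuation."""
    prompt = ZERO_SHOT_TEMPLATE.format(ideology=ideology, argument=argument)
    out = generator(prompt, max_new_tokens=120, do_sample=True)[0]["generated_text"]
    return out[len(prompt):].strip()

print(rewrite("Gun control obviously works, end of story.", "conservative"))
```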