OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs

Patrick Haller, Ansar Aynetdinov, Alan Akbik


Abstract
Instruction-tuned Large Language Models (LLMs) have recently showcased a remarkable ability to generate fitting responses to natural language instructions. However, an open research question concerns the inherent biases of trained models and their responses. For instance, if the data used to tune an LLM is predominantly written by persons with a specific political bias, we might expect generated answers to share this bias. Current research seeks to de-bias such models or to suppress potentially biased answers. With this demonstration, we take a different view on biases in instruction-tuning: rather than aiming to suppress them, we aim to make them explicit and transparent. To this end, we present OpinionGPT, a web demo in which users can ask questions and select all biases they wish to investigate. The demo answers each question using a model fine-tuned on text representing each of the selected biases, allowing side-by-side comparison. To train the underlying model, we identified 11 different biases (political, geographic, gender, age) and derived an instruction-tuning corpus in which each answer was written by members of one of these demographics. This paper presents OpinionGPT, illustrates how we trained the bias-aware model, and showcases the web application (available at https://opiniongpt.informatik.hu-berlin.de).
Anthology ID:
2024.naacl-demo.8
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kai-Wei Chang, Annie Lee, Nazneen Rajani
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
78–86
URL:
https://aclanthology.org/2024.naacl-demo.8
DOI:
10.18653/v1/2024.naacl-demo.8
Cite (ACL):
Patrick Haller, Ansar Aynetdinov, and Alan Akbik. 2024. OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations), pages 78–86, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs (Haller et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-demo.8.pdf