Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs

Bilgehan Sel, Priya Shanmugasundaram, Mohammad Kachuee, Kun Zhou, Ruoxi Jia, Ming Jin


Abstract
Large Language Models (LLMs) have shown remarkable capabilities in tasks such as summarization, arithmetic reasoning, and question answering. However, they encounter significant challenges in the domain of moral reasoning and ethical decision-making, especially in complex scenarios with multiple stakeholders. This paper introduces the Skin-in-the-Game (SKIG) framework, aimed at enhancing moral reasoning in LLMs by exploring decisions’ consequences from multiple stakeholder perspectives. The core components of the framework consist of simulating accountability for decisions, conducting empathy exercises on different stakeholders, and evaluating the risks associated with the impacts of potential actions. We study SKIG’s performance across various moral reasoning benchmarks with proprietary and open-source LLMs, and investigate its crucial components through extensive ablation analyses. Our framework exhibits marked improvements in performance compared to baselines across different language models and benchmarks.
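For orientation only, here is a minimal Python sketch of how the three components named in the abstract (accountability simulation, per-stakeholder empathy exercises, and risk evaluation of candidate actions) might be wired into a prompting loop. Everything here is an assumption: the `complete` function is a stand-in for any LLM call, and all identifiers and prompt wording are illustrative, not taken from the authors' implementation, whose actual prompts and aggregation are specified in the paper.

import re

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (swap in any client)."""
    raise NotImplementedError

def _first_number(text: str) -> float:
    """Pull the first number out of a free-text rating; default to 0."""
    m = re.search(r"\d+(\.\d+)?", text)
    return float(m.group()) if m else 0.0

def skig_decide(scenario: str, actions: list[str]) -> str:
    # 1. Identify the stakeholders affected by the scenario.
    stakeholders = complete(
        f"Scenario: {scenario}\nList the stakeholders affected, one per line."
    ).splitlines()

    scores = {}
    for action in actions:
        # 2. Empathy exercise: describe the outcome from each stakeholder's
        #    view, with the model framed as accountable for the consequences.
        perspectives = [
            complete(
                f"Scenario: {scenario}\nAction: {action}\n"
                f"You are accountable for this decision. As {s}, "
                "describe how this action would affect you."
            )
            for s in stakeholders
        ]
        # 3. Risk evaluation: aggregate per-stakeholder impacts into a score.
        scores[action] = _first_number(complete(
            "Given these stakeholder impacts:\n"
            + "\n".join(perspectives)
            + "\nRate the overall desirability of the action from 1 to 10."
        ))

    # 4. Choose the action judged best across stakeholders.
    return max(actions, key=lambda a: scores[a])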
Anthology ID:
2024.acl-long.751
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
13921–13959
URL:
https://aclanthology.org/2024.acl-long.751
Cite (ACL):
Bilgehan Sel, Priya Shanmugasundaram, Mohammad Kachuee, Kun Zhou, Ruoxi Jia, and Ming Jin. 2024. Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13921–13959, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs (Sel et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.751.pdf