Arslan Basharat


2024

Defending Against Social Engineering Attacks in the Age of LLMs
Lin Ai | Tharindu Sandaruwan Kumarage | Amrita Bhattacharjee | Zizhou Liu | Zheng Hui | Michael S. Davinroy | James Cook | Laura Cassani | Kirill Trapeznikov | Matthias Kirchner | Arslan Basharat | Anthony Hoogs | Joshua Garland | Huan Liu | Julia Hirschberg
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Language Models are Alignable Decision-Makers: Dataset and Application to the Medical Triage Domain
Brian Hu | Bill Ray | Alice Leung | Amy Summerville | David Joy | Christopher Funk | Arslan Basharat
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)

In difficult decision-making scenarios, it is common to have conflicting opinions among expert human decision-makers, since there may not be a single right answer. Such decisions may be guided by different attributes that can be used to characterize an individual’s decision. We introduce a novel dataset for medical triage decision-making, labeled with a set of decision-maker attributes (DMAs). This dataset consists of 62 scenarios covering six different DMAs, including ethical principles such as fairness and moral desert. We present a novel software framework for human-aligned decision-making that utilizes these DMAs, paving the way for trustworthy AI with better guardrails. Specifically, we demonstrate how large language models (LLMs) can serve as ethical decision-makers, and how their decisions can be aligned to different DMAs using zero-shot prompting. Our experiments cover open-source models of varying sizes and training techniques, such as Falcon, Mistral, and Llama 2. Finally, we introduce a new form of weighted self-consistency that improves overall quantitative performance. Our results suggest new research directions for the use of LLMs as alignable decision-makers. The dataset and open-source software are publicly available at: https://github.com/ITM-Kitware/llm-alignable-dm.
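The abstract mentions a weighted variant of self-consistency, in which multiple sampled model answers are aggregated rather than relying on a single generation. The paper does not spell out its exact formulation here, so the following is only an illustrative sketch of the general idea: each sampled answer carries a weight (for instance, a confidence score derived from the model's output probabilities), and the final decision is the answer with the greatest total weight. Plain self-consistency is the special case where every weight equals 1. All function names and the example weights below are hypothetical.

```python
from collections import defaultdict

def weighted_self_consistency(samples):
    """Aggregate multiple sampled answers into a single final answer.

    samples: list of (answer, weight) pairs. The weight for each sample
    could be, e.g., a confidence score derived from the model's output
    probabilities; with all weights set to 1 this reduces to ordinary
    majority-vote self-consistency.
    """
    totals = defaultdict(float)
    for answer, weight in samples:
        totals[answer] += weight
    # The winning answer is the one with the highest accumulated weight.
    return max(totals, key=totals.get)

# Hypothetical sampled triage decisions with confidence weights:
samples = [
    ("treat patient A first", 0.9),
    ("treat patient B first", 0.5),
    ("treat patient A first", 0.7),
]
print(weighted_self_consistency(samples))  # → "treat patient A first"
```

Note that with unweighted voting the same mechanism applies; the weights simply let a few high-confidence samples outvote a larger number of low-confidence ones.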