Manish Nagireddy
2024
Value Alignment from Unstructured Text
Inkit Padhi | Karthikeyan Natesan Ramamurthy | Prasanna Sattigeri | Manish Nagireddy | Pierre Dognin | Kush R. Varshney
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Aligning large language models (LLMs) to value systems has emerged as a significant area of research within the fields of AI and NLP. Currently, this alignment process relies on the availability of high-quality supervised and preference data, which can be both time-consuming and expensive to curate or annotate. In this paper, we introduce a systematic end-to-end methodology for aligning LLMs to the implicit and explicit values represented in unstructured text data. Our proposed approach leverages scalable synthetic data generation techniques to effectively align the model to the values present in the unstructured data. Through two distinct use-cases, we demonstrate the effectiveness of our methodology on the Mistral-7B-Instruct model. Our approach credibly aligns LLMs to the values embedded within documents and shows improved performance over other approaches, as quantified by automatic metrics and win rates.
Language Models in Dialogue: Conversational Maxims for Human-AI Interactions
Erik Miehling | Manish Nagireddy | Prasanna Sattigeri | Elizabeth M. Daly | David Piorkowski | John T. Richards
Findings of the Association for Computational Linguistics: EMNLP 2024
Modern language models, while sophisticated, exhibit some inherent shortcomings, particularly in conversational settings. We claim that many of the observed shortcomings can be attributed to the violation of one or more conversational principles. By drawing upon extensive research from both the social science and AI communities, we propose a set of maxims – quantity, quality, relevance, manner, benevolence, and transparency – for describing effective human-AI conversation. We first justify the applicability of the first four maxims (from Grice) in the context of human-AI interactions. We then argue that two new maxims, benevolence (concerning the generation of, and engagement with, harmful content) and transparency (concerning recognition of one's knowledge boundaries, operational constraints, and intents), are necessary for addressing behavior unique to modern human-AI interactions. We evaluate the degree to which various language models are able to understand these maxims and find that models possess an internal prioritization of principles that can significantly impact their accurate interpretation of the maxims.