2024
Value Alignment from Unstructured Text
Inkit Padhi | Karthikeyan Natesan Ramamurthy | Prasanna Sattigeri | Manish Nagireddy | Pierre Dognin | Kush R. Varshney
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Aligning large language models (LLMs) to value systems has emerged as a significant area of research within the fields of AI and NLP. Currently, this alignment process relies on the availability of high-quality supervised and preference data, which can be both time-consuming and expensive to curate or annotate. In this paper, we introduce a systematic end-to-end methodology for aligning LLMs to the implicit and explicit values represented in unstructured text data. Our proposed approach leverages scalable synthetic data generation techniques to effectively align the model to the values present in the unstructured data. Through two distinct use cases, we demonstrate the effectiveness of our methodology on the Mistral-7B-Instruct model. Our approach credibly aligns LLMs to the values embedded within documents and shows improved performance over other approaches, as quantified through automatic metrics and win rates.
Language Models in Dialogue: Conversational Maxims for Human-AI Interactions
Erik Miehling | Manish Nagireddy | Prasanna Sattigeri | Elizabeth M. Daly | David Piorkowski | John T. Richards
Findings of the Association for Computational Linguistics: EMNLP 2024
Modern language models, while sophisticated, exhibit some inherent shortcomings, particularly in conversational settings. We claim that many of the observed shortcomings can be attributed to the violation of one or more conversational principles. By drawing upon extensive research from both the social science and AI communities, we propose a set of maxims – quantity, quality, relevance, manner, benevolence, and transparency – for describing effective human-AI conversation. We first justify the applicability of the first four maxims (from Grice) in the context of human-AI interactions. We then argue that two new maxims, benevolence (concerning the generation of, and engagement with, harmful content) and transparency (concerning recognition of one’s knowledge boundaries, operational constraints, and intents), are necessary for addressing behavior unique to modern human-AI interactions. We evaluate the degree to which various language models are able to understand these maxims and find that models possess an internal prioritization of principles that can significantly impact their accurate interpretation of the maxims.
2023
Reliable Gradient-free and Likelihood-free Prompt Tuning
Maohao Shen | Soumya Ghosh | Prasanna Sattigeri | Subhro Das | Yuheng Bu | Gregory Wornell
Findings of the Association for Computational Linguistics: EACL 2023
Due to privacy or commercial constraints, large pre-trained language models (PLMs) are often offered as black-box APIs. Fine-tuning such models to downstream tasks is challenging because one can neither access the model’s internal representations nor propagate gradients through it. This paper addresses these challenges by developing techniques for adapting PLMs with only API access. Building on recent work on soft prompt tuning, we develop methods to tune the soft prompts without requiring gradient computation. Further, we develop extensions that in addition to not requiring gradients also do not need to access any internal representation of the PLM beyond the input embeddings. Moreover, instead of learning a single prompt, our methods learn a distribution over prompts allowing us to quantify predictive uncertainty. Ours is the first work to consider uncertainty in prompts when only having API access to the PLM. Finally, through extensive experiments, we carefully vet the proposed methods and find them competitive with (and sometimes even improving on) gradient-based approaches with full access to the PLM.
2016
Sparsifying Word Representations for Deep Unordered Sentence Modeling
Prasanna Sattigeri | Jayaraman J. Thiagarajan
Proceedings of the 1st Workshop on Representation Learning for NLP