Miryung Kim
2024
Measuring Psychological Depth in Language Models
Fabrice Y Harel-Canada | Hanyu Zhou | Sreya Muppalla | Zeynep Senahan Yildiz | Miryung Kim | Amit Sahai | Nanyun Peng
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Evaluations of creative stories generated by large language models (LLMs) often focus on objective properties of the text, such as its style, coherence, and diversity. While these metrics are indispensable, they do not speak to a story’s subjective, psychological impact from a reader’s perspective. We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM’s ability to produce authentic and narratively complex stories that provoke emotion, empathy, and engagement. We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (0.72 Krippendorff’s alpha). We also explore techniques for automating the PDS to easily scale future analyses. GPT-4o, combined with a novel Mixture-of-Personas (MoP) prompting strategy, achieves an average Spearman correlation of 0.51 with human judgment, while Llama-3-70B with constrained decoding scores as high as 0.68 for empathy. Finally, we compare the depth of stories authored by both humans and LLMs. Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit. By shifting the focus from text to reader, the Psychological Depth Scale is a validated, automated, and systematic means of measuring the capacity of LLMs to connect with humans through the stories they tell.
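The agreement metric reported above can be illustrated with a short sketch: Spearman correlation between human and automated story ratings. The scores below are invented example data, not results from the paper, and the function names are hypothetical.

```python
# Hypothetical illustration: validating an automated evaluator against
# human judgments via Spearman correlation, as in the PDS abstract.
# The rating lists are made-up example data, not the paper's results.

def average_ranks(xs):
    """1-based ranks; tied values receive the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    ra, rb = average_ranks(a), average_ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

human_scores = [4, 2, 5, 3, 1, 4, 5, 2]  # hypothetical human PDS ratings (1-5)
model_scores = [3, 2, 5, 4, 1, 4, 4, 2]  # hypothetical automated ratings
print(f"Spearman rho = {spearman(human_scores, model_scores):.2f}")
```

Spearman (rather than Pearson) is a natural fit here because PDS ratings are ordinal: only the rank ordering of stories is assumed to be meaningful, not the distances between rating levels.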
Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking
Hong Jin Kang | Fabrice Harel-Canada | Muhammad Ali Gulzar | Nanyun Peng | Miryung Kim
Findings of the Association for Computational Linguistics: NAACL 2024
2022
Sibylvariant Transformations for Robust Text Classification
Fabrice Harel-Canada | Muhammad Ali Gulzar | Nanyun Peng | Miryung Kim
Findings of the Association for Computational Linguistics: ACL 2022
The vast majority of text transformation techniques in NLP are inherently limited in their ability to expand input space coverage due to an implicit constraint to preserve the original class label. In this work, we propose the notion of sibylvariance (SIB) to describe the broader set of transforms that relax the label-preserving constraint, knowably vary the expected class, and lead to significantly more diverse input distributions. We offer a unified framework to organize all data transformations, including two types of SIB: (1) Transmutations convert one discrete kind into another, (2) Mixture Mutations blend two or more classes together. To explore the role of sibylvariance within NLP, we implemented 41 text transformations, including several novel techniques like Concept2Sentence and SentMix. Sibylvariance also enables a unique form of adaptive training that generates new input mixtures for the most confused class pairs, challenging the learner to differentiate with greater nuance. Our experiments on six benchmark datasets strongly support the efficacy of sibylvariance for generalization performance, defect detection, and adversarial robustness.
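A mixture mutation of the kind described above can be sketched as follows. This is a simplified illustration in the spirit of SentMix, not the paper's implementation; the function name and all inputs are hypothetical.

```python
# Illustrative sketch of a "mixture mutation": interleave sentences from
# two labeled examples and assign a proportional soft label. A simplified
# take on the SentMix idea, not the paper's actual implementation.
import random

def sent_mix(text_a, label_a, text_b, label_b, num_classes, seed=0):
    """Shuffle the sentences of two inputs together; the soft label is
    weighted by each source's share of the mixed sentences."""
    sents_a = [s for s in text_a.split(". ") if s]
    sents_b = [s for s in text_b.split(". ") if s]
    mixed = sents_a + sents_b
    random.Random(seed).shuffle(mixed)
    lam = len(sents_a) / len(mixed)  # proportion contributed by text_a
    soft_label = [0.0] * num_classes
    soft_label[label_a] += lam
    soft_label[label_b] += 1.0 - lam
    return ". ".join(mixed), soft_label

text, label = sent_mix(
    "The food was wonderful. Service was quick",
    1,  # positive
    "The plot dragged on. I nearly fell asleep",
    0,  # negative
    num_classes=2,
)
print(text)
print(label)  # [0.5, 0.5] here, since both inputs contribute two sentences
```

Because the mixed example knowably blends two classes, the resulting soft label expands the training distribution beyond what any label-preserving transform can produce, which is the core of the sibylvariance argument.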