Justin Grimmer


2024

AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments
Till Raphael Saenger | Musashi Hinck | Justin Grimmer | Brandon M. Stewart
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large collection of arguments and gather human evaluations of their persuasiveness. Next, we develop a novel topic model to identify the features of these arguments that influence persuasion. Finally, we use the model to predict the persuasiveness of new arguments and to assess the causal effects of argument components, offering an explanation of the results. We demonstrate the effectiveness of AutoPersuade in an experimental study on arguments for veganism, validating our findings through human studies and out-of-sample predictions.
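
A minimal sketch of the three-step pipeline the abstract describes. The paper introduces a novel topic model; scikit-learn's off-the-shelf LDA stands in for it here, and the arguments, ratings, and model settings below are hypothetical placeholders rather than the paper's data or method.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

# Step 1: a curated collection of arguments with human persuasiveness ratings.
arguments = [
    "Eating plants reduces animal suffering.",
    "A vegan diet lowers your carbon footprint.",
    "Plant-based meals can cost less than meat-based ones.",
    "Vegan diets are linked to better long-term health.",
]
ratings = [0.7, 0.6, 0.4, 0.5]  # hypothetical mean persuasiveness scores

# Step 2: represent each argument by its topic proportions.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(arguments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # document-topic proportions

# Step 3: relate topic proportions to ratings, then score a new argument.
model = LinearRegression().fit(topics, ratings)
new_topics = lda.transform(vectorizer.transform(
    ["Vegan diets conserve water resources."]))
print(model.predict(new_topics))  # predicted persuasiveness
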

2022

Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond
Amir Feder | Katherine A. Keith | Emaad Manzoor | Reid Pryzant | Dhanya Sridhar | Zach Wood-Doughty | Jacob Eisenstein | Justin Grimmer | Roi Reichart | Margaret E. Roberts | Brandon M. Stewart | Victor Veitch | Diyi Yang
Transactions of the Association for Computational Linguistics, Volume 10

A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets, and clear articulations of the challenges and opportunities in the application of causal inference to the textual domain, with its unique properties. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects with text, encompassing settings where text is used as an outcome, treatment, or to address confounding. In addition, we explore potential uses of causal inference to improve the robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the NLP community.
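
A minimal sketch of one setting the survey covers, text used to address confounding: embed the text and include the embedding as covariates when estimating a treatment effect. The data, variable names, and SVD-based embedding below are hypothetical illustrations, not the survey's code.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
texts = [
    "service was slow and the room was cold",
    "great food and a friendly host",
    "rude waiter, long wait",
    "lovely ambiance and quick service",
    "cold meal arrived very late",
    "staff were warm and attentive",
]
treatment = np.array([0, 1, 0, 1, 0, 1])           # e.g., received a coupon
outcome = 0.5 * treatment + rng.normal(0, 0.1, 6)  # toy outcome

# Embed the potentially confounding text and adjust for it as covariates.
text_features = TruncatedSVD(n_components=2, random_state=0).fit_transform(
    TfidfVectorizer().fit_transform(texts))
X = np.column_stack([treatment, text_features])

fit = LinearRegression().fit(X, outcome)
print("adjusted treatment coefficient:", fit.coef_[0])
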

2021

Proceedings of the First Workshop on Causal Inference and NLP
Amir Feder | Katherine Keith | Emaad Manzoor | Reid Pryzant | Dhanya Sridhar | Zach Wood-Doughty | Jacob Eisenstein | Justin Grimmer | Roi Reichart | Molly Roberts | Uri Shalit | Brandon Stewart | Victor Veitch | Diyi Yang
Proceedings of the First Workshop on Causal Inference and NLP

2016

Discovery of Treatments from Text Corpora
Christian Fong | Justin Grimmer
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

TopicCheck: Interactive Alignment for Assessing Topic Model Stability
Jason Chuang | Margaret E. Roberts | Brandon M. Stewart | Rebecca Weiss | Dustin Tingley | Justin Grimmer | Jeffrey Heer
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Creating and Destroying Party Brands
Justin Grimmer
Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science