2024
The Grid: A semi-automated tool to support expert-driven modeling
Allegra A. Beal Cohen | Maria Alexeeva | Keith Alcock | Mihai Surdeanu
Proceedings of the 1st Workshop on NLP for Science (NLP4Science)
When building models of human behavior, we often struggle to find data that capture important factors at the right level of granularity. In these cases, we must rely on expert knowledge to build models. To help partially automate the organization of expert knowledge for modeling, we combine natural language processing (NLP) and machine learning (ML) methods in a tool called the Grid. The Grid helps users organize textual knowledge into clickable cells along two dimensions using iterative, collaborative clustering. We conduct a user study to explore participants’ reactions to the Grid, as well as to investigate whether its clustering feature helps participants organize a corpus of expert knowledge. We find that participants using the Grid’s clustering feature appeared to work more efficiently than those without it, but written feedback about the clustering was critical. We conclude that the general design of the Grid was positively received and that some of the user challenges can likely be mitigated through the use of LLMs.
2023
Annotating and Training for Population Subjective Views
Maria Alexeeva | Caroline Hyland | Keith Alcock | Allegra A. Beal Cohen | Hubert Kanyamahanga | Isaac Kobby Anni | Mihai Surdeanu
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
In this paper, we present a dataset of subjective views (beliefs and attitudes) held by individuals or groups. We analyze the usefulness of the dataset by training a neural classifier that identifies belief-containing sentences that are relevant for our broader project of interest—scientific modeling of complex systems. We also explore and discuss difficulties related to annotation of subjective views and propose ways of addressing them.
2022
Combining Extraction and Generation for Constructing Belief-Consequence Causal Links
Maria Alexeeva | Allegra A. Beal Cohen | Mihai Surdeanu
Proceedings of the Third Workshop on Insights from Negative Results in NLP
In this paper, we introduce and justify a new task—causal link extraction based on beliefs—and do a qualitative analysis of the ability of a large language model—InstructGPT-3—to generate implicit consequences of beliefs. With the language model-generated consequences being promising, but not consistent, we propose directions of future work, including data collection, explicit consequence extraction using rule-based and language modeling-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.