2024
Do *they* mean ‘us’? Interpreting Referring Expression variation under Intergroup Bias
Venkata S Govindarajan | Matianyu Zang | Kyle Mahowald | David Beaver | Junyi Jessy Li
Findings of the Association for Computational Linguistics: EMNLP 2024
The variations between in-group and out-group speech (intergroup bias) are subtle and could underlie many social phenomena like stereotype perpetuation and implicit bias. In this paper, we model intergroup bias as a tagging task on English sports comments from forums dedicated to fandom for NFL teams. We curate a dataset of over 6 million game-time comments from opposing perspectives (the teams in the game), each comment grounded in a non-linguistic description of the events that precipitated these comments (live win probabilities for each team). Expert and crowd annotations justify modeling the bias through tagging of implicit and explicit referring expressions and reveal the rich, contextual understanding of language and the world required for this task. For large-scale analysis of intergroup variation, we use LLMs for automated tagging, and discover that LLMs occasionally perform better when prompted with linguistic descriptions of the win probability at the time of the comment, rather than numerical probability. Further, large-scale tagging of comments using LLMs uncovers linear variations in the form of referent across win probabilities that distinguish in-group and out-group utterances.
2023
Lil-Bevo: Explorations of Strategies for Training Language Models in More Humanlike Ways
Venkata S Govindarajan | Juan Diego Rodriguez | Kaj Bostrom | Kyle Mahowald
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
2022
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks.
Venelin Kovatchev | Trina Chatterjee | Venkata S Govindarajan | Jifan Chen | Eunsol Choi | Gabriella Chronis | Anubrata Das | Katrin Erk | Matthew Lease | Junyi Jessy Li | Yating Wu | Kyle Mahowald
Proceedings of the First Workshop on Dynamic Adversarial Data Collection
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team “longhorns” on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first (pending validation), with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.