Hadas Kotek


2023

DELPHI: Data for Evaluating LLMs’ Performance in Handling Controversial Issues
David Sun | Artem Abzaliev | Hadas Kotek | Christopher Klein | Zidi Xiu | Jason Williams
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track

Controversy is a reflection of our zeitgeist and an important aspect of any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to a wide range of questions. Consequently, it is crucial to systematically examine how these models respond to questions pertaining to ongoing debates. However, few datasets exist that provide human-annotated labels reflecting contemporary discussions. To foster research in this area, we propose a novel construction of a controversial questions dataset, expanding upon the publicly released Quora Question Pairs Dataset. This dataset presents challenges concerning knowledge recency, safety, fairness, and bias. We evaluate different LLMs using a subset of this dataset, illuminating how they handle controversial issues and the stances they adopt. This research ultimately contributes to our understanding of LLMs’ interaction with controversial issues, paving the way for improvements in their comprehension and handling of complex societal debates.

2020

Improving Human-Labeled Data through Dynamic Automatic Conflict Resolution
David Q. Sun | Hadas Kotek | Christopher Klein | Mayank Gupta | William Li | Jason D. Williams
Proceedings of the 28th International Conference on Computational Linguistics

This paper develops and implements a scalable methodology for (a) estimating the noisiness of labels produced by a typical crowdsourcing semantic annotation task, and (b) reducing the resulting error of the labeling process by as much as 20–30% in comparison to other common labeling strategies. Importantly, this new approach to the labeling process, which we name Dynamic Automatic Conflict Resolution (DACR), does not require a ground truth dataset and is instead based on inter-project annotation inconsistencies. This makes DACR not only more accurate but also applicable to a broad range of labeling tasks. In what follows, we present results from a text classification task performed at scale for a commercial personal assistant, and we evaluate the inherent ambiguity uncovered by this annotation strategy as compared to other common labeling strategies.