Andrew Mao
2024
Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis
Zongxia Li | Andrew Mao | Daniel Stephens | Pranav Goel | Emily Walpole | Alden Dima | Juan Fung | Jordan Boyd-Graber
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used; however, their validity has been questioned for neural topic models (NTMs), and they can overlook a model’s benefits in real-world applications. To this end, we conduct the first evaluation of neural, supervised, and classical topic models in an interactive, task-based setting. We combine topic models with a classifier and test their ability to help humans conduct content analysis and document annotation. Across simulated, real-user, and expert pilot studies, the Contextual Neural Topic Model performs best on cluster evaluation metrics and human evaluations; however, LDA is competitive with two other NTMs in our simulated experiments and user study, contrary to what coherence scores suggest. We show that current automated metrics do not provide a complete picture of topic modeling capabilities, but the right choice of NTM can outperform classical models on practical tasks.
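As a rough illustration of the gap the abstract describes between automated coherence and task-oriented cluster quality, the sketch below fits LDA, scores NPMI coherence, and scores the induced document clusters against gold labels. This is not the paper’s pipeline; gensim, scikit-learn, NMI, and all parameters are assumptions chosen for the example.

```python
# Illustrative sketch only: contrast an automated coherence score with a
# cluster-quality score computed against gold document labels.
# gensim/scikit-learn and all parameters below are assumptions, not the paper's setup.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel
from sklearn.metrics import normalized_mutual_info_score

def evaluate_lda(tokenized_docs, gold_labels, num_topics=20):
    dictionary = Dictionary(tokenized_docs)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, random_state=0)

    # Automated metric: NPMI topic coherence over the training texts.
    coherence = CoherenceModel(model=lda, texts=tokenized_docs,
                               dictionary=dictionary,
                               coherence="c_npmi").get_coherence()

    # Task-oriented metric: assign each document to its most probable topic
    # and compare the induced clustering to the gold labels.
    assignments = [max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
                   for bow in corpus]
    nmi = normalized_mutual_info_score(gold_labels, assignments)
    return coherence, nmi
```

A comparison along the lines of the paper would compute both kinds of scores for each neural, supervised, and classical model and check whether the two rankings agree.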
2022
Cheater’s Bowl: Human vs. Computer Search Strategies for Open-Domain QA
Wanrong He | Andrew Mao | Jordan Boyd-Graber
Findings of the Association for Computational Linguistics: EMNLP 2022
For humans and computers, the first step in answering an open-domain question is retrieving a set of relevant documents from a large corpus. However, the strategies that computers use differ fundamentally from those of humans. To better understand these differences, we design Cheater’s Bowl, a gamified data-collection interface in which a human answers complex questions with access to both traditional and modern search tools. We collect a dataset of human search sessions, analyze human search strategies, and compare them to state-of-the-art multi-hop QA models. Humans query logically, apply dynamic search chains, and use world knowledge to guide their searches. We demonstrate how human queries can improve the accuracy of existing systems and propose improvements to the future design of QA models.
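The retrieval step the abstract refers to can be sketched with a simple keyword retriever. BM25 via the rank_bm25 package is an assumed stand-in for the “traditional search tools” mentioned, not the Cheater’s Bowl interface or the multi-hop QA models studied.

```python
# Minimal retrieval sketch (assumed BM25 baseline, not the Cheater's Bowl system):
# given a question, return the top-ranked documents from a small corpus.
from rank_bm25 import BM25Okapi

corpus = [
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
    "Gustave Eiffel's company designed and built the tower.",
    "The Statue of Liberty was dedicated in 1886 in New York Harbor.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

question = "Who built the Eiffel Tower?"
top_docs = bm25.get_top_n(question.lower().split(), corpus, n=2)
print(top_docs)
```

Logging the sequence of queries a person issues against such an index, rather than a single automated query, is the kind of search-session data the paper collects and analyzes.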
2021
Eliciting Bias in Question Answering Models through Ambiguity
Andrew Mao | Naveen Raman | Matthew Shu | Eric Li | Franklin Yang | Jordan Boyd-Graber
Proceedings of the 3rd Workshop on Machine Reading for Question Answering
Question answering (QA) models use retriever and reader systems to answer questions. A QA system’s reliance on its training data can amplify or reflect inequity in its responses. Many QA models, such as those for the SQuAD dataset, are trained and tested on a subset of Wikipedia articles that encode their own biases and also reproduce real-world inequality. Understanding how training data affects bias in QA systems can inform methods to mitigate inequity. We develop two question sets, one for closed-domain and one for open-domain QA, that use ambiguous questions to probe QA models for bias. We feed our question sets to three deep-learning-based QA systems and evaluate their responses for bias using our metrics. We find that open-domain QA models amplify biases more than their closed-domain counterparts and propose that biases in the retriever surface more readily due to greater freedom of choice.
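As a toy version of the probing setup the abstract describes, the snippet below feeds ambiguous, underspecified questions to an off-the-shelf extractive QA pipeline; because the context never resolves the answers, the model’s choices can only come from learned associations. The Hugging Face model and the example questions are illustrative assumptions, not the paper’s question sets or metrics.

```python
# Toy probe (assumed setup, not the paper's question sets): the context never
# says who holds which job, so the answer the model extracts can reveal a
# learned association rather than evidence from the passage.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")  # assumed model choice

context = "Alice and Bob both work at the city hospital."
for question in ("Who is the doctor?", "Who is the nurse?"):
    result = qa(question=question, context=context)
    print(question, "->", result["answer"], round(result["score"], 3))
```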
Co-authors
- Jordan Boyd-Graber 3
- Wanrong He 1
- Naveen Raman 1
- Matthew Shu 1
- Eric Li 1