%0 Conference Proceedings
%T AmbigQA: Answering Ambiguous Open-domain Questions
%A Min, Sewon
%A Michael, Julian
%A Hajishirzi, Hannaneh
%A Zettlemoyer, Luke
%Y Webber, Bonnie
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F min-etal-2020-ambigqa
%X Ambiguity is inherent to open-domain question answering; especially when exploring new topics, it can be difficult to ask questions that have a single, unambiguous answer. In this paper, we introduce AmbigQA, a new open-domain question answering task which involves finding every plausible answer, and then rewriting the question for each one to resolve the ambiguity. To study this task, we construct AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous, with diverse sources of ambiguity such as event and entity references. We also present strong baseline models for AmbigQA which we show benefit from weakly supervised learning that incorporates NQ-open, strongly suggesting our new task and data will support significant future research effort. Our data and baselines are available at https://nlp.cs.washington.edu/ambigqa.
%R 10.18653/v1/2020.emnlp-main.466
%U https://aclanthology.org/2020.emnlp-main.466
%U https://doi.org/10.18653/v1/2020.emnlp-main.466
%P 5783-5797