%0 Conference Proceedings
%T Ultra-Fine Entity Typing
%A Choi, Eunsol
%A Levy, Omer
%A Choi, Yejin
%A Zettlemoyer, Luke
%Y Gurevych, Iryna
%Y Miyao, Yusuke
%S Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2018
%8 July
%I Association for Computational Linguistics
%C Melbourne, Australia
%F choi-etal-2018-ultra
%X We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state-of-the-art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.
%R 10.18653/v1/P18-1009
%U https://aclanthology.org/P18-1009
%U https://doi.org/10.18653/v1/P18-1009
%P 87-96