%0 Conference Proceedings
%T Massively Multilingual Neural Grapheme-to-Phoneme Conversion
%A Peters, Ben
%A Dehdari, Jon
%A van Genabith, Josef
%Y Bender, Emily
%Y Daumé III, Hal
%Y Ettinger, Allyson
%Y Rao, Sudha
%S Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems
%D 2017
%8 September
%I Association for Computational Linguistics
%C Copenhagen, Denmark
%F peters-etal-2017-massively
%X Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and automatic speech recognition systems. Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low resource languages, for which data and handcrafted rules are not available. As an alternative, we present a neural sequence-to-sequence approach to g2p which is trained on spelling–pronunciation pairs in hundreds of languages. The system shares a single encoder and decoder across all languages, allowing it to utilize the intrinsic similarities between different writing systems. We show an 11% improvement in phoneme error rate over an approach based on adapting high-resource monolingual g2p models to low-resource languages. Our model is also much more compact relative to previous approaches.
%R 10.18653/v1/W17-5403
%U https://aclanthology.org/W17-5403
%U https://doi.org/10.18653/v1/W17-5403
%P 19-26