Taishi Ikeda


2016

Japanese Text Normalization with Encoder-Decoder Model
Taishi Ikeda | Hiroyuki Shindo | Yuji Matsumoto
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

Text normalization is the task of transforming lexical variants into their canonical forms. We model text normalization as a character-level sequence-to-sequence learning problem and present a neural encoder-decoder model for solving it. Training an encoder-decoder model generally requires many sentence pairs; however, parallel corpora pairing Japanese non-standard forms with their canonical forms are scarce. To address this issue, we propose a data augmentation method that increases the training data size by converting existing resources into synthesized non-standard forms using handcrafted rules. Our experiments demonstrate that the synthesized corpus contributes to stable training of the encoder-decoder model and improves the performance of Japanese text normalization.
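The augmentation idea in the abstract (converting canonical text into synthesized non-standard forms with handcrafted rules) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual rule set: the rules, probabilities, and examples below are invented for demonstration.

```python
import random

# Hypothetical handcrafted rules (illustrative only, not the paper's rules):
# each maps a canonical substring to a non-standard variant of the kind
# found in Japanese user-generated text.
RULES = [
    ("い", "ぃ"),    # small-kana substitution
    ("う", "ー"),    # long-vowel mark, e.g. ありがとう -> ありがとー
    ("ね", "ねぇ"),  # emphatic lengthening
]

def synthesize_pairs(canonical_sentences, seed=0):
    """Convert canonical sentences into (non-standard, canonical) training
    pairs by stochastically applying the handcrafted rules."""
    rng = random.Random(seed)
    pairs = []
    for sent in canonical_sentences:
        noisy = sent
        for src, tgt in RULES:
            # Apply each applicable rule with probability 0.5 so that
            # repeated sentences can yield different variants.
            if src in noisy and rng.random() < 0.5:
                noisy = noisy.replace(src, tgt)
        if noisy != sent:  # keep only pairs where noise was introduced
            pairs.append((noisy, sent))
    return pairs

pairs = synthesize_pairs(["ありがとう", "かわいいね"])
```

The resulting pairs would serve as the parallel data for the character-level encoder-decoder, with the synthesized non-standard side as input and the original canonical side as the target.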