%0 Conference Proceedings
%T Learning with Contrastive Examples for Data-to-Text Generation
%A Uehara, Yui
%A Ishigaki, Tatsuya
%A Aoki, Kasumi
%A Noji, Hiroshi
%A Goshima, Keiichi
%A Kobayashi, Ichiro
%A Takamura, Hiroya
%A Miyao, Yusuke
%Y Scott, Donia
%Y Bel, Nuria
%Y Zong, Chengqing
%S Proceedings of the 28th International Conference on Computational Linguistics
%D 2020
%8 December
%I International Committee on Computational Linguistics
%C Barcelona, Spain (Online)
%F uehara-etal-2020-learning
%X Existing models for data-to-text tasks generate fluent but sometimes incorrect sentences, e.g., “Nikkei gains” is generated when “Nikkei drops” is expected. We investigate models trained on contrastive examples, i.e., incorrect sentences or terms, in addition to correct ones, to reduce such errors. We first create rules to produce contrastive examples from correct ones by replacing frequent crucial terms such as “gain” or “drop”. We then use learning methods with several losses that exploit contrastive examples. Experiments on the market comment generation task show that 1) exploiting contrastive examples improves the capability of generating sentences with better lexical choice, without degrading fluency, 2) the choice of the loss function is an important factor because performance on different metrics depends on the type of loss function, and 3) the use of examples produced by some specific rules further improves performance. Human evaluation also supports the effectiveness of using contrastive examples.
%R 10.18653/v1/2020.coling-main.213
%U https://aclanthology.org/2020.coling-main.213
%U https://doi.org/10.18653/v1/2020.coling-main.213
%P 2352-2362