As the quality of Machine Translation (MT) improves, research on improving discourse in automatic translations becomes more viable, and the amount of work on discourse in MT has grown accordingly. However, many existing models and metrics have yet to integrate these insights, partly because the evaluation methodology still relies largely on matching against a single reference. At a time when MT is increasingly used as a component in pipelines for other tasks, the semantic element of the translation process needs to be properly integrated into the task. Moreover, to take MT to the next level, systems will need to be judged not against a single reference translation, but on notions of fluency and adequacy, ideally with reference to the source text.
We describe COHERE, our coherence toolkit, which incorporates various complementary models for capturing and measuring different aspects of text coherence. In addition to the traditional entity grid model (Barzilay and Lapata, 2005) and the graph-based metric of Guinaudeau and Strube (2013), we provide an implementation of a state-of-the-art syntax-based model (Louis and Nenkova, 2012), as well as an adaptation of this model which shows significant performance improvements in our experiments. We benchmark these models in the standard setting for text coherence evaluation: discriminating between original documents and versions of the same document with sentences in shuffled order.
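To make the benchmark setting concrete, the following sketch illustrates the standard pairwise discrimination protocol: a coherence model scores each original document against sentence-shuffled permutations of it, and accuracy is the fraction of pairs in which the original receives the higher score. The `score` function here is a hypothetical placeholder standing in for any of the toolkit's models (entity grid, graph metric, syntax model), not COHERE's actual API.

```python
import random

def score(document):
    """Placeholder coherence scorer; swap in a real model here.

    A document is a list of sentences. As a crude proxy for local
    coherence, this dummy rewards word overlap between adjacent
    sentences (it is NOT one of COHERE's models).
    """
    overlaps = [
        len(set(a.lower().split()) & set(b.lower().split()))
        for a, b in zip(document, document[1:])
    ]
    return sum(overlaps) / max(len(overlaps), 1)

def discrimination_accuracy(documents, n_shuffles=20, seed=0):
    """Fraction of (original, shuffled) pairs in which the original
    document receives the strictly higher coherence score."""
    rng = random.Random(seed)
    wins, total = 0, 0
    for doc in documents:
        for _ in range(n_shuffles):
            shuffled = doc[:]
            rng.shuffle(shuffled)
            if shuffled == doc:  # skip degenerate permutations
                continue
            total += 1
            if score(doc) > score(shuffled):
                wins += 1
    return wins / total if total else 0.0

docs = [[
    "John bought a car.",
    "The car was red.",
    "He drove the red car home.",
]]
print(discrimination_accuracy(docs))
```

Under this protocol a model needs no labelled data at test time: any document collection yields positive (original) and negative (shuffled) examples automatically, which is what makes the shuffle setting the standard benchmark for local coherence.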