Kazunori Yamaguchi


2019

Argument Component Classification by Relation Identification by Neural Network and TextRank
Mamoru Deguchi | Kazunori Yamaguchi
Proceedings of the 6th Workshop on Argument Mining

In recent years, argumentation mining, which automatically extracts the structure of argumentation from unstructured documents such as essays and debates, has been gaining attention. For argumentation-mining applications, argument-component classification is an important subtask. Existing methods can be divided into supervised and unsupervised ones. Supervised methods classify each sentence in isolation, without relying on the whole document. Unsupervised methods, on the other hand, have the advantage of being able to use the whole document, but their accuracy is not very high. In this paper, we propose a method for argument-component classification that combines relation identification by neural networks with TextRank to integrate relation information (i.e., the strength of the relations). By employing supervised learning on a corpus, this method can use argumentation-specific knowledge while maintaining the advantage of using the whole document. Experiments on two corpora, one consisting of student essays and the other of Wikipedia articles, show the effectiveness of this method.
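As a rough illustration of the graph step the abstract describes, the sketch below runs a PageRank-style TextRank iteration over a weighted relation graph. This is a minimal sketch, not the authors' implementation: the edge weights are made-up placeholders standing in for relation strengths predicted by a neural relation-identification model, and the function name is hypothetical.

```python
# Hedged sketch: PageRank-style TextRank over argument components.
# weights[i][j] stands in for the neural relation strength from
# component i to component j; the values below are invented.

def textrank(weights, d=0.85, iters=50, tol=1e-6):
    n = len(weights)
    scores = [1.0 / n] * n
    # Total outgoing weight per node; guard against empty rows.
    out_sum = [sum(row) or 1.0 for row in weights]
    for _ in range(iters):
        new = [
            (1 - d) / n
            + d * sum(scores[i] * weights[i][j] / out_sum[i] for i in range(n))
            for j in range(n)
        ]
        if max(abs(a - b) for a, b in zip(new, scores)) < tol:
            return new
        scores = new
    return scores

# Toy document with three argument components.
w = [[0.0, 0.8, 0.1],
     [0.2, 0.0, 0.9],
     [0.7, 0.3, 0.0]]
print(textrank(w))  # higher score -> more central component
```

Components that accumulate high scores from strongly related neighbors can then be mapped onto classes such as major claim versus premise, which is the intuition behind combining the two techniques.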

Compact and Robust Models for Japanese-English Character-level Machine Translation
Jinan Dai | Kazunori Yamaguchi
Proceedings of the 6th Workshop on Asian Translation

Character-level translation has been shown to achieve good translation quality without explicit segmentation, but training a character-level model requires substantial hardware resources. In this paper, we introduce two character-level translation models for Japanese-English translation: a mid-gated model and a multi-attention model. We show that the mid-gated model achieves the better performance with respect to BLEU scores, and that a relatively narrow beam of width 4 or 5 is sufficient for it. As for unknown words, the mid-gated model can, to some extent, translate those containing Katakana by coining a close word. The model also produces tolerable results for heavily noised sentences, even though it was trained on a noise-free dataset.
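For context on the decoding setup, the sketch below shows a generic character-level beam search of the kind a "beam of width 4 or 5" refers to. It is a minimal sketch under stated assumptions: log_probs_fn is a hypothetical stand-in for a trained model such as the paper's mid-gated model, whose architecture is not reproduced here, and the toy scorer is invented for demonstration.

```python
import math

# Hedged sketch of character-level beam-search decoding.
# log_probs_fn maps a prefix (list of characters) to a dict of
# next-character log-probabilities; it stands in for a trained model.

def beam_search(log_probs_fn, beam_width=4, max_len=100, eos="</s>"):
    beams = [([], 0.0)]          # (character sequence, total log prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for ch, lp in log_probs_fn(seq).items():
                candidates.append((seq + [ch], score + lp))
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for seq, score in candidates:
            if seq[-1] == eos:
                finished.append((seq, score))   # hypothesis is complete
            else:
                beams.append((seq, score))
            if len(beams) == beam_width:
                break
        if not beams:
            break
    finished.extend(beams)       # fall back to unfinished hypotheses
    return max(finished, key=lambda x: x[1])[0]

# Toy scorer: prefers 'a', forces end-of-sequence after 3 characters.
def toy_scorer(prefix):
    if len(prefix) >= 3:
        return {"</s>": 0.0}
    return {"a": math.log(0.6), "b": math.log(0.3), "</s>": math.log(0.1)}

print(beam_search(toy_scorer))  # ['a', 'a', 'a', '</s>']
```

With a narrow beam like this, decoding cost grows only modestly with beam width, which is consistent with the finding that width 4 or 5 already suffices.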