Kugatsu Sadamitsu


2017

pdf bib
Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization
Itsumi Saito | Kyosuke Nishida | Kugatsu Sadamitsu | Kuniko Saito | Junji Tomita
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and the number of normalization approaches for handling such noisy text has been increasing. We present a method for automatically extracting pairs of a variant word and its normal form from unsegmented text on the basis of a pairwise similarity approach. We incorporated the acquired variant-normalization pairs into Japanese morphological analysis. The experimental results show that our method can extract variants with wide coverage from large Twitter data and improve the recall of normalization without degrading the overall accuracy of Japanese morphological analysis.
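The pairwise idea can be illustrated with a minimal sketch: pair rare surface forms with frequent ones whose spellings are similar. This is not the paper's method; the frequency cutoff, the use of `difflib` similarity, and the toy English vocabulary are all assumptions for illustration.

```python
from difflib import SequenceMatcher

def extract_variant_pairs(vocab, freq, sim_threshold=0.75, freq_cutoff=100):
    """Pair low-frequency candidate variants with high-frequency normal
    forms based on surface similarity (illustrative heuristic only)."""
    normals = [w for w in vocab if freq[w] >= freq_cutoff]
    variants = [w for w in vocab if freq[w] < freq_cutoff]
    pairs = []
    for v in variants:
        for n in normals:
            sim = SequenceMatcher(None, v, n).ratio()
            if sim >= sim_threshold:
                pairs.append((v, n, sim))
    return pairs

pairs = extract_variant_pairs(
    ["cool", "coool", "sooo", "so"],
    {"cool": 500, "coool": 3, "sooo": 4, "so": 800},
)
# "coool" is paired with "cool"; "sooo"/"so" falls below the threshold.
```

In practice a character-similarity heuristic like this over-generates, which is why the paper combines similarity with distributional evidence from large unsegmented corpora.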

pdf bib
Hyperspherical Query Likelihood Models with Word Embeddings
Ryo Masumura | Taichi Asami | Hirokazu Masataki | Kugatsu Sadamitsu | Kyosuke Nishida | Ryuichiro Higashinaka
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

This paper presents an initial study on hyperspherical query likelihood models (QLMs) for information retrieval (IR). Our motivation is to naturally utilize pre-trained word embeddings for probabilistic IR. To this end, the key idea is to directly treat the word embeddings as random variables in directional probabilistic models based on von Mises-Fisher distributions, which are closely related to cosine similarity. The proposed method lets us take semantic similarities between documents and queries into account in a theoretically grounded way, without introducing heuristic expansion techniques. In addition, this paper reveals relationships between hyperspherical QLMs and conventional QLMs. Experiments present document retrieval results in which a hyperspherical QLM is compared with conventional QLMs and with document distance metrics using word or document embeddings.
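The connection between the von Mises-Fisher (vMF) distribution and cosine similarity can be seen in its unnormalized log-density: for unit vectors, log p(x | mu, kappa) = kappa * ⟨mu, x⟩ + const, which is the cosine similarity scaled by the concentration kappa. A minimal sketch (the vectors and kappa value are illustrative, not from the paper):

```python
import numpy as np

def vmf_log_likelihood(x, mu, kappa):
    """Unnormalized log-density of a von Mises-Fisher distribution:
    log p(x | mu, kappa) = kappa * <mu, x> + const for unit vectors,
    i.e. proportional to the cosine similarity between x and mu."""
    x = x / np.linalg.norm(x)
    mu = mu / np.linalg.norm(mu)
    return kappa * float(x @ mu)

# Scoring two query-word vectors against a document's mean direction:
# the closer the directions, the higher the likelihood, with no
# heuristic query-expansion step involved.
doc_dir = np.array([0.6, 0.8])
near = vmf_log_likelihood(np.array([0.5, 0.9]), doc_dir, kappa=10.0)
far = vmf_log_likelihood(np.array([-0.9, 0.5]), doc_dir, kappa=10.0)
```

Because the score is monotone in the cosine, ranking documents by vMF likelihood agrees with cosine ranking while keeping a probabilistic interpretation.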

pdf bib
Improving Neural Text Normalization with Data Augmentation at Character- and Morphological Levels
Itsumi Saito | Jun Suzuki | Kyosuke Nishida | Kugatsu Sadamitsu | Satoshi Kobashikawa | Ryo Masumura | Yuji Matsumoto | Junji Tomita
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

In this study, we investigated the effectiveness of augmented data for encoder-decoder-based neural normalization models. Attention-based encoder-decoder models are highly effective in many natural language generation tasks, such as machine translation and summarization. In general, a large amount of training data is needed to train an encoder-decoder model. Unlike machine translation, little training data is available for text normalization tasks. In this paper, we propose two methods for generating augmented data. Experimental results on Japanese dialect normalization indicate that our methods are effective for an encoder-decoder model and achieve higher BLEU scores than the baselines. We also investigated the oracle performance and revealed that there is considerable room for improving an encoder-decoder model.
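The general shape of character-level augmentation can be sketched as follows: apply substitutions from a character mapping table to standard-form text to produce pseudo (variant, standard) training pairs. The substitution table, probabilities, and English toy data here are hypothetical; the paper's actual character- and morphological-level methods for Japanese dialects differ.

```python
import random

# Hypothetical substitution table mapping standard characters to
# variant-like spellings (illustrative only).
CHAR_MAP = {"o": ["oh", "oo"], "e": ["eh"]}

def augment(standard_sentences, char_map, n_aug=2, seed=0):
    """Generate pseudo (variant, standard) pairs by randomly applying
    character substitutions to standard-form text."""
    rng = random.Random(seed)
    pairs = []
    for sent in standard_sentences:
        for _ in range(n_aug):
            variant = "".join(
                rng.choice(char_map[c]) if c in char_map and rng.random() < 0.5 else c
                for c in sent
            )
            pairs.append((variant, sent))
    return pairs

pairs = augment(["tokyo tower"], CHAR_MAP)
```

Each generated pair keeps the clean sentence as the target side, so the encoder-decoder model sees many noisy inputs mapped to the same normalized output.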

2016

pdf bib
A Hierarchical Neural Network for Information Extraction of Product Attribute and Condition Sentences
Yukinori Homma | Kugatsu Sadamitsu | Kyosuke Nishida | Ryuichiro Higashinaka | Hisako Asano | Yoshihiro Matsuo
Proceedings of the Open Knowledge Base and Question Answering Workshop (OKBQA 2016)

This paper describes a hierarchical neural network we propose for sentence classification to extract product information from product documents. The network classifies each sentence in a document into attribute and condition classes on the basis of word sequences and sentence sequences in the document. Experimental results showed that the method using the proposed network significantly outperformed baseline methods by taking semantic representations of word and sentence sequences into account. We also evaluated the network on two different product domains (insurance and tourism) and found that it was effective for both.

pdf bib
Name Translation based on Fine-grained Named Entity Recognition in a Single Language
Kugatsu Sadamitsu | Itsumi Saito | Taichi Katayama | Hisako Asano | Yoshihiro Matsuo
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We propose named entity abstraction methods with fine-grained named entity labels for improving statistical machine translation (SMT). The methods are based on a bilingual named entity recognizer that uses a monolingual named entity recognizer with transliteration. Through experiments, we demonstrate that incorporating fine-grained named entities into SMT improves translation accuracy, with more adequate granularity, compared with standard SMT, which does not abstract named entities.

2014

pdf bib
Extraction of Daily Changing Words for Question Answering
Kugatsu Sadamitsu | Ryuichiro Higashinaka | Yoshihiro Matsuo
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper proposes a method for extracting Daily Changing Words (DCWs), words that indicate that a question is real-time dependent. Our approach is based on two types of template matching, using time and named-entity slots over large corpora, combined with simple filtering against news corpora. The extracted DCWs are used to detect and sort real-time dependent questions. Experiments confirm that our DCW method achieves higher accuracy in detecting real-time dependent questions than existing word classes and a simple supervised machine learning approach.
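Template matching with a time slot can be sketched with a toy English analogue: the word filling the slot next to a time expression (e.g. "today's ⟨X⟩") is collected as a DCW candidate. The template, corpus, and threshold-free counting here are illustrative assumptions, not the paper's actual Japanese templates or filtering.

```python
import re

# Hypothetical time-slot template: capture the noun after a time word.
TEMPLATE = re.compile(r"(?:today|tonight|this week)'s (\w+)")

def dcw_candidates(corpus):
    """Count words that fill the slot adjacent to a time expression."""
    counts = {}
    for sentence in corpus:
        for match in TEMPLATE.finditer(sentence.lower()):
            word = match.group(1)
            counts[word] = counts.get(word, 0) + 1
    return counts

counts = dcw_candidates([
    "What is today's weather in Tokyo?",
    "Check tonight's lineup for the concert.",
    "Today's weather looks rainy.",
])
# "weather" occurs in the time slot twice, "lineup" once.
```

Words that frequently co-occur with time expressions in this way ("weather", "lineup") are exactly the ones that make a question's answer change from day to day.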

pdf bib
Morphological Analysis for Japanese Noisy Text based on Character-level and Word-level Normalization
Itsumi Saito | Kugatsu Sadamitsu | Hisako Asano | Yoshihiro Matsuo
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2012

pdf bib
Grammar Error Correction Using Pseudo-Error Sentences and Domain Adaptation
Kenji Imamura | Kuniko Saito | Kugatsu Sadamitsu | Hitoshi Nishikawa
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Entity Set Expansion using Interactive Topic Information
Kugatsu Sadamitsu | Kuniko Saito | Kenji Imamura | Yoshihiro Matsuo
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

pdf bib
Creating an Extended Named Entity Dictionary from Wikipedia
Ryuichiro Higashinaka | Kugatsu Sadamitsu | Kuniko Saito | Toshiro Makino | Yoshihiro Matsuo
Proceedings of COLING 2012

pdf bib
Constructing a Class-Based Lexical Dictionary using Interactive Topic Models
Kugatsu Sadamitsu | Kuniko Saito | Kenji Imamura | Yoshihiro Matsuo
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper proposes a new method of constructing arbitrary class-based related-word dictionaries with interactive topic models; we assume that each class is described by a topic. We propose a new semi-supervised method that uses the simplest topic model, trained by the standard EM algorithm, so model calculation is very rapid. Furthermore, our approach allows a dictionary to be modified interactively, and the final dictionary has a hierarchical structure. This paper makes three contributions. First, it proposes a word-based semi-supervised topic model. Second, we apply the semi-supervised topic model to interactive learning; we call this approach the Interactive Topic Model. Third, we propose a score function that extracts the related words occupying the middle layer of the hierarchical structure. Experiments show that our method can appropriately retrieve the words belonging to an arbitrary class.

2011

pdf bib
Entity Set Expansion using Topic information
Kugatsu Sadamitsu | Kuniko Saito | Kenji Imamura | Genichiro Kikui
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2008

pdf bib
Sentiment Analysis Based on Probabilistic Models Using Inter-Sentence Information
Kugatsu Sadamitsu | Satoshi Sekine | Mikio Yamamoto
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper proposes a new method of sentiment analysis that utilizes inter-sentence structures, especially to cope with polarity-reversal phenomena such as quotations of opinions from the opposing side. We model these phenomena using Hidden Conditional Random Fields (HCRFs) with three kinds of features: transition features, polarity features, and reversal (of polarity) features. Polarity features and reversal features are both added to each word, and the feature weights are trained on the common structure of the positive and negative corpora, for example, under the assumption that reversal phenomena occur for the same reasons (features) in both polarity corpora. Our method achieved better accuracy than the Naive Bayes method and accuracy comparable to that of SVMs.