2020
Aspect-Similarity-Aware Historical Influence Modeling for Rating Prediction
Ryo Shimura | Shotaro Misawa | Masahiro Sato | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of Workshop on Natural Language Processing in E-Commerce
Many e-commerce services provide customer review systems. Previous laboratory studies have indicated that the ratings recorded by these systems differ from the actual evaluations of the users, owing to the influence of historical ratings in the system. Some studies have proposed modeling rating prediction using real-world datasets. Herein, we propose an aspect-similarity-aware historical influence model for rating prediction using natural language processing techniques. In general, each user provides a rating considering different aspects. Thus, it can be assumed that historical ratings provided considering aspects similar to those of later ones will influence users’ evaluations more. By focusing on review-topic similarities, we show that our method predicts ratings more accurately than the previous historical-influence-aware model. In addition, we examine whether our model can predict the “intrinsic rating,” which would be given if users were not influenced by historical ratings. We performed an intrinsic rating prediction task and showed that our model achieved improved performance. Our method can be used to debias user ratings collected by customer review systems. The debiased ratings help users make decisions properly and help systems provide useful recommendations, which might improve the user experience of e-commerce services.
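The core idea of weighting historical ratings by aspect similarity can be sketched as follows. This is an illustrative simplification, not the paper's actual model: the function name, the two-topic vectors, and the use of plain cosine similarity over topic distributions are all assumptions made for the example.

```python
import numpy as np

def aspect_weighted_influence(target_topics, hist_topics, hist_ratings):
    """Weight historical ratings by the topic similarity of their reviews.

    target_topics: topic vector of the target review (e.g. from a topic model)
    hist_topics:   one topic vector per historical review (rows)
    hist_ratings:  the corresponding historical ratings
    """
    t = target_topics / np.linalg.norm(target_topics)
    H = hist_topics / np.linalg.norm(hist_topics, axis=1, keepdims=True)
    sims = H @ t                  # cosine similarity per historical review
    weights = sims / sims.sum()   # normalize into a weight distribution
    return float(weights @ hist_ratings)

# A historical review about the same aspect ("service") dominates the
# estimate; one about an unrelated aspect ("food") contributes little.
target = np.array([1.0, 0.0])            # topics: [service, food]
history = np.array([[1.0, 0.0],          # service-focused review, rated 5
                    [0.0, 1.0]])         # food-focused review, rated 1
print(aspect_weighted_influence(target, history, np.array([5.0, 1.0])))  # → 5.0
```

The point is only that reviews written about similar aspects receive larger influence weights than dissimilar ones.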
Distinctive Slogan Generation with Reconstruction
Shotaro Misawa | Yasuhide Miura | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of Workshop on Natural Language Processing in E-Commerce
E-commerce sites display advertising slogans along with information about an item. Slogans can attract viewers’ attention and increase sales or visits by emphasizing the advantages of an item. The aim of this study is to generate a slogan from the description of an item. To generate a slogan, we apply an encoder–decoder model, which has shown effectiveness in many kinds of natural language generation tasks, such as abstractive summarization. However, the slogan generation task has three characteristics that distinguish it from other natural language generation tasks: distinctiveness, topic emphasis, and style difference. To handle these three characteristics, we propose a compressed-representation-based reconstruction model with refer-attention and conversion layers. The results of the experiments indicate that, in both automatic and human evaluation, our method achieves higher performance than conventional methods.
2019
Keeping Consistency of Sentence Generation and Document Classification with Multi-Task Learning
Toru Nishino | Shotaro Misawa | Ryuji Kano | Tomoki Taniguchi | Yasuhide Miura | Tomoko Ohkuma
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
The automated generation of information indicating the characteristics of articles, such as headlines, key phrases, summaries and categories, helps writers alleviate their workload. Previous research has tackled these tasks using neural abstractive summarization and classification methods. However, the outputs may be inconsistent if they are generated individually. The purpose of our study is to generate multiple outputs consistently. We introduce a multi-task learning model with a shared encoder and a separate decoder for each task. We propose a novel loss function called hierarchical consistency loss to maintain consistency among the attention weights of the decoders. To evaluate the consistency, we employ a human evaluation. The results show that our model generates more consistent headlines, key phrases and categories. In addition, our model outperforms the baseline model on ROUGE scores, and generates more adequate and fluent headlines.
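The shared-encoder, task-specific-decoder layout described above can be sketched minimally. This is a toy stand-in under stated assumptions: the real model uses attention-based sequence decoders and the hierarchical consistency loss, whereas here each "decoder" is a single hypothetical linear head and all dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for the sketch.
d_in, d_hid = 16, 8

W_enc = rng.normal(size=(d_in, d_hid))            # shared encoder weights
heads = {
    "headline":   rng.normal(size=(d_hid, 100)),  # vocabulary logits
    "key_phrase": rng.normal(size=(d_hid, 100)),
    "category":   rng.normal(size=(d_hid, 5)),    # class logits
}

x = rng.normal(size=(1, d_in))     # one encoded article (stand-in features)
h = np.tanh(x @ W_enc)             # shared representation
outputs = {task: h @ W for task, W in heads.items()}

# Every task reads the same hidden state h, so gradients from each
# decoder shape the one representation all tasks depend on.
print({task: o.shape for task, o in outputs.items()})
```

The design choice the abstract highlights is exactly this sharing: because every output is derived from one encoder state, the tasks are pushed toward mutually consistent predictions.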
2018
Integrating Tree Structures and Graph Structures with Neural Networks to Classify Discussion Discourse Acts
Yasuhide Miura | Ryuji Kano | Motoki Taniguchi | Tomoki Taniguchi | Shotaro Misawa | Tomoko Ohkuma
Proceedings of the 27th International Conference on Computational Linguistics
We propose a model that integrates discussion structures with neural networks to classify discourse acts. Several attempts have been made in earlier works to analyze texts that are used in various discussions. The importance of discussion structures has been explored in those works, but their methods required a sophisticated design to combine structural features with a classifier. Our model introduces tree learning approaches and a graph learning approach to directly capture discussion structures without structural features. In an evaluation to classify discussion discourse acts in Reddit, the model achieved improvements of 1.5% in accuracy and 2.2 in FB1 score compared to the previous best model. We further analyzed the model using an attention mechanism to inspect interactions among the different learning approaches.
2017
Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition
Shotaro Misawa | Motoki Taniguchi | Yasuhide Miura | Tomoko Ohkuma
Proceedings of the First Workshop on Subword and Character Level Models in NLP
Recently, neural models have shown superior performance over conventional models in NER tasks. These models use a CNN to extract sub-word information along with an RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts, and it remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and identified two obstacles in the state-of-the-art model. First, a CNN is unsuitable for extracting Japanese sub-word information. Second, a model predicting a tag for each word cannot extract an entity when only part of a word composes the entity. The contributions of this work are (1) verifying the effectiveness of the state-of-the-art NER model for Japanese and (2) proposing a neural model that predicts a tag for each character using word and character information. Experimental results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.
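The second obstacle above, an entity formed by only part of a word, is exactly what per-character tagging addresses. The sketch below decodes per-character BIO tags into entity strings; the decoding helper, the example word, and its tags are illustrative assumptions, not taken from the paper.

```python
def bio_to_entities(chars, tags):
    """Decode per-character BIO tags into entity strings."""
    entities, start = [], None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:               # close the previous entity
                entities.append("".join(chars[start:i]))
            start = i
        elif tag.startswith("I-") and start is not None:
            continue                            # entity continues
        else:                                   # "O", or a stray "I-"
            if start is not None:
                entities.append("".join(chars[start:i]))
                start = None
    if start is not None:                       # entity runs to the end
        entities.append("".join(chars[start:]))
    return entities

# A word-level tagger that receives "東京都" as one token can never output
# the entity "東京" (Tokyo) alone; character-level tags can.
chars = list("東京都庁")
tags = ["B-LOC", "I-LOC", "O", "O"]
print(bio_to_entities(chars, tags))  # → ['東京']
```

A word-level model would be forced to tag the whole token, so the entity boundary inside the word would be lost.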
Using Social Networks to Improve Language Variety Identification with Neural Networks
Yasuhide Miura | Tomoki Taniguchi | Motoki Taniguchi | Shotaro Misawa | Tomoko Ohkuma
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
We propose a hierarchical neural network model for language variety identification that integrates information from a social network. Recently, language variety identification has enjoyed heightened popularity as an advanced task of language identification. The proposed model uses additional texts from a social network to improve language variety identification from two perspectives. First, they are used to introduce the effects of homophily. Second, they are used as expanded training data for the shared layers of the proposed model. By introducing information from social networks, the model improved its accuracy by 1.67-5.56. Compared to state-of-the-art baselines, the improved performance is better in English and comparable in Spanish. Furthermore, we analyzed the cases of Portuguese and Arabic, in which the model showed weak performance, and found that the effect of homophily is likely to be weak owing to sparsity and noise, compared with the languages that showed strong performance.