2021
Quantifying Appropriateness of Summarization Data for Curriculum Learning
Ryuji Kano | Takumi Takahashi | Toru Nishino | Motoki Taniguchi | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Much research has reported that the training data of summarization models are noisy; summaries often do not reflect what is written in the source texts. We propose an effective curriculum learning method to train summarization models from such noisy data. Curriculum learning has been used to train sequence-to-sequence models with noisy data. In translation tasks, previous research quantified the noise of the training data using two models trained with noisy and clean corpora. Because such corpora do not exist for summarization, we propose a model that can quantify noise from a single noisy corpus. We conduct experiments on three summarization models, one pretrained and two non-pretrained, and verify that our method improves their performance. Furthermore, we analyze how different curricula affect the performance of pretrained and non-pretrained summarization models. Human evaluation also shows that our method improves the performance of summarization models.
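As a concrete illustration of ordering noisy summarization pairs into a curriculum, here is a minimal sketch. The overlap-based cleanliness score is a hypothetical stand-in for the paper's model-based noise measure, which is estimated from the single noisy corpus itself.

```python
# Minimal sketch of noise-ordered curriculum learning. The scoring function
# is an invented proxy, not the paper's method.

def overlap_score(source: str, summary: str) -> float:
    """Proxy for cleanliness: fraction of summary tokens found in the source.
    Low scores suggest the summary does not reflect the source (i.e., noise)."""
    src_tokens = set(source.lower().split())
    sum_tokens = summary.lower().split()
    if not sum_tokens:
        return 0.0
    return sum(t in src_tokens for t in sum_tokens) / len(sum_tokens)

def curriculum_order(pairs):
    """Present cleaner (higher-scoring) pairs first, noisier ones later."""
    return sorted(pairs, key=lambda p: overlap_score(*p), reverse=True)

pairs = [
    ("the cat sat on the mat", "a cat sat on a mat"),
    ("quarterly profits rose sharply", "local weather was sunny"),  # noisy pair
]
for source, summary in curriculum_order(pairs):
    print(round(overlap_score(source, summary), 2), "->", summary)
```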
2020
A Large-Scale Corpus of E-mail Conversations with Standard and Two-Level Dialogue Act Annotations
Motoki Taniguchi | Yoshihiro Ueda | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of the 28th International Conference on Computational Linguistics
We present a large-scale corpus of e-mail conversations with domain-agnostic and two-level dialogue act (DA) annotations, towards the goal of a better understanding of asynchronous conversations. We annotate over 6,000 messages and 35,000 sentences from more than 2,000 threads. For domain-independent and application-independent DA annotation, we choose ISO standard 24617-2 as the annotation scheme. To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines. The experimental results show that BERT outperforms other neural network models, including previous state-of-the-art models, but falls short of human performance. We also demonstrate that DA tags of two-level granularity enable a DA recognition model to learn efficiently through multi-task learning. An evaluation of a model trained on our corpus against other domains of asynchronous conversation reveals the domain independence of our DA annotations.
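The two-level multi-task idea can be sketched as a shared encoder with one classification head per DA granularity. The bag-of-words encoder and all sizes below are illustrative assumptions, not the paper's BERT-based setup.

```python
import torch
import torch.nn as nn

class TwoLevelDATagger(nn.Module):
    """Sketch of multi-task DA recognition: shared layers feed one head per
    granularity level, so coarse and fine tags are learned jointly."""
    def __init__(self, vocab_size=1000, hidden=128, n_coarse=10, n_fine=40):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)  # shared layers
        self.coarse_head = nn.Linear(hidden, n_coarse)    # level-1 DA tags
        self.fine_head = nn.Linear(hidden, n_fine)        # level-2 DA tags

    def forward(self, token_ids):
        h = self.embed(token_ids)
        return self.coarse_head(h), self.fine_head(h)

model = TwoLevelDATagger()
tokens = torch.randint(0, 1000, (4, 12))          # batch of 4 sentences
coarse_gold = torch.randint(0, 10, (4,))
fine_gold = torch.randint(0, 40, (4,))
coarse_logits, fine_logits = model(tokens)
loss = nn.functional.cross_entropy(coarse_logits, coarse_gold) \
     + nn.functional.cross_entropy(fine_logits, fine_gold)     # joint loss
loss.backward()
```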
Reinforcement Learning with Imbalanced Dataset for Data-to-Text Medical Report Generation
Toru Nishino | Ryota Ozaki | Yohei Momoki | Tomoki Taniguchi | Ryuji Kano | Norihisa Nakano | Yuki Tagawa | Motoki Taniguchi | Tomoko Ohkuma | Keigo Nakamura
Findings of the Association for Computational Linguistics: EMNLP 2020
Automated generation of medical reports that describe the findings in medical images helps radiologists by alleviating their workload. A medical report generation system should generate correct and concise reports. However, data imbalance makes it difficult to train models accurately. Medical datasets are commonly imbalanced in their finding labels because incidence rates differ among diseases; moreover, the ratios of abnormalities to normalities are significantly imbalanced. We propose a novel reinforcement learning method with a reconstructor that improves the clinical correctness of generated reports, so that the data-to-text module can be trained on a highly imbalanced dataset. Moreover, we introduce a novel data augmentation strategy for reinforcement learning that additionally trains the model on infrequent findings. From the perspective of practical use, we employ a Two-Stage Medical Report Generator (TS-MRGen) for controllable report generation from input images. TS-MRGen consists of two separate stages: an image diagnosis module and a data-to-text module. Radiologists can modify the results of the image diagnosis module to control the reports that the data-to-text module generates. We conduct experiments with two medical datasets to assess the data-to-text module and the entire two-stage model. Results demonstrate that the reports generated by our model describe the findings in the input image more correctly.
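The two ingredients named in the abstract, a reconstruction-based reward and augmentation of infrequent findings, might look roughly like this toy sketch. The keyword-based reconstructor and all data are invented; the paper's reconstructor is a learned module.

```python
from collections import Counter

# Toy sketch: a reward that checks whether finding labels can be recovered
# (reconstructed) from the generated report, plus oversampling of rare findings.

FINDING_KEYWORDS = {"effusion": "effusion", "nodule": "nodule"}

def reconstruct_findings(report: str) -> set:
    return {f for f, kw in FINDING_KEYWORDS.items() if kw in report}

def clinical_reward(gold_findings: set, report: str) -> float:
    """F1 between gold finding labels and those reconstructed from the text."""
    pred = reconstruct_findings(report)
    tp = len(pred & gold_findings)
    denom = len(pred) + len(gold_findings)
    return 2 * tp / denom if denom else 1.0

def augment_rare(dataset, threshold=1):
    """Duplicate examples whose findings occur at most `threshold` times."""
    counts = Counter(f for findings, _ in dataset for f in findings)
    rare = [ex for ex in dataset if any(counts[f] <= threshold for f in ex[0])]
    return dataset + rare

data = [({"effusion"}, "small left pleural effusion is seen"),
        ({"effusion"}, "effusion unchanged"),
        ({"nodule"}, "a 5 mm nodule in the right lobe")]
print(clinical_reward(*data[0]))  # 1.0: the finding is recoverable
print(len(augment_rare(data)))    # 4: the rare "nodule" example is repeated
```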
2019
Relation Prediction for Unseen-Entities Using Entity-Word Graphs
Yuki Tagawa | Motoki Taniguchi | Yasuhide Miura | Tomoki Taniguchi | Tomoko Ohkuma | Takayuki Yamamoto | Keiichi Nemoto
Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)
Knowledge graphs (KGs) are used for various NLP tasks. However, because KGs still miss some information, it is necessary to develop Knowledge Graph Completion (KGC) methods. Most KGC research does not focus on out-of-KG entities (unseen entities), so a method is needed that can predict relations for entity pairs containing unseen entities in order to add new entities to KGs automatically. In this study, we focus on relation prediction and propose a method that learns entity representations via a graph whose nodes are seen entities, unseen entities, and words, created from the descriptions of all entities. In the experiments, our method shows a significant improvement in relation prediction for entity pairs containing unseen entities.
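One way to picture the entity-word graph is the sketch below: every entity, seen or unseen, is connected to the words of its description, so unseen entities join the graph through shared vocabulary. The entities and descriptions are invented examples.

```python
from collections import defaultdict

# Sketch of entity-word graph construction from entity descriptions.
descriptions = {
    "Tokyo": "capital city of Japan",
    "Kyoto": "historic city in Japan",
    "NewCity": "planned city in Japan",  # unseen entity: no KG triples yet
}

def build_entity_word_graph(descriptions):
    adj = defaultdict(set)
    for entity, text in descriptions.items():
        for word in text.lower().split():
            adj[entity].add(word)   # entity -> word edge
            adj[word].add(entity)   # word -> entity edge
    return adj

graph = build_entity_word_graph(descriptions)
# "NewCity" reaches seen entities via shared words such as "city" and "japan",
# which is what lets representations propagate to unseen entities.
print(sorted(graph["city"]))  # ['Kyoto', 'NewCity', 'Tokyo']
```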
CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding
Takumi Takahashi | Motoki Taniguchi | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
This paper describes our model for the reading comprehension task of the MRQA shared task. We propose CLER, which stands for Cross-task Learning with Expert Representation, for the generalization of reading and understanding. To generalize its capabilities, the proposed model combines three key ideas: multi-task learning, mixture of experts, and ensembling. In-domain datasets are used to train and validate our model, and out-of-domain datasets are used to validate the generalization of its performance. In the submission run, the proposed model achieved an average F1 score of 66.1% in the out-of-domain setting, a 4.3 percentage point improvement over the official BERT baseline model.
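A mixture-of-experts layer of the kind named in the abstract can be sketched as a gating network that softly combines several expert transformations. Sizes below are illustrative, and the real model builds on BERT representations.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Sketch: a gating network produces per-example weights that mix the
    outputs of several expert layers over a shared representation."""
    def __init__(self, dim=64, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, h):                                    # h: (batch, dim)
        weights = torch.softmax(self.gate(h), dim=-1)        # (batch, n_experts)
        outs = torch.stack([e(h) for e in self.experts], 1)  # (batch, n_experts, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)     # weighted mixture

moe = MixtureOfExperts()
print(moe(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```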
2018
Integrating Tree Structures and Graph Structures with Neural Networks to Classify Discussion Discourse Acts
Yasuhide Miura | Ryuji Kano | Motoki Taniguchi | Tomoki Taniguchi | Shotaro Misawa | Tomoko Ohkuma
Proceedings of the 27th International Conference on Computational Linguistics
We propose a model that integrates discussion structures with neural networks to classify discourse acts. Several attempts have been made in earlier works to analyze texts used in various discussions. The importance of discussion structures has been explored in those works, but their methods required sophisticated designs to combine structural features with a classifier. Our model introduces tree learning approaches and a graph learning approach to capture discussion structures directly, without structural features. In an evaluation on classifying discussion discourse acts in Reddit, the model achieved improvements of 1.5% in accuracy and 2.2 points in FB1 score over the previous best model. We further analyzed the model using an attention mechanism to inspect interactions among the different learning approaches.
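A minimal way to capture a discussion structure directly is message passing over the reply graph, as in the toy sketch below; this neighbor-averaging step is a simplified stand-in for the paper's tree and graph learning approaches, with invented features and edges.

```python
import torch

# Sketch: one round of neighbor averaging over a reply tree, so each post's
# representation mixes in its parent's and children's representations.

def propagate(features: torch.Tensor, edges):
    """features: (n_posts, dim); edges: list of (parent, child) reply pairs."""
    n, dim = features.shape
    agg = features.clone()
    degree = torch.ones(n)
    for parent, child in edges:
        agg[parent] += features[child]  # child -> parent message
        agg[child] += features[parent]  # parent -> child message
        degree[parent] += 1
        degree[child] += 1
    return agg / degree.unsqueeze(-1)

posts = torch.randn(4, 8)               # 4 posts in a thread
reply_edges = [(0, 1), (0, 2), (2, 3)]  # a small reply tree rooted at post 0
print(propagate(posts, reply_edges).shape)  # torch.Size([4, 8])
```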
Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning
Motoki Taniguchi | Yasuhide Miura | Tomoko Ohkuma
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
Information extraction about an event can be improved by incorporating external evidence. In this study, we propose a joint model for pseudo-relevance-feedback-based query expansion and information extraction with reinforcement learning. Our model generates an event-specific query to effectively retrieve documents relevant to the event. We demonstrate that our model performs comparably to or better than the previous model on two publicly available datasets. Furthermore, we analyze how the retrieval effectiveness of our model influences its extraction performance.
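The joint loop can be caricatured as a bandit: the policy picks an expansion term, retrieval and extraction are run, and the extraction score comes back as the reward. Everything in this sketch (terms, rewards, the update rule) is an invented placeholder, not the paper's model.

```python
import random

random.seed(0)
terms = ["attack", "casualties", "location"]
values = {t: 0.0 for t in terms}  # running value of each expansion term

def extraction_reward(term: str) -> float:
    """Stand-in for: expand the query with `term`, retrieve, extract, score."""
    return {"attack": 0.8, "casualties": 0.5, "location": 0.2}[term]

for step in range(200):
    if random.random() < 0.2:                   # explore a random term
        term = random.choice(terms)
    else:                                       # exploit the best term so far
        term = max(values, key=values.get)
    reward = extraction_reward(term)            # retrieval + extraction score
    values[term] += 0.1 * (reward - values[term])

print(max(values, key=values.get))  # the expansion term that extracts best
```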
Integrating Entity Linking and Evidence Ranking for Fact Extraction and Verification
Motoki Taniguchi | Tomoki Taniguchi | Takumi Takahashi | Yasuhide Miura | Tomoko Ohkuma
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
We describe our system and results for the FEVER shared task. We prepared a pipeline system composed of document selection, sentence retrieval, and recognizing textual entailment (RTE) components. A simple entity linking approach with text matching is used as the document selection component; it identifies relevant documents for a given claim by using mentioned entities as clues. The sentence retrieval component selects relevant sentences as candidate evidence from the documents based on TF-IDF. Finally, the RTE component selects evidence sentences by ranking them and simultaneously classifies the claim. The experimental results show that our system achieved a FEVER score of 0.4016 and outperformed the official baseline system.
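The TF-IDF sentence retrieval step is the most self-contained component and can be sketched directly with scikit-learn; the claim and candidate sentences below are invented examples, not FEVER data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch: rank candidate sentences from the selected documents by TF-IDF
# cosine similarity to the claim.
claim = "The Eiffel Tower is located in Paris."
sentences = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Gustave Eiffel's company designed and built the tower.",
    "Paris is the capital of France.",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([claim] + sentences)
similarities = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# The highest-scoring sentences become candidate evidence for the RTE component.
for score, sentence in sorted(zip(similarities, sentences), reverse=True):
    print(f"{score:.2f}  {sentence}")
```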
Harnessing Popularity in Social Media for Extractive Summarization of Online Conversations
Ryuji Kano | Yasuhide Miura | Motoki Taniguchi | Yan-Ying Chen | Francine Chen | Tomoko Ohkuma
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
We leverage a popularity measure in social media as a distant label for extractive summarization of online conversations. In social media, users can vote for, share, or bookmark a post they prefer, and the number of these actions is regarded as a measure of popularity. However, popularity is not determined solely by the content of a post, e.g., the text or images it contains, but also depends strongly on its context, e.g., timing and authority. We propose a Disjunctive model that computes the contributions of content and context separately. For evaluation, we build a dataset in which the informativeness of comments is annotated. We evaluate the results with ranking metrics and show that our model outperforms baseline models that directly use popularity as a measure of informativeness.
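The disjunctive idea, scoring content and context with separate networks whose combination is supervised by the distant popularity label, can be sketched as follows; the feature dimensions and the additive combination are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DisjunctiveModel(nn.Module):
    """Sketch: separate networks score content and context; popularity
    supervises their sum, and the content score alone can then serve as a
    context-free informativeness estimate."""
    def __init__(self, content_dim=32, context_dim=8):
        super().__init__()
        self.content_net = nn.Linear(content_dim, 1)  # text features
        self.context_net = nn.Linear(context_dim, 1)  # timing, author, etc.

    def forward(self, content, context):
        return self.content_net(content) + self.context_net(context)

model = DisjunctiveModel()
content = torch.randn(5, 32)
context = torch.randn(5, 8)
popularity = torch.randn(5, 1)                    # distant labels
loss = nn.functional.mse_loss(model(content, context), popularity)
loss.backward()
informativeness = model.content_net(content)      # content-only score
```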
2017
Using Social Networks to Improve Language Variety Identification with Neural Networks
Yasuhide Miura | Tomoki Taniguchi | Motoki Taniguchi | Shotaro Misawa | Tomoko Ohkuma
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
We propose a hierarchical neural network model for language variety identification that integrates information from a social network. Recently, language variety identification has enjoyed heightened popularity as an advanced form of language identification. The proposed model uses additional texts from a social network to improve language variety identification from two perspectives. First, they are used to introduce the effects of homophily. Second, they are used as expanded training data for the shared layers of the proposed model. By introducing information from social networks, the model improved its accuracy by 1.67-5.56 points. Compared to state-of-the-art baselines, the improved performance is better in English and comparable in Spanish. Furthermore, we analyzed the cases of Portuguese and Arabic, in which the model performed weakly, and found that the effect of homophily is likely weak due to sparsity and noise compared to the languages with strong performance.
Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition
Shotaro Misawa | Motoki Taniguchi | Yasuhide Miura | Tomoko Ohkuma
Proceedings of the First Workshop on Subword and Character Level Models in NLP
Recently, neural models have shown superior performance over conventional models on NER tasks. These models use a CNN to extract sub-word information along with an RNN to predict a tag for each word. However, these models have been tested almost entirely on English text, and it remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles for the state-of-the-art model. First, CNNs are unsuitable for extracting Japanese sub-word information. Second, a model that predicts a tag for each word cannot extract an entity when only part of a word composes that entity. The contributions of this work are (1) verifying the effectiveness of the state-of-the-art English NER model for Japanese, and (2) proposing a neural model that predicts a tag for each character using both word and character information. Experimental results demonstrate that our model outperforms the state-of-the-art neural English NER model on Japanese.
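Contribution (2), tagging each character while still using word information, can be sketched as a BiLSTM over characters whose inputs concatenate character and containing-word embeddings; the CRF output layer is omitted here and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class CharTagger(nn.Module):
    """Sketch of character-level tagging with word information: each character
    embedding is concatenated with the embedding of the word containing it,
    then a BiLSTM scores a tag per character (the paper adds a CRF on top)."""
    def __init__(self, n_chars=500, n_words=1000, dim=32, n_tags=5):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, dim)
        self.word_embed = nn.Embedding(n_words, dim)
        self.lstm = nn.LSTM(2 * dim, dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * dim, n_tags)

    def forward(self, char_ids, word_ids):
        # char_ids, word_ids: (batch, seq_len); word_ids repeats each word's
        # id for every character it contains.
        x = torch.cat([self.char_embed(char_ids), self.word_embed(word_ids)], -1)
        h, _ = self.lstm(x)
        return self.out(h)  # per-character tag scores

model = CharTagger()
chars = torch.randint(0, 500, (2, 10))
words = torch.randint(0, 1000, (2, 10))
print(model(chars, words).shape)  # torch.Size([2, 10, 5])
```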
Unifying Text, Metadata, and User Network Representations with a Neural Network for Geolocation Prediction
Yasuhide Miura | Motoki Taniguchi | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We propose a novel geolocation prediction model using a complex neural network. Geolocation prediction in social media has attracted many researchers to use information of various types. Our model unifies text, metadata, and user network representations with an attention mechanism to overcome previous ensemble approaches. In an evaluation using two open datasets, the proposed model exhibited a maximum increase of 3.8% in accuracy and 6.6% in accuracy@161 over previous models. We further analyzed several intermediate layers of our model, which revealed that their states capture some statistical characteristics of the datasets.
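The attention-based unification can be sketched as projecting each view into a common space and mixing the views with learned weights; the dimensions and the specific attention form below are assumptions, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class AttentionUnifier(nn.Module):
    """Sketch: project text, metadata, and user-network views to a shared
    space, attend over the three views, and classify the mixture into cities."""
    def __init__(self, dims=(64, 16, 32), hidden=48, n_cities=100):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, n_cities)

    def forward(self, views):
        h = torch.stack([p(v) for p, v in zip(self.projs, views)], 1)  # (B,3,H)
        w = torch.softmax(self.attn(torch.tanh(h)), dim=1)             # view weights
        return self.out((w * h).sum(dim=1))

model = AttentionUnifier()
text, meta, network = torch.randn(4, 64), torch.randn(4, 16), torch.randn(4, 32)
print(model([text, meta, network]).shape)  # torch.Size([4, 100])
```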
2016
A Simple Scalable Neural Networks based Model for Geolocation Prediction in Twitter
Yasuhide Miura | Motoki Taniguchi | Tomoki Taniguchi | Tomoko Ohkuma
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)
This paper describes the model that we submitted to the W-NUT 2016 Shared Task #1: Geolocation Prediction in Twitter. Our model classifies a tweet or a user to a city using a simple neural network structure with fully-connected layers and average pooling. Building on the findings of previous geolocation prediction approaches, we integrated various user metadata along with message texts and trained the model with them. In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction. These results represent moderate performance in terms of accuracy and the best performance in terms of distance error. They show a promising direction for neural-network-based geolocation prediction, where recent advances in neural networks can be added to enhance our current simple model.
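The described architecture, average pooling over embeddings followed by fully-connected layers over concatenated text and metadata views, can be sketched as below; the vocabulary sizes, metadata fields, and city inventory are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleGeoModel(nn.Module):
    """Sketch: embed message tokens and metadata fields, average-pool each
    (EmbeddingBag defaults to mean pooling), concatenate, and classify into
    cities with fully-connected layers."""
    def __init__(self, vocab=5000, meta_vocab=500, dim=64, n_cities=3000):
        super().__init__()
        self.text_embed = nn.EmbeddingBag(vocab, dim)       # average pooling
        self.meta_embed = nn.EmbeddingBag(meta_vocab, dim)  # over embeddings
        self.fc = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, n_cities)
        )

    def forward(self, text_ids, meta_ids):
        h = torch.cat([self.text_embed(text_ids), self.meta_embed(meta_ids)], -1)
        return self.fc(h)  # city logits

model = SimpleGeoModel()
text = torch.randint(0, 5000, (8, 20))  # token ids of a tweet
meta = torch.randint(0, 500, (8, 4))    # e.g., timezone and location field ids
print(model(text, meta).shape)  # torch.Size([8, 3000])
```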