Yutaka Sasaki


A Neural Edge-Editing Approach for Document-Level Relation Graph Extraction
Kohei Makino | Makoto Miwa | Yutaka Sasaki
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021


Ontology-Style Relation Annotation: A Case Study
Savong Bou | Naoki Suzuki | Makoto Miwa | Yutaka Sasaki
Proceedings of the 12th Language Resources and Evaluation Conference

This paper proposes an Ontology-Style Relation (OSR) annotation approach. In conventional Relation Extraction (RE) datasets, relations are annotated as links between entity mentions. In our OSR annotation, by contrast, a relation is annotated as a relation mention (i.e., a node rather than a link), and domain and range links are annotated from the relation mention to its argument entity mentions. We expect the following benefits: (1) the relation annotations can be easily converted to Resource Description Framework (RDF) triples to populate an ontology; (2) some conventional RE tasks can be tackled as Named Entity Recognition (NER) tasks, since the relation classes are limited to a few RDF properties such as domain, range, and subClassOf; and (3) OSR annotations can serve as clear documentation of ontology contents. As a case study, we converted an in-house corpus of Japanese traffic rules with conventional annotations into OSR annotations and built a novel OSR-RoR (Rules of the Road) corpus. The inter-annotator agreement of the conversion was 85-87%. We evaluated the performance of neural NER and RE tools on the conventional and OSR annotations. The experimental results showed that the OSR annotations make the RE task easier while introducing slight complexity into the NER task.
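The OSR idea above — a relation as a node with domain and range links to its argument entities — maps directly onto an RDF triple. A minimal sketch, with invented mention labels and spans (the annotation format here is hypothetical, not the corpus's actual schema):

```python
# Hypothetical sketch: converting one OSR-style annotation into an RDF triple.
# The mention dictionaries and the "mustStopAt" label are invented for illustration.

def osr_to_triple(relation_mention, domain_link, range_link):
    """Map a relation mention and its domain/range links to an (s, p, o) triple."""
    return (domain_link["target"], relation_mention["label"], range_link["target"])

# In OSR annotation the relation is itself a mention (a node, not an edge):
rel = {"label": "mustStopAt", "span": (10, 14)}
dom = {"source": rel, "target": "Vehicle"}   # domain link -> subject entity
rng = {"source": rel, "target": "StopLine"}  # range link  -> object entity

print(osr_to_triple(rel, dom, rng))  # ('Vehicle', 'mustStopAt', 'StopLine')
```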

SC-CoMIcs: A Superconductivity Corpus for Materials Informatics
Kyosuke Yamaguchi | Ryoji Asahi | Yutaka Sasaki
Proceedings of the 12th Language Resources and Evaluation Conference

This paper describes a novel corpus tailored for text mining of superconducting materials in Materials Informatics (MI), named the SuperConductivity Corpus for Materials Informatics (SC-CoMIcs). Unlike biomedical informatics, very few corpora target Materials Science and Engineering (MSE). In particular, there is no sizable corpus that can be used to assist the search for superconducting materials. A team of materials scientists and natural language processing experts jointly designed the annotation scheme and constructed a corpus consisting of 1,000 manually annotated MSE abstracts related to superconductivity. We conducted experiments on the corpus with a neural Named Entity Recognition (NER) tool. The experimental results show that NER performance on the corpus is around 77% in terms of micro-F1, which is comparable to human inter-annotator agreement rates. Using the trained NER model, we automatically annotated 9,000 abstracts and created a term retrieval tool based on term similarity. This tool can find superconductivity terms relevant to a query term within a specified Named Entity category, demonstrating the power of SC-CoMIcs in efficiently providing knowledge for Materials Informatics applications from rapidly expanding publications.
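A term retrieval tool of the kind described — ranking terms by vector similarity within a Named Entity category — can be sketched as follows. The embeddings, term names, and category labels below are toy values invented for illustration, not data from SC-CoMIcs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, terms, category, top_k=2):
    """Rank terms of the given NE category by similarity to the query term's vector."""
    query_vec = terms[query][0]
    cands = [(name, cosine(query_vec, vec))
             for name, (vec, cat) in terms.items()
             if cat == category and name != query]
    return [name for name, _ in sorted(cands, key=lambda x: -x[1])[:top_k]]

# Toy term embeddings with hypothetical NE categories.
terms = {
    "MgB2":    ([1.0, 0.2, 0.0], "Material"),
    "YBCO":    ([0.9, 0.3, 0.1], "Material"),
    "Tc":      ([0.1, 1.0, 0.0], "Property"),
    "cuprate": ([0.8, 0.1, 0.3], "Material"),
}
print(retrieve("MgB2", terms, "Material"))  # ['YBCO', 'cuprate']
```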


Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
Masaki Asada | Makoto Miwa | Yutaka Sasaki
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
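The core combination step — a graph convolution over the molecular graph whose output is concatenated with a text-encoder vector — can be sketched in miniature. This is not the authors' code: the adjacency matrix, atom features, and weights below are toy values, and a single hand-rolled GCN layer stands in for the full networks:

```python
# Minimal sketch of GCN encoding + concatenation with a text vector.
# All numbers are invented; the text CNN is represented by a fixed vector.

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def gcn_layer(adj, feats, weight):
    """One graph convolution: aggregate neighbour features, transform, ReLU."""
    out = []
    for row in adj:
        agg = [sum(row[j] * feats[j][k] for j in range(len(feats)))
               for k in range(len(feats[0]))]
        out.append([max(0.0, x) for x in matvec(weight, agg)])
    return out

# Toy 3-atom molecule: adjacency with self-loops, 2-d atom features.
adj = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weight = [[0.5, 0.0], [0.0, 0.5]]

# Mean-pool node vectors into one molecule vector.
mol_vec = [sum(col) / len(adj) for col in zip(*gcn_layer(adj, feats, weight))]

text_vec = [0.3, 0.7]           # stands in for the CNN sentence encoding
pair_repr = text_vec + mol_vec  # concatenated representation fed to the classifier
print(pair_repr)
```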


TTI-COIN at SemEval-2017 Task 10: Investigating Embeddings for End-to-End Relation Extraction from Scientific Papers
Tomoki Tsujimura | Makoto Miwa | Yutaka Sasaki
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes our TTI-COIN system that participated in SemEval-2017 Task 10. We investigated appropriate embeddings to adapt LSTM-ER, a neural end-to-end entity and relation extraction system, to this task. We participated in the full task setting of entity segmentation, entity classification, and relation classification (scenario 1) and the setting of relation classification only (scenario 3). Thanks to its generality and flexibility, the system was applied directly to scenario 1 without modifying the code. Our evaluation results show that the choice of appropriate pre-trained embeddings affected the performance significantly. With the best embeddings, our system was ranked third in scenario 1 with a micro-F1 score of 0.38. We also confirm that our system can produce a micro-F1 score of 0.48 for scenario 3 on the test data, which is close to the score of the third-ranked system in the task.

Analyzing Well-Formedness of Syllables in Japanese Sign Language
Satoshi Yawata | Makoto Miwa | Yutaka Sasaki | Daisuke Hara
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

This paper tackles the problem of analyzing the well-formedness of syllables in Japanese Sign Language (JSL). We formulate the problem as a classification task that labels syllables as well-formed or ill-formed. We build a data set that contains hand-coded syllables and their well-formedness. We define a fine-grained feature set based on the hand-coded syllables and train a logistic regression classifier on labeled syllables, expecting to find discriminative features from the trained classifier. We also perform pseudo active learning to investigate the applicability of active learning to analyzing syllables. In the experiments, the best classifier with our combinatorial features achieved an accuracy of 87.0%. Pseudo active learning also proved effective: compared to the model without active learning, it reduced the number of training instances by about 84% while still achieving an accuracy of 82.0%.

Utilizing Visual Forms of Japanese Characters for Neural Review Classification
Yota Toyama | Makoto Miwa | Yutaka Sasaki
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We propose a novel method that exploits visual information of ideograms and logograms in analyzing Japanese review documents. Our method first converts font images of Japanese characters into character embeddings using convolutional neural networks. It then constructs document embeddings from the character embeddings based on Hierarchical Attention Networks, which represent documents with attention mechanisms applied from the character level to the sentence level. The document embeddings are finally used to predict the labels of the documents. Our method provides a way to exploit visual features of characters in languages with ideograms and logograms. In the experiments, our method achieved an accuracy comparable to a character embedding-based model while requiring far fewer parameters, since it does not need to keep embeddings of thousands of characters.

Bib2vec: Embedding-based Search System for Bibliographic Information
Takuma Yoneda | Koki Mori | Makoto Miwa | Yutaka Sasaki
Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics

We propose a novel embedding model that represents relationships among several elements of bibliographic information with high representational ability and flexibility. Based on this model, we present a novel search system that shows the relationships among the elements in the ACL Anthology Reference Corpus. The evaluation results show that our model achieves high prediction ability and produces reasonable search results.

Extracting Drug-Drug Interactions with Attention CNNs
Masaki Asada | Makoto Miwa | Yutaka Sasaki
BioNLP 2017

We propose a novel attention mechanism for a Convolutional Neural Network (CNN)-based Drug-Drug Interaction (DDI) extraction model. CNNs have shown great potential for DDI extraction; however, attention mechanisms, which emphasize important words in the sentence of a target-entity pair, have not been investigated with CNNs, even though attention has been shown to be effective for general-domain relation classification. We evaluated our model on Task 9.2 of the DDIExtraction-2013 shared task. Our attention mechanism improved the performance of our base CNN-based DDI model, and the model achieved an F-score of 69.12%, which is competitive with state-of-the-art models.
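A word-level attention step of the kind described — scoring each word, normalising the scores with softmax, and weighting the word vectors before convolution — can be sketched as follows. The vectors and scores are invented; in the actual model the scores would be learned from each word's relation to the target entity pair:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(word_vecs, scores):
    """Scale each word vector by its softmax-normalised attention weight."""
    weights = softmax(scores)
    return [[w * x for x in vec] for w, vec in zip(weights, word_vecs)]

# Toy sentence of three words with 2-d embeddings; the second word gets
# the highest (invented) attention score.
word_vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [0.1, 2.0, 0.5]
weighted = attend(word_vecs, scores)
# The weighted vectors would then be fed into the convolution layers.
print(weighted)
```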


Distributional Hypernym Generation by Jointly Learning Clusters and Projections
Josuke Yamane | Tomoya Takatani | Hitoshi Yamada | Makoto Miwa | Yutaka Sasaki
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

We propose a novel word embedding-based hypernym generation model that jointly learns clusters of hyponym-hypernym relations, i.e., hypernymy, and projections from hyponym to hypernym embeddings. Most recent hypernym detection models focus on a hypernymy classification problem that determines whether a pair of words is in a hypernymy relation or not. These models do not directly address the hypernym generation problem, in which a model generates hypernyms for a given word. Unlike previous studies, our model jointly learns the clusters and projections while adjusting the number of clusters, so that the number of clusters can be determined depending on the learned projections and vice versa. Our model also boosts performance by incorporating inner product-based similarity measures and negative examples, i.e., sampled non-hypernyms, into its learning objectives. We evaluated our joint learning models on Japanese and English hypernym generation tasks and showed a significant improvement over an existing pipeline model. Our model also compared favorably to existing distributed hypernym detection models on the English hypernym classification task.
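The generation step alone — projecting a hyponym embedding with each cluster's matrix and returning the nearest word in the vocabulary — can be sketched as below. This is only the inference half under the assumption that the cluster projections are already learned; the projections, vocabulary, and vectors are all toy values:

```python
import math

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def generate_hypernym(hypo_vec, projections, vocab):
    """Apply each cluster's projection and return the nearest vocabulary word."""
    best = None
    for proj in projections:
        pred = matvec(proj, hypo_vec)
        for word, vec in vocab.items():
            d = dist(pred, vec)
            if best is None or d < best[0]:
                best = (d, word)
    return best[1]

# Two toy cluster projections (identity and a scaling matrix) and a
# two-word hypernym vocabulary; all values invented for illustration.
projections = [[[1, 0], [0, 1]], [[2, 0], [0, 2]]]
vocab = {"animal": [0.0, 1.0], "vehicle": [1.0, 0.0]}
print(generate_hypernym([0.9, 0.1], projections, vocab))  # vehicle
```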


Word Embedding-based Antonym Detection using Thesauri and Distributional Information
Masataka Ono | Makoto Miwa | Yutaka Sasaki
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies


Modeling Joint Entity and Relation Extraction with Table Representation
Makoto Miwa | Yutaka Sasaki
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)


Three BioNLP Tools Powered by a Biological Lexicon
Yutaka Sasaki | Paul Thompson | John McNaught | Sophia Ananiadou
Proceedings of the Demonstrations Session at EACL 2009


Event Frame Extraction Based on a Gene Regulation Corpus
Yutaka Sasaki | Paul Thompson | Philip Cotter | John McNaught | Sophia Ananiadou
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

How to Make the Most of NE Dictionaries in Statistical NER
Yutaka Sasaki | Yoshimasa Tsuruoka | John McNaught | Sophia Ananiadou
Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing


Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA
Yutaka Sasaki
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Context-Dependent SMT Model using Bilingual Verb-Noun Collocation
Young-Sook Hwang | Yutaka Sasaki
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Empirical Study of Utilizing Morph-Syntactic Information in SMT
Young-Sook Hwang | Taro Watanabe | Yutaka Sasaki
Second International Joint Conference on Natural Language Processing: Full Papers


Bilingual Knowledge Extraction Using Chunk Alignment
Young-Sook Hwang | Kyonghee Paik | Yutaka Sasaki
Proceedings of the 18th Pacific Asia Conference on Language, Information and Computation


Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data
Jun Suzuki | Tsutomu Hirao | Yutaka Sasaki | Eisaku Maeda
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

Spoken Interactive ODQA System: SPIQA
Chiori Hori | Takaaki Hori | Hajime Tsukada | Hideki Isozaki | Yutaka Sasaki | Eisaku Maeda
The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics

Question Classification using HDAG Kernel
Jun Suzuki | Hirotoshi Taira | Yutaka Sasaki | Eisaku Maeda
Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering


SVM Answer Selection for Open-Domain Question Answering
Jun Suzuki | Yutaka Sasaki | Eisaku Maeda
COLING 2002: The 19th International Conference on Computational Linguistics


Learning Semantic-Level Information Extraction Rules by Type-Oriented ILP
Yutaka Sasaki | Yoshihiro Matsuo
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics