Masashi Shimbo


2024

Rethinking Loss Functions for Fact Verification
Yuta Mukobara | Yutaro Shigeto | Masashi Shimbo
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)

We explore loss functions for fact verification in the FEVER shared task. While the cross-entropy loss is a standard objective for training verdict predictors, it fails to capture the heterogeneity among the FEVER verdict classes. In this paper, we develop two task-specific objectives tailored to FEVER. Experimental results confirm that the proposed objective functions outperform the standard cross-entropy. Performance improves further when these objectives are combined with simple class weighting, which effectively counters the imbalance in the training data. The source code is available at https://github.com/yuta-mukobara/RLF-KGAT.
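The two task-specific objectives are not detailed in the abstract, but the class-weighting component it mentions can be illustrated in isolation. A minimal PyTorch sketch, assuming inverse-frequency weights over the three FEVER verdict classes; the counts and the rescaling scheme here are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Hypothetical verdict counts for SUPPORTS, REFUTES, and
# NOT ENOUGH INFO; the real FEVER distribution differs.
class_counts = torch.tensor([80.0, 30.0, 36.0])

# Inverse-frequency class weights, rescaled to average 1.
weights = 1.0 / class_counts
weights = weights * len(class_counts) / weights.sum()

# Weighted cross-entropy: errors on rare classes cost more.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 3)           # 4 claims, 3 verdict classes
labels = torch.tensor([0, 2, 1, 0])  # gold verdict indices
print(criterion(logits, labels).item())
```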

2020

A Greedy Bit-flip Training Algorithm for Binarized Knowledge Graph Embeddings
Katsuhiko Hayashi | Koki Kishimoto | Masashi Shimbo
Findings of the Association for Computational Linguistics: EMNLP 2020

This paper presents a simple and effective discrete optimization method for training the binarized knowledge graph embedding model B-CP. Unlike prior work, which uses SGD-based optimization and quantization of real-valued vectors, the proposed method directly optimizes binary embedding vectors through a series of bit-flipping operations. On standard knowledge graph completion tasks, the B-CP model trained with the proposed method achieves performance comparable to that of SGD-trained B-CP, as well as to state-of-the-art real-valued models with similar embedding dimensions.
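A minimal sketch of the greedy bit-flip idea, on a toy least-squares objective rather than the actual B-CP scoring function: repeatedly flip the single bit that most reduces the loss, and stop when no flip helps.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(b, X, y):
    """Squared error of a linear scorer with a {-1,+1} weight vector b."""
    return np.sum((X @ b - y) ** 2)

# Toy data: recover a hidden binary vector from noisy linear measurements.
n = 16
b_true = rng.choice([-1.0, 1.0], size=n)
X = rng.normal(size=(100, n))
y = X @ b_true + 0.1 * rng.normal(size=100)

b = rng.choice([-1.0, 1.0], size=n)  # random binary initialization
improved = True
while improved:
    improved = False
    best_i, best_loss = None, loss(b, X, y)
    # Try flipping each bit; keep the flip with the largest improvement.
    for i in range(n):
        b[i] = -b[i]
        new_loss = loss(b, X, y)
        b[i] = -b[i]  # undo the trial flip
        if new_loss < best_loss:
            best_i, best_loss = i, new_loss
    if best_i is not None:
        b[best_i] = -b[best_i]
        improved = True

print("recovered hidden vector:", np.all(b == b_true))
```

The loop terminates because the loss strictly decreases at every accepted flip and the number of binary configurations is finite.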

2019

A Non-commutative Bilinear Model for Answering Path Queries in Knowledge Graphs
Katsuhiko Hayashi | Masashi Shimbo
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Bilinear diagonal models for knowledge graph embedding (KGE), such as DistMult and ComplEx, balance expressiveness and computational efficiency by representing relations as diagonal matrices. Although they perform well at predicting atomic relations, composite relations (relation paths) cannot be modeled naturally by the product of relation matrices, because the product of diagonal matrices is commutative and hence invariant to the order of relations. In this paper, we propose a new bilinear KGE model, called BlockHolE, based on block circulant matrices. In BlockHolE, relation matrices can be non-commutative, allowing composite relations to be modeled by matrix products. The model is parameterized so as to cover a spectrum ranging from diagonal to full relation matrices. A fast computation technique can be developed on the basis of the duality between circulant matrix products and elementwise products in the Fourier domain.
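A small numpy check of the two algebraic facts the abstract relies on, using a generic block-circulant construction rather than the BlockHolE parameterization itself: circulant-by-vector products reduce to elementwise products in the Fourier domain, and block matrices with circulant blocks, unlike diagonal (or plain circulant) matrices, need not commute.

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c."""
    return np.stack([np.roll(c, k) for k in range(len(c))], axis=1)

rng = np.random.default_rng(0)
n = 4

# Fourier duality: multiplying by a circulant matrix is an
# elementwise product in the frequency domain (this is what
# makes circulant-based models cheap to compute).
c, x = rng.normal(size=n), rng.normal(size=n)
direct = circulant(c) @ x
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(direct, via_fft))  # True

def block_circulant(blocks):
    """Assemble a block matrix from a 2x2 grid of circulant blocks."""
    return np.block([[circulant(b) for b in row] for row in blocks])

# Single circulant matrices commute (they share the DFT eigenbasis),
# but block matrices with circulant blocks generally do not:
A = block_circulant(rng.normal(size=(2, 2, n)))
B = block_circulant(rng.normal(size=(2, 2, n)))
print(np.allclose(A @ B, B @ A))  # False: order now matters
```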

2018

Neural Tensor Networks with Diagonal Slice Matrices
Takahiro Ishihara | Katsuhiko Hayashi | Hitoshi Manabe | Masashi Shimbo | Masaaki Nagata
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Although neural tensor networks (NTNs) have been successful in many NLP tasks, they require a large number of parameters to be estimated, which often leads to overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce its number of parameters. First, we evaluate the proposed NTN models on knowledge graph completion. Second, we extend the models to recursive NTNs (RNTNs) and evaluate them on logical reasoning tasks. These experiments show that the proposed models learn better and faster than the original (R)NTNs.
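The parameter saving is straightforward to quantify: k dense d-by-d slices cost k*d^2 parameters, while diagonal slices cost only k*d, and the bilinear form with a diagonal slice collapses to an elementwise product. A toy comparison, assuming the standard NTN bilinear term e1^T W[i] e2; the eigendecomposition details in the paper may differ:

```python
import numpy as np

d, k = 100, 4  # embedding dimension, number of tensor slices

dense_params = k * d * d  # k full d x d slice matrices
diag_params = k * d       # k diagonal slices
print(dense_params, diag_params)  # 40000 vs. 400

# With a diagonal slice, the bilinear form reduces to a cheap
# elementwise product: e1^T diag(w) e2 == sum(w * e1 * e2).
rng = np.random.default_rng(0)
e1, e2, w = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
full = e1 @ np.diag(w) @ e2
fast = np.sum(w * e1 * e2)
print(np.allclose(full, fast))  # True
```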

Ranking-Based Automatic Seed Selection and Noise Reduction for Weakly Supervised Relation Extraction
Van-Thuy Phi | Joan Santoso | Masashi Shimbo | Yuji Matsumoto
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

This paper addresses two tasks: automatic seed selection for bootstrapping relation extraction, and noise reduction for distantly supervised relation extraction. We first point out that these tasks are related. Then, inspired by the ranking of relation instances and patterns computed by the HITS algorithm, and by the selection of cluster centroids using K-means, LSA, or NMF, we propose methods for selecting initial seeds from an existing resource and for reducing the level of noise in distantly labeled data. Experiments show that the proposed methods outperform the baseline systems on both tasks.
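A generic HITS power iteration on a hypothetical instance-pattern co-occurrence matrix, to illustrate the kind of ranking the abstract builds on; this is not the paper's exact pipeline, and the matrix below is made up:

```python
import numpy as np

# Hypothetical co-occurrence matrix: A[i, j] = how often relation
# instance i is extracted by pattern j.
A = np.array([
    [3., 1., 0.],
    [2., 2., 1.],
    [0., 0., 5.],
    [1., 0., 0.],
])

# HITS power iteration: instance scores (authorities) and
# pattern scores (hubs) reinforce each other.
inst = np.ones(A.shape[0])
for _ in range(50):
    pat = A.T @ inst
    pat /= np.linalg.norm(pat)
    inst = A @ pat
    inst /= np.linalg.norm(inst)

# High-scoring instances are candidate seeds; low-scoring distantly
# labeled instances are candidates for removal as noise.
print("instance ranking:", np.argsort(-inst))
print("pattern ranking: ", np.argsort(-pat))
```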

2017

On the Equivalence of Holographic and Complex Embeddings for Link Prediction
Katsuhiko Hayashi | Masashi Shimbo
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We show the equivalence of two state-of-the-art models for link prediction/knowledge graph completion: Nickel et al.'s holographic embeddings and Trouillon et al.'s complex embeddings. We first consider a spectral version of the holographic embeddings that exploits the frequency domain of the Fourier transform for efficient computation. Analysis of the resulting model reveals that it can be viewed as an instance of the complex embeddings with a certain constraint imposed on the initial vectors at training time. Conversely, any set of complex embeddings can be converted into a set of equivalent holographic embeddings.
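The claimed equivalence can be checked numerically: the holographic score <r, e1 * e2>, where * denotes circular correlation, equals a complex-embedding-style trilinear form over the discrete Fourier transforms of the same vectors. A small numpy verification; the 1/n factor reflects numpy's unnormalized FFT convention:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
r, e1, e2 = (rng.normal(size=n) for _ in range(3))

# Holographic embeddings: score = <r, e1 (star) e2>, where (star)
# is circular correlation, computed here via the FFT identity
# F(a star b) = conj(F(a)) * F(b).
corr = np.fft.ifft(np.conj(np.fft.fft(e1)) * np.fft.fft(e2)).real
hole_score = r @ corr

# Complex-embedding form: the same score as the real part of a
# trilinear product in the frequency domain (Parseval's theorem).
R, E1, E2 = np.fft.fft(r), np.fft.fft(e1), np.fft.fft(e2)
complex_score = np.real(np.sum(R * E1 * np.conj(E2))) / n

print(np.allclose(hole_score, complex_score))  # True
```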

2015

Coordination-Aware Dependency Parsing (Preliminary Report)
Akifumi Yoshimoto | Kazuo Hara | Masashi Shimbo | Yuji Matsumoto
Proceedings of the 14th International Conference on Parsing Technologies

2013

Modeling and Learning Semantic Co-Compositionality through Prototype Projections and Neural Networks
Masashi Tsubaki | Kevin Duh | Masashi Shimbo | Yuji Matsumoto
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Centering Similarity Measures to Reduce Hubs
Ikumi Suzuki | Kazuo Hara | Masashi Shimbo | Marco Saerens | Kenji Fukumizu
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

2012

Walk-based Computation of Contextual Word Similarity
Kazuo Hara | Ikumi Suzuki | Masashi Shimbo | Yuji Matsumoto
Proceedings of COLING 2012

2011

HITS-based Seed Selection and Stop List Construction for Bootstrapping
Tetsuo Kiso | Masashi Shimbo | Mamoru Komachi | Yuji Matsumoto
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Using the Mutual k-Nearest Neighbor Graphs for Semi-supervised Classification on Natural Language Data
Kohei Ozaki | Masashi Shimbo | Mamoru Komachi | Yuji Matsumoto
Proceedings of the Fifteenth Conference on Computational Natural Language Learning

2009

Coordinate Structure Analysis with Global Structural Constraints and Alignment-Based Local Features
Kazuo Hara | Masashi Shimbo | Hideharu Okuma | Yuji Matsumoto
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

Bypassed alignment graph for learning coordination in Japanese sentences
Hideharu Okuma | Kazuo Hara | Masashi Shimbo | Yuji Matsumoto
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

Generic Text Summarization Using Probabilistic Latent Semantic Indexing
Harendra Bhandari | Masashi Shimbo | Takahiko Ito | Yuji Matsumoto
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

Graph-based Analysis of Semantic Drift in Espresso-like Bootstrapping Algorithms
Mamoru Komachi | Taku Kudo | Masashi Shimbo | Yuji Matsumoto
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

A Discriminative Learning Model for Coordinate Conjunctions
Masashi Shimbo | Kazuo Hara
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)