Ran Tian


2020

Local Additivity Based Data Augmentation for Semi-supervised NER
Jiaao Chen | Zhenghui Wang | Ran Tian | Zichao Yang | Diyi Yang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Named Entity Recognition (NER) is one of the first stages in deep language understanding, yet current NER models heavily rely on human-annotated data. In this work, to alleviate the dependence on labeled data, we propose a Local Additivity based Data Augmentation (LADA) method for semi-supervised NER, in which we create virtual samples by interpolating sequences close to each other. Our approach has two variations: Intra-LADA and Inter-LADA, where Intra-LADA performs interpolations among tokens within one sentence, and Inter-LADA samples different sentences to interpolate. Through linear additions between sampled training data, LADA creates an infinite amount of labeled data and improves both entity and context learning. We further extend LADA to the semi-supervised setting by designing a novel consistency loss for unlabeled data. Experiments conducted on two NER benchmarks demonstrate the effectiveness of our methods over several strong baselines. We have publicly released our code at https://github.com/GT-SALT/LADA
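The interpolation at the core of LADA is a mixup-style linear mixing of token representations and label distributions. Below is a minimal, hypothetical PyTorch sketch of that idea; the tensor shapes, the Beta-sampling scheme, and the sequence alignment are assumptions for illustration, not the released implementation.

```python
import torch

def lada_interpolate(hidden_a, labels_a, hidden_b, labels_b, alpha=8.0):
    """Mixup-style interpolation of two aligned token sequences (illustrative sketch).

    hidden_*: [seq_len, dim] token hidden states
    labels_*: [seq_len, num_tags] one-hot (or soft) tag distributions
    """
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    lam = max(lam, 1.0 - lam)  # keep the mix close to the first sequence
    mixed_hidden = lam * hidden_a + (1.0 - lam) * hidden_b
    mixed_labels = lam * labels_a + (1.0 - lam) * labels_b
    return mixed_hidden, mixed_labels

# Intra-LADA: interpolate a sentence with a token-shuffled copy of itself.
# Inter-LADA: sample a different (padded and aligned) sentence as the second input.
```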

2018

Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Ryo Takahashi | Ran Tian | Kentaro Inui
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low-dimensional sub-manifolds in the parameter space of arbitrary matrices – for one reason, the composition of two relations M1, M2 may match a third M3 (e.g. the composition of the relations currency_of_country and country_of_film usually matches currency_of_film_budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M1*M2=M3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art results on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discover compositional constraints, and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
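The compositional constraint mentioned above (M1*M2 = M3 for composable relations) can be illustrated numerically. The following NumPy sketch uses made-up relation matrices and is only an illustration of the constraint; it is not the model or training code released at github.com/tianran/glimvec.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Hypothetical relation matrices: an entity vector v is mapped as v @ M.
M_country_of_film = rng.normal(size=(dim, dim))      # film -> country
M_currency_of_country = rng.normal(size=(dim, dim))  # country -> currency

# Composing the two relations: film -> country -> currency.
M_composed = M_country_of_film @ M_currency_of_country

# The constraint says a third relation should match the product (here by construction).
M_currency_of_film_budget = M_composed + 0.01 * rng.normal(size=(dim, dim))
residual = np.linalg.norm(M_composed - M_currency_of_film_budget)
print(f"compositional residual ||M1*M2 - M3|| = {residual:.3f}")
```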

2017

The Challenge of Composition in Distributional and Formal Semantics
Ran Tian | Koji Mineshima | Pascual Martínez-Gómez
Proceedings of the IJCNLP 2017, Tutorial Abstracts

This is a tutorial proposal. The abstract is as follows: The principle of compositionality states that the meaning of a complete sentence must be explained in terms of the meanings of its subsentential parts; in other words, each syntactic operation should have a corresponding semantic operation. In recent years, it has become increasingly evident that distributional and formal semantics are complementary in addressing composition: while the distributional/vector-based approach can naturally measure semantic similarity (Mitchell and Lapata, 2010), the formal/symbolic approach has a long tradition within logic-based semantic frameworks (Montague, 1974) and can readily be connected to theorem provers or databases to perform complicated tasks. In this tutorial, we will cover recent efforts in extending word vectors to account for composition and reasoning, the various challenging phenomena observed in composition and addressed by formal semantics, and a hybrid approach that combines the merits of the two. The outline and introductions of the instructors are found in the submission. Ran Tian taught a tutorial at the Annual Meeting of the Association for Natural Language Processing in Japan, 2015, with an estimated audience of about one hundred; only a limited part of the content of this tutorial is drawn from that one. Koji Mineshima taught a one-week course at the 28th European Summer School in Logic, Language and Information (ESSLLI 2016) together with Prof. Daisuke Bekki; only a small part of its content is shared with this tutorial. Tutorials on “CCG Semantic Parsing” were given at ACL 2013, EMNLP 2014, and AAAI 2015, and a tutorial on “Deep Learning for Semantic Composition” will be given at ACL 2017; the contents of these tutorials are related to, but do not overlap with, our proposal.
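As a toy illustration of the vector-based side of composition referenced above (additive composition scored with cosine similarity, in the spirit of Mitchell and Lapata, 2010), here is a small Python sketch; the vocabulary and vector values are invented for illustration.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word vectors, invented for illustration.
vec = {
    "black":  np.array([0.9, 0.1, 0.0]),
    "cat":    np.array([0.1, 0.8, 0.3]),
    "dark":   np.array([0.8, 0.2, 0.1]),
    "feline": np.array([0.0, 0.9, 0.2]),
}

# Additive composition: a phrase vector is the sum of its word vectors.
black_cat = vec["black"] + vec["cat"]
dark_feline = vec["dark"] + vec["feline"]
print(cosine(black_cat, dark_feline))  # similar phrases receive high similarity
```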

2016

Dynamic Entity Representation with Max-pooling Improves Machine Reading
Sosuke Kobayashi | Ran Tian | Naoaki Okazaki | Kentaro Inui
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Question-Answering with Logic Specific to Video Games
Corentin Dumont | Ran Tian | Kentaro Inui
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We present a corpus and a knowledge database aimed at developing Question-Answering in a new context, the open world of a video game. We chose the popular game ‘Minecraft’ and created a QA corpus, a knowledge database related to this game, and the ontology of a meaning representation that will be used to structure this database. We are interested in the logic rules specific to the game, which may not exist in the real world. The ultimate goal of this research is to build a QA system that can answer natural language questions from players by using inference on these game-specific logic rules. The QA corpus is composed partially of online quiz questions and partially of manually written variations of the most relevant ones. The knowledge database is extracted from several wiki-like websites about Minecraft. It is composed of unstructured data, such as text, which will be structured using the meaning representation we defined, and of already structured data such as infoboxes. A preliminary examination of the data shows that players ask creative questions about the game, and that the QA corpus can be used for clustering verbs and linking them to predefined actions in the game.
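To make the setup concrete, here is a hypothetical example of the kind of structured fact and linked QA pair such a pipeline implies; the schema, field names, and values are invented for illustration and do not reflect the released corpus format.

```python
# Hypothetical knowledge-base record extracted from a wiki infobox (invented schema).
fact = {
    "subject": "iron_pickaxe",
    "relation": "crafted_from",
    "objects": ["iron_ingot", "stick"],
}

# Hypothetical QA pair from the corpus, linked to the supporting fact.
qa = {
    "question": "What do I need to craft an iron pickaxe?",
    "answer": ["iron_ingot", "stick"],
    "supporting_fact": fact,
}
```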

Learning Semantically and Additively Compositional Distributional Representations
Ran Tian | Naoaki Okazaki | Kentaro Inui
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Reducing Lexical Features in Parsing by Word Embeddings
Hiroya Komatsu | Ran Tian | Naoaki Okazaki | Kentaro Inui
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation

2014

Efficient Logical Inference for Semantic Processing
Ran Tian | Yusuke Miyao | Takuya Matsuzaki
Proceedings of the ACL 2014 Workshop on Semantic Parsing

Encoding Generalized Quantifiers in Dependency-based Compositional Semantics
Yubing Dong | Ran Tian | Yusuke Miyao
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

Logical Inference on Dependency-based Compositional Semantics
Ran Tian | Yusuke Miyao | Takuya Matsuzaki
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)