Dongyu Ru


Learning Logic Rules for Document-Level Relation Extraction
Dongyu Ru | Changzhi Sun | Jiangtao Feng | Lin Qiu | Hao Zhou | Weinan Zhang | Yong Yu | Lei Li
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Document-level relation extraction aims to identify relations between entities in a whole document. Prior efforts to capture long-range dependencies have relied heavily on powerful implicit representations learned through (graph) neural networks, which makes the model less transparent. To tackle this challenge, in this paper we propose LogiRE, a novel probabilistic model for document-level relation extraction that learns logic rules. LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor. The rule generator proposes logic rules that potentially contribute to final predictions, and the relation extractor outputs final predictions based on the generated rules. The two modules can be optimized efficiently with the expectation-maximization (EM) algorithm. By introducing logic rules into neural networks, LogiRE can explicitly capture long-range dependencies and enjoys better interpretability. Empirical results show that LogiRE significantly outperforms several strong baselines in terms of relation performance and logical consistency. Our code is available at
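The abstract describes an EM-style alternation between a rule generator and a relation extractor, with logic rules treated as latent variables. Below is a minimal, hypothetical Python sketch of that alternation; every name (RuleGenerator, RelationExtractor, em_step) and the toy co-occurrence scorer are illustrative assumptions, not the authors' actual code or neural architecture.

# Hypothetical sketch of the EM-style alternation described in the abstract.
# All names are illustrative, not the authors' API. Rules are latent variables:
# the E-step weights candidate rules by how well they explain the target
# relation, and the M-step (only sketched here) would update both modules.

import random

class RuleGenerator:
    """Proposes candidate logic rules, e.g. r1(x,z) ^ r2(z,y) -> r(x,y)."""
    def __init__(self, relations):
        self.relations = relations

    def sample_rules(self, target, k=4):
        # Each candidate rule is a chain of body relations implying `target`.
        return [tuple(random.sample(self.relations, 2)) + (target,)
                for _ in range(k)]

class RelationExtractor:
    """Scores a target relation given the evidence a rule points to."""
    def score(self, doc, rule):
        # Placeholder scorer: count how often the rule's body relations
        # appear as edges in the document's entity graph. A real extractor
        # would be a neural scorer over entity-pair representations.
        body = rule[:-1]
        return sum(1 for edge in doc["edges"] if edge[2] in body)

def em_step(doc, generator, extractor, target):
    # E-step: weight each candidate rule by how well it explains the target.
    candidates = generator.sample_rules(target)
    weights = [extractor.score(doc, r) for r in candidates]
    total = sum(weights) or 1
    posterior = [w / total for w in weights]
    # M-step (sketched): both modules would be updated to favor rules with
    # high posterior weight; here we simply return the best-scoring rule.
    return max(zip(posterior, candidates))

doc = {"edges": [("a", "b", "born_in"), ("b", "c", "capital_of")]}
gen = RuleGenerator(["born_in", "capital_of", "citizen_of"])
ext = RelationExtractor()
print(em_step(doc, gen, ext, target="citizen_of"))

Running the script prints the highest-weighted candidate rule for the target relation; the point of the sketch is only the E-step/M-step division of labor between the two modules, not the scoring itself.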


Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete Space
Dongyu Ru | Jiangtao Feng | Lin Qiu | Hao Zhou | Mingxuan Wang | Weinan Zhang | Yong Yu | Lei Li
Findings of the Association for Computational Linguistics: EMNLP 2020

Active learning for sentence understanding aims to discover informative unlabeled data for annotation, thereby reducing the demand for labeled data. We argue that the typical uncertainty sampling method for active learning is time-consuming and can hardly work in real time, which may lead to ineffective sample selection. We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently. AUSDS maps sentences into a latent space produced by popular pre-trained language models, and discovers informative unlabeled text samples for annotation via adversarial attack. The proposed approach is extremely efficient compared with traditional uncertainty sampling, with more than a 10x speedup. Experimental results on five datasets show that AUSDS outperforms strong baselines in effectiveness.
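The abstract describes perturbing a sentence embedding with an adversarial attack and then mapping the adversarial point back to discrete text by retrieving nearby unlabeled sentences. Below is a minimal numpy sketch of that idea under stated assumptions: a toy linear probe stands in for the pre-trained language model classifier, the attack is a generic FGSM-style step, and all names (fgsm_probe, select_samples) are hypothetical, not the AUSDS implementation.

# Hypothetical sketch of adversarial uncertainty sampling in a latent space.
# A linear scorer replaces the PLM-based model purely for illustration: we
# step an embedding along the loss gradient (FGSM-style), then return the
# nearest unlabeled embeddings as the samples to send for annotation.

import numpy as np

rng = np.random.default_rng(0)
dim, n_unlabeled = 16, 100

W = rng.normal(size=dim)                          # toy linear probe weights
unlabeled = rng.normal(size=(n_unlabeled, dim))   # stand-in PLM embeddings

def fgsm_probe(x, eps=0.5):
    """One FGSM-style step: move x toward the decision boundary."""
    # For a linear scorer s(x) = W.x with logistic loss against the current
    # prediction, the input gradient is proportional to -pred * W; stepping
    # along sign(grad) pushes the embedding toward higher model uncertainty.
    pred = 1.0 if W @ x > 0 else -1.0
    grad = -pred * W
    return x + eps * np.sign(grad)

def select_samples(x, k=5):
    """Map the adversarial point back to discrete space via nearest neighbors."""
    x_adv = fgsm_probe(x)
    dists = np.linalg.norm(unlabeled - x_adv, axis=1)
    return np.argsort(dists)[:k]      # indices of unlabeled samples to annotate

print(select_samples(unlabeled[0]))

The speedup claimed in the abstract is plausible under this scheme because a single gradient step plus a nearest-neighbor lookup replaces scoring the model's uncertainty on every unlabeled sentence.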