Komal Teru


2023

Semi-supervised Relation Extraction via Data Augmentation and Consistency-training
Komal Teru
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Due to the semantic complexity of the relation extraction (RE) task, obtaining high-quality human-labelled data is an expensive and noisy process. To improve the sample efficiency of models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to learning from limited labelled data points. Recently, strong data augmentation combined with consistency-based semi-supervised learning methods has advanced the state of the art in several SSL tasks. However, adapting these methods to the RE task has been challenging due to the difficulty of data augmentation for RE. In this work, we leverage recent advances in controlled text generation to perform high-quality data augmentation for the RE task. We further introduce small but significant changes to the model architecture that allow for the generation of more training data by interpolating different data points in their latent space. These data augmentations, along with consistency training, yield very competitive results for semi-supervised relation extraction on four benchmark datasets.
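As a rough illustration of the consistency-training recipe the abstract describes, the sketch below combines a supervised loss on labelled examples with a KL-divergence consistency loss between predictions on an unlabelled example and its augmentation, plus a mixup-style latent interpolation. `model`, the batch structure, and the interpolation scheme are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def latent_mixup(h1, h2, alpha=0.4):
    # Mixup-style interpolation of two latent representations: one way to
    # synthesize extra training points in latent space (an assumed scheme,
    # not necessarily the paper's exact formulation).
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * h1 + (1.0 - lam) * h2

def consistency_training_step(model, labeled_batch, unlabeled_batch, lam_u=1.0):
    # Supervised cross-entropy on the labelled examples plus a consistency
    # loss that pushes predictions on an unlabelled example and its
    # augmented version to agree.
    x, y = labeled_batch            # labelled inputs and relation labels
    u, u_aug = unlabeled_batch      # unlabelled input and its augmentation

    sup_loss = F.cross_entropy(model(x), y)

    with torch.no_grad():
        p = F.softmax(model(u), dim=-1)          # fixed target distribution
    log_q = F.log_softmax(model(u_aug), dim=-1)  # prediction on augmentation
    cons_loss = F.kl_div(log_q, p, reduction="batchmean")

    return sup_loss + lam_u * cons_loss
```

The stop-gradient on the original example's prediction is a common choice in consistency-based SSL; it keeps the augmented view being pulled toward the clean view rather than both drifting together.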

2021

Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs
Dora Jambor | Komal Teru | Joelle Pineau | William L. Hamilton
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Real-world knowledge graphs are often characterized by low-frequency relations, a challenge that has prompted increasing interest in few-shot link prediction methods. These methods perform link prediction for a set of new relations, unseen during training, given only a few example facts of each relation at test time. In this work, we perform a systematic study of a spectrum of models derived by generalizing the current state of the art for few-shot link prediction, with the goal of probing the limits of learning in this few-shot setting. We find that a simple zero-shot baseline, which ignores any relation-specific information, achieves surprisingly strong performance. Moreover, experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information and only allows them to exploit coarse-grained positional information about entities. Together, our findings challenge the implicit assumptions and inductive biases of prior work and highlight new directions for research in this area.
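The abstract describes the zero-shot baseline only at a high level; the sketch below shows one plausible reading, scoring candidate tail entities with entity embeddings alone so that relation-specific information plays no role in the ranking. All names, shapes, and the dot-product scoring function are illustrative assumptions, not the paper's definition of the baseline.

```python
import torch

def zero_shot_scores(entity_emb, head_ids, candidate_ids):
    # Score (head, ?, tail) queries with a plain dot product between entity
    # embeddings; the relation and its few support facts play no role.
    heads = entity_emb[head_ids]          # (batch, dim)
    cands = entity_emb[candidate_ids]     # (num_candidates, dim)
    return heads @ cands.T                # (batch, num_candidates)

# Usage: rank all entities as tails for a batch of query heads.
entity_emb = torch.randn(1000, 64)        # toy pretrained entity embeddings
scores = zero_shot_scores(entity_emb, torch.tensor([3, 7]), torch.arange(1000))
print(scores.argsort(dim=-1, descending=True)[:, :10])  # top-10 candidates per query
```

A baseline of this shape can only exploit whatever positional or neighbourhood signal is already baked into the entity embeddings, which is exactly why its strong performance would suggest that few-shot models are not extracting much fine-grained, relation-specific structure.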