%0 Conference Proceedings
%T Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction
%A Hu, Xuming
%A Zhang, Chenwei
%A Yang, Yawen
%A Li, Xiaohe
%A Lin, Li
%A Wen, Lijie
%A Yu, Philip S.
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F hu-etal-2021-gradient
%X Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce. Existing works either utilize a self-training scheme to generate pseudo labels, which causes the gradual drift problem, or leverage a meta-learning scheme that does not explicitly solicit feedback. To alleviate the selection bias due to the lack of feedback loops in existing LRE learning paradigms, we develop a Gradient Imitation Reinforcement Learning method that encourages pseudo-labeled data to imitate the gradient descent direction on labeled data and bootstraps its optimization capability through trial and error. We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction. Besides the scenario where unlabeled data is sufficient, GradLRE handles the situation where no unlabeled data is available by exploiting a contextualized augmentation method to generate data. Experimental results on two public datasets demonstrate the effectiveness of GradLRE on low-resource relation extraction compared with baselines.
%R 10.18653/v1/2021.emnlp-main.216
%U https://aclanthology.org/2021.emnlp-main.216
%U https://doi.org/10.18653/v1/2021.emnlp-main.216
%P 2737-2746