TERL: Transformer Enhanced Reinforcement Learning for Relation Extraction

Wang Yashen, Shi Tuo, Ouyang Xiaoye, Guo Dayu


Abstract
The Relation Extraction (RE) task aims to discover the semantic relation that holds between two entities, and contributes to many applications such as knowledge graph construction and completion. Reinforcement Learning (RL) has been widely used for the RE task and has achieved state-of-the-art results; such methods are mainly designed with rewards to choose the optimal actions during training, improving RE performance especially under low-resource conditions. Recent work has shown that offline or online RL can be flexibly formulated as a sequence-understanding problem and solved via approaches similar to large-scale pre-trained language modeling. To strengthen the ability to understand the interactions among semantic signals in a given text sequence, this paper leverages the Transformer architecture for RL-based RE methods and proposes a generic framework called Transformer Enhanced RL (TERL) for the RE task. Unlike prior RL-based RE approaches, which usually fit value functions or compute policy gradients, TERL directly outputs the best actions by utilizing a masked Transformer. Experimental results show that the proposed TERL framework can improve many state-of-the-art RL-based RE methods.
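To make the abstract's core idea concrete, the following is a minimal illustrative sketch (not the authors' code) of what "outputting the best actions via a masked Transformer" means: a single causal self-attention layer over a sequence, with action logits read off the final position and the action chosen greedily, rather than fitting a value function or computing policy gradients. All function names, weight matrices, and dimensions here are assumptions for illustration only.

```python
import numpy as np

def causal_attention(x, Wq, Wk, Wv):
    """x: (T, d) sequence of token features; returns (T, d) attended
    features. A lower-triangular (causal) mask ensures step t only
    attends to steps <= t, as in masked-Transformer decoders."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9                      # block attention to future steps
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def best_action(x, Wq, Wk, Wv, Wa):
    """Greedy action id from the final position's action logits."""
    h = causal_attention(x, Wq, Wk, Wv)
    logits = h[-1] @ Wa                      # shape: (num_actions,)
    return int(np.argmax(logits))

# Toy usage with random weights (hypothetical dimensions).
rng = np.random.default_rng(0)
T, d, n_actions = 5, 8, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Wa = rng.standard_normal((d, n_actions))
a = best_action(x, Wq, Wk, Wv, Wa)
```

The design point the abstract makes is captured by `best_action`: the sequence model is trained to emit actions directly, so inference reduces to a forward pass plus an argmax, with no separate value head or policy-gradient estimator.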
Anthology ID:
2023.ccl-1.58
Volume:
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
Month:
August
Year:
2023
Address:
Harbin, China
Editors:
Maosong Sun, Bing Qin, Xipeng Qiu, Jing Jiang, Xianpei Han
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Pages:
677–688
Language:
English
URL:
https://aclanthology.org/2023.ccl-1.58
Cite (ACL):
Wang Yashen, Shi Tuo, Ouyang Xiaoye, and Guo Dayu. 2023. TERL: Transformer Enhanced Reinforcement Learning for Relation Extraction. In Proceedings of the 22nd Chinese National Conference on Computational Linguistics, pages 677–688, Harbin, China. Chinese Information Processing Society of China.
Cite (Informal):
TERL: Transformer Enhanced Reinforcement Learning for Relation Extraction (Yashen et al., CCL 2023)
PDF:
https://aclanthology.org/2023.ccl-1.58.pdf