Bootstrapping Neural Relation and Explanation Classifiers

Zheng Tang, Mihai Surdeanu


Abstract
We introduce a method that self-trains (or bootstraps) neural relation and explanation classifiers. Our work extends the supervised approach of CITATION, which jointly trains a relation classifier with an explanation classifier that identifies context words important for the relation at hand, to semi-supervised scenarios. In particular, our approach iteratively converts the explainable models’ outputs to rules and applies them to unlabeled text to produce new annotations. Our evaluation on the TACRED dataset shows that our method outperforms the rule-based model we started from by 15 F1 points, outperforms traditional self-training that relies only on the relation classifier by 5 F1 points, and performs comparably to the prompt-based approach of CITATION (without requiring an additional natural language inference component).
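The bootstrapping loop sketched in the abstract can be illustrated as follows. This is a minimal sketch of the general idea, not the authors' implementation; the callables (train_classifiers, extract_rules, apply_rules) are hypothetical placeholders for the paper's components.

```python
from typing import Callable, List, Tuple

def bootstrap(
    labeled: List,
    unlabeled: List,
    # Hypothetical components standing in for the paper's models/rules:
    train_classifiers: Callable[[List], Tuple[object, object]],
    extract_rules: Callable[[object, List], List],
    apply_rules: Callable[[List, List], Tuple[List, List]],
    n_iterations: int = 5,
) -> object:
    """Semi-supervised bootstrapping: grow the labeled pool by turning
    explanation-classifier outputs into rules, matching them against
    unlabeled text, and retraining."""
    relation_clf = None
    for _ in range(n_iterations):
        # Jointly train the relation classifier and the explanation
        # classifier on the current labeled pool.
        relation_clf, explanation_clf = train_classifiers(labeled)

        # Convert the context words the explanation classifier marks as
        # important into symbolic extraction rules.
        rules = extract_rules(explanation_clf, labeled)

        # Apply the rules to unlabeled sentences; matches become new
        # (noisy) annotations, the rest remain unlabeled.
        new_labeled, unlabeled = apply_rules(rules, unlabeled)
        labeled = labeled + new_labeled

    return relation_clf
```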
Anthology ID:
2023.acl-short.5
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
48–56
URL:
https://aclanthology.org/2023.acl-short.5
DOI:
10.18653/v1/2023.acl-short.5
Cite (ACL):
Zheng Tang and Mihai Surdeanu. 2023. Bootstrapping Neural Relation and Explanation Classifiers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 48–56, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Bootstrapping Neural Relation and Explanation Classifiers (Tang & Surdeanu, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-short.5.pdf
Video:
https://aclanthology.org/2023.acl-short.5.mp4