Unsupervised Information Extraction: Regularizing Discriminative Approaches with Relation Distribution Losses

Étienne Simon, Vincent Guigue, Benjamin Piwowarski


Abstract
Unsupervised relation extraction aims at extracting relations between entities in text. Previous unsupervised approaches are either generative or discriminative. In a supervised setting, discriminative approaches, such as deep neural network classifiers, have demonstrated substantial improvement. However, these models are hard to train without supervision, and the currently proposed solutions are unstable. To overcome this limitation, we introduce a skewness loss which encourages the classifier to predict a relation with confidence given a sentence, and a distribution distance loss enforcing that all relations are predicted on average. These losses improve the performance of discriminative models, and enable us to train deep neural networks satisfactorily, surpassing the current state of the art on three different datasets.
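The two regularizers described in the abstract can be sketched as follows. This is an illustrative reading only, not the paper's exact formulation: a per-sentence entropy term (low when the classifier is confident) for the skewness loss, and a KL divergence between a uniform prior and the batch-averaged prediction (low when all relations are used on average) for the distribution distance loss. Function names and the choice of a uniform prior are assumptions for illustration.

```python
import numpy as np

def skewness_loss(probs, eps=1e-12):
    """Mean per-sentence entropy of predicted relation distributions.

    probs: array of shape (batch, n_relations), rows summing to 1.
    Low when each sentence is assigned one relation with confidence;
    high when predictions are spread out. (Illustrative sketch.)
    """
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

def distribution_distance_loss(probs, eps=1e-12):
    """KL(uniform || batch-averaged prediction).

    Low when, averaged over the batch, all relations are predicted
    equally often; high when predictions collapse onto few relations.
    (The uniform prior is an assumption for this sketch.)
    """
    mean_pred = probs.mean(axis=0)
    k = probs.shape[1]
    uniform = np.full(k, 1.0 / k)
    return float(np.sum(uniform * np.log(uniform / (mean_pred + eps))))

# Confident and balanced: both losses small.
balanced = np.array([[0.98, 0.01, 0.01],
                     [0.01, 0.98, 0.01],
                     [0.01, 0.01, 0.98]])
# Collapsed onto one relation: distance loss large.
collapsed = np.array([[0.98, 0.01, 0.01]] * 3)
# Undecided predictions: skewness loss large.
undecided = np.full((3, 3), 1.0 / 3.0)
```

Minimizing both terms jointly pushes the classifier toward confident per-sentence decisions without letting it collapse onto a single relation.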
Anthology ID:
P19-1133
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1378–1387
URL:
https://aclanthology.org/P19-1133
DOI:
10.18653/v1/P19-1133
Cite (ACL):
Étienne Simon, Vincent Guigue, and Benjamin Piwowarski. 2019. Unsupervised Information Extraction: Regularizing Discriminative Approaches with Relation Distribution Losses. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1378–1387, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Information Extraction: Regularizing Discriminative Approaches with Relation Distribution Losses (Simon et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1133.pdf
Data
New York Times Annotated Corpus
T-REx