Learning Concept Abstractness Using Weak Supervision

Ella Rabinovich, Benjamin Sznajder, Artem Spector, Ilya Shnayderman, Ranit Aharonov, David Konopnicki, Noam Slonim
Abstract
We introduce a weakly supervised approach for inferring the property of abstractness of words and expressions in the complete absence of labeled data. Exploiting only minimal linguistic clues and the contextual usage of a concept as manifested in textual data, we train sufficiently powerful classifiers, obtaining high correlation with human labels. The results imply the applicability of this approach to additional properties of concepts, additional languages, and resource-scarce scenarios.
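The abstract describes the pipeline only at a high level. As an illustration, the following minimal Python sketch shows one plausible instantiation of such a weakly supervised setup: noisy seed labels are derived from minimal linguistic clues (here, a hypothetical list of "abstract" derivational suffixes), and a classifier is trained over word vectors standing in for the contextual usage of each concept. The suffix list, toy embeddings, and logistic-regression classifier are illustrative assumptions, not the authors' exact setup.

```python
"""A minimal, illustrative sketch of weak supervision for abstractness.

Assumptions (not taken from the paper): derivational suffixes such as
"-ness" or "-ism" act as noisy seed labels for "abstract", and random
vectors stand in for real pretrained word embeddings (e.g., word2vec).
"""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy vocabulary; in practice the vocabulary and feature vectors would
# be derived from the contextual usage of each concept in a large corpus.
vocab = ["happiness", "freedom", "idealism", "sincerity", "density",
         "table", "dog", "rock", "chair", "apple"]
emb = {w: rng.standard_normal(50) for w in vocab}  # stand-in embeddings

# Minimal linguistic clue: words with these (hypothetical) suffixes are
# seeded as abstract (label 1); everything else is seeded as concrete (0).
ABSTRACT_SUFFIXES = ("ness", "ism", "ity", "dom")

def weak_label(word: str) -> int:
    return int(word.endswith(ABSTRACT_SUFFIXES))

X = np.stack([emb[w] for w in vocab])
y = np.array([weak_label(w) for w in vocab])

# Train a classifier on the noisy seed labels; with real embeddings it
# would generalize to words the suffix heuristics never matched.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print({w: round(p, 2) for w, p in zip(vocab, clf.predict_proba(X)[:, 1])})
```

In the paper's actual setting, the features come from corpus contexts rather than random vectors, and the resulting classifier is evaluated by its correlation with human abstractness labels.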
Anthology ID: D18-1522
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 4854–4859
URL: https://aclanthology.org/D18-1522
DOI: 10.18653/v1/D18-1522
Cite (ACL): Ella Rabinovich, Benjamin Sznajder, Artem Spector, Ilya Shnayderman, Ranit Aharonov, David Konopnicki, and Noam Slonim. 2018. Learning Concept Abstractness Using Weak Supervision. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4854–4859, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Learning Concept Abstractness Using Weak Supervision (Rabinovich et al., EMNLP 2018)
PDF: https://aclanthology.org/D18-1522.pdf