Variational Pretraining for Semi-supervised Text Classification

Suchin Gururangan, Tam Dang, Dallas Card, Noah A. Smith


Abstract
We introduce VAMPIRE, a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational autoencoder on in-domain, unlabeled data and use its internal states as features in a downstream classifier. Empirically, we show the relative strength of VAMPIRE against computationally expensive contextual embeddings and other popular semi-supervised baselines under low-resource settings. We also find that fine-tuning to in-domain data is crucial to achieving decent performance from contextual embeddings when working with limited supervision. We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks.
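The sketch below illustrates the core idea described in the abstract: pretrain a variational autoencoder over unigram (bag-of-words) document vectors on unlabeled in-domain text, then reuse its internal states as features for a downstream classifier. It is a minimal PyTorch sketch, not the authors' implementation (the released code at allenai/vampire is built on AllenNLP); all class names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of VAMPIRE-style variational pretraining (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfWordsVAE(nn.Module):
    """Variational autoencoder over unigram (bag-of-words) count vectors."""

    def __init__(self, vocab_size: int, hidden_dim: int = 64, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, vocab_size)  # logits over the vocabulary

    def forward(self, counts: torch.Tensor):
        h = self.encoder(counts)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        logits = self.decoder(z)
        # Multinomial reconstruction loss over word counts.
        recon = -(counts * F.log_softmax(logits, dim=-1)).sum(dim=-1)
        # KL divergence between q(z|x) and a standard normal prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
        return (recon + kl).mean()

    def features(self, counts: torch.Tensor) -> torch.Tensor:
        """Internal states used as features for a downstream classifier."""
        with torch.no_grad():
            h = self.encoder(counts)
            return torch.cat([h, self.to_mu(h)], dim=-1)

# Pretrain on unlabeled in-domain bag-of-words vectors (random counts as a stand-in).
vocab_size = 1000
model = BagOfWordsVAE(vocab_size)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
unlabeled = torch.poisson(torch.ones(256, vocab_size) * 0.05)
for _ in range(5):
    loss = model(unlabeled)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Downstream: freeze the pretrained model and feed its features to a small classifier.
labeled = torch.poisson(torch.ones(32, vocab_size) * 0.05)
feats = model.features(labeled)           # shape: (32, hidden_dim + latent_dim)
classifier = nn.Linear(feats.size(-1), 2)
logits = classifier(feats)
```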
Anthology ID:
P19-1590
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5880–5894
URL:
https://aclanthology.org/P19-1590
DOI:
10.18653/v1/P19-1590
Cite (ACL):
Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational Pretraining for Semi-supervised Text Classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Variational Pretraining for Semi-supervised Text Classification (Gururangan et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1590.pdf
Video:
https://aclanthology.org/P19-1590.mp4
Code
allenai/vampire
Data
AG News, IMDb Movie Reviews