Adversarial Contrastive Estimation

Avishek Joey Bose, Huan Ling, Yanshuai Cao


Abstract
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are two examples of this approach in NLP. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings, and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
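To make the abstract's central idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. All names, sizes, the dot-product score, the margin loss, and the mixture weight LAM are illustrative assumptions. It shows a negative sampler that is a mixture of a uniform NCE-style sampler and an adversarially trained generator; because sampled negatives are discrete, the generator is updated with a REINFORCE-style gradient whose reward is the main model's score on the proposed negative (hard negatives score high, so the generator learns to find them).

```python
# A hedged sketch of contrastive learning with a mixture negative sampler.
# Everything here (module names, score function, loss, LAM) is an assumption
# for illustration, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, LAM = 1000, 64, 0.5              # assumed vocab size, dim, mixture weight

embed = nn.Embedding(VOCAB, DIM)             # main model: embeddings being learned
gen = nn.Sequential(                         # adversarial sampler: proposes negatives
    nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))

opt_e = torch.optim.Adam(embed.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

def score(a, b):
    """Dot-product similarity between embedded items (an assumption)."""
    return (embed(a) * embed(b)).sum(-1)

def step(pos_a, pos_b):
    """One training step on a batch of positive pairs (pos_a, pos_b)."""
    B = pos_a.size(0)

    # Mixture negative sampler: uniform vs. generator-proposed negatives.
    dist = torch.distributions.Categorical(logits=gen(pos_a))
    neg_gen = dist.sample()                   # adversarial negatives
    neg_unif = torch.randint(0, VOCAB, (B,))  # NCE-style uniform negatives
    neg = torch.where(torch.rand(B) < LAM, neg_unif, neg_gen)
    # (A real implementation would also filter out false negatives,
    #  i.e. sampled items that are actually positives.)

    # Main model: margin-based contrastive loss, positives above negatives.
    loss_e = F.relu(1.0 - score(pos_a, pos_b) + score(pos_a, neg)).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # Generator: REINFORCE update. Reward = how highly the main model
    # scores the proposed negative; the mean reward serves as a baseline.
    reward = score(pos_a, neg_gen).detach()
    loss_g = -((reward - reward.mean()) * dist.log_prob(neg_gen)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Toy usage on random positive pairs (placeholders for real data):
step(torch.randint(0, VOCAB, (32,)), torch.randint(0, VOCAB, (32,)))
```

Keeping a uniform component in the mixture, rather than sampling negatives from the generator alone, plausibly preserves coverage of the negative space even if the adversarial sampler concentrates on a few hard modes; the abstract's mixture distribution serves this role.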
Anthology ID: P18-1094
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1021–1032
URL: https://aclanthology.org/P18-1094
DOI: 10.18653/v1/P18-1094
PDF: https://aclanthology.org/P18-1094.pdf
Video: https://vimeo.com/285802169
Presentation: P18-1094.Presentation.pdf