MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models

Qianchu Liu, Fangyu Liu, Nigel Collier, Anna Korhonen, Ivan Vulić


Abstract
Recent work indicated that pretrained language models (PLMs) such as BERT and RoBERTa can be transformed into effective sentence and word encoders even via simple self-supervised techniques. Inspired by this line of work, in this paper we propose a fully unsupervised approach to improving word-in-context (WiC) representations in PLMs, achieved via a simple and efficient WiC-targeted fine-tuning procedure: MirrorWiC. The proposed method leverages only raw texts sampled from Wikipedia, assuming no sense-annotated data, and learns context-aware word representations within a standard contrastive learning setup. We experiment with a series of standard and comprehensive WiC benchmarks across multiple languages. Our proposed fully unsupervised MirrorWiC models obtain substantial gains over off-the-shelf PLMs across all monolingual, multilingual and cross-lingual setups. Moreover, on some standard WiC benchmarks, MirrorWiC is even on par with supervised models fine-tuned with in-task data and sense labels.
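The abstract describes the recipe only at a high level, so the following is a minimal, hypothetical sketch of that kind of contrastive word-in-context fine-tuning, not the authors' released implementation (see the code link below for that): each raw sentence is encoded twice with dropout acting as the only augmentation, the contextualized vectors of the target word are mean-pooled, and an InfoNCE-style loss pulls the two views of each (word, context) pair together while treating other pairs in the batch as negatives. The model name, span pooling, and temperature value are illustrative assumptions.

# Minimal sketch of contrastive (mirror-style) WiC fine-tuning; illustrative only.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"          # any off-the-shelf PLM (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.train()                             # keep dropout active for both forward passes

def target_word_embedding(sentences, char_spans):
    """Mean-pool the final-layer vectors of the tokens covering each target word."""
    enc = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    hidden = model(**enc).last_hidden_state            # (batch, seq_len, dim)
    pooled = []
    for i, (start, end) in enumerate(char_spans):
        token_ids = {enc.char_to_token(i, c) for c in range(start, end)}
        token_ids.discard(None)                         # skip chars not mapped to tokens
        pooled.append(hidden[i, sorted(token_ids)].mean(dim=0))
    return torch.stack(pooled)

def mirror_infonce_loss(sentences, char_spans, temperature=0.05):
    """Two dropout-noised views of the same (word, context) pair are positives;
    all other pairs in the batch serve as in-batch negatives."""
    z1 = target_word_embedding(sentences, char_spans)
    z2 = target_word_embedding(sentences, char_spans)   # second pass -> different dropout
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # cosine similarities
    labels = torch.arange(len(sentences))
    return F.cross_entropy(logits, labels)

# Toy usage: the target word "bank" in two raw sentences (hypothetical data).
sents = ["He sat on the bank of the river.", "She deposited cash at the bank."]
spans = [(s.index("bank"), s.index("bank") + len("bank")) for s in sents]
loss = mirror_infonce_loss(sents, spans)
loss.backward()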
Anthology ID:
2021.conll-1.44
Volume:
Proceedings of the 25th Conference on Computational Natural Language Learning
Month:
November
Year:
2021
Address:
Online
Editors:
Arianna Bisazza, Omri Abend
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
562–574
URL:
https://aclanthology.org/2021.conll-1.44
DOI:
10.18653/v1/2021.conll-1.44
Cite (ACL):
Qianchu Liu, Fangyu Liu, Nigel Collier, Anna Korhonen, and Ivan Vulić. 2021. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 562–574, Online. Association for Computational Linguistics.
Cite (Informal):
MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models (Liu et al., CoNLL 2021)
PDF:
https://aclanthology.org/2021.conll-1.44.pdf
Video:
 https://aclanthology.org/2021.conll-1.44.mp4
Code:
cambridgeltl/mirrorwic
Data:
AM2iCo, WiC, WiC-TSV, Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison, XL-WiC