Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task

Mohsen Tabasi, Kiamehr Rezaee, Mohammad Taher Pilehvar


Abstract
As a recent development in few-shot learning, prompt-based techniques have demonstrated promising potential in a variety of natural language processing tasks. However, despite proving competitive on most tasks in the GLUE and SuperGLUE benchmarks, existing prompt-based techniques fail on the semantic distinction task of the Word-in-Context (WiC) dataset. Specifically, none of the existing few-shot approaches (including the in-context learning of GPT-3) can attain a performance that is meaningfully different from the random baseline. Trying to fill this gap, we propose a new prompting technique, based on similarity metrics, which boosts few-shot performance to the level of fully supervised methods. Our simple adaptation shows that the failure of existing prompt-based techniques in semantic distinction is due to their improper configuration, rather than lack of relevant knowledge in the representations. We also show that this approach can be effectively extended to other downstream tasks for which a single prompt is sufficient.
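The abstract only sketches the approach at a high level. As an illustration of the general idea it describes, the snippet below shows one plausible way a similarity-based prompting baseline for WiC could look, assuming a BERT-style encoder, a hypothetical prompt template, and cosine similarity over the target word's contextual embeddings with a threshold tuned on the few-shot examples. This is a sketch under those assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a similarity-based prompting approach to WiC.
# The prompt template, layer choice, and cosine-similarity threshold
# are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def target_embedding(sentence: str, target: str) -> torch.Tensor:
    """Return the contextual embedding of the target word inside a prompt."""
    # Hypothetical prompt template wrapping the sentence.
    prompt = f'{sentence} In this sentence, "{target}" means [MASK].'
    enc = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Locate the sub-tokens of the target word and average their vectors.
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i : i + len(target_ids)] == target_ids:
            return hidden[i : i + len(target_ids)].mean(dim=0)
    raise ValueError("target word not found in the tokenized prompt")

def same_sense(sent1: str, sent2: str, target: str,
               threshold: float = 0.75) -> bool:
    """Predict True if the target word has the same sense in both contexts.
    In a few-shot setting, the threshold would be tuned on the labeled examples."""
    e1 = target_embedding(sent1, target)
    e2 = target_embedding(sent2, target)
    sim = torch.nn.functional.cosine_similarity(e1, e2, dim=0).item()
    return sim >= threshold

# Example WiC-style pair
print(same_sense("He sat on the bank of the river.",
                 "She deposited money at the bank.", "bank"))
```

The key point the abstract makes is that no verbalizer or label prediction is needed: the prompt only serves to elicit representations, and the binary decision is reduced to a thresholded similarity comparison.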
Anthology ID: 2022.acl-short.36
Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 325–332
URL: https://aclanthology.org/2022.acl-short.36
DOI: 10.18653/v1/2022.acl-short.36
Cite (ACL): Mohsen Tabasi, Kiamehr Rezaee, and Mohammad Taher Pilehvar. 2022. Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 325–332, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task (Tabasi et al., ACL 2022)
PDF: https://aclanthology.org/2022.acl-short.36.pdf
Video: https://aclanthology.org/2022.acl-short.36.mp4
Data: SICK, SST, SST-2, SuperGLUE, WiC