On the Curious Case of l2 norm of Sense Embeddings

Yi Zhou, Danushka Bollegala


Abstract
We show that the l2 norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings. This finding can be seen as an extension of a previously known relationship for word embeddings to sense embeddings. Our experimental results show that in spite of its simplicity, the l2 norm of sense embeddings is a surprisingly effective feature for several word sense related tasks such as (a) most frequent sense prediction, (b) word-in-context (WiC), and (c) word sense disambiguation (WSD). In particular, by simply including the l2 norm of a sense embedding as a feature in a classifier, we show that we can improve WiC and WSD methods that use static sense embeddings.
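The abstract's key idea, using the l2 norm of a sense embedding as an extra classifier feature, can be illustrated with a minimal sketch. This is not the authors' implementation; the toy data, the 300-dimensional vectors, and the logistic-regression classifier are assumptions made purely for illustration of how such a feature could be appended for a WiC-style binary task.

```python
# Minimal sketch (hypothetical, not the paper's code): augment a WiC-style
# classifier's input with the l2 norm of each static sense embedding.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(sense_vec_a: np.ndarray, sense_vec_b: np.ndarray) -> np.ndarray:
    """Concatenate two sense embeddings with their l2 norms as extra features."""
    norm_a = np.linalg.norm(sense_vec_a)  # l2 norm of the first sense embedding
    norm_b = np.linalg.norm(sense_vec_b)  # l2 norm of the second sense embedding
    return np.concatenate([sense_vec_a, sense_vec_b, [norm_a, norm_b]])

# Toy example: random "sense embeddings" and random binary WiC labels.
rng = np.random.default_rng(0)
X = np.stack([features(rng.normal(size=300), rng.normal(size=300)) for _ in range(100)])
y = rng.integers(0, 2, size=100)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))
```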
Anthology ID: 2022.findings-emnlp.190
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2593–2602
URL: https://aclanthology.org/2022.findings-emnlp.190
DOI: 10.18653/v1/2022.findings-emnlp.190
Cite (ACL): Yi Zhou and Danushka Bollegala. 2022. On the Curious Case of l2 norm of Sense Embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2593–2602, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): On the Curious Case of l2 norm of Sense Embeddings (Zhou & Bollegala, Findings 2022)
PDF: https://aclanthology.org/2022.findings-emnlp.190.pdf
Video: https://aclanthology.org/2022.findings-emnlp.190.mp4