%0 Conference Proceedings
%T UnClE: Explicitly Leveraging Semantic Similarity to Reduce the Parameters of Word Embeddings
%A Li, Zhi
%A Zhai, Yuchen
%A Wang, Chengyu
%A Qiu, Minghui
%A Li, Kailiang
%A Zhang, Yin
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Findings of the Association for Computational Linguistics: EMNLP 2021
%D 2021
%8 November
%I Association for Computational Linguistics
%C Punta Cana, Dominican Republic
%F li-etal-2021-uncle-explicitly
%X Natural language processing (NLP) models often require a massive number of parameters for word embeddings, which limits their application on mobile devices. Researchers have employed many approaches, e.g., adaptive inputs, to reduce the parameters of word embeddings. However, existing methods rarely pay attention to semantic information. In this paper, we propose a novel method called Unique and Class Embeddings (UnClE), which explicitly leverages semantic similarity with weight sharing to reduce the dimensionality of word embeddings. Inspired by the fact that words with similar semantics can share part of their weights, we divide the embeddings of words into two parts: a unique embedding and a class embedding. The former is a one-to-one mapping, like a traditional embedding, while the latter is a many-to-one mapping and learns the representation of class information. Our method is suitable for both word-level and sub-word-level models and can be used to reduce both input and output embeddings. Experimental results on the standard WMT 2014 English-German dataset show that our method reduces the parameters of word embeddings by more than 11x while retaining about 93% of performance in terms of BLEU. For the language modeling task, our model reduces word embeddings by 6x or 11x on the PTB/WT2 datasets at the cost of some performance degradation.
%R 10.18653/v1/2021.findings-emnlp.156
%U https://aclanthology.org/2021.findings-emnlp.156
%U https://doi.org/10.18653/v1/2021.findings-emnlp.156
%P 1815-1828
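
The abstract describes splitting each word embedding into a per-word "unique" part and a shared "class" part. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the module name UniqueClassEmbedding, the dimensions, and the word-to-class assignment (e.g., clustering words by semantic similarity) are illustrative assumptions.

    # Minimal sketch of the unique + class embedding idea from the abstract.
    # Names, dimensions, and the word-to-class mapping are assumptions.
    import torch
    import torch.nn as nn

    class UniqueClassEmbedding(nn.Module):
        def __init__(self, vocab_size, num_classes, unique_dim, class_dim, word_to_class):
            super().__init__()
            # One small vector per word (one-to-one mapping).
            self.unique = nn.Embedding(vocab_size, unique_dim)
            # One shared vector per semantic class (many-to-one mapping).
            self.cls = nn.Embedding(num_classes, class_dim)
            # Fixed lookup table mapping each word id to its class id,
            # e.g. obtained by grouping semantically similar words.
            self.register_buffer("word_to_class", word_to_class)

        def forward(self, token_ids):
            u = self.unique(token_ids)                   # (..., unique_dim)
            c = self.cls(self.word_to_class[token_ids])  # (..., class_dim)
            return torch.cat([u, c], dim=-1)             # full embedding

    # Usage: 32k-word vocabulary, 1k classes; parameters shrink because the
    # large class vectors are shared across many words.
    vocab_size, num_classes = 32000, 1000
    word_to_class = torch.randint(0, num_classes, (vocab_size,))
    emb = UniqueClassEmbedding(vocab_size, num_classes, 64, 448, word_to_class)
    print(emb(torch.tensor([[1, 2, 3]])).shape)  # torch.Size([1, 3, 512])

Under these assumed sizes the table stores 32000 x 64 unique weights plus 1000 x 448 shared class weights instead of 32000 x 512 full vectors, which is roughly a 6-7x reduction; the paper's reported ratios depend on its actual dimensions and class assignment.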