On Randomized Classification Layers and Their Implications in Natural Language Generation

Gal-Lev Shalev, Gabi Shalev, Joseph Keshet


Abstract
In natural language generation tasks, a neural language model is used to generate a sequence of words forming a sentence. The topmost weight matrix of the language model, known as the classification layer, can be viewed as a set of vectors, each representing a target word from the target dictionary. The target word vectors, along with the rest of the model parameters, are learned and updated during training. In this paper, we analyze the properties encoded in the target vectors and question the necessity of learning them. We suggest randomly drawing the target vectors and keeping them fixed so that no weight updates are made to them during training. We show that excluding these vectors from the optimization drastically decreases the number of parameters with only a marginal effect on performance. We demonstrate the effectiveness of our method on image captioning and machine translation.
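The core idea, a randomly drawn output projection that is excluded from optimization, can be sketched in a few lines of PyTorch. This is an illustrative sketch rather than the authors' released code: the module name, the dimensions, and the use of a standard normal distribution for the fixed vectors are all assumptions.

```python
import torch
import torch.nn as nn

class FixedRandomClassificationLayer(nn.Module):
    """Output projection whose target-word vectors are randomly drawn once
    and then frozen, so they receive no gradient updates during training.
    Sketch of the idea in the abstract; the sampling distribution and any
    normalization of the vectors are assumptions, not the paper's exact setup."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        # Draw one vector per target word and register it as a buffer:
        # buffers are saved with the model but excluded from optimization.
        self.register_buffer("weight", torch.randn(vocab_size, hidden_dim))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (..., hidden_dim) -> logits over the vocabulary (..., vocab_size)
        return hidden @ self.weight.t()

# Usage sketch with hypothetical sizes: swap a learned output projection
# for the fixed layer at the top of a decoder.
decoder_hidden = torch.randn(8, 20, 512)            # (batch, seq_len, hidden_dim)
head = FixedRandomClassificationLayer(512, 10000)   # hidden_dim=512, vocab_size=10000
logits = head(decoder_hidden)                       # (8, 20, 10000)
```

Because the fixed vectors live in a buffer rather than an nn.Parameter, an optimizer built from model.parameters() never sees them, which is what removes the classification layer's weights from the parameter count.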
Anthology ID:
2021.maiworkshop-1.2
Volume:
Proceedings of the Third Workshop on Multimodal Artificial Intelligence
Month:
June
Year:
2021
Address:
Mexico City, Mexico
Editors:
Amir Zadeh, Louis-Philippe Morency, Paul Pu Liang, Candace Ross, Ruslan Salakhutdinov, Soujanya Poria, Erik Cambria, Kelly Shi
Venue:
maiworkshop
Publisher:
Association for Computational Linguistics
Pages:
6–11
URL:
https://aclanthology.org/2021.maiworkshop-1.2
DOI:
10.18653/v1/2021.maiworkshop-1.2
Cite (ACL):
Gal-Lev Shalev, Gabi Shalev, and Joseph Keshet. 2021. On Randomized Classification Layers and Their Implications in Natural Language Generation. In Proceedings of the Third Workshop on Multimodal Artificial Intelligence, pages 6–11, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
On Randomized Classification Layers and Their Implications in Natural Language Generation (Shalev et al., maiworkshop 2021)
PDF:
https://aclanthology.org/2021.maiworkshop-1.2.pdf
Data
Multi30K