Mads Guldborg Kjeldgaard Kongsbak


2021

Hyperparameter Power Impact in Transformer Language Model Training
Lucas Høyberg Puvis de Chavannes | Mads Guldborg Kjeldgaard Kongsbak | Timmie Rantzau | Leon Derczynski
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing

Training large language models can consume a large amount of energy. We hypothesize that the language model’s configuration impacts its energy consumption, and that there is room for power consumption optimisation in modern large language models. To investigate these claims, we introduce a power consumption factor to the objective function, and explore the range of models and hyperparameter configurations that affect power. We identify multiple configuration factors that can reduce power consumption during language model training while retaining model quality.
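The abstract only names the idea of adding a power consumption factor to the objective; the sketch below is one minimal, assumed reading of that idea, not the authors' formulation. The function name combined_objective, the linear weight lambda_power, and the normalisation against power_budget_watts are illustrative assumptions.

# Minimal sketch (assumption, not the paper's actual objective): a training
# objective that adds a weighted, normalised power term to the task loss.
def combined_objective(task_loss: float,
                       power_watts: float,
                       power_budget_watts: float = 250.0,
                       lambda_power: float = 0.1) -> float:
    """Return the task loss plus a penalty proportional to measured power draw.

    power_budget_watts and lambda_power are illustrative values only.
    """
    normalised_power = power_watts / power_budget_watts
    return task_loss + lambda_power * normalised_power

# Example: a hyperparameter configuration with lower power draw can score
# better on the combined objective even if its task loss is slightly higher.
print(combined_objective(task_loss=2.31, power_watts=280.0))  # baseline configuration
print(combined_objective(task_loss=2.34, power_watts=180.0))  # lower-power configuration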