Scaling Properties of Speech Language Models

Santiago Cuervo, Ricard Marxer


Abstract
Speech Language Models (SLMs) aim to learn language from raw audio, without textual resources. Despite significant advances, our current models exhibit weak syntactic and semantic abilities. However, if the scaling properties of neural language models hold for the speech modality, these abilities will improve as the amount of compute used for training increases. In this paper, we use models of this scaling behavior to estimate the scale at which our current methods will yield an SLM with the English proficiency of text-based Large Language Models (LLMs). We establish a strong correlation between pre-training loss and downstream syntactic and semantic performance in SLMs and LLMs, which results in predictable scaling of linguistic performance. We show that the linguistic performance of SLMs scales up to three orders of magnitude more slowly than that of text-based LLMs. Additionally, we study the benefits of synthetic data designed to boost semantic understanding and the effects of coarser speech tokenization.
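The extrapolation described in the abstract relies on fitting scaling-law models that relate training compute to pre-training loss and, through the loss-performance correlation, to downstream linguistic scores. As a minimal illustrative sketch of that kind of fit (not the paper's actual code: the data points, initial guesses, target loss, and use of scipy.optimize.curve_fit are assumptions for illustration), one might fit a saturating power law and extrapolate the compute needed to reach a target loss:

```python
# Illustrative sketch only -- not the paper's code. The data points, initial
# guesses, and target loss below are invented for demonstration; the form
# L(C) = L_inf + a * C**(-alpha) is the standard saturating power law used
# in neural scaling-law studies.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, l_inf, a, alpha):
    """Loss as an irreducible term plus a power-law decay in training compute."""
    return l_inf + a * compute ** (-alpha)

# Hypothetical (training compute in FLOPs, validation loss) measurements.
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([3.10, 2.75, 2.48, 2.27])

params, _ = curve_fit(
    scaling_law, compute, loss,
    p0=[1.5, 100.0, 0.1],                        # rough initial guesses
    bounds=([0.0, 0.0, 0.0], [5.0, 1e4, 1.0]),   # keep parameters physical
)
l_inf, a, alpha = params
print(f"fit: L_inf={l_inf:.2f}, a={a:.1f}, alpha={alpha:.3f}")

# Extrapolate the compute needed to reach a target loss (must exceed L_inf),
# e.g. the loss at which a text LLM attains a given downstream benchmark score.
target_loss = 2.0
required_compute = (a / (target_loss - l_inf)) ** (1.0 / alpha)
print(f"estimated compute for loss {target_loss}: {required_compute:.2e} FLOPs")
```

In the paper itself, such estimates are derived from the authors' own measured SLM and LLM training runs; the sketch above only shows the general shape of this kind of extrapolation.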
Anthology ID: 2024.emnlp-main.21
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 351–361
URL: https://aclanthology.org/2024.emnlp-main.21
Cite (ACL): Santiago Cuervo and Ricard Marxer. 2024. Scaling Properties of Speech Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 351–361, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Scaling Properties of Speech Language Models (Cuervo & Marxer, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.21.pdf