WhisBERT: Multimodal Text-Audio Language Modeling on 100M Words

Training on multiple modalities of input can augment the capabilities of a language model. Here, we ask whether such a training regime can improve the quality and efficiency of these systems as well. We focus on text-audio and introduce WhisBERT, which is inspired by the text-image approach of FLAVA (Singh et al., 2022). In accordance with BabyLM guidelines (Warstadt et al., 2023), we pretrain WhisBERT on a dataset comprising only 100 million words plus their corresponding speech from the word-aligned version of the People's Speech dataset (Galvez et al., 2021). To assess the impact of multimodality, we compare versions of the model that are trained on text only and on both audio and text simultaneously. We find that while WhisBERT is able to perform well on multimodal masked modeling and surpasses the BabyLM baselines in most benchmark tasks, it struggles to optimize its complex objective and to outperform its text-only WhisBERT baseline.


Introduction
Recent advances in language modeling and their downstream applications have been driven, in large part, by bigger models, in terms of both model size and amount of training data. Ever-larger pretraining datasets highlight the gap between humans and deep learning models in terms of learning efficiency: while state-of-the-art language models need billions of examples to approach human-level language performance, people learn their language from experience with about 100 million words or less (Warstadt and Bowman, 2022; Frank, 2023).
We hypothesize that one major reason for this data efficiency gap is the different inputs that humans and current deep learning systems receive. Human language learning involves multiple modalities, including both visual and auditory input. In contrast, typical language models are trained on representations of text alone. For this BabyLM submission, we ask whether training on inputs of multiple modalities can increase language models' training efficiency, with a focus on text-audio multimodal input. We conjecture that multimodal data sources have the potential to enrich the language learning process, enabling models to leverage complementary information from different modalities and thus augment their learning capacity (Baltrušaitis et al., 2017).
Multimodal language modeling has recently experienced a noteworthy surge in research activity, in applications such as image retrieval, semantic embeddings, and image generation (Driess et al., 2023; Koh et al., 2023; Yasunaga et al., 2023). However, text-audio multimodal language modeling (e.g., Chuang et al., 2019; Lakhotia et al., 2021) remains largely unexplored, especially in low-resource settings such as the 100-million-word training regime we employ here. As a first step towards a text-audio language model, we introduce WhisBERT, a novel masked language model (MLM) architecture inspired by vision-text models such as FLAVA (Singh et al., 2022). The core idea is that WhisBERT is trained in a multitask setting on both unimodal (i.e., text- or audio-only) and multimodal objectives. In the multimodal objectives, the model receives matched text-audio segments and can use information from one modality to learn representations for the other.
To accommodate the specific requirements of the BabyLM challenge (Warstadt et al., 2023), we pretrain WhisBERT on a dataset of matched audio and text transcripts comprising 100 million words sampled from the People's Speech dataset (Galvez et al., 2021). We use an improved version of the audio-text-aligned training data, a subset of an upcoming speech production dataset release (see Section 3). This commitment to using high-quality pretraining data is in line with the data efficiency objectives of the BabyLM challenge.
We carry out a rigorous evaluation of the performance of the audio, text, and multimodal encoders within this new framework. We find that even though the optimization problem in the multimodal setting is much harder than in a unimodal setting, the multimodal WhisBERT model outperforms the text-only baseline in a majority of the BabyLM challenge tasks, which address several aspects of language understanding, even when trained for only a single iteration over the dataset.

WhisBERT
WhisBERT is a multimodal audio and text model inspired by OpenAI's Whisper model (Radford et al., 2022) for speech recognition and by BERT (Devlin et al., 2019) for bidirectional language encoding. WhisBERT contains two separate input streams: one for audio and one for its corresponding text (i.e., a transcription). The model is trained using a combination of two unimodal and three multimodal masked training objectives. In the unimodal setting, the model must predict either a masked word or a masked patch of audio. In the multimodal training setting, the model must predict pairs of matched word/audio patches. This multi-objective training setup is inspired by the vision-text model FLAVA (Singh et al., 2022).

Architecture details
Audio encoder To create audio patches that we can process with Whisper's bidirectional transformer encoder (Vaswani et al., 2017), the audio stream is first passed through the Whisper Feature Extractor available on Hugging Face.
All audio data is re-sampled to a rate of 16,000 Hz, and an 80-channel log-magnitude Mel spectrogram representation is computed using 25-millisecond windows with a 10-millisecond stride. We then pass the audio spectrogram through a patch embedding layer: a convolutional encoder processes the extracted frequency features using a stem of two 1-dimensional convolution layers (along the time dimension; filters cover all input frequencies), both with a filter width of 16 and incorporating the GELU activation function. The second convolution layer employs a stride of 10. This patch embedding layer creates overlapping 1-dimensional audio patches covering 100 ms of the audio signal as input to the transformer.
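For concreteness, the number of audio patches produced by this stem can be computed from the two filter widths and the stride of the second layer (a minimal sketch; the assumption of unpadded convolutions is ours and is not stated above):

```python
def conv1d_out_len(n_in: int, width: int, stride: int) -> int:
    # Output length of an unpadded 1-D convolution.
    return (n_in - width) // stride + 1

def num_audio_patches(n_frames: int) -> int:
    # Stem of two 1-D convolutions along time: both with filter
    # width 16, the second with stride 10.
    after_conv1 = conv1d_out_len(n_frames, width=16, stride=1)
    return conv1d_out_len(after_conv1, width=16, stride=10)

# A 30-second clip at a 10 ms spectrogram stride yields 3000 frames.
n_patches = num_audio_patches(3000)
```

With a stride of 10 over 10-millisecond spectrogram frames, consecutive patches are spaced 100 ms apart, matching the description above.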
After preprocessing and patch embedding, sinusoidal position embeddings are added to the stem's output, which is then processed by Whisper's transformer encoder blocks. A notable difference from the standard Whisper encoder is that we prepend a learnable classification (henceforth, CLS) token at the beginning of the audio patch sequence. The audio encoder therefore produces a list of audio hidden states {h A }, each corresponding to a contextualized audio patch, as well as an additional audio classification state h CLS,A .
Text encoder In order to encode the text input, we choose a standard bidirectional transformer architecture following the BERT (Devlin et al., 2019) model. We train a WordPiece (Wu et al., 2016) tokenizer on the 100M words in our People's Speech (Galvez et al., 2021) subset (see Section 3). The WordPiece tokenizer automatically prepends a CLS token to the token sequence, which is contextualized with the rest of the sequence. The text encoder produces a list of text hidden states {h T }, each corresponding to a text token, as well as an additional text CLS token h CLS,T .

Multimodal encoder
The multimodal encoder is a standard transformer encoder that receives as input the concatenated contextualized audio and text sequences. Additionally, we prepend a learnable multimodal CLS token and employ sinusoidal positional embeddings. The multimodal encoder contextualizes the multimodal sequence and outputs a list of multimodal hidden states {h M }, each corresponding to a unimodal vector from {h A } or {h T }, as well as an additional multimodal CLS token h CLS,M .
Adapting to downstream tasks The WhisBERT model can be readily applied to both unimodal and multimodal tasks. For audio recognition tasks (e.g., speaker identification or speech recognition), we apply a classifier head (e.g., a linear layer or a multi-layer perceptron) on top of the unimodal classification token h CLS,A from the audio encoder. Similarly, for language understanding and multimodal reasoning tasks, we can apply a classifier head on top of the classification token h CLS,T from the text encoder or h CLS,M from the multimodal encoder, respectively.

Pretraining objectives
Our goal is to pretrain models to have robust contextual representations for both text and audio on their own as well as for aligned text-audio pairs. We use the approach from FLAVA (Singh et al., 2022) of multitask training over a selection of unimodal and multimodal training objectives that have been demonstrated to facilitate joint learning on images and text. We adapt the five objectives used by FLAVA to the audio domain.

Unimodal pretraining objectives
Masked Language Modeling Masked Language Modeling (MLM) is a pretraining objective that encourages the model to learn a deep understanding of the language. In MLM, a portion of the input tokens is masked, and the model is trained to predict the original identity of the masked tokens based on their context.
Given an input sequence of tokens x = [x 1 , x 2 , ..., x T ], a subset M of these tokens is selected to be masked. The objective is to minimize the negative log-likelihood of the masked tokens:

L MLM = -(1/|M|) Σ t∈M log p model (x t | x ¬t )

Here, x t is a masked token, x ¬t represents the sequence with the token x t masked, and p model is the model's probability distribution over possible tokens. |M| is the size of the subset of masked tokens, and the sum is taken over all masked positions t. The goal is to adjust the model's parameters to minimize this loss. We obtain a probability distribution over the vocabulary by applying a linear prediction head on the text hidden states {h T }.
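The MLM objective described above can be sketched as follows (a minimal NumPy illustration of the mean negative log-likelihood over masked positions; the tiny logit matrix is purely illustrative):

```python
import numpy as np

def mlm_loss(logits: np.ndarray, targets: np.ndarray) -> float:
    """Mean negative log-likelihood over the masked positions.

    logits:  (|M|, vocab_size) prediction-head outputs at masked positions.
    targets: (|M|,) original token ids of the masked tokens.
    """
    # Numerically stable log-softmax over the vocabulary.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Pick out log p(x_t | context) for each masked token and average.
    return float(-log_probs[np.arange(len(targets)), targets].mean())

# Two masked positions, a toy vocabulary of four tokens.
toy_logits = np.array([[4.0, 0.0, 0.0, 0.0],
                       [0.0, 4.0, 0.0, 0.0]])
loss = mlm_loss(toy_logits, np.array([0, 1]))
```

When the prediction head assigns high logits to the correct tokens, the loss approaches zero; uniform logits yield a loss of log(vocab size).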

Masked Audio Modeling
We introduce the Masked Audio Modeling (MAM) objective L MAM , which follows the principles of Contrastive Predictive Coding (van den Oord et al., 2019). In MAM, we randomly mask audio patches in the input sequence to the audio encoder. The encoder is expected to generate outputs that are most similar to the unmasked input at a particular masked position t. The self-supervised loss function, which aims to encourage the model to align masked tokens with their unmasked identities given the context, is defined for a masked token localized at t as:

L MAM (t) = -log [ exp(sim(c t , b t )/κ) / Σ b i ∈B D exp(sim(c t , b i )/κ) ]

Here, c t is the output of the transformer at position t, and b i is the audio representation vector of the (unmasked) patch at some offset i. B D is a set of 20 uniformly selected negative samples from the same sequence, plus b t , and sim() is a similarity function. For our implementation, we use the cosine similarity function, scaled by a temperature parameter κ, which is set to 0.1. The loss function operates by adjusting the output of the transformer at position t to be most similar to the encoded representation at t, despite the fact that this input to the transformer is masked. In this way, the model is encouraged to predict the content of the masked spans based on the unmasked context.
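A minimal NumPy sketch of this contrastive loss for a single masked position (the three-dimensional vectors and the two negative samples are illustrative; in the model, B D contains 20 negatives):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mam_loss(c_t: np.ndarray, b_t: np.ndarray, negatives,
             kappa: float = 0.1) -> float:
    """Contrastive masked-audio loss for a single masked position t.

    c_t:       transformer output at the masked position.
    b_t:       unmasked patch representation at that position (the positive).
    negatives: patch representations sampled from the same sequence.
    """
    candidates = [b_t] + list(negatives)  # the set B_D; positive at index 0
    scores = np.array([cosine_sim(c_t, b) / kappa for b in candidates])
    scores -= scores.max()                # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(-np.log(probs[0]))

c = np.array([1.0, 0.0, 0.0])
negs = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
loss = mam_loss(c, c, negs)  # output matches the unmasked patch exactly
```

The loss is near zero when the transformer output matches the unmasked patch and large when it instead matches a negative sample.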

Multimodal Pretraining Objectives
Multimodal Contrastive Loss Contrastive loss (Gutmann and Hyvärinen, 2010) has been successfully applied to image-text representation learning in approaches such as CLIP (Radford et al., 2021). Our audio-text contrastive loss L MMC aims to maximize the cosine similarities between matched audio and text pairs and minimize those of the unmatched pairs across a given batch of audio clips and corresponding text. This is achieved by linearly projecting the classification token of each audio sequence h CLS,A and text sequence h CLS,T into a common embedding space, followed by L2-normalization, dot-product, and a softmax loss scaled by temperature.
The goal of this process is to ensure that the audio and text representations for the same data point are brought closer together in the embedding space, while representations for different data points are pushed apart.This encourages the model to learn meaningful representations that capture the shared information between the audio and text modalities.
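A minimal NumPy sketch of this batch-level contrastive loss (the linear projection layers are omitted, and the temperature value of 0.1 is an illustrative assumption, not a value stated for this loss):

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_loss(audio_cls: np.ndarray, text_cls: np.ndarray,
                     tau: float = 0.1) -> float:
    """Symmetric audio-text contrastive loss over a batch.

    audio_cls, text_cls: (batch, dim) projected CLS embeddings, where
    row i of each matrix comes from the same data point.
    """
    a, t = l2_normalize(audio_cls), l2_normalize(text_cls)
    logits = a @ t.T / tau            # (batch, batch) scaled cosine similarities
    labels = np.arange(len(a))        # matched pairs lie on the diagonal

    def cross_entropy(lg: np.ndarray) -> float:
        lg = lg - lg.max(axis=-1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=-1, keepdims=True))
        return float(-logp[labels, labels].mean())

    # Average the audio-to-text and text-to-audio directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2.0

matched = contrastive_loss(np.eye(3), np.eye(3))  # perfectly aligned batch
```

The loss is near zero when matched pairs are maximally similar and unmatched pairs are orthogonal, and grows when pairings are scrambled.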

Masked Multimodal Modeling (MMM)
We introduce a Masked Multimodal Modeling (MMM) pretraining objective L MMM that uses the output of the multimodal encoder {h M } to attempt to reconstruct the masked tokens from both the audio and text sequences. For the multimodal contextualized audio tokens, we employ the Contrastive Predictive Coding strategy introduced in Section 2.2.1. For the multimodal text tokens, we add a multimodal masked language modeling head, with which we compute the MLM loss as introduced in Section 2.2.1.
The MMM pretraining objective is designed to encourage the model to understand the interdependencies between the audio and text modalities, which, in addition to the MMC loss, has been found to improve performance on multimodal downstream tasks (Singh et al., 2022). It is computed separately from the contrastive loss, which is applied to audio and text tokens without any masking.
Audio-Text Matching (ATM) Finally, we incorporate an Audio-Text Matching loss, L ATM , in which we feed the model a batch of samples that includes both matched and unmatched audio-text pairs. We apply a classifier on top of the output of the multimodal encoder to decide whether the input audio and text match each other.
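A minimal NumPy sketch of the matching loss as a binary cross-entropy over matched and unmatched pairs (the classifier producing the logits is omitted, and the specific loss form is our assumption; the text above only specifies a classifier on the multimodal output):

```python
import numpy as np

def atm_loss(match_logits: np.ndarray, is_matched: np.ndarray) -> float:
    """Binary cross-entropy for audio-text matching.

    match_logits: (batch,) scalar classifier outputs on the multimodal CLS state.
    is_matched:   (batch,) 1.0 for matched audio-text pairs, 0.0 otherwise.
    """
    probs = 1.0 / (1.0 + np.exp(-match_logits))  # sigmoid
    eps = 1e-12                                   # avoid log(0)
    return float(-(is_matched * np.log(probs + eps)
                   + (1.0 - is_matched) * np.log(1.0 - probs + eps)).mean())

# One matched and one unmatched pair, both classified correctly.
loss = atm_loss(np.array([5.0, -5.0]), np.array([1.0, 0.0]))
```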

Pretraining WhisBERT
We pretrain WhisBERT on both text and audio samples from the dataset introduced in Section 3 for five epochs with stochastic gradient descent. Although WhisBERT is able to learn from both paired and unpaired examples, our pretraining dataset contains only text-audio pairs. This allows us to always apply all unimodal and multimodal objective functions. For further details and hyperparameters, we refer to this GitHub repository.

People's Speech Dataset
The People's Speech dataset (Galvez et al., 2021) is a free-to-download, 30,000-hour English speech recognition dataset. The dataset is collected from appropriately licensed internet audio data with existing transcriptions and consists of a clean and a dirty subset. We re-transcribed and re-aligned the People's Speech dataset using recently released automatic speech recognition toolkits (Radford et al., 2022; Bain et al., 2023), which may provide better alignment than the baseline, publicly available alignments. For this step, we transcribed the speech using the Whisper large-v2 model from OpenAI (Radford et al., 2022). Numerals and non-standard characters were suppressed in the transcriptions, such that numbers were represented as words and non-standard characters were omitted. Otherwise, default parameters were used. The transcriptions were force-aligned to the audio files using the WhisperX pipeline (Bain et al., 2023; Bredin et al., 2019; Baevski et al., 2020). We excluded very short transcripts (fewer than 100 words) and transcripts in which more than 0.1% of the words could not be transcribed. The remaining files were sorted according to mean word-level transcription confidence (Whisper estimates a value between 0 and 1 that denotes the transcription confidence per word). We selected the files containing the first 100M words in this ordering. The average confidence over these final 100M words was 0.78, with 47M words from the clean audio subset and 53M words from the dirty audio subset. The transcribed, word-aligned dataset will be made available as part of an upcoming dataset release.
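The filtering and selection procedure described above can be sketched as follows (the dictionary keys are hypothetical placeholders for per-file metadata; the thresholds come from the text):

```python
def select_files(files, target_words=100_000_000):
    """Filter transcribed files and select those containing the first
    100M words when sorted by transcription confidence."""
    # Exclude very short transcripts and those in which more than 0.1%
    # of the words could not be transcribed.
    kept = [f for f in files
            if f["n_words"] >= 100 and f["frac_untranscribed"] <= 0.001]
    # Sort by mean word-level confidence, most confident first.
    kept.sort(key=lambda f: f["mean_confidence"], reverse=True)
    # Accumulate files until the running word count reaches the target.
    selected, total = [], 0
    for f in kept:
        if total >= target_words:
            break
        selected.append(f)
        total += f["n_words"]
    return selected

toy_files = [
    {"n_words": 50,  "frac_untranscribed": 0.0,  "mean_confidence": 0.90},  # too short
    {"n_words": 200, "frac_untranscribed": 0.01, "mean_confidence": 0.95},  # too noisy
    {"n_words": 300, "frac_untranscribed": 0.0,  "mean_confidence": 0.80},
    {"n_words": 400, "frac_untranscribed": 0.0,  "mean_confidence": 0.60},
]
chosen = select_files(toy_files, target_words=500)
```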

Experimental Results
The main question we are interested in is whether pretraining on audio-text data can improve model performance. We assess this by comparing the text-encoder-only version of WhisBERT to the exact same architecture trained with the multimodal objectives introduced in Section 2.2. (This is the MLM (text) vs. MMM (multimodal) comparison in Table 1.) Our results suggest that the answer is mixed. The MLM (text-only) version of the model achieves higher scores on 12 out of the 17 test suites, with the multimodal model performing better on the Ellipsis, Island Effects, Quantifiers, Hypernym, and Question/Answer Congruence (tricky) tests. Interestingly, the three of these that were in the original BLiMP paper (Ellipsis, Island Effects, and Quantifiers) were three of the four lowest-scoring tests for human accuracy, suggesting that where multimodality does help, it is in processing particularly syntactically difficult material. Both of our trained models outperform the OPT-125M, RoBERTa, and T5 baselines when averaging across tasks.

Discussion
Limitations We begin our discussion by noting the limitations of the current work. First, the People's Speech dataset presents a unique set of challenges, which likely resulted in limitations of the WhisBERT model. The most significant of these is that it is primarily comprised of audio from movies, and thus includes background noise, music, and audio effects that accompany the dialog. This could have lowered text-audio alignment accuracy, and likely made the audio-modeling challenge more difficult than it would be for an in-studio recorded dataset. Second, the requirements of the BabyLM challenge imposed additional restrictions. Most notably, we were not allowed to use pretrained audio encoders and thus had to train these from scratch. This likely contributed to suboptimal performance and requires further exploration. Furthermore, due to time limitations, we did not fully explore the space of the model's hyperparameters; it is well known that changes in hyperparameter settings can have large impacts on a model's performance.
Our mixed results when comparing WhisBERT against a text-only model suggest that small data settings are insufficient for effectively training a multimodal masked language model. Given that the architectural basis for WhisBERT, FLAVA, was designed and built as a large-data foundation model, we suggest that such larger-data settings serve as the basis for future development and testing of the WhisBERT model.

Future Work
We plan to train versions of WhisBERT on more than 100M words and their corresponding audio. This would enable investigations of the full capacity of the WhisBERT model and make it more comparable to similar vision-text models such as FLAVA (Singh et al., 2022). On the architecture level, one could replace the bidirectional transformer in the WhisBERT architecture with an autoregressive language model, allowing the use of the standard Whisper pretraining objectives in addition to the multimodal ones.

Contribution Statement
LW, EH, TIR, EGW, and AW conceived of the ideas presented in this work. KK and GT provided the dataset used in pretraining WhisBERT. LW implemented the model and carried out the experiments. LW, KK, GT, EGW, AW, and TIR wrote the manuscript. All authors edited the manuscript and reviewed the work.

Figure 1 :
Figure 1: Text-only baseline vs. WhisBERT on the masked language modeling task during the first epoch. Interestingly, during the first epoch WhisBERT seems to perform better (outperforming the text-only baseline in 11 out of 17 tasks), but after five epochs it no longer outperforms the text-only baseline across the benchmark tasks.

Table 1 :
Evaluation scores of text-only (MLM), multimodal WhisBERT (MMM), and the BabyLM baselines on BLiMP tasks. The BabyLM baselines were trained on the 100M-word BabyLM dataset.