Automatic Song Translation for Tonal Languages

This paper develops automatic song translation (AST) for tonal languages and addresses the unique challenge of aligning words' tones with the melody of a song in addition to conveying the original meaning. We propose three criteria for effective AST (preserving meaning, singability, and intelligibility) and design metrics for these criteria. We develop a new benchmark for English-Mandarin song translation and build an unsupervised AST system, Guided AliGnment for Automatic Song Translation (GagaST), which combines pre-training with three decoding constraints. Both automatic and human evaluations show GagaST successfully balances semantics and singability.


Introduction
Suppose you are asked to translate the lyrics "Let it go" from the Disney musical Frozen into Mandarin Chinese. Some good, literal translations of this would be A) "fàng shǒu", B) "fàng shǒu ba", or C) "ràng tā qù ba" (Figure 1); these get the meaning across and are the domain of traditional machine translation. However, what if you needed to sing this song in Mandarin? These literal translations simply do not work: Translations A and C do not match the number of notes and break the original rhythm, while the tones of Translation B do not match the pitch flow of the original melody.
Song translation, unlike translating lyrics for understanding (subtitling), aims to translate the lyrics so that they can be sung with the original melody. Therefore, the translated lyrics must match the prosody of the pre-existing music in addition to retaining the original meaning. In Singable Translations of Songs, Low (2003) calls this an uncommon and unusually complex task: a translator must consider rhythm, notes' pitches, phrasing, and stress. Nonetheless, there are cultural and commercial incentives for more efficient song translation; Frozen alone made over half a billion dollars in non-English box office receipts, and the musical Les Misérables has been performed in over a dozen languages on stage.
As we discuss in Section 2, while translating Western songs resembles poetry translation, translating into tonal languages (e.g., Mandarin, Zulu, and Vietnamese) introduces new problems. In tonal languages, a word's pitch contributes to its meaning (Figure 2); when singing in tonal languages, the tones of translated words must align with the "flow" of the pitches in the music (Section 2.1). For example, if "fáng shǒu" were sung instead of "fàng shǒu" (because the notes are going up), a listener might hear "defensive" instead of the intended meaning.
This paper builds the first system for automatic song translation (AST) for one tonal language, Mandarin. Section 3 proposes three criteria needed in an AST system: preserving semantics, singability, and intelligibility.
Guided by those goals, we propose an unsupervised AST system, Guided AliGnment for Automatic Song Translation (GagaST). GagaST begins with an out-of-domain translation system (Section 4.1) and adds song alignment constraints that favor translations that are the appropriate length and whose tones match the underlying music (Section 4.2). Naturally, such constraints trade off semantic meaning against singability and intelligibility. Section 5.4 discusses this trade-off between song alignment scores and the standard machine translation metric, BLEU.
These criteria also form the basis of our initial automatic evaluation (Section 5.3). However, we go beyond automatic evaluation through a human-centered evaluation with musicology students.
GagaST creates singable songs that make sense given the original text, and our proposed alignment scores correlate with human judgements (Section 5.4.3).


Background: Prose, Poetry, and Song Translation

A spoken language can be divided into two forms: prose, which corresponds to natural conversation and conventional grammatical structure; and verse, typically rhythmic and broken into stanzas, such as poetry and song lyrics.
The vast majority of machine translation research has focused on prose translation and has made huge progress; in contrast, verse translation must obey rhythmic constraints and is far less developed. In his tour de force Le Ton Beau de Marot, Douglas Hofstadter presents eighty-nine translations of a single poem to capture the panoply of considerations that make the task difficult (Hofstadter, 1997).
In Western verse, the rhythmic structure is mostly defined by meter, such as iambic pentameter for sonnets, which defines the length of each line and the patterns of long versus short and stressed versus weak syllables. Existing work (Greene et al., 2010; Ghazvininejad et al., 2018) uses finite-state constraints to encode both meter and rhyme.
Song translation, on the other hand, can be viewed as translation where the melody defines the constraints. Reproducing all of the essential values of a song (perfectly matching the meaning, perfectly singable, and perfectly understandable) is an impossible ideal (Franzon, 2008). Thus, trade-offs are unavoidable. Low (2003) argues for prioritizing singability over other qualities such as sense and rhyme, since "effectiveness on stage" is a practical necessity. Tonal languages (e.g., Mandarin, Zulu, and Vietnamese) dramatically increase the complexity of singability and introduce a new factor that can hamper intelligibility.

Song Translation for Tonal Languages
For tonal languages, pitch contributes to the meaning of words. By a conservative estimate, fifty to sixty percent of the world's languages are tonal (Yip, 2002), covering over 1.5 billion people.
For the lyrics to be intelligible, the speech tone and music tone should be correlated (Schneider, 1961). If not, the pitch contour can override the intended tone, producing different meanings. This is not just a theoretical consideration; Figure 3 shows how lyrics can be, and have been, misunderstood.

Mandarin Tones and How to Sing Them
Schellenberg (2013) summarizes the rules of singing with tones, with a focus on Chinese dialects.
The tonal system of Mandarin has two components:
• The pitch level and shape of tones. The four Mandarin tones have been in use since the 19th century (Figure 2). We denote tones with a diacritic over the vowel whose shape roughly matches the shape of the tone. The four tones are high level (tone 1, e.g., shūo), rising (tone 2, yú), falling-rising (tone 3, wǒ), and falling (tone 4, huài).
• The sandhi of tones. Some combinations of tones have difficult articulatory patterns, so words that would normally have one tone may take another depending on the context. For example, "nǐ" (you) and "hǎo" (good) are typically both third tone, but when they appear together the phrase is pronounced "ní hǎo" (hello), with the first syllable changing to a second tone. These changes are called sandhi (Xu, 1997; Hu, 2017).
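As a toy illustration of the third-tone rule, the following sketch applies it to a sequence of tone numbers (a simplification that ignores the many other sandhi rules and word-boundary effects):

```python
# Toy sketch of Mandarin third-tone sandhi: a third tone before another
# third tone is realized as a second tone, as in "ni hao" -> "ní hǎo".
def apply_third_tone_sandhi(tones):
    """tones: tone numbers (1-4) for consecutive syllables of one phrase."""
    out = list(tones)
    for i in range(len(out) - 1):
        if out[i] == 3 and out[i + 1] == 3:
            out[i] = 2  # 3-3 is pronounced 2-3
    return out

apply_third_tone_sandhi([3, 3])  # -> [2, 3]
```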
Mandarin tones interact with a sung melody in two ways (Yinliu et al., 1983; Schellenberg, 2013) to ensure lyrics are intelligible. First, at a local level, the tone shape of an individual syllable should be consistent with the musical notes it is matched with; for example, in "Love Island" (Figure 4), "shàng" in the blue block has the "falling" shape, and the group of notes assigned to it also falls from an A to an E. Second, at a global level, the music's pitch contour should align with the tones of the corresponding syllables (taking sandhi into account). In practice, we align the transitions between successive syllables and successive notes (Figure 5), ensuring that the tone matches the relative pitch change (Schellenberg, 2013).
Figure 4: The output of a song translation needs to align syllables to the reference melody. There are several options, as evinced by the song "Love Island (xīn dǎo)". Orange (top): REST notes; blue (bottom left): one syllable is assigned to a group of multiple notes (which needs tone shape alignment: the down arrow matches the falling tone of "ràng"); green (bottom right): one syllable is assigned to one note.

AST for Tonal Languages
This section formally defines automatic song translation (AST) for tonal languages and introduces three criteria for what makes a good song translation. These criteria form the foundation for the quantitative metrics we use in our experiments.

Criteria
There are three criteria that a singable song translation needs to fulfil.
Preserve meaning. The translated lyrics should be faithful to the original source lyrics.
Singability. Low (2003) defines singability as the phonetic compatibility of translated lyrics and music. The translated song needs to be singable without too much difficulty; difficult consonant clusters, cramming too many syllables into a line, or incompatible tones all impair singability.
Intelligibility. The translated song needs to be understood by the listener. This quality has two components. First, can a listener produce any transcription of the lyrics? If the lyrics are too fast or garbled because the keywords do not fit well with the music, the lyrics are unintelligible. Beyond this basic test of recognizability, the lyrics must also be accurate: does this transcription match the intended meaning? Both aspects matter for a stage performance, since the audience should understand the content to follow the plot. For pop songs, not understanding all of the content may be acceptable for some audiences; for example, Adriano Celentano's Prisencolinensinainciusol sacrifices all intelligibility for singability (Bellos, 2013). However, in more traditional media, hilarious misheard lyrics can ruin the audience's experience (Figure 3).

Task Definition
We define the AST task as follows: given an aligned pair of melody M and source lyrics X, generate translated text Y in the target language that aligns with the input melody M.

Specifically, X = [x_1, ..., x_L] are the input lyrics with L syllables. Each syllable x_i is aligned to a snippet of the melody (Table 1) represented by a sequence of notes. To represent this to our algorithm, each syllable is aligned to three components of the melody:
1. A sequence of pitch values p_i = [p_i^0, ...] with |p_i| ≥ 1, where an integer step of 1.0 means a semitone (e.g., between C and C-sharp).
2. The duration of those notes, d_i, where 1.0 is a quarter note. Because it encodes the duration of each note, the length of d_i must be the same as the length of p_i.
3. Sometimes there is a rest (pause) before a lyric is sung; we align it to the following syllable i. The scalar r_i is the real-valued duration of the REST note before note group p_i. If no REST exists before p_i, r_i = 0.0.
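To make this representation concrete, here is a minimal sketch of how each aligned syllable might be encoded; the class and field names are our own hypothetical choices, not from the paper:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical encoding of the melody components aligned to one syllable.
@dataclass
class AlignedSyllable:
    pitches: List[float]      # p_i: one value per note, a step of 1.0 = one semitone
    durations: List[float]    # d_i: one value per note, 1.0 = a quarter note
    rest_before: float = 0.0  # r_i: duration of the REST before p_i (0.0 if none)

    def __post_init__(self):
        # d_i must have the same length as p_i: one duration per note
        assert len(self.pitches) == len(self.durations)

# A syllable sung over two notes, preceded by a half-rest:
syl = AlignedSyllable(pitches=[60.0, 62.0], durations=[0.5, 0.5], rest_before=0.5)
```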

Constraints for Aligning Lyrics to Music
To make translated songs singable and intelligible, we summarize three desirable properties that the AST outputs should have if they are to match the underlying melody. Each of these induces a score function that we use both in our objective function for constrained translation and in our evaluation metrics.

Length Alignment
The number of syllables L_y in the translated lyrics Y needs to match the number of note groups p_i in the melody M, so that the lyrics can be sung with the music. Within the scope of this paper, we either keep the original grouping in the melody M and set L_y = L_x to reproduce the original music, or strictly produce one target syllable for each single note in the melody.
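Checking this constraint on Mandarin output is straightforward, since each Chinese character corresponds to one syllable; a minimal sketch:

```python
# Count Mandarin syllables: one per CJK Unified Ideograph, ignoring
# punctuation and other symbols.
def mandarin_syllable_count(text: str) -> int:
    return sum(1 for ch in text if '\u4e00' <= ch <= '\u9fff')

mandarin_syllable_count("让它去吧")  # -> 4
```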

Pitch Alignment
For tonal languages, the pitch of the music must match the lyrics. As in Section 2.2, there are two types of pitch alignment: 1) intra-syllable: the tone shape of each syllable (Figure 4, blue box) should align with the shape of its assigned group of notes; 2) inter-syllable: the overall pitch contour of the musical phrase should align with the tones of the lyrics.
Intra-syllable alignment.For an individual syllable, if it is assigned to more than one note (e.g., "love" in Table 1), those notes must be consistent with the shape of the syllable's tone (Wee, 2007).
For Mandarin, there are four tones (Xu, 1997, Figure 2). We estimate the shape of the multi-note sequence p_i by least-squares estimation and classify it into one of five categories: level, rising, falling, rising-falling, or falling-rising.

Specifically, for each group p_i with |p_i| > 1, we classify it as:
1. "level", if max(p_i) − min(p_i) ≤ 1.0; otherwise, we fit p_i to ax^2 + bx + c via least-squares estimation and compute the axis of symmetry l = −b/2a;
2. "rising", if (l ≤ p_i^0 and a > 0) or (l ≥ p_i^{−1} and a < 0);
3. "falling", if (l ≤ p_i^0 and a < 0) or (l ≥ p_i^{−1} and a > 0);
4. "rising-falling", if p_i^0 < l < p_i^{−1} and a < 0;
5. "falling-rising", if p_i^0 < l < p_i^{−1} and a > 0.

We compare this shape with that of syllable y_i's tone and compute the intra-syllable alignment score S_intra^i:

S_intra^i = 1 if the shape of p_i matches the tone of y_i, else ε    (1)

where ε is a small parameter that allows for mismatches. Of the five patterns, "level" can match any tone, "rising" matches tone 2 (yú), "falling" matches tone 4 (huài), "falling-rising" matches tone 3 (wǒ), and "rising-falling" matches no Mandarin tone.
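Below is a minimal sketch of this shape classifier, assuming the quadratic is fit over note positions and the axis of symmetry is compared against the first and last positions (our interpretation of the bounds); the ε default is our own choice:

```python
import numpy as np

def note_group_shape(pitches, level_range=1.0):
    """Classify a multi-note group into one of the five shape categories."""
    p = np.asarray(pitches, dtype=float)
    if p.max() - p.min() <= level_range:
        return "level"
    if len(p) == 2:                    # two notes: direction is unambiguous
        return "rising" if p[1] > p[0] else "falling"
    x = np.arange(len(p))
    a, b, _ = np.polyfit(x, p, deg=2)  # fit a*x^2 + b*x + c
    if abs(a) < 1e-9:                  # essentially linear: no turning point
        return "rising" if b > 0 else "falling"
    l = -b / (2 * a)                   # axis of symmetry
    if l <= x[0]:                      # vertex before the group
        return "rising" if a > 0 else "falling"
    if l >= x[-1]:                     # vertex after the group
        return "falling" if a > 0 else "rising"
    return "rising-falling" if a < 0 else "falling-rising"

# Which Mandarin tones each shape can carry, per the matching rules above.
SHAPE_TO_TONES = {"level": {1, 2, 3, 4}, "rising": {2}, "falling": {4},
                  "falling-rising": {3}, "rising-falling": set()}

def s_intra(pitches, tone, eps=0.1):   # eps value is an assumption
    """Intra-syllable score: 1 for a shape/tone match, eps otherwise."""
    if len(pitches) <= 1:
        return 1.0                     # single notes impose no shape constraint
    return 1.0 if tone in SHAPE_TO_TONES[note_group_shape(pitches)] else eps
```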

Inter-syllable alignment. The second constraint compares the transition directions between consecutive tones (t_{i−1}, t_i) of successive syllables (y_{i−1}, y_i) that belong to the same word (see arrows in Figure 3). These must match the transition directions of the music notes (p_{i−1}, p_i); for simplicity, we compute the direction between two note groups using their first notes (p_{i−1}^0, p_i^0). Each transition (the movement from one syllable/note to the next) can be categorized as level, step up, jump up, step down, or jump down. We summarize the acceptable transitions for each pair of successive syllables in Figure 5, based on the analysis by Yinliu et al. (1983), and discuss our choices in more detail in Appendix A.2. Given two syllables (y_{i−1}, y_i), we compute the inter-syllable alignment score S_inter^i:

S_inter^i = 1 if the note transition (p_{i−1}, p_i) is acceptable for the tone pair (t_{i−1}, t_i), else ε    (2)

where ε again is a small value to allow mismatches.
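A sketch of the corresponding transition check follows; the 2-semitone step/jump boundary and the entries of the acceptability table are placeholders, since the full table lives in Figure 5:

```python
# Categorize the pitch transition between two successive note groups,
# using the first note of each group as above. The step/jump threshold
# is an assumption, not a value given in the paper.
def transition(p_prev, p_next, step_max=2.0):
    delta = p_next[0] - p_prev[0]  # movement between first notes, in semitones
    if delta == 0:
        return "level"
    size = "step" if abs(delta) <= step_max else "jump"
    return f"{size} {'up' if delta > 0 else 'down'}"

# Acceptable transitions per (previous tone, next tone) pair, from
# Figure 5 (after Yinliu et al., 1983); shown here only as a stub.
ACCEPTABLE = {
    (4, 1): {"level", "step up", "jump up"},  # hypothetical entry
    # ... remaining tone pairs elided ...
}

def s_inter(p_prev, p_next, t_prev, t_next, eps=0.1):
    """Inter-syllable score: 1 if the note transition suits the tone pair."""
    ok = transition(p_prev, p_next) in ACCEPTABLE.get((t_prev, t_next), set())
    return 1.0 if ok else eps
```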

Rhythmic Alignment with Word Segmentation in Mandarin
A musical REST is a silence separating musical phrases. Recall that in our setup of the data, the scalar r_i denotes the duration of the REST preceding syllable i. In any language, it is uncommon for a rest to break up a word's syllables, so a good translation should avoid this. For Mandarin, creating metrics that capture this is slightly more complicated because translation systems typically do not explicitly generate word boundaries. Thus, we must rely on the output of segmentation systems to know where word boundaries are.
An exception to this is punctuation (Figure 4). If a comma, period, or other punctuation is attached to the previous syllable y_{i−1}, that is a clear signal that it is fine to pause between them. Thus, for a syllable y_i preceded by a rest, where the segmenter says y_{i−1} and y_i belong to different words with probability P_seg, the rest score is:

S_R^i = 1 if punctuation follows y_{i−1}, else max(P_seg, ε)    (3)

where ε is a parameter that represents our tolerance of having a rest within a word.
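A sketch of this rest score under the reconstruction above; the punctuation set and the max(P_seg, ε) combination rule are our assumptions:

```python
# Rest-alignment score. P_seg (the probability that y_{i-1} and y_i
# belong to different words) would come from a word segmenter; here it
# is passed in directly.
PUNCTUATION = set("，。！？、；：,.!?;:")

def s_rest(rest_duration, prev_token, p_seg, eps=0.1):
    if rest_duration == 0.0:
        return 1.0  # no rest before this syllable: nothing to score
    if prev_token and prev_token[-1] in PUNCTUATION:
        return 1.0  # a rest after punctuation is a natural pause
    return max(p_seg, eps)  # otherwise, prefer rests at word boundaries
```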

GagaST
Ideally, we would build an AST system for English-Mandarin song translation with data-driven models trained on parallel data, i.e., aligned triples (M, X, Y). However, such data are not available in the quantity or quality necessary for Mandarin: there is not enough data of any quality, and the data that do exist have errors in the syllable-note alignment. Thus, we propose an unsupervised AST system, Guided AliGnment for Automatic Song Translation (GagaST). For pre-training, we collect non-parallel lyrics data in both English and Mandarin, as well as a small set of lyrics translation data (Section 5.1).

Song-Text Style Translation
To produce faithful translations in song-text style, we pre-train a transformer-based translation model with cross-domain data: translation data in the general domain, the collected monolingual lyrics data, and a small set of lyrics translation data. We prepend domain tags (Figure 6) to each input example to steer the model toward producing translations in the lyrics domain during song translation. For the monolingual lyrics data, we adopt BART pre-training (Lewis et al., 2020).

Music Guided Alignment Constraints
Without parallel data from which to learn the lyric-melody alignments, we impose the constraints from Section 3.3 in the decoding phase. Specifically, since all constraints are applied at the unigram (intra-syllable, REST) or bigram (inter-syllable, REST) level, we apply them at each step of beam search as rewards and penalties in the scoring function:

s(y_i) = log P(y_i | y_{<i}, X) + λ_inter S_inter^i + λ_intra S_intra^i + λ_R S_R^i    (4)

where S_inter, S_intra, and S_R refer to the alignment scores for inter-syllable pitch alignment, intra-syllable pitch alignment, and the rhythm alignment by REST. We introduce three tunable parameters, λ_inter, λ_intra, and λ_R, that control the importance of each of the song-specific constraints.
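Putting the pieces together, one beam-search step might score a candidate syllable as follows; this is a sketch reusing the s_intra, s_inter, and s_rest functions above, with the λ defaults set to the values selected in Section 5.4:

```python
# One step of constrained beam search (Equation 4). log_prob is the
# translation model's score for candidate syllable y_i; cand and prev
# bundle each syllable's aligned melody information.
def constrained_score(log_prob, cand, prev, p_seg,
                      lam_inter=0.5, lam_intra=1.0, lam_r=1.5):
    """cand/prev: (note_group, tone, rest_before, token); prev may be None."""
    p_i, t_i, rest, _ = cand
    score = log_prob + lam_intra * s_intra(p_i, t_i)
    if prev is not None:               # bigram constraints need a left context
        p_prev, t_prev, _, prev_token = prev
        score += lam_inter * s_inter(p_prev, p_i, t_prev, t_i)
        score += lam_r * s_rest(rest, prev_token, p_seg)
    return score
```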

Length Control in Pre-training
To meet the length constraints, we pre-define the syllable-note assignments with two strategies: 1) note-to-syllable, i.e., for each note, we produce one syllable; 2) syllable-to-syllable, i.e., we use the original note grouping in the input melody and assign one syllable to each note group. In either case, the length of the target translation is known. Following Lakew et al. (2019), we use a length tag "[LEN$i]" to control the length of outputs during pre-training, where $i refers to the length of the target sequence.
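A sketch of how the tagged inputs might be assembled during pre-training; the [LEN$i] spelling follows the paper, while the domain tag string is an assumption:

```python
# Prefix an example with its domain and target-length tags.
def make_tagged_input(src_text, domain_tag, target_len):
    return f"[{domain_tag}] [LEN{target_len}] {src_text}"

# e.g., a lyrics-domain example whose translation must have 3 syllables:
make_tagged_input("Let it go", "LYRICS", 3)  # -> "[LYRICS] [LEN3] Let it go"
```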

Generating Melody-constrained Lyrics and Validating Singability
This section details the datasets, model configuration, and proposed evaluation metrics. We then analyze the results and the trade-offs inherent in song translation. Our code and data are open-sourced at https://github.com/GagaST.

Training Datasets and Model Configuration
WMT dataset: news commentary and back-translated news datasets from WMT14 (29.6 million en2zh sentence pairs). No Cantonese text is included, and the Chinese text can be pronounced in Mandarin by default.
Monolingual lyrics data: monolingual lyrics in both Mandarin and English collected from the web (12.4 million lines of lyrics for Mandarin and 109.5 million for English after removing duplicates).
Lyrics translation data: a small set of lyrics translation data crawled from the web (140 thousand pairs of English-to-Mandarin lines). These translations are not singable.

Evaluation Datasets
For evaluation, we need aligned triples (melody M, source lyrics X, target reference lyrics Y) where two conditions hold: 1) M and X are syllable-to-note aligned; 2) the reference Y should be singable and intelligible. With the confluence of digitization and copyright making such resources rare, we choose fifty songs from the lyrics translation dataset that have open-source music sheets on the web and create aligned triples manually. However, because the reference lyrics in this dataset are not singable (our primary goal!), we use them only to validate that the translations preserve the original meaning. Twenty songs comprise the validation set (464 lines) and thirty songs comprise the test set (713 lines).

Evaluation Metrics
An AST system for tonal languages should generate translated songs that are singable and intelligible while conveying the original meaning. Evaluating such a system is intrinsically hard since all three qualities are qualitative. Especially for preserving meaning, the lack of gold references and the greater tolerance for loose translation in songs make it difficult to say how much semantic divergence is acceptable. Therefore, we first establish evaluations based on the relationship between lyrics and music and then design human annotations for more qualitative evaluation.

Objective Evaluation
Section 3.3 outlines three constraints inspired by music and linguistic theory. Because these constraints are directly incorporated into the decoding objective (Equation 4), the constrained system will necessarily score better on them than an unconstrained translation. However, we want to understand the trade-off between these new objectives and traditional translation evaluations.
To control for the length of the sentence, we normalize each alignment score to the range 0-1.0 by the number of alignment pairs L; that is, based on Equations 1, 2, and 3:

S = (1/L) Σ_{i=1}^{L} S^i

For the length constraint, we compute: 1) N_l, the number of samples longer than the predefined length L_i, and 2) N_s, the number of samples shorter than L_i. For each case we compute the average error ratio ∆l_i/L_i. For meaning, although we lack gold singable translations, we follow common practice and calculate BLEU (Papineni et al., 2002) between the translated songs and the prose translations.
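A sketch of the corpus-level length statistics as reported in Table 2 (count of over- or under-length samples plus their average error ratio):

```python
# Compute (N_l, avg error ratio) for too-long outputs and (N_s, avg
# error ratio) for too-short ones, given predicted and target lengths.
def length_error_stats(pred_lens, target_lens):
    longer = [(p - t) / t for p, t in zip(pred_lens, target_lens) if p > t]
    shorter = [(t - p) / t for p, t in zip(pred_lens, target_lens) if p < t]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return (len(longer), avg(longer)), (len(shorter), avg(shorter))
```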

Trade-offs between Meaning and Melody-lyric Alignments
GagaST adds constraints to the decoding scoring function to enforce lyric-music alignments; however, there are trade-offs between preserving meaning and adhering to these constraints. To select the importance of each constraint in decoding, we vary the value of the corresponding parameter λ (Equation 4) and analyze how much the BLEU score falls on the validation set as we increase the influence of the parameter. We set the hyper-parameters at the point where the alignment scores increase quickly while BLEU decreases slowly. The REST constraint does not affect BLEU (Table 2) but does alter the amount of punctuation. Working from the assumption that excessive punctuation is bad, we select a parameter that minimizes the mismatches between RESTs and word boundaries. We choose (Figure 7) λ_inter = 0.5, λ_intra = 1.0, and λ_R = 1.5 for all subsequent experiments. However, these gains come at the cost of BLEU score. While we believe that the audience would be more accepting of a less-than-literal translation in a song if it sounds better, we need a qualitative evaluation to validate that hypothesis.

Qualitative Evaluation
The true test of whether AST works is whether the songs can be sung, understood, and enjoyed. Thus, we follow Sheng et al. (2021): we show annotators (music school students) the resulting sheet music, ask their opinions, and ask them to sing the songs. We randomly select five songs from the test set and show the music sheets (see Appendix C) of the first ten sentences of each translated song to five annotators.
Following the mean opinion score (MOS; Rec, 1994) used in speech synthesis, we use five-point Likert scales (1 for bad and 5 for excellent). We evaluate the songs on four dimensions: 1) sense: fidelity to the meaning of the source lyric; 2) style: whether the translated lyric resembles song-text style; 3) listenability: whether the translated lyric sounds melodious with the given melody; 4) intelligibility: whether the audience can easily comprehend the translated lyrics when sung with the provided melody. The last two dimensions require the annotators to sing the song.

Qualitative Evaluation Results
To examine whether the proposed constraints improve singability and intelligibility, our qualitative evaluation compares GagaST with only length constraints to fully constrained GagaST (Table 3) with syllable-to-syllable assignment. While the constraints significantly improve intelligibility and slightly improve singability (the listening experience), they make it harder for the original meaning to come through. Overall, the annotators are satisfied with the songs translated by GagaST: all aspects receive an average score around 3.5 out of 5. These case studies and three translated songs by GagaST, sung by an amateur singer, are available at https://gagast.github.io/posts/gagast.

Related Work
Verse Generation and Translation. Generating verse text began with rule-based implementations (Milic, 1970). Greene et al. (2010) intersect the finite-state representation of the meter and rhyme scheme with the synchronous context-free grammar of the translation model under the phrase-based machine translation framework. Ghazvininejad et al. (2018) apply finite-state constraints to a neural translation model. However, these representations of rhythmic and lexical constraints are not flexible enough to encode the real-valued representation of a song, as required for translation into tonal languages.
Lyrics Generation. As one of the most important tasks in automatic songwriting, lyrics generation has received increasing attention. Sheng et al. (2021), Lee et al. (2019), and Chen and Lerch (2020) generate lyrics via purely data-driven models without adding constraints based on expert knowledge. Oliveira et al. (2007) build a rule-based lyrics generation system that handles rhyme and rhythm with designed heuristics. Malmi et al. (2016) address rap lyrics generation via an information-retrieval approach and propose a rhyme-density measure. Watanabe et al. (2018) condition a standard RNN language model on a featurized input melody for rhythmic alignment. Ma et al. (2021) develop a SeqGAN-based lyrics generator to address various properties, such as rhythmic alignment, theme, and genre. Xue et al. (2021) use a transformer-based model to generate rap lyrics in reverse order, address rhymes with vowel embeddings, and add extra beat tokens for rhythmic alignment. We are the first to formally address the importance of aligning melody pitch with language tones in lyrics generation for tonal languages. We introduce two vital qualities of songs, singability and intelligibility, and design three types of melody-lyric alignment scores to improve them.

Conclusion
This paper addresses automatic song translation (AST) for tonal languages and the unique challenge of aligning words' tones with the melody, and builds the first English-Mandarin AST system, GagaST. Both objective and subjective evaluations demonstrate that GagaST successfully improves the singability and intelligibility of translated songs.
We leave additional constraints, such as rhyme and style, to future work, aiming for a systematic framework that addresses all constraints. With the help of newly developed singing voice synthesis tools such as X Studio, we can perform human evaluation with actual singing voices at a larger scale to provide more reliable analysis. Moreover, our system can also be applied to lyrics and song generation applications without translation input.

Figure 1: Example Mandarin translations for "Let it go" in Frozen. Of these, only the official human song translation is something a singer could actually sing: it fits the length of the notes and matches the tones with the pitch of the notes. GagaST finds translations that satisfy these constraints.

Figure 2: In tonal languages like Mandarin, the pitch changes the meaning of words (left). Each of the four tones in Mandarin (right) has a different pitch profile. Figure from Xu (1997).

Figure 3: If a song's music doesn't match the tones of the lyrics, it can cause the hearer to misunderstand them. In this example, someone can hear "sǐ zài" instead of "sì zài", because the notes are going up while the tone of "sì zài" goes down.

Figure 5: For translated songs in Mandarin to be singable, music notes should align with the tones of successive characters; this becomes our inter-syllable pitch alignment. The arrows show acceptable transitions in music for two successive Mandarin characters (w_{i−1}, w_i) based on the shape of Mandarin tones, including sandhi.

[Figure 6 legend: s* denotes the pitch alignment score in each beam with constraints; s, the score without constraints. Inputs are formatted as "[domain tag] [length tag] input texts". Scores in the figure are illustrative, not exact.]

Figure 6: Overview of GagaST for English-Mandarin song translation. We first pre-train a lyrics translation model with mixed-domain data (left) and then add alignment constraints to the decoding scoring function during inference (right). We use the unconstrained version as our baseline in the experiments.

Figure 7: Trade-off between meaning (y-axis) and lyric-music alignment (x-axis) while adjusting each tuning parameter λ on the validation set. The selected value of λ for downstream experiments is shown in red (preceded by λ =). REST constraints do not affect BLEU but increase the number of [punc] tokens, which impairs the fluency of the lyrics, so we select its parameter based on the number of [punc] tokens.

Table 2: Our song-specific constraints with two syllable alignment techniques. All results use the same pre-training checkpoint, and length tags are applied. For the length score, 9 (0.09) means that 9 out of 713 samples are longer than the predefined length, with an average error ratio of 0.09. All constraints have an effect, but inter-syllable pitch alignment has the largest.

Table 3: Qualitative evaluation results for GagaST without constraints and GagaST.