@inproceedings{schneider-etal-2024-mousai,
    title = "Mo{\^u}sai: Efficient Text-to-Music Diffusion Models",
    author = {Schneider, Flavio and
      Kamal, Ojasv and
      Jin, Zhijing and
      Sch{\"o}lkopf, Bernhard},
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.luhme-long.437/",
    doi = "10.18653/v1/2024.acl-long.437",
    pages = "8050--8068",
    abstract = "Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another {\textquotedblleft}language{\textquotedblright} of communication {--} music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Mo{\^u}sai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. Moreover, our model features high efficiency, which enables real-time inference on a single consumer GPU with a reasonable speed. Through experiments and property analyses, we show our model's competence over a variety of criteria compared with existing music generation models. Lastly, to promote the open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Codes: https://github.com/archinetai/audio-diffusion-pytorch. Music samples for this paper: http://bit.ly/44ozWDH. Music samples for all models: https://bit.ly/audio-diffusion."
}
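The abstract above describes the system's overall shape: a two-stage cascade in which a first stage compresses 48 kHz stereo audio into a low-rate latent sequence, and a second stage runs text-conditioned diffusion in that latent space. The sketch below illustrates that pipeline shape in plain PyTorch. It is not the authors' implementation (see the linked audio-diffusion-pytorch repository for that); every class name, hyperparameter, and the simplified Euler sampling loop are assumptions for illustration only.

# Minimal, illustrative sketch of a cascading two-stage latent text-to-music
# pipeline, in the spirit of the abstract above. All names, shapes, and the
# sampler are hypothetical; this is NOT the paper's implementation.
import torch
import torch.nn as nn


class LatentEncoder(nn.Module):
    """Stage 1 (training side): compress a stereo waveform into a coarse latent."""

    def __init__(self, channels=2, latent_dim=32, stride=64):
        super().__init__()
        self.down = nn.Conv1d(channels, latent_dim, kernel_size=2 * stride,
                              stride=stride, padding=stride // 2)

    def forward(self, audio):  # (B, 2, T) -> (B, latent_dim, ~T / stride)
        return self.down(audio)


class LatentDecoder(nn.Module):
    """Stage 1 (inference side): upsample latents back to a stereo waveform."""

    def __init__(self, channels=2, latent_dim=32, stride=64):
        super().__init__()
        self.up = nn.ConvTranspose1d(latent_dim, channels, kernel_size=2 * stride,
                                     stride=stride, padding=stride // 2)

    def forward(self, z):
        return torch.tanh(self.up(z))


class TextConditionedDenoiser(nn.Module):
    """Stage 2: predict the diffusion target for noisy latents given a text embedding."""

    def __init__(self, latent_dim=32, text_dim=512):
        super().__init__()
        self.cond = nn.Linear(text_dim, latent_dim)
        self.net = nn.Sequential(
            nn.Conv1d(latent_dim, 128, 3, padding=1), nn.SiLU(),
            nn.Conv1d(128, latent_dim, 3, padding=1),
        )

    def forward(self, z_noisy, sigma, text_emb):
        # A per-channel bias from the text embedding stands in for the
        # cross-attention conditioning a real model would use; sigma is
        # folded in as a scalar offset for the same reason.
        z = z_noisy + self.cond(text_emb).unsqueeze(-1) + sigma
        return self.net(z)


@torch.no_grad()
def sample(denoiser, decoder, text_emb, latent_len=1024, steps=50):
    """Crude Euler loop over a linear noise schedule (not the paper's sampler)."""
    z = torch.randn(text_emb.shape[0], 32, latent_len)
    sigmas = torch.linspace(1.0, 0.0, steps + 1)
    for i in range(steps):
        pred = denoiser(z, sigmas[i], text_emb)
        z = z - (sigmas[i] - sigmas[i + 1]) * pred  # step toward sigma = 0
    return decoder(z)  # (B, 2, latent_len * stride)


text_emb = torch.randn(1, 512)  # e.g. a frozen language-model sentence embedding
audio = sample(TextConditionedDenoiser(), LatentDecoder(), text_emb)
print(audio.shape)  # torch.Size([1, 2, 65536])

At the assumed 64x temporal compression, 1,024 latent frames decode to 65,536 samples, roughly 1.4 seconds at 48 kHz; generating multi-minute music as the paper describes requires diffusing over a correspondingly longer latent sequence, which is exactly what the latent-space design makes affordable.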
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
  <mods ID="schneider-etal-2024-mousai">
    <titleInfo>
      <title>Moûsai: Efficient Text-to-Music Diffusion Models</title>
    </titleInfo>
    <name type="personal">
      <namePart type="given">Flavio</namePart>
      <namePart type="family">Schneider</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Ojasv</namePart>
      <namePart type="family">Kamal</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Zhijing</namePart>
      <namePart type="family">Jin</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Bernhard</namePart>
      <namePart type="family">Schölkopf</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">author</roleTerm>
      </role>
    </name>
    <originInfo>
      <dateIssued>2024-08</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
      <titleInfo>
        <title>Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</title>
      </titleInfo>
      <name type="personal">
        <namePart type="given">Lun-Wei</namePart>
        <namePart type="family">Ku</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Andre</namePart>
        <namePart type="family">Martins</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <name type="personal">
        <namePart type="given">Vivek</namePart>
        <namePart type="family">Srikumar</namePart>
        <role>
          <roleTerm authority="marcrelator" type="text">editor</roleTerm>
        </role>
      </name>
      <originInfo>
        <publisher>Association for Computational Linguistics</publisher>
        <place>
          <placeTerm type="text">Bangkok, Thailand</placeTerm>
        </place>
      </originInfo>
      <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another “language” of communication – music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Moûsai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. Moreover, our model features high efficiency, which enables real-time inference on a single consumer GPU with a reasonable speed. Through experiments and property analyses, we show our model’s competence over a variety of criteria compared with existing music generation models. Lastly, to promote the open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Codes: https://github.com/archinetai/audio-diffusion-pytorch. Music samples for this paper: http://bit.ly/44ozWDH. Music samples for all models: https://bit.ly/audio-diffusion.</abstract>
    <identifier type="citekey">schneider-etal-2024-mousai</identifier>
    <identifier type="doi">10.18653/v1/2024.acl-long.437</identifier>
    <location>
      <url>https://aclanthology.org/2024.luhme-long.437/</url>
    </location>
    <part>
      <date>2024-08</date>
      <extent unit="page">
        <start>8050</start>
        <end>8068</end>
      </extent>
    </part>
  </mods>
</modsCollection>
%0 Conference Proceedings
%T Moûsai: Efficient Text-to-Music Diffusion Models
%A Schneider, Flavio
%A Kamal, Ojasv
%A Jin, Zhijing
%A Schölkopf, Bernhard
%Y Ku, Lun-Wei
%Y Martins, Andre
%Y Srikumar, Vivek
%S Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2024
%8 August
%I Association for Computational Linguistics
%C Bangkok, Thailand
%F schneider-etal-2024-mousai
%X Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another “language” of communication – music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Moûsai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. Moreover, our model features high efficiency, which enables real-time inference on a single consumer GPU with a reasonable speed. Through experiments and property analyses, we show our model’s competence over a variety of criteria compared with existing music generation models. Lastly, to promote the open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Codes: https://github.com/archinetai/audio-diffusion-pytorch. Music samples for this paper: http://bit.ly/44ozWDH. Music samples for all models: https://bit.ly/audio-diffusion.
%R 10.18653/v1/2024.acl-long.437
%U https://aclanthology.org/2024.luhme-long.437/
%U https://doi.org/10.18653/v1/2024.acl-long.437
%P 8050-8068
Markdown (Informal)
[Moûsai: Efficient Text-to-Music Diffusion Models](https://aclanthology.org/2024.luhme-long.437/) (Schneider et al., ACL 2024)
ACL
Flavio Schneider, Ojasv Kamal, Zhijing Jin, and Bernhard Schölkopf. 2024. Moûsai: Efficient Text-to-Music Diffusion Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8050–8068, Bangkok, Thailand. Association for Computational Linguistics.