Yiwen Chen
2024
Scansion-based Lyrics Generation
Yiwen Chen | Simone Teufel
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
We aim to generate lyrics for Mandarin songs with a good match between the melody and the tonal contour of the lyrics. Our solution relies on mBART, treating lyrics generation as a translation problem, but rather than translating directly from the melody as is common, the novelty of this paper is that we generate from scansion as an intermediate contour representation that can fit a given melody. One advantage of our solution is that it does not require a parallel melody-lyrics dataset. We also present a thorough automatic evaluation of our system against competitors, using several new evaluation metrics. These measure intelligibility and fit to melody, and use proxies for quantifying creativity (variation from other songs created by the same system in different settings, semantic similarity to keywords given to the system, and perplexity). When comparing different implementations of scansion to competitor systems, a varied picture emerges. Our best system outperforms all others in lyric-melody fit and is in the top group of systems for two of the creativity metrics (variation and perplexity), overshadowing two large language models (LLMs) specialised to this task.
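The abstract above frames lyric generation as translation from a scansion string into Mandarin text; a minimal sketch of such a setup with Hugging Face mBART follows. The checkpoint name, the language codes, and the tone-contour encoding of the scansion input are illustrative assumptions, not the paper's actual representation or trained model.

# Minimal sketch (not the authors' code): sequence-to-sequence "translation"
# from a scansion string to Mandarin lyrics with mBART. The contour encoding
# below (one hypothetical tone symbol per target syllable) is an assumption.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50"  # assumed base checkpoint, before any fine-tuning
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="zh_CN", tgt_lang="zh_CN")
model = MBartForConditionalGeneration.from_pretrained(model_name)

# A hypothetical scansion line: one contour token per intended syllable.
scansion = "RISE FALL FALL LEVEL RISE RISE FALL"
inputs = tokenizer(scansion, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"],  # force Mandarin output
    max_length=32,
    num_beams=5,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])

In practice the model would first be fine-tuned on scansion-lyrics pairs derived from lyrics text alone, which is what allows the approach to avoid parallel melody-lyrics data.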
2023
Unsupervised Melody-to-Lyrics Generation
Yufei Tian | Anjali Narayan-Chen | Shereen Oraby | Alessandra Cervone | Gunnar Sigurdsson | Chenyang Tao | Wenbo Zhao | Yiwen Chen | Tagyoung Chung | Jing Huang | Nanyun Peng
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Automatic melody-to-lyric generation is a task in which song lyrics are generated to go with a given melody. It is of significant practical interest and more challenging than unconstrained lyric generation, as the music imposes additional constraints on the lyrics. The training data is limited because most songs are copyrighted, resulting in models that underfit the complicated cross-modal relationship between melody and lyrics. In this work, we propose a method for generating high-quality lyrics without training on any aligned melody-lyric data. Specifically, we design a hierarchical lyric generation framework that first generates a song outline and then the complete lyrics. The framework enables disentanglement of training (based purely on text) from inference (melody-guided text generation) to circumvent the shortage of parallel data. We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints as guidance during inference. The two-step hierarchical design also enables content control via the lyric outline, a much-desired feature for democratizing collaborative song creation. Experimental results show that our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines such as SongMASS, a SOTA model trained on a parallel dataset, with a 24% relative overall quality improvement based on human ratings. Our code is available at https://github.com/amazon-science/unsupervised-melody-to-lyrics-generation.
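As a rough illustration of how a melody could be compiled into decoding constraints, the sketch below maps each melody phrase to a syllable-count constraint for one lyric line, with long notes marking positions that should prefer stressed syllables. The Note class and the one-syllable-per-note rule are assumptions made for illustration, not the paper's actual segmentation and rhythm-alignment procedure.

# Minimal sketch (assumptions, not the paper's implementation): each melody
# phrase yields one lyric-line constraint with a target syllable count and a
# set of positions where longer notes suggest stressed syllables.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI pitch (kept only for illustration)
    duration: float  # duration in beats

def melody_to_constraints(phrases):
    """phrases: list of lists of Note -> list of per-line decoding constraints."""
    constraints = []
    for phrase in phrases:
        constraints.append({
            "n_syllables": len(phrase),  # assume one syllable per note
            "stressed_positions": [i for i, n in enumerate(phrase) if n.duration >= 1.0],
        })
    return constraints

# Example: two short phrases yield two lyric-line constraints.
melody = [[Note(60, 0.5), Note(62, 0.5), Note(64, 1.0)],
          [Note(64, 0.5), Note(62, 1.5)]]
print(melody_to_constraints(melody))

Constraints of this shape could then steer a text-only lyric generator at inference time, which is the disentanglement the abstract describes.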
2021
Synthetic Textual Features for the Large-Scale Detection of Basic-level Categories in English and Mandarin
Yiwen Chen | Simone Teufel
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Basic-level categories (BLC) are an important psycholinguistic concept introduced by Rosch et al. (1976); they are defined as the most inclusive categories for which a concrete mental image of the category as a whole can be formed, and also as those categories which are acquired early in life. Rosch’s original algorithm for detecting BLC (called cue validity) is based on the availability of semantic features such as “has tail” for “cat”, and has remained untested at scale. An at-scale algorithm for the automatic determination of BLC exists, but it operates without Rosch-style semantic features and is thus unable to verify Rosch’s hypothesis. We present the first method for the detection of BLC at scale that makes use of Rosch-style semantic features. For both English and Mandarin, we test three methods of generating such features for any synset within WordNet (WN): extraction of textual features from Wikipedia pages, Distributional Memory (DM), and BART. The best of our methods outperforms the current state of the art in BLC detection, with an accuracy of 75.0% for English BLC detection and 80.7% for Mandarin BLC detection on a test set. When applied to all of WordNet, our model predicts that 1,118 synsets in English WordNet (1.4%) are BLC, far fewer than existing methods predict, and with a precision improvement of over 200% over these. As well as confirming the usefulness of Rosch’s cue-validity algorithm, we also developed and evaluated our own new indicator for BLC, which models the fact that BLC features tend to be BLC themselves.
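For readers unfamiliar with cue validity, the sketch below computes a simplified version of Rosch's measure: the cue validity of a feature f for a category c is P(c | f), and a category's total cue validity sums this over its features, with basic-level categories expected to maximise it. The uniform-probability simplification and the toy feature sets are assumptions for illustration only, not the paper's data or implementation.

# Minimal sketch (illustrative, not the paper's code): Rosch-style cue validity
# over Rosch-style semantic features such as "has tail".
from collections import defaultdict

def cue_validity(category_features):
    """category_features: dict mapping category -> set of semantic features."""
    feature_counts = defaultdict(int)  # number of categories in which each feature occurs
    for feats in category_features.values():
        for f in feats:
            feature_counts[f] += 1
    # Total cue validity of c = sum over its features f of P(c | f),
    # here simplified to 1 / (number of categories sharing f).
    return {
        c: sum(1.0 / feature_counts[f] for f in feats)
        for c, feats in category_features.items()
    }

toy = {
    "cat":     {"has tail", "has fur", "is animate"},
    "animal":  {"is animate"},
    "siamese": {"has tail", "has fur", "is animate", "has blue eyes"},
}
print(cue_validity(toy))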