Yuan-Kuei Wu
2024
Codec-SUPERB: An In-Depth Analysis of Sound Codec Models
Haibin Wu | Ho-Lam Chung | Yi-Cheng Lin | Yuan-Kuei Wu | Xuanjun Chen | Yu-Chi Pai | Hsiu-Hsuan Wang | Kai-Wei Chang | Alexander Liu | Hung-yi Lee
Findings of the Association for Computational Linguistics: ACL 2024
The sound codec’s dual roles in minimizing data transmission latency and serving as tokenizers underscore its critical importance. Recent years have witnessed significant developments in codec models. The ideal sound codec should preserve content, paralinguistics, speakers, and audio information. However, the question of which codec achieves optimal sound information preservation remains unanswered, as in different papers, models are evaluated on their selected experimental settings. This study introduces Codec-SUPERB, an acronym for Codec sound processing Universal PERformance Benchmark. It is an ecosystem designed to assess codec models across representative sound applications and signal-level metrics rooted in sound domain knowledge. Codec-SUPERB simplifies result sharing through an online leaderboard, promoting collaboration within a community-driven benchmark database, thereby stimulating new development cycles for codecs. Furthermore, we undertake an in-depth analysis to offer insights into codec models from both application and signal perspectives, diverging from previous codec papers mainly concentrating on signal-level comparisons. Finally, we will release codes, the leaderboard, and data to accelerate progress within the community.
2021
Multi-accent Speech Separation with One Shot Learning
Kuan Po Huang | Yuan-Kuei Wu | Hung-yi Lee
Proceedings of the 1st Workshop on Meta Learning and Its Applications to Natural Language Processing
Speech separation is a problem in the field of speech processing that has been studied intensively in recent years. However, there has not been much work studying a multi-accent speech separation scenario. Unseen speakers with new accents and noise give rise to a domain mismatch problem that cannot be easily solved by conventional joint training methods. Thus, we applied MAML and FOMAML to tackle this problem and obtained higher average Si-SNRi values than joint training on almost all the unseen accents. This demonstrates that these two methods can produce well-trained parameters for adapting to speech mixtures of new speakers and accents. Furthermore, we found that FOMAML obtains performance similar to MAML while saving a large amount of training time.
Co-authors
- Hung-Yi Lee 2
- Haibin Wu 1
- Ho-Lam Chung 1
- Yi-Cheng Lin 1
- Xuanjun Chen 1