Tao Jianhua
2024
Distinguishing Neural Speech Synthesis Models Through Fingerprints in Speech Waveforms
Zhang ChuYuan | Yi Jiangyan | Tao Jianhua | Wang Chenglong | Yan Xinrui
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“Recent advancements in neural speech synthesis technologies have brought about widespread applications but have also raised concerns about potential misuse and abuse. Addressing these challenges is crucial, particularly in the realms of forensics and intellectual property protection. While previous research on source attribution of synthesized speech has its limitations, our study aims to fill these gaps by investigating the identification of sources in synthesized speech. We focus on analyzing speech synthesis model fingerprints in generated speech waveforms, emphasizing the roles of the acoustic model and vocoder. Our research, based on the multi-speaker LibriTTS dataset, reveals two key insights: (1) both vocoders and acoustic models leave distinct, model-specific fingerprints on generated waveforms, and (2) vocoder fingerprints, being more dominant, may obscure those from the acoustic model. These findings underscore the presence of model-specific fingerprints in both components, suggesting their potential significance in source identification applications.”
EmoFake: An Initial Dataset for Emotion Fake Audio Detection
Zhao Yan | Yi Jiangyan | Tao Jianhua | Wang Chenglong | Dong Yongfeng
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“To enhance the effectiveness of fake audio detection techniques, researchers have developed multiple datasets such as those for the ASVspoof and ADD challenges. These datasets typically focus on capturing non-emotional characteristics in speech, such as the identity of the speaker and the authenticity of the content. However, they often overlook changes in the emotional state of the audio, which is another crucial dimension affecting the authenticity of speech. This study therefore reports our progress in developing such a dataset, named EmoFake, in which the emotional state of the original audio is changed. The audio samples in EmoFake are generated using open-source emotional voice conversion models, intended to simulate potential emotional tampering scenarios in real-world settings. We conducted a series of benchmark experiments on this dataset, and the results show that even advanced fake audio detection models trained on the ASVspoof 2019 LA dataset and the ADD 2022 track 3.2 dataset face challenges with EmoFake. EmoFake is now publicly available.”