2024
Self-supervised speech representations display some human-like cross-linguistic perceptual abilities
Joselyn Rodriguez | Kamala Sreepada | Ruolan Leslie Famularo | Sharon Goldwater | Naomi Feldman
Proceedings of the 28th Conference on Computational Natural Language Learning
State-of-the-art models in automatic speech recognition have shown remarkable improvements due to modern self-supervised learning (SSL) transformer-based architectures such as wav2vec 2.0 (Baevski et al., 2020). However, how these models encode phonetic information is still not well understood. We explore whether SSL speech models display a linguistic property that characterizes human speech perception: language specificity. We show that while wav2vec 2.0 displays an overall language specificity effect when tested on Hindi vs. English, it does not resemble human speech perception when tested on finer-grained differences in Hindi speech contrasts.
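The abstract does not spell out how perceptual discrimination is measured, but a common way to probe whether a model's representations distinguish two speech contrasts is an ABX-style test: given representations of stimuli A and B from different categories and a probe X from A's category, the model is scored as correct when X lies closer to A than to B. The sketch below is purely illustrative, using synthetic frame embeddings rather than actual wav2vec 2.0 outputs; the function names, pooling choice, and cosine distance are assumptions, not the paper's protocol.

```python
import numpy as np

def mean_pool(frames):
    # Collapse a (T, D) sequence of frame embeddings into one vector.
    return frames.mean(axis=0)

def cosine_dist(u, v):
    # Cosine distance between two pooled representations.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_score(a_frames, b_frames, x_frames):
    """Score one ABX trial: 1.0 if X is closer to A (correct), 0.5 on a tie, else 0.0."""
    d_a = cosine_dist(mean_pool(x_frames), mean_pool(a_frames))
    d_b = cosine_dist(mean_pool(x_frames), mean_pool(b_frames))
    if d_a < d_b:
        return 1.0
    if d_a == d_b:
        return 0.5
    return 0.0

# Synthetic stand-in for model output: two "phone categories" simulated as
# noisy clusters of 50 frames x 8 dims around different basis directions.
rng = np.random.default_rng(0)
cat1 = lambda: rng.normal(0.0, 0.1, size=(50, 8)) + np.eye(8)[0]
cat2 = lambda: rng.normal(0.0, 0.1, size=(50, 8)) + np.eye(8)[1]

# Average over many A/B/X trials; well-separated categories score near 1.0,
# indiscriminable ones near chance (0.5).
trials = [abx_score(cat1(), cat2(), cat1()) for _ in range(100)]
accuracy = float(np.mean(trials))
print(accuracy)
```

In a real evaluation, the frame matrices would come from a chosen transformer layer of wav2vec 2.0 for minimal-pair stimuli (e.g. Hindi dental vs. retroflex stops), and a language-specificity effect would appear as higher accuracy for the model trained on the matching language.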