2017
Evaluating Low-Level Speech Features Against Human Perceptual Data
Caitlin Richter | Naomi H. Feldman | Harini Salgado | Aren Jansen
Transactions of the Association for Computational Linguistics, Volume 5
We introduce a method for measuring the correspondence between low-level speech features and human perception, using a cognitive model of speech perception implemented directly on speech recordings. We evaluate two speaker normalization techniques using this method and find that in both cases, speech features that are normalized across speakers predict human data better than unnormalized speech features, consistent with previous research. Results further reveal differences across normalization methods in how well each predicts human data. This work provides a new framework for evaluating low-level representations of speech on their match to human perception, and lays the groundwork for creating more ecologically valid models of speech perception.
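The abstract does not name the two speaker normalization techniques that were compared, so as a minimal sketch of the general idea of normalizing speech features across speakers, the following applies per-speaker cepstral mean and variance normalization (CMVN) to acoustic feature frames. The function name, the 13-dimensional MFCC-like frames, and the toy data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of per-speaker feature normalization (CMVN), one common
# way to normalize speech features across speakers. This is an assumed
# illustration, not the specific technique evaluated in the paper.
import numpy as np

def cmvn_per_speaker(frames_by_speaker):
    """frames_by_speaker: dict mapping speaker id -> (n_frames, n_dims)
    array of acoustic features (e.g. MFCCs). Returns a dict of the same
    shape with each speaker's frames standardized using that speaker's
    own statistics, removing speaker-level shifts and scales."""
    normalized = {}
    for speaker, frames in frames_by_speaker.items():
        mean = frames.mean(axis=0)
        std = frames.std(axis=0) + 1e-8  # guard against division by zero
        normalized[speaker] = (frames - mean) / std
    return normalized

# Toy usage: two "speakers" whose features differ by a constant offset.
rng = np.random.default_rng(0)
feats = {
    "spk_a": rng.normal(loc=5.0, size=(100, 13)),
    "spk_b": rng.normal(loc=-3.0, size=(100, 13)),
}
norm = cmvn_per_speaker(feats)
print(norm["spk_a"].mean(axis=0).round(2))  # ~0 in every dimension
```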
2015
Using Zero-Resource Spoken Term Discovery for Ranked Retrieval
Jerome White | Douglas Oard | Aren Jansen | Jiaul Paik | Rashmi Sankepally
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2014
Bridging the gap between speech technology and natural language processing: an evaluation toolbox for term discovery systems
Bogdan Ludusan | Maarten Versteegh | Aren Jansen | Guillaume Gravier | Xuan-Nga Cao | Mark Johnson | Emmanuel Dupoux
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
The unsupervised discovery of linguistic terms, from either continuous phoneme transcriptions or raw speech, has attracted increasing interest in recent years from both theoretical and practical standpoints. Yet there is no commonly accepted evaluation method for systems performing term discovery. Here, we propose such an evaluation toolbox, drawing on ideas from both speech technology and natural language processing. We first transform the speech-based output into a symbolic representation and compute five types of evaluation metrics on this representation: the quality of acoustic matching, the quality of the clusters found, and the quality of the alignment with real words (type, token, and boundary scores). We tested our approach on two term discovery systems that take speech as input, and one that uses symbolic input. The latter was run on both the gold transcription and a transcription obtained from an automatic speech recognizer, to simulate the case where only imperfect symbolic information is available. We analyse the results using the proposed evaluation metrics and discuss their implications.
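As a minimal sketch of one of the metric families mentioned above, the following computes boundary precision, recall, and F1 for the word boundaries implied by discovered fragments against gold word boundaries, under a small time tolerance. This is a simplified illustration of how such a score can be computed, not the toolbox's exact definition; the function name boundary_f1 and the tolerance parameter are hypothetical.

```python
# Illustrative boundary score: precision/recall/F1 of discovered
# fragment boundaries against gold word boundaries, with each gold
# boundary matched at most once. A simplification for exposition only;
# the toolbox's actual metric definitions are more involved.

def boundary_f1(discovered, gold, tol=0.02):
    """discovered, gold: sorted lists of boundary times in seconds.
    A discovered boundary is a hit if it lies within `tol` seconds of
    some still-unmatched gold boundary."""
    unmatched = list(gold)
    hits = 0
    for b in discovered:
        match = next((g for g in unmatched if abs(g - b) <= tol), None)
        if match is not None:
            unmatched.remove(match)
            hits += 1
    precision = hits / len(discovered) if discovered else 0.0
    recall = hits / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy usage: fragment edges at 0.0/0.31/0.55 vs gold words at 0.0/0.30/0.60.
# The third boundary misses the tolerance, so P = R = 2/3.
print(boundary_f1([0.0, 0.31, 0.55], [0.0, 0.30, 0.60]))
```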
2010
NLP on Spoken Documents Without ASR
Mark Dredze | Aren Jansen | Glen Coppersmith | Ken Church
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing