Tekla Etelka Gráczi
Also published as: Tekla Etelka Graczi
2024
Is Spoken Hungarian Low-resource?: A Quantitative Survey of Hungarian Speech Data Sets
Peter Mihajlik | Katalin Mády | Anna Kohári | Fruzsina Sára Fruzsina | Gábor Kiss | Tekla Etelka Gráczi | A. Seza Doğruöz
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Even though various speech data sets are available in Hungarian, there is no general overview of their types and sizes. To fill this gap, we provide a survey of available spoken Hungarian data sets in five categories (monolingual, Hungarian parts of multilingual collections, pathological, child-related, and dialectal collections). In total, the estimated size of the available data is about 2800 hours (across 7500 speakers), and it represents a rich spoken language diversity. However, the distribution of the data and its alignment to real-life tasks (e.g., speech recognition) is far from optimal, indicating the need for additional larger-scale natural speech data sets. Our survey presents an overview of available data sets for Hungarian, explaining their strengths and weaknesses, which is useful for researchers working on Hungarian across disciplines. In addition, our survey serves as a starting point towards a unified foundational speech model specific to Hungarian.
2022
BEA-Base: A Benchmark for ASR of Spontaneous Hungarian
Peter Mihajlik | Andras Balog | Tekla Etelka Graczi | Anna Kohari | Balázs Tarján | Katalin Mady
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Hungarian is spoken by 15 million people; still, easily accessible Automatic Speech Recognition (ASR) benchmark datasets – especially for spontaneous speech – have been practically unavailable. In this paper, we introduce BEA-Base, a subset of the BEA spoken Hungarian database, comprising mostly spontaneous speech from 140 speakers. It is built specifically to assess ASR, primarily for conversational AI applications. After defining the speech recognition subsets and task, several baselines – including classic hybrid HMM-DNN and end-to-end approaches augmented by cross-language transfer learning – are developed using open-source toolkits. The best results are obtained with multilingual self-supervised pretraining, achieving a 45% recognition error rate reduction compared to the classical approach – without the application of an external language model or additional supervised data. The results show the feasibility of using BEA-Base for training and evaluating Hungarian speech recognition systems.
Co-authors
- Peter Mihajlik 2
- Anna Kohári 2
- Katalin Mády 2
- Andras Balog 1
- Balázs Tarján 1