Knot Pipatsrisawat


2024

Thonburian Whisper: Robust Fine-tuned and Distilled Whisper for Thai
Zaw Htet Aung | Thanachot Thavornmongkol | Atirut Boribalburephan | Vittavas Tangsriworakan | Knot Pipatsrisawat | Titipat Achakulvisut
Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP 2024)

2020

Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech
Yin May Oo | Theeraphol Wattanavekin | Chenfang Li | Pasindu De Silva | Supheakmungkol Sarin | Knot Pipatsrisawat | Martin Jansche | Oddur Kjartansson | Alexander Gutkin
Proceedings of the Twelfth Language Resources and Evaluation Conference

This paper introduces an open-source, crowd-sourced multi-speaker speech corpus along with a comprehensive set of finite-state transducer (FST) grammars for performing text normalization for the Burmese (Myanmar) language. We also introduce open-source finite-state grammars for performing grapheme-to-phoneme (G2P) conversion for Burmese. These three components are necessary (but not sufficient) for building a high-quality text-to-speech (TTS) system for Burmese, a tonal Southeast Asian language from the Sino-Tibetan family that presents several linguistic challenges. We describe the corpus acquisition process and provide the details of our finite-state approach to Burmese text normalization and G2P. Our experiments involve building a multi-speaker TTS system based on long short-term memory (LSTM) recurrent neural network (RNN) models, which were previously shown to perform well for other languages in low-resource settings. Our results indicate that the data and grammars we are releasing are sufficient to build reasonably high-quality models comparable to other systems. We hope these resources will facilitate speech and language research on Burmese, which is considered by many to be low-resource due to the limited availability of free linguistic data.

Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems
Fei He | Shan-Hui Cathy Chu | Oddur Kjartansson | Clara Rivera | Anna Katanova | Alexander Gutkin | Isin Demirsahin | Cibu Johny | Martin Jansche | Supheakmungkol Sarin | Knot Pipatsrisawat
Proceedings of the Twelfth Language Resources and Evaluation Conference

We present free, high-quality multi-speaker speech corpora for Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu, six of the twenty-two official languages of India, spoken by 374 million native speakers. The datasets are primarily intended for use in text-to-speech (TTS) applications, such as constructing multilingual voices or speaker and language adaptation. Most of the corpora (apart from Marathi, which is a female-only database) consist of at least 2,000 recorded lines from female and male native speakers of the language. We present the methodological details behind corpus acquisition, which can be scaled to acquiring data for other languages of interest. We describe experiments in building a multilingual text-to-speech model constructed by combining our corpora. Our results indicate that using these corpora yields good-quality voices, with Mean Opinion Scores (MOS) > 3.6, for all the languages tested. We believe that these resources, released under an open-source license, and the described methodology will help the progress of speech applications for these languages and aid corpus development for other, smaller languages of India and beyond.

Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech
Adriana Guevara-Rukoz | Isin Demirsahin | Fei He | Shan-Hui Cathy Chu | Supheakmungkol Sarin | Knot Pipatsrisawat | Alexander Gutkin | Alena Butryna | Oddur Kjartansson
Proceedings of the Twelfth Language Resources and Evaluation Conference

In this paper, we present a multidialectal corpus approach for building a text-to-speech voice for a new dialect of a language with existing resources, focusing on several South American dialects of Spanish. We first present public speech datasets for Argentinian, Chilean, Colombian, Peruvian, Puerto Rican and Venezuelan Spanish, constructed via crowd-sourcing specifically with text-to-speech applications in mind. We then compare the monodialectal voices built with minimal data to a multidialectal model built by pooling the resources from all dialects. Our results show that the multidialectal model outperforms the monodialectal baseline models. We also experiment with a “zero-resource” dialect scenario in which we build a multidialectal voice for a dialect while holding out that dialect’s recordings from the training data.

2018

Building Open Javanese and Sundanese Corpora for Multilingual Text-to-Speech
Jaka Aris Eko Wibawa | Supheakmungkol Sarin | Chenfang Li | Knot Pipatsrisawat | Keshan Sodimana | Oddur Kjartansson | Alexander Gutkin | Martin Jansche | Linne Ha
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Voice Builder: A Tool for Building Text-To-Speech Voices
Pasindu De Silva | Theeraphol Wattanavekin | Tang Hao | Knot Pipatsrisawat
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

TTS for Low Resource Languages: A Bangla Synthesizer
Alexander Gutkin | Linne Ha | Martin Jansche | Knot Pipatsrisawat | Richard Sproat
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We present a text-to-speech (TTS) system designed for the dialect of Bengali spoken in Bangladesh. This work is part of an ongoing effort to address the needs of under-resourced languages. We propose a process for streamlining the bootstrapping of TTS systems for under-resourced languages. First, we use crowdsourcing to collect data from multiple ordinary speakers, with each speaker recording a small number of sentences. Second, we leverage an existing text normalization system for a related language (Hindi) to bootstrap a linguistic front-end for Bangla. Third, we employ statistical techniques to construct multi-speaker acoustic models using long short-term memory recurrent neural network (LSTM-RNN) and hidden Markov model (HMM) approaches. Our experiments show that the resulting TTS voices score well in terms of perceived quality, as measured by Mean Opinion Score (MOS) evaluations.