2022
Handwriting recognition for Scottish Gaelic
William Lamb | Beatrice Alex | Mark Sinclair
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
Like most other minority languages, Scottish Gaelic has limited tools and resources available for Natural Language Processing research and applications. These limitations restrict the language's participation in modern speech technology and constrain research in fields such as corpus linguistics and the Digital Humanities. At the same time, Gaelic has a long written history, is well-described linguistically, and is unusually well-supported in terms of potential NLP training data. For instance, archives such as the School of Scottish Studies hold thousands of digitised recordings of vernacular speech, many of which have been transcribed as paper-based, handwritten manuscripts. In this paper, we describe a project to digitise and recognise a corpus of handwritten narrative transcriptions, with the intention of re-purposing it to develop a Gaelic speech recognition system.
Developing Automatic Speech Recognition for Scottish Gaelic
Lucy Evans | William Lamb | Mark Sinclair | Beatrice Alex
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
This paper discusses our efforts to develop a full automatic speech recognition (ASR) system for Scottish Gaelic, starting from a point of limited resource. Building ASR technology is important for documenting and revitalising endangered languages; it enables existing resources to be enhanced with automatic subtitles and transcriptions, improves accessibility for users, and, in turn, encourages continued use of the language. In this paper, we explain the many difficulties faced when collecting minority language data for speech recognition. A novel cross-lingual approach to the alignment of training data is used to overcome one such difficulty, and in this way we demonstrate how majority language resources can bootstrap the development of lower-resourced language technology. We use the Kaldi speech recognition toolkit to develop several Gaelic ASR systems, and report a final WER of 26.30%. This is a 9.50% improvement on our original model.
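The WER figures quoted here and in the abstracts below are the standard word-level edit-distance metric. As a point of reference only (this is not the authors' scoring code, and the example sentences are invented), a minimal sketch in Python:

import numpy as np

def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with a standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i, j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,         # deletion
                          d[i, j - 1] + 1,         # insertion
                          d[i - 1, j - 1] + cost)  # substitution or match
    return d[len(ref), len(hyp)] / len(ref)

print(wer("we report a final WER", "we report final WER"))  # one deletion over 5 words = 0.2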
2014
The UEDIN ASR systems for the IWSLT 2014 evaluation
Peter Bell | Pawel Swietojanski | Joris Driesen | Mark Sinclair | Fergus McInnes | Steve Renals
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Edinburgh (UEDIN) ASR systems for the 2014 IWSLT Evaluation. Notable features of the English system include deep neural network acoustic models in both tandem and hybrid configuration, with the use of multi-level adaptive networks, LHUC adaptation and Maxout units. The German system includes lightly supervised training and a new method for dictionary generation. Our voice activity detection system now uses a semi-Markov model to incorporate a prior on utterance lengths. We obtain relative WER reductions of up to 30% on the tst2013 English test set.
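LHUC (learning hidden unit contributions) adaptation, mentioned above, re-scales each hidden unit of a speaker-independent network by a speaker-specific amplitude learned on a small amount of adaptation data while the rest of the network stays frozen. A minimal, framework-free sketch of the idea, with illustrative shapes and names rather than the UEDIN implementation:

import numpy as np

def lhuc_layer(activations, r):
    """Apply LHUC-style speaker adaptation to one hidden layer.

    activations : (n_frames, n_units) hidden-unit outputs of the
                  speaker-independent network
    r           : (n_units,) speaker-specific parameters, the only
                  values updated on adaptation data; r = 0 leaves
                  the layer unchanged
    Each unit is re-scaled by an amplitude 2*sigmoid(r) in (0, 2).
    """
    amplitude = 2.0 / (1.0 + np.exp(-r))
    return activations * amplitude

# Illustrative example: 10 frames of a 512-unit layer, unadapted parameters
h = np.random.randn(10, 512)
r = np.zeros(512)                       # before adaptation: identity
assert np.allclose(lhuc_layer(h, r), h)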
2013
Description of the UEDIN system for German ASR
Joris Driesen | Peter Bell | Mark Sinclair | Steve Renals
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper we describe the ASR system for German built at the University of Edinburgh (UEDIN) for the 2013 IWSLT evaluation campaign. The major challenge to overcome was finding suitable acoustic training data: due to the lack of expertly transcribed German speech data, acoustic model training had to be performed on publicly available data crawled from the internet. For evaluation, the lack of a manual segmentation into utterances was handled in two different ways: by generating an automatic segmentation, and by treating each entire input file as a single segment. We demonstrate that the latter method is superior for the current task, obtaining a WER of 28.16% on the dev set and 36.21% on the test set.
The UEDIN English ASR system for the IWSLT 2013 evaluation
Peter Bell | Fergus McInnes | Siva Reddy Gangireddy | Mark Sinclair | Alexandra Birch | Steve Renals
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Edinburgh (UEDIN) English ASR system for the IWSLT 2013 Evaluation. Notable features of the system include deep neural network acoustic models in both tandem and hybrid configuration, cross-domain adaptation with multi-level adaptive networks, and the use of a recurrent neural network language model. Improvements to our system since the 2012 evaluation – which include the use of a significantly improved n-gram language model – result in a 19% relative WER reduction on the tst2012 set.
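For context on the "hybrid configuration" named in this and the 2014 abstract: the acoustic DNN outputs state posteriors, which are converted into scaled likelihoods for the HMM decoder by dividing out the state priors (in the tandem configuration the DNN instead supplies features to a GMM-HMM system). A schematic sketch of that conversion, with assumed inputs rather than the UEDIN code:

import numpy as np

def scaled_likelihoods(posteriors, state_priors, floor=1e-10):
    """Convert DNN state posteriors p(s|x) into the scaled likelihoods
    p(x|s) proportional to p(s|x) / p(s) consumed by the HMM decoder
    in a hybrid DNN-HMM system.

    posteriors   : (n_frames, n_states) softmax outputs of the acoustic DNN
    state_priors : (n_states,) state frequencies estimated from the
                   training alignment
    Returned in the log domain, as decoders typically expect.
    """
    return (np.log(np.maximum(posteriors, floor))
            - np.log(np.maximum(state_priors, floor)))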