2020
Character Alignment in Morphologically Complex Translation Sets for Related Languages
Michael Gasser | Binyam Ephrem Seyoum | Nazareth Amlesom Kifle
Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects
For languages with complex morphology, word-to-word translation is a task with various potential applications, for example, in information retrieval, language instruction, and dictionary creation, as well as in machine translation. In this paper, we confine ourselves to the subtask of character alignment for the particular case of families of related languages with very few resources for most or all members. There are many such families; we focus on the subgroup of Semitic languages spoken in Ethiopia and Eritrea. We begin with an adaptation of the familiar alignment algorithms behind statistical machine translation, modifying them as appropriate for our task. We show how character alignment can reveal morphological, phonological, and orthographic correspondences among related languages.
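The alignment approach summarized above adapts the standard SMT word-alignment machinery to the character level. As an illustration only, not the authors' exact algorithm, the following Python sketch runs IBM Model 1 expectation-maximization over the character sequences of translation word pairs; the toy word pairs and the iteration count are placeholders.

```python
from collections import defaultdict

def train_ibm1(pairs, iterations=10):
    """Train IBM Model 1 probabilities t(target_char | source_char) with EM,
    treating each word pair as a (source chars, target chars) sentence pair."""
    # Collect character vocabularies; <NULL> lets target characters align to nothing.
    src_vocab = {c for s, _ in pairs for c in s} | {"<NULL>"}
    tgt_vocab = {c for _, t in pairs for c in t}
    # Uniform initialisation.
    t = {(f, e): 1.0 / len(tgt_vocab) for e in src_vocab for f in tgt_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for src, tgt in pairs:
            src_chars = ["<NULL>"] + list(src)
            for f in tgt:
                # E step: distribute the probability mass of f over source characters.
                z = sum(t[(f, e)] for e in src_chars)
                for e in src_chars:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        # M step: renormalise expected counts into probabilities.
        for (f, e) in count:
            t[(f, e)] = count[(f, e)] / total[e]
    return t

# Toy usage with hypothetical romanised word pairs from two related languages:
# t = train_ibm1([("bet", "biet"), ("sost", "seleste")])
```

The learned probabilities can then be inspected for recurring sound and spelling correspondences between the related languages, which is the kind of analysis the abstract describes.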
Large Vocabulary Read Speech Corpora for Four Ethiopian Languages: Amharic, Tigrigna, Oromo, and Wolaytta
Solomon Teferra Abate | Martha Yifiru Tachbelie | Michael Melese | Hafte Abera | Tewodros Gebreselassie | Wondwossen Mulugeta | Yaregal Assabie | Million Meshesha Beyene | Solomon Atinafu | Binyam Ephrem Seyoum
Proceedings of the Fourth Widening Natural Language Processing Workshop
Automatic Speech Recognition (ASR) is one of the most important technologies to help people live a better life in the 21st century. However, its development requires a large speech corpus for a language. The development of such a corpus is expensive, especially for under-resourced Ethiopian languages. To address this problem, we have developed four medium-sized speech corpora (longer than 22 hours each) for four Ethiopian languages: Amharic, Tigrigna, Oromo, and Wolaytta. To check the usability of the corpora, we have also built a baseline ASR system for each language. In this paper, we present the corpora and the baseline ASR systems. The word error rates (WERs) we achieved show that the corpora are usable for further investigation, and we recommend collecting additional text corpora to train stronger language models, especially for Oromo and Wolaytta.
Large Vocabulary Read Speech Corpora for Four Ethiopian Languages: Amharic, Tigrigna, Oromo and Wolaytta
Solomon Teferra Abate | Martha Yifiru Tachbelie | Michael Melese | Hafte Abera | Tewodros Abebe | Wondwossen Mulugeta | Yaregal Assabie | Million Meshesha | Solomon Afnafu | Binyam Ephrem Seyoum
Proceedings of the Twelfth Language Resources and Evaluation Conference
Automatic Speech Recognition (ASR) is one of the most important technologies to support spoken communication in modern life. However, its development requires a large speech corpus, which is expensive to build, and most human languages, including the Ethiopian languages, do not have such resources. To address this problem, we have developed four large speech corpora (about 22 hours each) for four Ethiopian languages: Amharic, Tigrigna, Oromo and Wolaytta. To assess the usability of the corpora for speech processing, we have developed an ASR system for each language. In this paper, we present the corpora and the baseline ASR systems. We achieved word error rates (WERs) of 37.65%, 31.03%, 38.02% and 33.89% for Amharic, Tigrigna, Oromo and Wolaytta, respectively. These results show that the corpora are suitable for further investigation towards the development of ASR systems, and the research community can use them to improve speech processing for these languages. Our results also make clear that additional text corpora are still needed to train strong language models, especially for Oromo and Wolaytta.
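The WERs quoted above are, as is standard, derived from the word-level edit distance between reference and hypothesis transcriptions. A minimal, self-contained Python sketch of that metric, not tied to the authors' toolkit, is:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table of edit distances between prefixes of ref and hyp.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Example: one substitution over a two-word reference gives WER 0.5.
# wer("selam new", "selam nw") == 0.5
```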
Comparing Neural Network Parsers for a Less-resourced and Morphologically-rich Language: Amharic Dependency Parser
Binyam Ephrem Seyoum | Yusuke Miyao | Baye Yimam Mekonnen
Proceedings of the first workshop on Resources for African Indigenous Languages
In this paper, we compare four state-of-the-art neural network dependency parsers for the Semitic language Amharic. Because Amharic is a morphologically rich and less-resourced language, the out-of-vocabulary (OOV) problem is more severe when developing data-driven models. This discourages the development of neural network parsers, since neural networks require large quantities of training data. We empirically evaluate neural network parsers when only a small Amharic treebank is available for training. In our experiments, we obtain an LAS of 83.79 using the UDPipe system. Better accuracy is achieved when the neural parsing system uses external resources such as word embeddings; with such resources, the LAS for UDPipe improves to 85.26. Our experiments show that neural networks can learn dependency relations well from limited data, while segmentation and POS tagging require much more data.
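The LAS figures quoted here are the standard labeled attachment scores over the treebank's CoNLL-U annotation. As a hedged illustration (not the authors' evaluation script), the following Python sketch computes UAS and LAS from gold and predicted CoNLL-U files, assuming the two files share the same tokenization; the file paths are placeholders.

```python
def attachment_scores(gold_path, pred_path):
    """Compute (UAS, LAS) from two CoNLL-U files with identical tokenisation."""
    def rows(path):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                # Skip comments, blank lines, multiword ranges and empty nodes.
                if not line or line.startswith("#"):
                    continue
                cols = line.split("\t")
                if "-" in cols[0] or "." in cols[0]:
                    continue
                yield cols[6], cols[7]  # HEAD, DEPREL columns
    total = uas = las = 0
    for (gold_head, gold_rel), (pred_head, pred_rel) in zip(rows(gold_path), rows(pred_path)):
        total += 1
        if gold_head == pred_head:
            uas += 1
            if gold_rel == pred_rel:
                las += 1
    return uas / total, las / total

# Placeholder usage:
# uas, las = attachment_scores("am_att-ud-test.conllu", "system_output.conllu")
```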
2018
Parallel Corpora for bi-lingual English-Ethiopian Languages Statistical Machine Translation
Solomon Teferra Abate | Michael Melese | Martha Yifiru Tachbelie | Million Meshesha | Solomon Atinafu | Wondwossen Mulugeta | Yaregal Assabie | Hafte Abera | Binyam Ephrem | Tewodros Abebe | Wondimagegnhue Tsegaye | Amanuel Lemma | Tsegaye Andargie | Seifedin Shifaw
Proceedings of the 27th International Conference on Computational Linguistics
In this paper, we describe an attempt towards the development of parallel corpora for English and Ethiopian languages, namely Amharic, Tigrigna, Afan-Oromo, Wolaytta and Ge’ez. The corpora are used to conduct bi-directional statistical machine translation (SMT) experiments. The BLEU scores of the bi-directional SMT systems show promising results. The morphological richness of the Ethiopian languages has a great impact on SMT performance, especially when Ethiopian languages are the target. We are now working towards optimal alignment for bi-directional English-Ethiopian language SMT.
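The reported BLEU scores follow the usual corpus-level BLEU evaluation of SMT output against reference translations. A minimal sketch of such an evaluation, assuming NLTK is installed and using placeholder file names rather than the authors' actual setup:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Tokenised system outputs and single references, one segment per line.
# "system.out" and "reference.txt" are hypothetical file names.
hypotheses = [line.split() for line in open("system.out", encoding="utf-8")]
references = [[line.split()] for line in open("reference.txt", encoding="utf-8")]

# Corpus-level BLEU-4 with mild smoothing for short segments.
score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {100 * score:.2f}")
```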
Universal Dependencies for Amharic
Binyam Ephrem Seyoum | Yusuke Miyao | Baye Yimam Mekonnen
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Portable Spelling Corrector for a Less-Resourced Language: Amharic
Andargachew Mekonnen Gezmu | Andreas Nürnberger | Binyam Ephrem Seyoum
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Contemporary Amharic Corpus: Automatically Morpho-Syntactically Tagged Amharic Corpus
Andargachew Mekonnen Gezmu | Binyam Ephrem Seyoum | Michael Gasser | Andreas Nürnberger
Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing
We introduce the Contemporary Amharic Corpus, which is automatically tagged for morpho-syntactic information. Texts are collected from 25,199 documents across different domains, and about 24 million orthographic words are tokenized. Since it is partly a web corpus, we applied automatic spelling error correction. We have also modified the existing morphological analyzer, HornMorpho, and used it for the automatic tagging.
Parallel Corpora for bi-Directional Statistical Machine Translation for Seven Ethiopian Language Pairs
Solomon Teferra Abate | Michael Melese | Martha Yifiru Tachbelie | Million Meshesha | Solomon Atinafu | Wondwossen Mulugeta | Yaregal Assabie | Hafte Abera | Binyam Ephrem | Tewodros Abebe | Wondimagegnhue Tsegaye | Amanuel Lemma | Tsegaye Andargie | Seifedin Shifaw
Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing
In this paper, we describe the development of parallel corpora for the Ethiopian languages Amharic, Tigrigna, Afan-Oromo, Wolaytta and Geez. To check the usability of all the corpora, we conducted baseline bi-directional statistical machine translation (SMT) experiments for seven language pairs. The performance of the bi-directional SMT systems shows that all the corpora can be used for further investigation. We also show that the morphological complexity of the Ethio-Semitic languages has a negative impact on SMT performance, especially when they are the target languages. Based on these results, we are currently working on handling the morphological complexity to improve the performance of statistical machine translation among the Ethiopian languages.