Binyam Gebrekidan Gebre

Also published as: Binyam Gebre


2015

Comparing Approaches to the Identification of Similar Languages
Marcos Zampieri | Binyam Gebrekidan Gebre | Hernani Costa | Josef van Genabith
Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects

2014

VarClass: An Open-source Language Identification Tool for Language Varieties
Marcos Zampieri | Binyam Gebre
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper presents VarClass, an open-source tool for language identification, available both for download and through a user-friendly graphical interface. The main difference between VarClass and other state-of-the-art language identification tools is its focus on language varieties. General-purpose language identification tools do not take language varieties into account, and our work aims to fill this gap. VarClass currently contains language models for over 27 languages, 10 of which are language varieties. We report an average performance of over 90.5% accuracy on a challenging dataset. More language models will be included in the upcoming months.
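The abstract does not spell out the model internals, but a common approach to language-variety identification is character n-gram language modeling. The following minimal Python sketch illustrates that idea only; the n-gram order, smoothing floor, and toy training texts are assumptions for illustration, not VarClass's actual configuration:

    from collections import Counter
    import math

    def ngrams(text, n=3):
        """Character n-grams of a string, padded at the boundaries."""
        pad = "_" * (n - 1)
        s = pad + text + pad
        return [s[i:i + n] for i in range(len(s) - n + 1)]

    def train(samples, n=3):
        """Relative n-gram frequencies for one language variety."""
        counts = Counter()
        for text in samples:
            counts.update(ngrams(text, n))
        total = sum(counts.values())
        return {g: c / total for g, c in counts.items()}

    def score(text, model, n=3, floor=1e-8):
        """Log-likelihood of a text under one variety's model."""
        return sum(math.log(model.get(g, floor)) for g in ngrams(text, n))

    def identify(text, models):
        """Return the variety whose model scores the text highest."""
        return max(models, key=lambda name: score(text, models[name]))

    # Toy models; real ones would be trained on large per-variety corpora.
    models = {
        "pt-PT": train(["não sei se o comboio já partiu"]),
        "pt-BR": train(["não sei se o trem já partiu"]),
    }
    print(identify("o trem partiu", models))  # -> pt-BR

With real training data, each variety's model would be estimated from a large corpus, and the same scoring loop extends unchanged to the 27+ models the tool ships with.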

Unsupervised Feature Learning for Visual Sign Language Identification
Binyam Gebrekidan Gebre | Onno Crasborn | Peter Wittenburg | Sebastian Drude | Tom Heskes
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

N-gram Language Models and POS Distribution for the Identification of Spanish Varieties (Ngrammes et Traits Morphosyntaxiques pour l’Identification de Variétés de l’Espagnol) [in French]
Marcos Zampieri | Binyam Gebrekidan Gebre | Sascha Diwersy
Proceedings of TALN 2013 (Volume 2: Short Papers)

Improving Native Language Identification with TF-IDF Weighting
Binyam Gebrekidan Gebre | Marcos Zampieri | Peter Wittenburg | Tom Heskes
Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications

2012

AVATecH — automated annotation through audio and video analysis
Przemyslaw Lenkiewicz | Binyam Gebrekidan Gebre | Oliver Schreer | Stefano Masneri | Daniel Schneider | Sebastian Tschöpel
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

In different fields of the humanities, such as linguistics, psychology, and anthropology, annotations of multimodal resources are a necessary component of the research workflow. However, creating these annotations is a very laborious task, which can take 50 to 100 times the length of the annotated media, or more. This can be significantly improved by applying innovative audio and video processing algorithms, which analyze the recordings and provide automated annotations. This is the aim of the AVATecH project, a collaboration between the Max Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS. In this paper, we present a set of automated annotation results together with an evaluation of their quality.

Towards Automatic Gesture Stroke Detection
Binyam Gebrekidan Gebre | Peter Wittenburg | Przemyslaw Lenkiewicz
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Automatic annotation of gesture strokes is important for many gesture and sign language researchers. The unpredictable diversity of human gestures and video recording conditions requires that we adopt a more adaptive, case-by-case annotation model. In this paper, we present a work-in-progress annotation model that allows a user to a) track the hands/face, b) extract features, and c) distinguish strokes from non-strokes. The hands/face tracking is done with color-matching algorithms and is initialized by the user. The initialization process is supported with immediate visual feedback, and sliders are provided for user-friendly adjustment of the skin color ranges. After successful initialization, features related to the positions, orientations, and speeds of the tracked hands/face are extracted using uniquely identifiable features (corners) over a window of frames and are used to train a learning algorithm. Our preliminary results for stroke detection under non-ideal video conditions are promising and show the potential applicability of our methodology.
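As a rough illustration of the pipeline described above, here is a hedged Python/OpenCV sketch of the two core steps: color matching for hand/face regions and corner-based motion features. The HSV skin range, blob-size threshold, and feature choices are illustrative assumptions (in the paper the color ranges are tuned by the user via sliders), not the authors' actual parameters:

    import cv2
    import numpy as np

    # Placeholder skin-color range in HSV; in the paper these bounds are
    # adjusted interactively by the user.
    LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
    UPPER_SKIN = np.array([25, 180, 255], dtype=np.uint8)

    def track_skin_regions(frame):
        """Color matching: mask skin-colored pixels, return blob centroids."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
        # OpenCV 4.x signature: returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 500:  # drop tiny blobs; threshold is an assumption
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    def motion_features(prev_gray, gray):
        """Speed features from corners (Shi-Tomasi + Lucas-Kanade flow)."""
        corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                          qualityLevel=0.01, minDistance=7)
        if corners is None:
            return np.zeros(2)
        moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                    corners, None)
        flow = (moved - corners)[status.flatten() == 1].reshape(-1, 2)
        speeds = np.linalg.norm(flow, axis=1)
        return (np.array([speeds.mean(), speeds.max()])
                if speeds.size else np.zeros(2))

Per-frame outputs like these, aggregated over a window of frames, would form the feature vectors fed to the stroke/non-stroke classifier.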