2010
Multimodal Annotation of Conversational Data
Philippe Blache | Roxane Bertrand | Emmanuel Bruno | Brigitte Bigi | Robert Espesser | Gaelle Ferré | Mathilde Guardiola | Daniel Hirst | Ning Tan | Edlira Cela | Jean-Claude Martin | Stéphane Rauzy | Mary-Annick Morel | Elisabeth Murisasco | Irina Nesterenko
Proceedings of the Fourth Linguistic Annotation Workshop
Automatic Detection of Syllable Boundaries in Spontaneous Speech
Brigitte Bigi | Christine Meunier | Irina Nesterenko | Roxane Bertrand
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
This paper presents the outline and performance of an automatic syllable boundary detection system. The syllabification of phonemes is performed with a rule-based system implemented as a Java program. Phonemes are categorized into 6 classes. A set of specific rules is developed, divided into general rules, which apply in all cases, and exception rules, which apply in specific situations. These rules were developed for a French spontaneous speech corpus. Moreover, the phonemes, classes and rules are listed in an external configuration file of the tool (released under the GPL licence), which makes the tool easy to adapt to a specific corpus: rules, phoneme encodings or phoneme classes can be added or modified simply by supplying a new configuration file. Finally, performance is evaluated and compared to 3 other French syllabification systems, showing significant improvements. The automatic system's output and the expert's syllabification agree on most syllable boundaries in our corpus.
The OTIM Formal Annotation Model: A Preliminary Step before Annotation Scheme
Philippe Blache | Roxane Bertrand | Mathilde Guardiola | Marie-Laure Guénot | Christine Meunier | Irina Nesterenko | Berthille Pallaud | Laurent Prévot | Béatrice Priego-Valverde | Stéphane Rauzy
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
Large annotation projects, typically those addressing multimodal annotation in which many different kinds of information have to be encoded, must elaborate precise, high-level annotation schemes. Doing so first requires defining the structure of the information: the different objects and their organization. This stage has to be as independent as possible from the constraints of the coding language. This is why we propose a preliminary formal annotation model, represented with typed feature structures. This representation requires a precise definition of the different objects, their properties (or features) and their relations, expressed in terms of type hierarchies. This approach has been used to specify the annotation scheme of a large multimodal annotation project (OTIM) and tested in the annotation of a multimodal corpus (CID, Corpus of Interactional Data). The project aims at collecting, annotating and exploiting a dialogue video corpus from a multimodal perspective (including speech and gesture modalities). The corpus itself consists of 8 hours of dialogues, fully transcribed and richly annotated (phonetics, syntax, pragmatics, gestures, etc.).