AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech Technologies

José-M. Acosta-Triana, David Gimeno-Gómez, Carlos-D. Martínez-Hinarejos


Abstract
More than 7,000 known languages are spoken around the world. However, due to the lack of annotated resources, only a small fraction of them are currently covered by speech technologies. Although self-supervised speech representations, recent large-scale speech corpus collections, and the organization of challenges have alleviated this inequality, most studies are still benchmarked mainly on English. The situation is further aggravated when tasks involving both the acoustic and visual speech modalities are addressed. To promote research on low-resource languages for audio-visual speech technologies, we present AnnoTheia, a semi-automatic annotation toolkit that detects when a person speaks in a scene and provides the corresponding transcription. In addition, to show the complete process of preparing AnnoTheia for a language of interest, we also describe the adaptation of a pre-trained active speaker detection model to Spanish, using a database not originally conceived for this type of task. Preliminary evaluations show that the toolkit can speed up the annotation process by up to a factor of four. The AnnoTheia toolkit, tutorials, and pre-trained models are available at https://github.com/joactr/AnnoTheia/.
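As a rough illustration of the kind of candidate-segment pipeline the abstract describes (propose speech segments automatically, then let a human annotator confirm or correct them), the sketch below combines off-the-shelf face detection (OpenCV Haar cascades) with a general-purpose ASR model (openai-whisper). This is not AnnoTheia's actual interface: the toolkit relies on a dedicated active speaker detection model, whereas here face detection and the overlap heuristic are stand-in assumptions made purely for illustration, as are the file name and model size.

```python
# Illustrative sketch only: AnnoTheia's real pipeline uses an active speaker
# detection model; face detection + ASR merely stand in for that step here.
import cv2          # pip install opencv-python
import whisper      # pip install openai-whisper

VIDEO_PATH = "sample_video.mp4"   # hypothetical input file

# 1) Detect faces frame by frame as a crude proxy for "someone is on screen".
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
face_timestamps = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    if len(faces) > 0:
        face_timestamps.append(frame_idx / fps)  # timestamp in seconds
    frame_idx += 1
cap.release()

# 2) Transcribe the audio track; Whisper returns timestamped segments.
model = whisper.load_model("small")
result = model.transcribe(VIDEO_PATH)

# 3) Keep only segments overlapping frames where a face was visible, and
#    print them as candidates for a human annotator to confirm or correct.
for seg in result["segments"]:
    if any(seg["start"] <= t <= seg["end"] for t in face_timestamps):
        print(f'[{seg["start"]:.2f}-{seg["end"]:.2f}] {seg["text"]}')
```

In the actual toolkit, step 1 would be replaced by a trained active speaker detection model (the component the paper adapts to Spanish), so that segments are attributed to the correct on-screen speaker rather than to any visible face.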
Anthology ID:
2024.lrec-main.113
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
1260–1269
URL:
https://aclanthology.org/2024.lrec-main.113
Cite (ACL):
José-M. Acosta-Triana, David Gimeno-Gómez, and Carlos-D. Martínez-Hinarejos. 2024. AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech Technologies. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 1260–1269, Torino, Italia. ELRA and ICCL.
Cite (Informal):
AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech Technologies (Acosta-Triana et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.113.pdf