Javier Pérez


2008

Corpus and Voices for Catalan Speech Synthesis
Antonio Bonafonte | Jordi Adell | Ignasi Esquerra | Silvia Gallego | Asunción Moreno | Javier Pérez
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

In this paper we describe the design and production of a Catalan database for building synthetic voices. Two speakers have each recorded 10 hours of speech. The speaker selection and the corpus design aim to provide resources for high-quality synthesis. The resources have been used to build voices for the Festival TTS system. Both the original recordings and the Festival databases are freely available for research and for commercial use.

2006

Acceptance Testing of a Spoken Language Translation System
Rafael Banchs | Antonio Bonafonte | Javier Pérez
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper describes an acceptance test procedure for evaluating a spoken language translation system between Catalan and Spanish. The procedure consists of two independent tests. The first test was an utterance-oriented evaluation for determining how the use of speech benefits communication. This test allowed us to compare the relative performance of the different system components, explicitly: source text to target text, source text to target speech, source speech to target text, and source speech to target speech. The second test was a task-oriented experiment for evaluating whether users could achieve predefined goals for a given task with the current state of the technology. Eight subjects familiar with the technology and four subjects not familiar with it participated in the tests. From the results we can conclude that the state of the technology is getting closer to providing effective speech-to-speech translation systems, but there is still a lot of work to be done in this area. No significant differences in performance between users who were familiar with the technology and those who were not were evidenced. This constitutes, as far as we know, the first evaluation of a spoken translation system that considers performance at both the utterance level and the task level.

GAIA: Common Framework for the Development of Speech Translation Technologies
Javier Pérez | Antonio Bonafonte
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

We present here an open-source software platform for the integration of speech translation components. This tool integrates different automatic speech recognition, spoken language translation and text-to-speech synthesis solutions into a common framework, as demonstrated in the evaluation of the European LC-STAR project and during the development of the national ALIADO project. GAIA operates with great flexibility, and it has been used to obtain the text and speech corpora needed when performing speech translation. The platform follows a modular, distributed approach, with a specifically designed extensible network protocol handling the communication with the different modules. A well-defined and publicly available API facilitates the integration of existing solutions into the architecture. Fully functional audio and text interfaces, together with remote monitoring tools, are provided.

ECESS Inter-Module Interface Specification for Speech Synthesis
Javier Pérez | Antonio Bonafonte | Horst-Udo Hain | Eric Keller | Stefan Breuer | Jilei Tian
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

The newly founded European Centre of Excellence for Speech Synthesis (ECESS) is an initiative to promote the development of the European Research Area (ERA) in the field of language technology. ECESS focuses on the great challenge of high-quality speech synthesis, which is of crucial importance for future spoken-language technologies. The main goals of ECESS are to achieve the critical mass needed to substantially advance TTS technology, to integrate basic research know-how related to speech synthesis, and to attract public and private funding. To this end, a common system architecture based on exchangeable modules supplied by the ECESS members is to be established. The XML-based interface that connects these modules is the topic of this paper.