Chadi Raheb


2023

Using C-LARA to evaluate GPT-4’s multilingual processing
ChatGPT C-LARA-Instance | Belinda Chiera | Cathy Chua | Chadi Raheb | Manny Rayner | Annika Simonsen | Zhengkang Xiang | Rina Zviel-Girshin
Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association

We present a cross-linguistic study in which the open source C-LARA platform was used to evaluate GPT-4’s ability to perform several key tasks relevant to Computer Assisted Language Learning. For each of the languages English, Farsi, Faroese, Mandarin and Russian, we instructed GPT-4, through C-LARA, to write six different texts, using prompts chosen to obtain texts of widely differing character. We then further instructed GPT-4 to annotate each text with segmentation markup, glosses and lemma/part-of-speech information; native speakers hand-corrected the texts and annotations to obtain error rates on the different component tasks. The C-LARA platform makes it easy to combine the results into a single multimodal document, which further simplifies checking their correctness. GPT-4’s performance varied widely across languages and processing tasks, but performance on different text genres was roughly comparable. In some cases, most notably glossing of English text, we found that GPT-4 was consistently able to revise its annotations to improve them.

2022

Using the LARA Little Prince to compare human and TTS audio quality
Elham Akhlaghi | Ingibjörg Iða Auðunardóttir | Anna Bączkowska | Branislav Bédi | Hakeem Beedar | Harald Berthelsen | Cathy Chua | Catia Cucchiarini | Hanieh Habibi | Ivana Horváthová | Junta Ikeda | Christèle Maizonniaux | Neasa Ní Chiaráin | Chadi Raheb | Manny Rayner | John Sloan | Nikos Tsourakis | Chunlin Yao
Proceedings of the Thirteenth Language Resources and Evaluation Conference

A popular idea in Computer Assisted Language Learning (CALL) is to use multimodal annotated texts, with annotations typically including embedded audio and translations, to support L2 learning through reading. An important question is how to create good quality audio, which can be done either through human recording or by a Text-To-Speech (TTS) engine. We may reasonably expect TTS to be quicker and easier, but human recording to be of higher quality. Here, we report a study using the open source LARA platform and ten languages. Samples of audio totalling about five minutes, representing the same four passages taken from LARA versions of Saint-Exupéry’s “Le petit prince”, were provided for each language in both human and TTS form; the passages were chosen to instantiate the 2x2 cross product of the conditions dialogue/not-dialogue and humour/not-humour. 251 subjects used a web form to compare human and TTS versions of each item and to rate the voices as a whole. For the three languages where TTS did best, English, French and Irish, the evidence from this study and the previous one it extended suggests that TTS audio is now pedagogically adequate and roughly comparable with a non-professional human voice in terms of exemplifying correct pronunciation and prosody. It was, however, still judged substantially less natural and less pleasant to listen to. No clear evidence was found to support the hypothesis that dialogue and humour pose special problems for TTS. All data and software will be made freely available.