2023
Using C-LARA to evaluate GPT-4’s multilingual processing
ChatGPT C-LARA-Instance | Belinda Chiera | Cathy Chua | Chadi Raheb | Manny Rayner | Annika Simonsen | Zhengkang Xiang | Rina Zviel-Girshin
Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association
We present a cross-linguistic study in which the open source C-LARA platform was used to evaluate GPT-4’s ability to perform several key tasks relevant to Computer Assisted Language Learning. For each of the languages English, Farsi, Faroese, Mandarin and Russian, we instructed GPT-4, through C-LARA, to write six different texts, using prompts chosen to obtain texts of widely differing character. We then further instructed GPT-4 to annotate each text with segmentation markup, glosses and lemma/part-of-speech information; native speakers hand-corrected the texts and annotations to obtain error rates on the different component tasks. The C-LARA platform makes it easy to combine the results into a single multimodal document, further facilitating checking of their correctness. GPT-4’s performance varied widely across languages and processing tasks, but performance on different text genres was roughly comparable. In some cases, most notably glossing of English text, we found that GPT-4 was consistently able to revise its annotations to improve them.
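As a purely illustrative aside (not code from the paper or from C-LARA itself), the sketch below shows one way per-task error rates of the kind reported in the study could be computed by comparing GPT-4's annotations against native-speaker corrections; all names and data are invented for the example.

```python
# Hypothetical sketch: per-task error rate as the fraction of annotated units
# a human corrector changed. Not taken from the C-LARA codebase.

def error_rate(predicted, corrected):
    """Fraction of units in `predicted` that differ from the corrected version."""
    if not predicted:
        return 0.0
    changed = sum(1 for p, c in zip(predicted, corrected) if p != c)
    return changed / len(predicted)

# Invented example data: glosses and lemma/POS pairs for one short text.
gpt4_glosses      = ["the", "cat", "sits", "on", "the", "mat"]
corrected_glosses = ["the", "cat", "sits", "on", "the", "rug"]

gpt4_lemma_pos      = [("cat", "NOUN"), ("sit", "VERB"), ("mat", "NOUN")]
corrected_lemma_pos = [("cat", "NOUN"), ("sit", "VERB"), ("mat", "ADJ")]

print(f"gloss error rate:     {error_rate(gpt4_glosses, corrected_glosses):.1%}")
print(f"lemma/POS error rate: {error_rate(gpt4_lemma_pos, corrected_lemma_pos):.1%}")
```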
2022
Using public domain resources and off-the-shelf tools to produce high-quality multimedia texts
Manny Rayner | Belinda Chiera | Cathy Chua
Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
Reading Assistance through LARA, the Learning And Reading Assistant
Elham Akhlaghi | Ingibjörg Iða Auðunardóttir | Branislav Bédi | Hakeem Beedar | Harald Berthelsen | Cathy Chua | Catia Cucchiarini | Brynjarr Eyjólfsson | Nedelina Ivanova | Christèle Maizonniaux | Neasa Ní Chiaráin | Manny Rayner | John Sloan | Sigurður Vigfússon | Ghil’ad Zuckermann
Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference
We present an overview of LARA, the Learning And Reading Assistant, an open source platform for easy creation and use of multimedia annotated texts designed to support the improvement of reading skills. The paper is divided into three parts. In the first, we give a brief summary of LARA’s processing. In the second, we describe some generic functionality especially relevant to reading assistance: support for phonetically annotated texts, support for image-based texts, and integrated production of text-to-speech (TTS) generated audio. In the third, we outline some of the larger projects so far carried out with LARA, involving development of content for learning second and foreign (L2) languages such as Icelandic, Farsi, Irish, Old Norse and the Australian Aboriginal language Barngarla, where the issues involved overlap with those that arise when trying to help students improve first-language (L1) reading skills. All software and almost all content are freely available.
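As a hedged illustration of the integrated TTS step mentioned above (not LARA's actual implementation), the snippet below generates one audio file per text segment using the gTTS library as a stand-in engine; the segment contents and identifiers are invented.

```python
# Illustrative sketch only: produce TTS audio for each segment of an annotated
# text, in the spirit of LARA's integrated audio generation. gTTS is used here
# purely as an example engine, not necessarily what LARA calls internally.
from gtts import gTTS

segments = [
    ("seg001", "Once upon a time there was a little prince.", "en"),
    ("seg002", "Il habitait une planète à peine plus grande que lui.", "fr"),
]

for seg_id, text, lang in segments:
    gTTS(text=text, lang=lang).save(f"{seg_id}.mp3")  # one MP3 per segment
```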
Using the LARA Little Prince to compare human and TTS audio quality
Elham Akhlaghi | Ingibjörg Iða Auðunardóttir | Anna Bączkowska | Branislav Bédi | Hakeem Beedar | Harald Berthelsen | Cathy Chua | Catia Cucchiarini | Hanieh Habibi | Ivana Horváthová | Junta Ikeda | Christèle Maizonniaux | Neasa Ní Chiaráin | Chadi Raheb | Manny Rayner | John Sloan | Nikos Tsourakis | Chunlin Yao
Proceedings of the Thirteenth Language Resources and Evaluation Conference
A popular idea in Computer Assisted Language Learning (CALL) is to use multimodal annotated texts, with annotations typically including embedded audio and translations, to support L2 learning through reading. An important question is how to create good quality audio, which can be done either through human recording or by a Text-To-Speech (TTS) engine. We may reasonably expect TTS to be quicker and easier, but human recording to be of higher quality. Here, we report a study using the open source LARA platform and ten languages. Samples of audio totalling about five minutes, representing the same four passages taken from LARA versions of Saint-Exupéry’s “Le petit prince”, were provided for each language in both human and TTS form; the passages were chosen to instantiate the 2x2 cross product of the conditions dialogue/non-dialogue and humour/non-humour. 251 subjects used a web form to compare human and TTS versions of each item and to rate the voices as a whole. For the three languages where TTS did best, English, French and Irish, the evidence from this study and the previous one it extended suggests that TTS audio is now pedagogically adequate and roughly comparable with a non-professional human voice in terms of exemplifying correct pronunciation and prosody. It was, however, still judged substantially less natural and less pleasant to listen to. No clear evidence was found to support the hypothesis that dialogue and humour pose special problems for TTS. All data and software will be made freely available.
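As a purely illustrative sketch (not the paper's analysis code), the snippet below shows one way per-item ratings like these could be aggregated over the 2x2 conditions and the human/TTS distinction; the field names and scores are invented.

```python
# Hypothetical aggregation of per-item ratings by voice type and condition.
from collections import defaultdict
from statistics import mean

ratings = [
    # (voice, dialogue, humour, score on an invented 1-5 scale)
    ("human", True,  False, 4.6),
    ("tts",   True,  False, 3.9),
    ("human", False, True,  4.4),
    ("tts",   False, True,  3.7),
]

by_condition = defaultdict(list)
for voice, dialogue, humour, score in ratings:
    by_condition[(voice, dialogue, humour)].append(score)

for (voice, dialogue, humour), scores in sorted(by_condition.items()):
    print(f"{voice:5s} dialogue={dialogue!s:5s} humour={humour!s:5s} "
          f"mean={mean(scores):.2f} (n={len(scores)})")
```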
2020
Constructing Multimodal Language Learner Texts Using LARA: Experiences with Nine Languages
Elham Akhlaghi | Branislav Bédi | Fatih Bektaş | Harald Berthelsen | Matthias Butterweck | Cathy Chua | Catia Cucchiarini | Gülşen Eryiğit | Johanna Gerlach | Hanieh Habibi | Neasa Ní Chiaráin | Manny Rayner | Steinþór Steingrímsson | Helmer Strik
Proceedings of the Twelfth Language Resources and Evaluation Conference
LARA (Learning and Reading Assistant) is an open source platform whose purpose is to support easy conversion of plain texts into multimodal online versions suitable for use by language learners. This involves semi-automatically tagging the text, adding other annotations and recording audio. The platform supports crowdsourced creation of texts in multiple languages that can be used for teaching a language through reading and listening. We present results of initial experiments by various collaborators in which we measure the time required to produce substantial LARA resources, up to the length of short novels, in Dutch, English, Farsi, French, German, Icelandic, Irish, Swedish and Turkish. The first results are encouraging: although there are some startup problems, the conversion task seems manageable for the languages tested so far. The resulting enriched texts are posted online and are freely available in both source and compiled form.
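As a loose illustration of the first, semi-automatic step described above (not LARA's own code), the sketch below splits a plain text into sentence segments and word tokens of the kind a LARA-style pipeline would then enrich with glosses, lemmas and audio; the sample sentence is invented.

```python
# Minimal sketch: segment a plain text into sentences and tokens as a starting
# point for further annotation. Not taken from the LARA platform.
import re

def segment_and_tokenise(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [{"segment_id": i + 1,
             "tokens": re.findall(r"\w+|[^\w\s]", sentence, re.UNICODE)}
            for i, sentence in enumerate(sentences)]

sample = "Le renard se tut. Il regarda longtemps le prince."
for segment in segment_and_tokenise(sample):
    print(segment)
```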