C. Maria Keet


2024

pdf bib
Automatically Generating IsiZulu Words From Indo-Arabic Numerals
Zola Mahlaza | Tadiwa Magwenzi | C. Maria Keet | Langa Khumalo
Proceedings of the 17th International Natural Language Generation Conference

Artificial conversational agents are deployed to assist humans in a variety of tasks. Some of these tasks require the capability to communicate numbers as part of their internal and abstract representations of meaning, such as for banking and scheduling appointments. They currently cannot do so for isiZulu, because no such algorithms exist: speech and text data are scarce, and the transformation is complex, as it may depend on the type of noun that is counted. We solved this by extracting and iteratively improving the rules for speaking and writing numerals as words, and by creating two algorithms to automate the transformation. Evaluation of the algorithms by two isiZulu grammarians showed that six of the seven number categories were 90-100% correct. The same software, with an additional set of rules, was used to create a large monolingual text corpus of 771 643 sentences, to enable future data-driven approaches.
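The core of such a transformation is attaching an agreement prefix (concord) to a number stem, where the concord depends on the noun class of what is counted. A minimal sketch of this idea, using the basic isiZulu number stems but a hypothetical `verbalize` helper that omits the paper's full rules for tens, hundreds, and the remaining number categories:

```python
# Basic isiZulu number stems for 1-5; larger numbers and the other
# number categories handled by the paper's algorithms are not covered here.
STEMS = {1: "nye", 2: "bili", 3: "thathu", 4: "ne", 5: "hlanu"}

def verbalize(n: int, concord: str = "ku") -> str:
    """Attach an agreement prefix (concord) to a number stem.

    The default 'ku-' gives the plain counting form (e.g. kubili 'two');
    counting a noun instead requires that noun's class concord, e.g.
    class-2 'aba-' in abantu ababili 'two people'.
    """
    return concord + STEMS[n]
```

This illustrates why the transformation depends on the counted noun: the same numeral 2 surfaces as *kubili* in counting but *ababili* when modifying *abantu*.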

pdf bib
ReproHum #0866-04: Another Evaluation of Readers’ Reactions to News Headlines
Zola Mahlaza | Toky Hajatiana Raboanary | Kyle Seakgwa | C. Maria Keet
Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024

The reproduction of Natural Language Processing (NLP) studies is important for establishing their reliability. Nonetheless, many papers in NLP have never been reproduced. This paper presents a reproduction of the work of Gabriel et al. (2022) to establish the extent to which their findings, pertaining to the utility of large language models (T5 and GPT2) to automatically generate writers’ intents when given headlines to curb misinformation, can be confirmed. Our results show no evidence to support two of their four findings, and partially support the remaining two. Specifically, while we confirmed that all the models are judged to be capable of influencing readers’ trust or distrust, there was a difference in T5’s capability to reduce trust. Our results show that its generations are more likely to have greater influence in reducing trust, whereas Gabriel et al. (2022) found more cases where they had no impact at all. In addition, most of the model generations are considered socially acceptable only if we relax the criterion for determining a majority to mean more than chance, rather than the apparent > 70% of the original study. Overall, while they found that “machine-generated MRF implications alongside news headlines to readers can increase their trust in real news while decreasing their trust in misinformation”, we found that the generations are more likely to decrease trust in both cases than to have no impact at all.

2023

pdf bib
Proceedings of the 16th International Natural Language Generation Conference
C. Maria Keet | Hung-Yi Lee | Sina Zarrieß
Proceedings of the 16th International Natural Language Generation Conference

pdf bib
Proceedings of the 16th International Natural Language Generation Conference: System Demonstrations
C. Maria Keet | Hung-Yi Lee | Sina Zarrieß
Proceedings of the 16th International Natural Language Generation Conference: System Demonstrations

2021

pdf bib
Assessing and Enhancing Bottom-up CNL Design for Competency Questions for Ontologies
Mary-Jane Antia | C. Maria Keet
Proceedings of the Seventh International Workshop on Controlled Natural Language (CNL 2020/21)

2020

pdf bib
OWLSIZ: An isiZulu CNL for structured knowledge validation
Zola Mahlaza | C. Maria Keet
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

In iterative knowledge elicitation, engineers are expected to be directly involved in validating the already captured knowledge and obtaining new knowledge increments, making the process time-consuming. Languages such as English have controlled natural languages that can be repurposed to generate natural language questions from an ontology, allowing a domain expert to independently validate the contents of an ontology without understanding an ontology authoring language such as OWL. IsiZulu, South Africa’s main L1 language by number of speakers, does not have such a resource, hence it is not possible to build a verbaliser to generate such questions. Therefore, we propose an isiZulu controlled natural language, called OWL Simplified isiZulu (OWLSIZ), for producing grammatical and fluent questions from an ontology. Human evaluation of the generated questions showed that participants judged most (83%) of the questions positively for grammaticality or understandability.

2018

pdf bib
Pluralizing Nouns across Agglutinating Bantu Languages
Joan Byamugisha | C. Maria Keet | Brian DeRenzi
Proceedings of the 27th International Conference on Computational Linguistics

Text generation may require the pluralization of nouns, such as in context-sensitive user interfaces and in natural language generation more broadly. While this has been solved for widely used languages, it remains a challenge for the languages in the Bantu language family. Pluralization results obtained for isiZulu and Runyankore showed similarities in approach, including the need to combine syntax with semantics, despite the languages belonging to different language zones. This suggests that bootstrapping and generalizability might be feasible. We investigated this systematically for seven languages across three different Guthrie language zones. The first outcome is that Meinhof’s 1948 specification of the noun classes is indeed inadequate for computational purposes for all examined languages, due to non-determinism in prefixes, and we thus redefined the characteristic noun class tables of 29 noun classes into 53. The second main result is that the generic pluralizer achieved over 93% accuracy in coverage testing and over 94% on a random sample. This is comparable to the language-specific isiZulu and Runyankore pluralizers.
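At its simplest, Bantu pluralization swaps a singular noun-class prefix for its plural counterpart. A minimal sketch of this prefix-pair idea, using a few well-known isiZulu class pairs; the `pluralize` helper and its tiny table are illustrative only, since the paper's generic pluralizer distinguishes 53 classes and also draws on semantic information to resolve ambiguous prefixes:

```python
# A few standard isiZulu singular -> plural class-prefix pairs.
# Real coverage needs the full (redefined) noun class tables plus
# semantics, because prefixes alone are non-deterministic.
CLASS_PAIRS = {
    "umu": "aba",  # class 1 -> 2, e.g. umuntu -> abantu ('person(s)')
    "isi": "izi",  # class 7 -> 8, e.g. isitsha -> izitsha ('dish(es)')
    "ili": "ama",  # class 5 -> 6, e.g. ilizwe -> amazwe ('country/-ies')
}

def pluralize(noun: str) -> str:
    """Swap a singular class prefix for its plural counterpart."""
    # Try longer prefixes first so e.g. 'umu-' is matched before shorter ones.
    for sg, pl in sorted(CLASS_PAIRS.items(), key=lambda kv: -len(kv[0])):
        if noun.startswith(sg):
            return pl + noun[len(sg):]
    raise ValueError(f"no known singular prefix in {noun!r}")
```

The non-determinism the paper addresses arises exactly where such a lookup fails: the same surface prefix can belong to more than one class, so syntax must be combined with semantics to pick the right plural.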

2017

pdf bib
Evaluation of a Runyankore grammar engine for healthcare messages
Joan Byamugisha | C. Maria Keet | Brian DeRenzi
Proceedings of the 10th International Conference on Natural Language Generation

Natural Language Generation (NLG) can be used to generate personalized health information, which is especially useful when provided in one’s own language. However, the NLG technique widely used across domains and languages—templates—was shown to be inapplicable to Bantu languages, due to their characteristic agglutinative structure. We present here our use of the grammar engine NLG technique to generate text in Runyankore, a Bantu language indigenous to Uganda. Our grammar engine adds to previous work in this field with new rules for cardinality constraints, prepositions in roles, the passive, and phonological conditioning. We evaluated the generated text with linguists and non-linguists, who regarded most of the text as grammatically correct and understandable; and over 60% of them judged all the text generated by our system to have been authored by a human being.

pdf bib
Toward an NLG System for Bantu languages: first steps with Runyankore (demo)
Joan Byamugisha | C. Maria Keet | Brian DeRenzi
Proceedings of the 10th International Conference on Natural Language Generation

There are many domain-specific and language-specific NLG systems, some of which it may be possible to adapt to related domains and languages. The languages in the Bantu language family have their own set of features distinct from other major groups, which severely limits the options to bootstrap an NLG system from existing ones. We present here our first proof-of-concept application for knowledge-to-text NLG as a plugin to the Protege 5.x ontology development system, tailored to Runyankore, a Bantu language indigenous to Uganda. It comprises a basic annotation model for linguistic information such as noun class, an implementation of existing verbalisation rules and a CFG for verbs, and a basic interface for data entry.

2016

pdf bib
Tense and Aspect in Runyankore Using a Context-Free Grammar
Joan Byamugisha | C. Maria Keet | Brian DeRenzi
Proceedings of the 9th International Natural Language Generation conference

pdf bib
On the verbalization patterns of part-whole relations in isiZulu
C. Maria Keet | Langa Khumalo
Proceedings of the 9th International Natural Language Generation conference