Joseph Marvin Imperial


2023

BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment in Central Philippine Languages
Joseph Marvin Imperial | Ekaterina Kochmar
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English. In this work, we introduce and release BasahaCorpus as part of an initiative aimed at expanding available corpora and baseline models for readability assessment in lower-resource languages in the Philippines. We compiled a corpus of short fictional narratives written in Hiligaynon, Minasbate, Karay-a, and Rinconada, languages belonging to the Central Philippine family tree subgroup, to train ARA models using surface-level, syllable-pattern, and n-gram overlap features. We also propose a new hierarchical cross-lingual modeling approach that takes advantage of a language's placement in the family tree to increase the amount of available training data. Our study yields encouraging results that support previous work showcasing the efficacy of cross-lingual models in low-resource settings, as well as similarities in highly informative linguistic features for mutually intelligible languages.

Flesch or Fumble? Evaluating Readability Standard Alignment of Instruction-Tuned Language Models
Joseph Marvin Imperial | Harish Tayyar Madabushi
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)

Readability metrics and standards such as the Flesch-Kincaid Grade Level (FKGL) and the Common European Framework of Reference for Languages (CEFR) exist to guide teachers and educators in properly assessing the complexity of educational materials before administering them for classroom use. In this study, we select a diverse set of open- and closed-source instruction-tuned language models and investigate their performance in writing story completions and in simplifying narratives, tasks that teachers perform, using standard-guided prompts controlling text readability. Our extensive findings provide empirical evidence that globally recognized models like ChatGPT may be less effective and may require more refined prompts for these generative tasks compared to open-source models such as BLOOMZ and FlanT5, which have shown promising results.
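The FKGL metric named in the abstract above is a fixed linear formula over sentence and word statistics. A minimal sketch follows; note that the vowel-group syllable counter is only a rough heuristic, not the dictionary-based syllabification used in practice:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels (min. 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(fkgl("The cat sat on the mat. It was happy."), 2))
```

Standard-guided prompting, as described in the paper, would then compare the FKGL of a model's output against the grade level the prompt requested.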

CebuaNER: A New Baseline Cebuano Named Entity Recognition Model
Ma. Beatrice Emanuela Pilar | Dane Dedoroy | Ellyza Mari Papas | Mary Loise Buenaventura | Myron Darrel Montefalcon | Jay Rhald Padilla | Joseph Marvin Imperial | Mideth Abisado | Lany Maceda
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

Automatic Readability Assessment for Closely Related Languages
Joseph Marvin Imperial | Ekaterina Kochmar
Findings of the Association for Computational Linguistics: ACL 2023

In recent years, the main focus of research on automatic readability assessment (ARA) has shifted towards using expensive deep learning-based methods with the primary goal of increasing models’ accuracy. This, however, is rarely applicable for low-resource languages where traditional handcrafted features are still widely used due to the lack of existing NLP tools to extract deeper linguistic representations. In this work, we take a step back from the technical component and focus on how linguistic aspects such as mutual intelligibility or degree of language relatedness can improve ARA in a low-resource setting. We collect short stories written in three languages in the Philippines—Tagalog, Bikol, and Cebuano—to train readability assessment models and explore the interaction of data and features in various cross-lingual setups. Our results show that the inclusion of CrossNGO, a novel specialized feature exploiting n-gram overlap applied to languages with high mutual intelligibility, significantly improves the performance of ARA models compared to the use of off-the-shelf large multilingual language models alone. Consequently, when both linguistic representations are combined, we achieve state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA in Bikol.
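The n-gram overlap idea behind the CrossNGO feature mentioned above can be illustrated with a generic character n-gram Jaccard measure. This is an assumption-laden sketch, not necessarily the paper's exact CrossNGO formulation:

```python
def char_ngrams(text: str, n: int = 3) -> set[str]:
    # Set of character n-grams of a lowercased text.
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def ngram_overlap(a: str, b: str, n: int = 3) -> float:
    # Jaccard overlap of character n-gram sets; higher values suggest
    # closer orthographic similarity, as expected between mutually
    # intelligible languages like Tagalog, Bikol, and Cebuano.
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical mini-example: two Philippine-language greetings.
print(ngram_overlap("magandang umaga", "maayong buntag"))
```

In a cross-lingual setup, such overlap scores between a document and reference corpora per language could serve as additional model features.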

Uniform Complexity for Text Generation
Joseph Marvin Imperial | Harish Tayyar Madabushi
Findings of the Association for Computational Linguistics: EMNLP 2023

Large language models (LLMs) have shown promising results in a wide array of generative NLP tasks, such as summarization and machine translation. In the context of narrative generation, however, existing models still do not capture the factors that contribute to producing consistent text. For instance, a piece of text or a story should be uniformly readable throughout, and this form of complexity should be controllable: if an input text prompt is rated at a first-grade reading level by the Flesch Reading Ease test, then the generated text continuing the plot should fall within the same range of complexity. With this in mind, we introduce Uniform Complexity for Text Generation (UCTG), a new benchmark that challenges generative models to observe uniform linguistic properties with respect to prompts. We experiment with over 150 linguistically and cognitively motivated features for evaluating text complexity in humans and generative models. From our results, we find that models such as GPT-2 struggle to preserve the complexity of the input prompts in their generations, even when finetuned on professionally written texts.
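The uniformity criterion described above can be made concrete by scoring each segment of a story with the same readability metric and measuring spread. A hedged sketch using Flesch Reading Ease (the paper uses a far richer feature set, and the vowel-group syllable counter is only an approximation):

```python
import re
from statistics import pstdev

def syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (min. 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sents)) - 84.6 * (syl / len(words))

def uniformity(segments: list[str]) -> float:
    # Lower standard deviation across segments = more uniform complexity.
    return pstdev(flesch_reading_ease(s) for s in segments)

story = ["The dog ran. The sun was out.",
         "Notwithstanding considerable meteorological ambiguity, the canine persevered."]
print(uniformity(story))
```

A generation whose continuation matches the prompt's complexity would yield a small deviation; a sudden jump in register, as in the toy example, yields a large one.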

2022

NU HLT at CMCL 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space
Joseph Marvin Imperial
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

In this paper, we present a unified model that works for both multilingual and crosslingual prediction of the reading times of words in various languages. The key to this model's success lies in the preprocessing step, where all words are transformed into their universal language representation via the International Phonetic Alphabet (IPA). To the best of our knowledge, this is the first study to favorably exploit this phonological property of language for the two tasks. Various feature types were extracted, covering basic frequencies, n-grams, information-theoretic measures, and psycholinguistically motivated predictors for model training. A finetuned Random Forest model obtained the best performance for both tasks, with MAE scores of 3.8031 for mean first fixation duration (FFDAvg) and 3.9065 for mean total reading time (TRTAvg).

A Baseline Readability Model for Cebuano
Joseph Marvin Imperial | Lloyd Lois Antonie Reyes | Michael Antonio Ibanez | Ranz Sapinit | Mohammed Hussien
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)

In this study, we developed the first baseline readability model for the Cebuano language. Cebuano is the second most widely used native language in the Philippines, with about 27.5 million speakers. As the baseline, we extracted traditional or surface-based features, syllable patterns based on Cebuano's documented orthography, and neural embeddings from the multilingual BERT model. Results show that the first two sets of handcrafted linguistic features obtained the best performance when used to train an optimized Random Forest model, reaching approximately 87% across all metrics. The feature sets and algorithm used are also similar to those in previous work on readability assessment for the Filipino language, showing the potential for cross-lingual application. To encourage more work on readability assessment in Philippine languages such as Cebuano, we open-source both code and data.

2021

Under the Microscope: Interpreting Readability Assessment Models for Filipino
Joseph Marvin Imperial | Ethel Ong
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

Science Mapping of Publications in Natural Language Processing in the Philippines: 2006 to 2020
Rachel Edita O. Roxas | Joseph Marvin Imperial | Angelica H. De La Cruz
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

BERT Embeddings for Automatic Readability Assessment
Joseph Marvin Imperial
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

Automatic readability assessment (ARA) is the task of evaluating the level of ease or difficulty of text documents for a target audience. One of the many open problems in the field is making models trained for the task effective even for low-resource languages. In this study, we propose an alternative way of utilizing the information-rich embeddings of BERT models together with handcrafted linguistic features through a combined method for readability assessment. Results show that the proposed method outperforms classical approaches in readability assessment on English and Filipino datasets, obtaining up to a 12.4% increase in F1 performance. We also show that the general information encoded in BERT embeddings can serve as a substitute feature set for low-resource languages like Filipino, which lack the semantic and syntactic NLP tools needed to explicitly extract feature values for the task.
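The combined method described above can be pictured as feature-level concatenation of a sentence embedding with handcrafted feature values before classification. A minimal NumPy illustration with placeholder random vectors; the dimensions and names here are hypothetical stand-ins, and the real pipeline would use actual BERT embeddings and extracted linguistic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins: in practice these would be a 768-dim BERT
# [CLS] embedding and a handful of handcrafted linguistic feature
# values (e.g. sentence length, syllable statistics) per document.
n_docs = 4
bert_embeddings = rng.normal(size=(n_docs, 768))  # hypothetical embeddings
handcrafted = rng.normal(size=(n_docs, 12))       # hypothetical features

# The combination step: simple feature-level concatenation, after which
# any classical classifier (e.g. a Random Forest) can be trained.
combined = np.concatenate([bert_embeddings, handcrafted], axis=1)
print(combined.shape)  # (4, 780)
```

The design choice is that the classifier sees both deep contextual information and explicit linguistic signals, which is what lets the embeddings substitute for missing feature extractors in low-resource settings.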

2020

A Simple Disaster-Related Knowledge Base for Intelligent Agents
Clark Emmanuel Paulo | Arvin Ken Ramirez | David Clarence Reducindo | Rannie Mark Mateo | Joseph Marvin Imperial
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation