Chao-Yi Lu


2022

Exploring Methods for Building Dialects-Mandarin Code-Mixing Corpora: A Case Study in Taiwanese Hokkien
Sin-En Lu | Bo-Han Lu | Chao-Yi Lu | Richard Tzong-Han Tsai
Findings of the Association for Computational Linguistics: EMNLP 2022

In natural language processing (NLP), code-mixing (CM) poses a difficult challenge, especially when the mixed languages include dialects. In Southeast Asian countries such as Singapore, Indonesia, and Malaysia, Hokkien-Mandarin is the most widespread code-mixed language pair among Chinese immigrants, and it is also common in Taiwan. However, dialects such as Hokkien are often resource-scarce and lack an official writing system, which limits the development of dialect CM research. In this paper, we propose a method for constructing a Hokkien-Mandarin CM dataset that mitigates these limitations, addresses the morphological issues that arise within the Sino-Tibetan language family, and offers an efficient Hokkien word segmentation method based on a linguistics-informed toolkit. Furthermore, we use the proposed dataset and transfer learning to train XLM (cross-lingual language model) for translation tasks. To fit the code-mixing scenario, we make minor adaptations to XLM. We find that, by using linguistic knowledge, rules, and language tags, the model produces good results on CM translation while maintaining monolingual translation quality.
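The abstract mentions using per-token language tags on code-mixed input. The following is a minimal, hypothetical sketch (not the authors' implementation) of what such tagging could look like for pre-segmented Hokkien-Mandarin text; the lexicon, the tag names ("nan", "zh"), and the example sentence are all illustrative assumptions.

```python
# Illustrative sketch only: assign a language tag to each token of a
# pre-segmented code-mixed sentence. The paper's actual method relies on
# a linguistics-based toolkit; this toy version uses a lexicon lookup.

HOKKIEN_LEXICON = {"阮", "欲", "佮"}  # hypothetical Hokkien-specific tokens

def tag_tokens(tokens):
    """Tag each token as 'nan' (Hokkien) or 'zh' (Mandarin)."""
    return [(tok, "nan" if tok in HOKKIEN_LEXICON else "zh") for tok in tokens]

if __name__ == "__main__":
    # Segmentation itself would come from the word segmentation step
    # described in the paper; here the sentence is given pre-segmented.
    sentence = ["阮", "欲", "去", "台北", "開會"]
    print(tag_tokens(sentence))
    # [('阮', 'nan'), ('欲', 'nan'), ('去', 'zh'), ('台北', 'zh'), ('開會', 'zh')]
```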

2021

A Survey of Approaches to Automatic Question Generation: from 2019 to Early 2021
Chao-Yi Lu | Sin-En Lu
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

To provide an analysis of recent research on automatic question generation from text, we surveyed 9 papers published between 2019 and early 2021, retrieved from Papers with Code (PwC). Our work follows the survey by Kurdi et al. (2020), which analyzes 93 papers from 2014 to early 2019. We analyzed the 9 papers from three aspects: (1) the purpose of question generation, (2) the generation method, and (3) evaluation. We found that recent approaches tend to rely on semantic information, and that Transformer-based models are attracting increasing interest because they are more efficient. On the other hand, since there is no widely acknowledged automatic evaluation metric designed for question generation, researchers adopt metrics from other natural language processing tasks to compare different systems.
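As a concrete illustration of the abstract's last point, here is a minimal sketch (not drawn from any of the surveyed papers) of scoring generated questions with BLEU, a metric borrowed from machine translation, using the sacrebleu library; the hypothesis and reference strings are made up for the example.

```python
# Sketch: comparing generated questions against human references with
# BLEU, a machine-translation metric commonly repurposed for question
# generation in the absence of a dedicated QG metric.
import sacrebleu

hypotheses = ["What did the author propose?"]            # system outputs
references = [["What method did the author propose?"]]   # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```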