

2024

Few-shot Question Generation for Reading Comprehension
Yin Poon | John Sie Yuen Lee | Yuylam@hkmu.edu.hk | Wlsuen@hkmu.edu.hk | Eong@hkmu.edu.hk | Skwchu@hkmu.edu.hk
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)

According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting, and evaluating. However, such questions are often not available in large quantities for training question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines.
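
The abstract does not specify the prompt format or model used. As a rough, hedged illustration of the kind of few-shot prompting setup it describes, the sketch below assembles a chat prompt from a small number of exemplar passage-question pairs and asks an LLM for one higher-order question about a new passage. The exemplar passages and questions, the model name, and the use of the OpenAI client are placeholders chosen for illustration, not details taken from the paper.

# Minimal sketch of few-shot prompting for higher-order question generation.
# Exemplars, model name, and API choice are illustrative assumptions, not the authors' setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few (passage, higher-order question) pairs serve as the few-shot context.
FEW_SHOT_EXAMPLES = [
    {
        "passage": "The old lighthouse keeper climbed the stairs every night, "
                   "even after the lamp was automated.",
        "question": "Why might the keeper continue his nightly climb although "
                    "it is no longer required? (inferencing)",
    },
    {
        "passage": "Maya saved her allowance for months, then gave it all to the "
                   "animal shelter after visiting it on a school trip.",
        "question": "What does Maya's decision suggest about how the visit "
                    "affected her? (interpreting)",
    },
]

def build_messages(passage: str) -> list[dict]:
    """Assemble the chat prompt: instruction, few-shot exemplars, then the target passage."""
    messages = [{
        "role": "system",
        "content": ("You write reading comprehension questions that require higher-order "
                    "processes (inferencing, interpreting, evaluating), not just retrieval."),
    }]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user",
                         "content": f"Passage:\n{ex['passage']}\n\nWrite one question."})
        messages.append({"role": "assistant", "content": ex["question"]})
    messages.append({"role": "user",
                     "content": f"Passage:\n{passage}\n\nWrite one question."})
    return messages

def generate_question(passage: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM for a single higher-order question about the passage."""
    response = client.chat.completions.create(model=model,
                                              messages=build_messages(passage))
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(generate_question("The river had flooded every spring until the new dam was built."))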