Yinlin Jiang


2021

Simple or Complex? Complexity-controllable Question Generation with Soft Templates and Deep Mixture of Experts Model
Sheng Bi | Xiya Cheng | Yuan-Fang Li | Lizhen Qu | Shirong Shen | Guilin Qi | Lu Pan | Yinlin Jiang
Findings of the Association for Computational Linguistics: EMNLP 2021

The ability to generate natural-language questions with controlled complexity levels is highly desirable, as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel, cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer, and their interactions. The experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.
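To make the soft-template selection concrete, below is a minimal sketch (not the authors' implementation) of a mixture-of-experts gate that scores a bank of learned soft-template embeddings against an encoded passage/answer representation; module names, dimensions, and the gating scheme are illustrative assumptions.

```python
# Hypothetical sketch of an MoE soft-template selector, assuming a PyTorch setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTemplateMoE(nn.Module):
    def __init__(self, hidden_dim: int, num_templates: int, num_experts: int):
        super().__init__()
        # Bank of learned soft templates (avoids hand-written template construction).
        self.templates = nn.Parameter(torch.randn(num_templates, hidden_dim))
        # Each expert scores the templates from a different projection of the input.
        self.experts = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_experts)
        )
        # Gate mixes expert opinions, conditioned on the encoded input
        # (which could also carry a target-complexity signal).
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, hidden_dim) encoding of passage + answer
        gate_weights = F.softmax(self.gate(context), dim=-1)         # (batch, num_experts)
        mixed = torch.zeros(
            context.size(0), self.templates.size(0), device=context.device
        )
        for i, expert in enumerate(self.experts):
            scores = expert(context) @ self.templates.t()             # (batch, num_templates)
            mixed = mixed + gate_weights[:, i : i + 1] * scores
        # Soft selection: a weighted combination of template embeddings
        # that a question decoder could attend to.
        attn = F.softmax(mixed, dim=-1)
        return attn @ self.templates                                  # (batch, hidden_dim)
```

The key design point illustrated here is that the selection stays differentiable: instead of picking one discrete template, the gate blends template embeddings, so template choice can be trained end-to-end with the generator.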