Zehua Xia
2023
Improving Question Generation with Multi-level Content Planning
Zehua Xia | Qi Gou | Bowen Yu | Haiyang Yu | Fei Huang | Yongbin Li | Nguyen Cam-Tu
Findings of the Association for Computational Linguistics: EMNLP 2023
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context. Previous studies have suggested that key phrase selection is essential for question generation (QG), yet it is still challenging to connect such disjointed phrases into meaningful questions, particularly for long contexts. To mitigate this issue, we propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-Model, which simultaneously selects key phrases and generates full answers, and Q-Model, which takes the generated full answer as an additional input to generate questions. Here, full answer generation is introduced to connect the short answer with the selected key phrases, thus forming an answer-aware summary to facilitate QG. Both FA-Model and Q-Model are formalized as simple-yet-effective Phrase-Enhanced Transformers, our joint model for phrase selection and text generation. Experimental results show that our method outperforms strong baselines on two popular QG datasets. Our code is available at https://github.com/zeaver/MultiFactor.
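The two-stage data flow described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: `fa_model`, `q_model`, and their toy heuristics are hypothetical stand-ins, not the authors' Phrase-Enhanced Transformer implementation.

```python
# Sketch of the MultiFactor pipeline: FA-Model produces key phrases and a
# full answer; Q-Model consumes the full answer to generate the question.
# All logic below is a toy placeholder for the learned models.

def fa_model(context: str, answer: str) -> tuple[list[str], str]:
    """FA-Model stand-in: select key phrases from the context and compose a
    full answer connecting the short answer to those phrases."""
    phrases = [s for s in context.split(". ") if answer in s][:1]
    full_answer = phrases[0] if phrases else answer
    return phrases, full_answer

def q_model(context: str, answer: str, full_answer: str) -> str:
    """Q-Model stand-in: generate a question conditioned on the context,
    the short answer, and the generated full answer."""
    return f"Question targeting '{answer}' given: {full_answer}"

def multifactor(context: str, answer: str) -> str:
    _, full_answer = fa_model(context, answer)
    return q_model(context, answer, full_answer)

print(multifactor("Paris is the capital of France. It lies on the Seine.",
                  "Paris"))
```

The point of the sketch is the interface: the full answer acts as an answer-aware bridge between phrase selection and question generation, rather than feeding disjointed phrases directly to the question decoder.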
Diversify Question Generation with Retrieval-Augmented Style Transfer
Qi Gou | Zehua Xia | Bowen Yu | Haiyang Yu | Fei Huang | Yongbin Li | Nguyen Cam-Tu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Given a textual passage and an answer, humans are able to ask questions with various expressions, but this ability is still challenging for most question generation (QG) systems. Existing solutions mainly focus on the internal knowledge within the given passage or the semantic word space for diverse content planning. These methods, however, have not considered the potential of external knowledge for expression diversity. To bridge this gap, we propose RAST, a framework for Retrieval-Augmented Style Transfer, where the objective is to utilize the style of diverse templates for question generation. For training RAST, we develop a novel Reinforcement Learning (RL) based approach that maximizes a weighted combination of diversity reward and consistency reward. Here, the consistency reward is computed by a Question-Answering (QA) model, whereas the diversity reward measures how much the final output mimics the retrieved template. Experimental results show that our method outperforms previous diversity-driven baselines on diversity while being comparable in terms of consistency scores. Our code is available at https://github.com/gouqi666/RAST.
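The RL objective described above, a weighted combination of a QA-based consistency reward and a template-mimicry diversity reward, can be written as a one-line scoring function. The weight value and reward inputs below are hypothetical placeholders, not the paper's reported settings.

```python
# Sketch of RAST's RL training signal: maximize a weighted combination of
# consistency (scored by a QA model) and diversity (how much the output
# mimics the retrieved style template). Both rewards are assumed in [0, 1].

def combined_reward(consistency: float, diversity: float,
                    weight: float = 0.5) -> float:
    """Weighted combination maximized during RL training."""
    return weight * consistency + (1.0 - weight) * diversity

# Hypothetical example: the QA model answers the generated question
# correctly (consistency 0.9) and it closely follows the template (0.6).
print(round(combined_reward(consistency=0.9, diversity=0.6, weight=0.7), 3))
```

Tuning the weight trades off the two goals: a higher weight favors questions the QA model can still answer, while a lower weight favors stylistic variety.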
Cross-lingual Data Augmentation for Document-grounded Dialog Systems in Low Resource Languages
Qi Gou | Zehua Xia | Wenzhe Du
Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
This paper proposes a framework to address the issue of data scarcity in Document-Grounded Dialogue Systems (DGDS). Our model leverages high-resource languages to enhance the capability of dialogue generation in low-resource languages. Specifically, we present a novel pipeline, CLEM (Cross-Lingual Enhanced Model), comprising adversarially trained retrieval (Retriever and Re-ranker) and a FiD (fusion-in-decoder) generator. To further leverage high-resource languages, we also propose an innovative architecture that aligns different languages via translated training. Extensive experimental results demonstrate the effectiveness of our model, and we achieved 4th place in the DialDoc 2023 Competition. Therefore, CLEM can serve as a solution to resource scarcity in DGDS and provide useful guidance for multi-lingual alignment tasks.
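The retrieve, re-rank, and fusion-in-decoder stages named in the abstract can be sketched as a three-step pipeline. This is a toy sketch under loose assumptions: the scoring heuristics below are hypothetical placeholders, not the paper's adversarially trained components.

```python
# Sketch of a CLEM-style pipeline: coarse retrieval -> re-ranking ->
# fusion-in-decoder generation. Each stage is a toy stand-in.

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Coarse retriever stand-in: rank documents by query-term overlap."""
    overlap = lambda d: len(set(query.split()) & set(d.split()))
    return sorted(docs, key=lambda d: -overlap(d))[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Re-ranker stand-in (here: a trivial preference for shorter passages)."""
    return sorted(candidates, key=len)

def fid_generate(query: str, passages: list[str]) -> str:
    """FiD stand-in: the real generator encodes each passage separately and
    fuses all encodings in the decoder; here we just report the grounding."""
    return f"Answer to '{query}' grounded in {len(passages)} passages"

docs = ["the cat sat",
        "dialogue systems ground responses in documents",
        "low resource languages benefit from transfer"]
top = rerank("dialogue systems", retrieve("dialogue systems", docs, k=2))
print(fid_generate("dialogue systems", top))
```

In the cross-lingual setting, translated training data would feed the same pipeline so that retrieval and generation share representations across high- and low-resource languages.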
Co-authors
- Qi Gou 3
- Bowen Yu 2
- Haiyang Yu 2
- Fei Huang 2
- Yongbin Li 2