MultiQG-TI: Towards Question Generation from Multi-modal Sources

Zichao Wang, Richard Baraniuk


Abstract
We study the new problem of automatic question generation (QG) from multi-modal sources containing images and texts, significantly expanding the scope of most existing work, which focuses exclusively on QG from textual sources alone. We propose a simple solution to this new problem, called MultiQG-TI, which enables a text-only question generator to process visual input in addition to textual input. Specifically, we leverage an image-to-text model and an optical character recognition model to obtain a textual description of the image and to extract any text in the image, respectively, and then feed them together with the input texts to the question generator. We fine-tune only the question generator while keeping the other components fixed. On the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly outperforms ChatGPT with few-shot prompting, despite having hundreds of times fewer trainable parameters. Additional analyses empirically confirm the necessity of both visual and textual signals for QG and show the impact of various modeling choices. Code is available at https://anonymous.4open.science/r/multimodal-QG-47F2/
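The pipeline described in the abstract flattens the visual signals into text before question generation. A minimal sketch of that assembly step is below; the field labels and prompt layout are illustrative assumptions, not the paper's exact input format, and the captioning/OCR outputs are assumed to be produced upstream by the image-to-text and OCR models.

```python
def build_qg_input(context: str, caption: str, ocr_text: str) -> str:
    """Flatten multi-modal signals into one textual sequence for a
    text-only question generator (field names are illustrative)."""
    parts = [f"context: {context.strip()}"]
    if caption:
        # Textual description of the image from an image-to-text model.
        parts.append(f"image description: {caption.strip()}")
    if ocr_text:
        # Any text extracted from the image by an OCR model.
        parts.append(f"image text: {ocr_text.strip()}")
    return " ".join(parts)


prompt = build_qg_input(
    context="Plants convert light energy into chemical energy.",
    caption="A labeled diagram of a leaf during photosynthesis.",
    ocr_text="chlorophyll, stomata",
)
```

The resulting prompt can then be passed to any fine-tuned text-to-text question generator; only that generator is trained, while the captioning and OCR components stay frozen.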
Anthology ID:
2023.bea-1.55
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
682–691
URL:
https://aclanthology.org/2023.bea-1.55
DOI:
10.18653/v1/2023.bea-1.55
Cite (ACL):
Zichao Wang and Richard Baraniuk. 2023. MultiQG-TI: Towards Question Generation from Multi-modal Sources. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 682–691, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
MultiQG-TI: Towards Question Generation from Multi-modal Sources (Wang & Baraniuk, BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.55.pdf