Location-Aware Visual Question Generation with Lightweight Models
Nicholas Suwono | Justin Chen | Tun Hung | Ting-Hao Huang | I-Bin Liao | Yung-Hui Li | Lun-Wei Ku | Shao-Hua Sun
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
This work introduces a novel task, location-aware visual question generation (LocaVQG), which aims to generate engaging questions from data relevant to a particular geographical location. Specifically, we represent such location-aware information with surrounding images and a GPS coordinate. To tackle this task, we present a dataset generation pipeline that leverages GPT-4 to produce diverse and sophisticated questions. We then aim to learn a lightweight model that can address the LocaVQG task and fit on an edge device, such as a mobile phone. To this end, we propose a method that can reliably generate engaging questions from location-aware information. Our proposed method outperforms baselines in terms of both human evaluation (e.g., engagement, grounding, coherence) and automatic evaluation metrics (e.g., BERTScore, ROUGE-2). Moreover, we conduct extensive ablation studies to justify our proposed techniques for both generating the dataset and solving the task.
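For readers unfamiliar with the automatic metrics mentioned in the abstract, the sketch below shows one common way to compute BERTScore and ROUGE-2 for generated questions against references, using the off-the-shelf `bert-score` and `rouge-score` Python packages with placeholder data. It is an illustrative assumption, not the authors' actual evaluation code.

```python
# Minimal sketch: scoring generated questions against references
# with BERTScore and ROUGE-2 (placeholder examples, not the paper's pipeline).
from bert_score import score as bert_score
from rouge_score import rouge_scorer

# Hypothetical generated questions and human-written references.
candidates = ["What historic event took place at this plaza?"]
references = ["Do you know what historic event happened at this plaza?"]

# BERTScore: contextual-embedding similarity between candidate and reference.
P, R, F1 = bert_score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.4f}")

# ROUGE-2: bigram overlap between candidate and reference.
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
for cand, ref in zip(candidates, references):
    scores = scorer.score(ref, cand)
    print(f"ROUGE-2 F1: {scores['rouge2'].fmeasure:.4f}")
```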