Ashish Thapliyal
2023
MaXM: Towards Multilingual Visual Question Answering
Soravit Changpinyo, Linting Xue, Michal Yarom, Ashish Thapliyal, Idan Szpektor, Julien Amelot, Xi Chen, Radu Soricut
Findings of the Association for Computational Linguistics: EMNLP 2023
Visual Question Answering (VQA) has been primarily studied through the lens of the English language. Yet, tackling VQA in other languages in the same manner would require a considerable amount of resources. In this paper, we propose scalable solutions to multilingual visual question answering (mVQA), on both the data and modeling fronts. We first propose a translation-based framework for mVQA data generation that requires much less human annotation effort than the conventional approach of directly collecting questions and answers. Then, we apply our framework to the multilingual captions in the Crossmodal-3600 dataset and develop an efficient annotation protocol to create MaXM, a test-only VQA benchmark in 7 diverse languages. Finally, we develop a simple, lightweight, and effective approach, and benchmark state-of-the-art English and multilingual VQA models. We hope that our benchmark encourages further research on mVQA.
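The abstract describes the translation-based framework only at a high level. Below is a minimal, hypothetical sketch of what such a pipeline could look like: generate English question-answer pairs from an image caption, then machine-translate them into target languages for downstream human validation. The `generate_qa_pairs` and `translate` helpers and the language list are illustrative stand-ins, not the authors' implementation or the exact MaXM language set.

```python
# Hypothetical sketch of a translation-based mVQA data-generation pipeline.
# `generate_qa_pairs` and `translate` are stand-ins; the actual MaXM protocol
# (caption-based QA generation, translation, and human annotation) differs in detail.

from dataclasses import dataclass

TARGET_LANGUAGES = ["fr", "hi", "th"]  # illustrative subset; MaXM covers 7 languages

@dataclass
class QAExample:
    image_id: str
    question: str
    answer: str
    language: str

def generate_qa_pairs(caption: str) -> list[tuple[str, str]]:
    """Stand-in for an English QA-generation model applied to an image caption."""
    return [(f"What is shown in the image described as '{caption}'?", caption)]

def translate(text: str, target_language: str) -> str:
    """Stand-in for a machine-translation system."""
    return f"[{target_language}] {text}"

def build_mvqa_examples(image_id: str, caption: str) -> list[QAExample]:
    """Generate English QA pairs from a caption, then translate into each target language."""
    examples = []
    for question_en, answer_en in generate_qa_pairs(caption):
        for lang in TARGET_LANGUAGES:
            examples.append(
                QAExample(
                    image_id=image_id,
                    question=translate(question_en, lang),
                    answer=translate(answer_en, lang),
                    language=lang,
                )
            )
    return examples

if __name__ == "__main__":
    for ex in build_mvqa_examples("img_0001", "a dog catching a frisbee"):
        print(ex)
```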
Emergence of Abstract State Representations in Embodied Sequence Modeling
Tian Yun, Zilai Zeng, Kunal Handa, Ashish Thapliyal, Bo Pang, Ellie Pavlick, Chen Sun
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Decision making via sequence modeling aims to mimic the success of language models, where actions taken by an embodied agent are modeled as tokens to predict. Despite their promising performance, it remains unclear whether embodied sequence modeling leads to the emergence of internal representations that capture environmental state information. A model that lacks abstract state representations would be liable to make decisions based on surface statistics that fail to generalize. We take the BabyAI environment, a grid world in which language-conditioned navigation tasks are performed, and build a sequence modeling Transformer, which takes a language instruction, a sequence of actions, and environmental observations as its inputs. In order to investigate the emergence of abstract state representations, we design a “blindfolded” navigation task, where only the initial environmental layout, the language instruction, and the action sequence to complete the task are available for training. Our probing results show that intermediate environmental layouts can be reasonably reconstructed from the internal activations of a trained model, and that language instructions play a role in the reconstruction accuracy. Our results suggest that many key features of state representations can emerge via embodied sequence modeling, supporting an optimistic outlook for applications of sequence modeling objectives to more complex embodied decision-making domains.
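As a hedged illustration of the probing methodology sketched in the abstract (not the authors' code), the example below fits a linear probe that maps a model's hidden activations to the contents of each grid cell and reports per-cell reconstruction accuracy. The hidden size, grid dimensions, object vocabulary, and the synthetic data are assumptions made purely for illustration.

```python
# Hypothetical linear-probe sketch: predict grid-cell contents from hidden activations.
# All data here is synthetic; shapes and constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 128        # transformer hidden size (assumed)
GRID_CELLS = 8 * 8      # flattened grid-world layout (assumed)
NUM_OBJECT_TYPES = 4    # e.g. empty / wall / key / goal (assumed)
N_TRAIN, N_TEST = 2000, 500

# Synthetic stand-ins for (activation, layout) pairs collected from a trained model.
X_train = rng.normal(size=(N_TRAIN, HIDDEN_DIM))
y_train = rng.integers(NUM_OBJECT_TYPES, size=(N_TRAIN, GRID_CELLS))
X_test = rng.normal(size=(N_TEST, HIDDEN_DIM))
y_test = rng.integers(NUM_OBJECT_TYPES, size=(N_TEST, GRID_CELLS))

def fit_linear_probe(X, y):
    """Least-squares probe: one-hot targets per cell, solved jointly for all cells."""
    onehot = np.eye(NUM_OBJECT_TYPES)[y]                 # (N, cells, types)
    targets = onehot.reshape(len(X), -1)                 # (N, cells * types)
    X_bias = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    W, *_ = np.linalg.lstsq(X_bias, targets, rcond=None)
    return W

def probe_accuracy(W, X, y):
    """Fraction of grid cells whose object type is correctly reconstructed."""
    X_bias = np.hstack([X, np.ones((len(X), 1))])
    scores = (X_bias @ W).reshape(len(X), GRID_CELLS, NUM_OBJECT_TYPES)
    return (scores.argmax(axis=-1) == y).mean()

W = fit_linear_probe(X_train, y_train)
print(f"per-cell reconstruction accuracy: {probe_accuracy(W, X_test, y_test):.3f}")
```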