%0 Conference Proceedings
%T Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions
%A Rosenberg, Daniel
%A Gat, Itai
%A Feder, Amir
%A Reichart, Roi
%Y Zong, Chengqing
%Y Xia, Fei
%Y Li, Wenjie
%Y Navigli, Roberto
%S Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F rosenberg-etal-2021-vqa
%X Deep learning algorithms have shown promising results in visual question answering (VQA) tasks, but a more careful look reveals that they often do not understand the rich signal they are being fed with. To understand and better measure the generalization capabilities of VQA systems, we look at their robustness to counterfactually augmented data. Our proposed augmentations are designed to make a focused intervention on a specific property of the question such that the answer changes. Using these augmentations, we propose a new robustness measure, Robustness to Augmented Data (RAD), which measures the consistency of model predictions between original and augmented examples. Through extensive experimentation, we show that RAD, unlike classical accuracy measures, can quantify when state-of-the-art systems are not robust to counterfactuals. We find substantial failure cases which reveal that current VQA systems are still brittle. Finally, we connect between robustness and generalization, demonstrating the predictive power of RAD for performance on unseen augmentations.
%R 10.18653/v1/2021.acl-short.10
%U https://aclanthology.org/2021.acl-short.10
%U https://doi.org/10.18653/v1/2021.acl-short.10
%P 61-70