%0 Conference Proceedings
%T Adversarial Training for Commonsense Inference
%A Pereira, Lis
%A Liu, Xiaodong
%A Cheng, Fei
%A Asahara, Masayuki
%A Kobayashi, Ichiro
%Y Gella, Spandana
%Y Welbl, Johannes
%Y Rei, Marek
%Y Petroni, Fabio
%Y Lewis, Patrick
%Y Strubell, Emma
%Y Seo, Minjoon
%Y Hajishirzi, Hannaneh
%S Proceedings of the 5th Workshop on Representation Learning for NLP
%D 2020
%8 July
%I Association for Computational Linguistics
%C Online
%F pereira-etal-2020-adversarial
%X We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases, or additional datasets other than the target datasets, our model boosts the fine-tuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.
%R 10.18653/v1/2020.repl4nlp-1.8
%U https://aclanthology.org/2020.repl4nlp-1.8
%U https://doi.org/10.18653/v1/2020.repl4nlp-1.8
%P 55-60