Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models

Kaixin Ma, Filip Ilievski, Jonathan Francis, Satoru Ozaki, Eric Nyberg, Alessandro Oltramari


Abstract
Commonsense reasoning benchmarks have been largely solved by fine-tuning language models. The downside is that fine-tuning may cause models to overfit to task-specific data and thereby forget the knowledge gained during pre-training. Recent works propose only lightweight model updates, since models may already possess useful knowledge from past experience, but a challenge remains in understanding which parts of a model should be refined, and to what extent, for a given task. In this paper, we investigate what models learn from commonsense reasoning datasets. We measure the impact of three different adaptation methods on the generalization and accuracy of models. Our experiments with two models show that fine-tuning performs best, by learning both the content and the structure of the task, but suffers from overfitting and limited generalization to novel answers. We observe that alternative adaptation methods, such as prefix-tuning, achieve comparable accuracy but generalize better to unseen answers and are more robust to adversarial splits.
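For readers unfamiliar with prefix-tuning as a lightweight adaptation method, the lines below give a minimal illustrative sketch. It is not taken from the authors' repository (mayer123/cs_model_adaptation); it assumes the Hugging Face transformers and peft libraries, and the base model name ("gpt2") and prefix length (20 virtual tokens) are arbitrary choices for illustration.

    # Illustrative sketch only: prefix-tuning freezes the pre-trained weights
    # and trains a small set of continuous prefix vectors, in contrast to
    # full fine-tuning, which updates every parameter of the model.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PrefixTuningConfig, TaskType, get_peft_model

    model_name = "gpt2"  # hypothetical base model, not the paper's exact setup
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Learn only prefix vectors prepended to each attention layer.
    config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
    prefix_model = get_peft_model(model, config)
    prefix_model.print_trainable_parameters()  # only the prefix parameters are trainable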
Anthology ID:
2021.emnlp-main.445
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5474–5483
URL:
https://aclanthology.org/2021.emnlp-main.445
DOI:
10.18653/v1/2021.emnlp-main.445
Cite (ACL):
Kaixin Ma, Filip Ilievski, Jonathan Francis, Satoru Ozaki, Eric Nyberg, and Alessandro Oltramari. 2021. Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5474–5483, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models (Ma et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.445.pdf
Video:
https://aclanthology.org/2021.emnlp-main.445.mp4
Code
mayer123/cs_model_adaptation
Data
CommonGen
SQuAD