@inproceedings{zhong-etal-2021-larger,
    title     = {Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level},
    author    = {Zhong, Ruiqi and Ghosh, Dhruba and Klein, Dan and Steinhardt, Jacob},
    editor    = {Zong, Chengqing and Xia, Fei and Li, Wenjie and Navigli, Roberto},
    booktitle = {Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
    month     = aug,
    year      = {2021},
    address   = {Online},
    publisher = {Association for Computational Linguistics},
    url       = {https://aclanthology.org/2021.findings-acl.334/},
    doi       = {10.18653/v1/2021.findings-acl.334},
    pages     = {3813--3827}
}