@inproceedings{huang-etal-2020-counterfactually,
    title = "Counterfactually-Augmented {SNLI} Training Data Does Not Yield Better Generalization Than Unaugmented Data",
    author = "Huang, William and Liu, Haokun and Bowman, Samuel R.",
    editor = "Rogers, Anna and Sedoc, Jo{\~a}o and Rumshisky, Anna",
    booktitle = "Proceedings of the First Workshop on Insights from Negative Results in NLP",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.insights-1.13/",
    doi = "10.18653/v1/2020.insights-1.13",
    pages = "82--87",
}