ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning

Jingyuan S. She, Christopher Potts, Samuel R. Bowman, Atticus Geiger


Abstract
A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model has truly learned how negation morphemes semantically scope. To fill these analytical gaps, we present the Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNe-NLI after many-shot fine-tuning. For in-context learning, we test the latest InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence-completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals that the model can correctly reason about negation, but struggles to do so on NLI examples outside of its core pretraining regime.
Anthology ID:
2023.acl-short.154
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1803–1821
URL:
https://aclanthology.org/2023.acl-short.154
DOI:
10.18653/v1/2023.acl-short.154
Cite (ACL):
Jingyuan S. She, Christopher Potts, Samuel R. Bowman, and Atticus Geiger. 2023. ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1803–1821, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning (She et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-short.154.pdf
Video:
https://aclanthology.org/2023.acl-short.154.mp4