WSC+: Enhancing The Winograd Schema Challenge Using Tree-of-Experts

Pardis Zahraei, Ali Emami
Abstract
The Winograd Schema Challenge (WSC) serves as a prominent benchmark for evaluating machine understanding. While Large Language Models (LLMs) excel at answering WSC questions, their ability to generate such questions remains less explored. In this work, we propose Tree-of-Experts (ToE), a novel prompting method that enhances the generation of WSC instances (50% valid cases vs. 10% in recent methods). Using this approach, we introduce WSC+, a new dataset comprising 3,026 LLM-generated sentences. Notably, we extend the WSC framework by incorporating new ‘ambiguous’ and ‘offensive’ categories, providing deeper insight into model overconfidence and bias. Our analysis reveals nuances in generation-evaluation consistency, suggesting that LLMs may not always evaluate their own generated questions more accurately than those crafted by other models. On WSC+, GPT-4, the top-performing LLM, achieves an accuracy of 68.7%, significantly below the human benchmark of 95.1%.
Anthology ID:
2024.eacl-long.99
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1650–1671
URL:
https://aclanthology.org/2024.eacl-long.99
Cite (ACL):
Pardis Zahraei and Ali Emami. 2024. WSC+: Enhancing The Winograd Schema Challenge Using Tree-of-Experts. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650–1671, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
WSC+: Enhancing The Winograd Schema Challenge Using Tree-of-Experts (Zahraei & Emami, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.99.pdf