%0 Conference Proceedings
%T Sarcasm Detection is Way Too Easy! An Empirical Comparison of Human and Machine Sarcasm Detection
%A Abu Farha, Ibrahim
%A Wilson, Steven
%A Oprea, Silviu
%A Magdy, Walid
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Findings of the Association for Computational Linguistics: EMNLP 2022
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F abu-farha-etal-2022-sarcasm
%X Recently, author-annotated sarcasm datasets, which focus on intended, rather than perceived sarcasm, have been introduced. Although datasets collected using first-party annotation have important benefits, there is no comparison of human and machine performance on these new datasets. In this paper, we collect new annotations to provide human-level benchmarks for these first-party annotated sarcasm tasks in both English and Arabic, and compare the performance of human annotators to that of state-of-the-art sarcasm detection systems. Our analysis confirms that sarcasm detection is extremely challenging, with individual humans performing close to or slightly worse than the best trained models. With majority voting, however, humans are able to achieve the best results on all tasks. We also perform error analysis, finding that some of the most challenging examples are those that require additional context. We also highlight common features and patterns used to express sarcasm in English and Arabic such as idioms and proverbs. We suggest that to better capture sarcasm, future sarcasm detection datasets and models should focus on representing conversational and cultural context while leveraging world knowledge and common sense.
%R 10.18653/v1/2022.findings-emnlp.387
%U https://aclanthology.org/2022.findings-emnlp.387
%U https://doi.org/10.18653/v1/2022.findings-emnlp.387
%P 5284-5295