Written Justifications are Key to Aggregate Crowdsourced Forecasts

Saketh Kotamraju, Eduardo Blanco


Abstract
This paper demonstrates that aggregating crowdsourced forecasts benefits from modeling the written justifications provided by forecasters. Our experiments show that the majority and weighted vote baselines are competitive, and that the written justifications are beneficial for calling a question throughout its life, except during the last quarter. We also conduct an error analysis that sheds light on the characteristics that make a justification unreliable.
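For readers unfamiliar with the baselines mentioned above, the sketch below illustrates how majority and weighted votes could aggregate individual forecasts into a single call for a question. This is a minimal illustration under assumed input formats and a hypothetical accuracy-based weighting scheme, not the released code from the repository linked below.

```python
from collections import Counter
from typing import Dict, List, Tuple


def majority_vote(forecasts: List[str]) -> str:
    """Call the question with the answer chosen by the most forecasters."""
    return Counter(forecasts).most_common(1)[0][0]


def weighted_vote(forecasts: List[Tuple[str, float]]) -> str:
    """Call the question with the answer whose supporters carry the most total
    weight (weights could, for example, reflect each forecaster's past accuracy)."""
    totals: Dict[str, float] = {}
    for answer, weight in forecasts:
        totals[answer] = totals.get(answer, 0.0) + weight
    return max(totals, key=totals.get)


# Example: three forecasters answer a yes/no question.
print(majority_vote(["yes", "no", "yes"]))                       # -> "yes"
print(weighted_vote([("yes", 0.4), ("no", 0.9), ("yes", 0.3)]))  # -> "no"
```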
Anthology ID:
2021.findings-emnlp.355
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4206–4216
URL:
https://aclanthology.org/2021.findings-emnlp.355
DOI:
10.18653/v1/2021.findings-emnlp.355
Cite (ACL):
Saketh Kotamraju and Eduardo Blanco. 2021. Written Justifications are Key to Aggregate Crowdsourced Forecasts. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4206–4216, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Written Justifications are Key to Aggregate Crowdsourced Forecasts (Kotamraju & Blanco, Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.355.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.355.mp4
Code:
saketh12/forecasting_emnlp2021