SemEval-2019 Task 10: Math Question Answering

Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Gabriel Stanovsky, Hannaneh Hajishirzi, Rik Koncel-Kedziorski


Abstract
We report on the SemEval 2019 task on math question answering. We provided a question set derived from Math SAT practice exams, including 2778 training questions and 1082 test questions. For a significant subset of these questions, we also provided SMT-LIB logical form annotations and an interpreter that could solve these logical forms. Systems were evaluated based on the percentage of correctly answered questions. The top system correctly answered 45% of the test questions, a considerable improvement over the 17% random guessing baseline.
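The SMT-LIB annotations pair a question with a machine-interpretable logical form that a solver can discharge. As a minimal illustrative sketch (not the task's released interpreter), the following Python snippet shows how a logical form for a question like "If 3x + 2 = 14, what is the value of x?" could be solved with the off-the-shelf Z3 SMT solver; the example equation and the choice of Z3 are assumptions made purely for illustration.

    # Sketch: solving an SMT-LIB logical form with Z3 (pip install z3-solver).
    # The equation below is a hypothetical example, not taken from the dataset.
    from z3 import Solver, parse_smt2_string, sat

    logical_form = """
    (declare-const x Int)
    (assert (= (+ (* 3 x) 2) 14))
    """

    solver = Solver()
    # parse_smt2_string returns the assertions encoded in the SMT-LIB text
    solver.add(parse_smt2_string(logical_form))
    if solver.check() == sat:
        model = solver.model()
        for decl in model.decls():
            print(decl.name(), "=", model[decl])  # prints: x = 4

The task organizers' own interpreter, distributed with the dataset in the repository linked under Code below, is the authoritative reference for how the released logical forms are evaluated.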
Anthology ID: S19-2153
Volume: Proceedings of the 13th International Workshop on Semantic Evaluation
Month: June
Year: 2019
Address: Minneapolis, Minnesota, USA
Editors: Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad
Venue: SemEval
SIG: SIGLEX
Publisher: Association for Computational Linguistics
Pages: 893–899
URL: https://aclanthology.org/S19-2153/
DOI: 10.18653/v1/S19-2153
Cite (ACL): Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Gabriel Stanovsky, Hannaneh Hajishirzi, and Rik Koncel-Kedziorski. 2019. SemEval-2019 Task 10: Math Question Answering. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 893–899, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Cite (Informal): SemEval-2019 Task 10: Math Question Answering (Hopkins et al., SemEval 2019)
PDF: https://aclanthology.org/S19-2153.pdf
Code: allenai/semeval-2019-task-10