Weakly Supervised Formula Learner for Solving Mathematical Problems
Authors: Yuxuan Wu, Hideki Nakayama
Date: October 2022
Venue: Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022)
Editors: Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Publisher: International Committee on Computational Linguistics
Location: Gyeongju, Republic of Korea
Type: conference publication
Abstract: The mathematical reasoning task is a subset of the natural language question answering task. Existing work suggests solving this task with a two-phase approach, in which a model first predicts a formula from the question and then computes the answer from that formula. This approach achieves desirable performance, but its reliance on annotated formulas as intermediate labels throughout training limits its applicability. In this work, we put forward the idea of enabling models to learn optimal formulas autonomously. We propose the Weakly Supervised Formula Learner, a learning framework that drives formula exploration with weak supervision from the final answers to mathematical problems. Our experiments are conducted on two representative mathematical reasoning datasets, MathQA and Math23K. On MathQA, our method outperforms baselines trained on complete yet imperfect formula annotations; on Math23K, it outperforms other weakly supervised learning methods.
Citation key: wu-nakayama-2022-weakly
URL: https://aclanthology.org/2022.coling-1.150
Pages: 1743–1752