Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model

Kuntal Kumar Pal, Chitta Baral


Abstract
Transformer-based pre-trained language models have been tremendously successful on most conventional NLP tasks, but they often struggle on tasks that require numerical understanding. Possible reasons include tokenizers and pre-training objectives that are not specifically designed to learn and preserve numeracy. Here we investigate the ability of the text-to-text transfer learning model (T5), which has outperformed its predecessors on conventional NLP tasks, to learn numeracy. We consider four numeracy tasks: numeration, magnitude order prediction, finding the minimum and maximum in a series, and sorting. We find that, although the T5 models perform reasonably well in the interpolation setting, they struggle considerably in the extrapolation setting across all four tasks.
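
To make the task setup concrete, the sketch below shows how the four numeracy tasks could be framed as text-to-text (input, target) pairs and probed with an off-the-shelf T5 checkpoint through the Hugging Face Transformers library. The prompt formats, example values, and the "t5-base" checkpoint are illustrative assumptions, not the authors' released setup (see kuntalkumarpal/t5numeracy for the official code).

# A minimal sketch, assuming the Hugging Face Transformers library and an
# off-the-shelf "t5-base" checkpoint; the prompt formats and task framings
# below are illustrative assumptions, not the authors' released setup.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical text-to-text (input, target) pairs for the four tasks.
examples = {
    "numeration":      ("convert to digits: thirty four", "34"),
    "magnitude order": ("order of magnitude: 5800", "3"),
    "min/max":         ("maximum: 12 7 45 3", "45"),
    "sorting":         ("sort ascending: 12 7 45 3", "3 7 12 45"),
}

for task, (source, target) in examples.items():
    inputs = tokenizer(source, return_tensors="pt")
    # Without task-specific fine-tuning, the predictions are not expected to
    # be correct; the paper fine-tunes T5 before evaluating each task.
    output_ids = model.generate(**inputs, max_new_tokens=16)
    prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{task:15s} | gold: {target!r} | T5: {prediction!r}")

In the paper's setting, interpolation evaluates on numbers drawn from the same range seen during training, while extrapolation evaluates on larger, unseen ranges; the sketch above only illustrates the input/output formulation, not the training or range splits.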
Anthology ID:
2021.findings-emnlp.265
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3095–3101
URL:
https://aclanthology.org/2021.findings-emnlp.265
DOI:
10.18653/v1/2021.findings-emnlp.265
Cite (ACL):
Kuntal Kumar Pal and Chitta Baral. 2021. Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3095–3101, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model (Pal & Baral, Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.265.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.265.mp4
Code:
kuntalkumarpal/t5numeracy
Data:
DROP