Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models

Mirelle Candida Bueno, Carlos Gemmell, Jeff Dalton, Roberto Lotufo, Rodrigo Nogueira

Abstract
Extrapolation, i.e., making predictions on sequences longer than those presented as training examples, remains a challenging problem for current deep learning models. Recent work shows that this limitation persists in state-of-the-art Transformer-based models. Most solutions to this problem use specific architectures or training methods that do not generalize to other tasks. We demonstrate that large language models can succeed in extrapolation without modifying their architecture or training procedure. Our experimental results show that generating step-by-step rationales and introducing markup tokens are both required for effective extrapolation. First, we induce a language model to produce step-by-step rationales before outputting the answer, in order to effectively communicate the task to the model. However, as sequences become longer, we find that current models struggle to keep track of token positions. To address this issue, we interleave output tokens with markup tokens that act as explicit positional and counting symbols. Our findings show how these two complementary approaches enable remarkable sequence extrapolation, and they highlight the inability of current architectures to generalize effectively without explicit surface-form guidance. Code available at https://anonymous.4open.science/r/induced-rationales-markup-tokens-0650/README.md
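The abstract describes two prompt-level techniques: inducing step-by-step rationales and interleaving output tokens with markup tokens that serve as explicit position and counting symbols. The following minimal Python sketch illustrates what such interleaving could look like; it is not the authors' released code (see the link above for that), and the `<i>` marker format, the copy task, and the prompt wording are illustrative assumptions.

```python
# Illustrative sketch, not the authors' implementation: builds a prompt that
# (1) interleaves each output token with an explicit position marker and
# (2) spells out a step-by-step rationale before the final answer.

def with_markup(tokens):
    """Interleave each token with a 1-based position marker, e.g. '<1> 4 <2> 7'."""
    return " ".join(f"<{i}> {t}" for i, t in enumerate(tokens, start=1))

def build_prompt(digits):
    """Prompt an LM to copy a digit sequence, one explicit step at a time."""
    rationale = " ".join(
        f"step {i}: copy '{t}'." for i, t in enumerate(digits, start=1)
    )
    return (
        f"Input: {with_markup(digits)}\n"
        f"Let's copy the sequence step by step. {rationale}\n"
        f"Answer: {with_markup(digits)}"
    )

if __name__ == "__main__":
    # Example: a training-style demonstration for the sequence 4 7 2 9.
    print(build_prompt(list("4729")))
```

The intuition, per the abstract, is that the explicit `<i>` symbols relieve the model of tracking positions implicitly, which is where it otherwise fails as sequences grow longer than those seen in training.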
Anthology ID:
2022.mathnlp-1.3
Volume:
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Deborah Ferreira, Marco Valentino, Andre Freitas, Sean Welleck, Moritz Schubotz
Venue:
MathNLP
Publisher:
Association for Computational Linguistics
Pages:
17–24
URL:
https://aclanthology.org/2022.mathnlp-1.3
DOI:
10.18653/v1/2022.mathnlp-1.3
Bibkey:
Cite (ACL):
Mirelle Candida Bueno, Carlos Gemmell, Jeff Dalton, Roberto Lotufo, and Rodrigo Nogueira. 2022. Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models. In Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP), pages 17–24, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models (Bueno et al., MathNLP 2022)
PDF:
https://aclanthology.org/2022.mathnlp-1.3.pdf
Video:
https://aclanthology.org/2022.mathnlp-1.3.mp4