Exploring Continual Learning of Compositional Generalization in NLI

Xiyan Fu, Anette Frank


Abstract
Compositional Natural Language Inference (NLI) has been explored to assess the true abilities of neural models to perform NLI. Yet, current evaluations assume that models have full access to all primitive inferences in advance, in contrast to humans, who continuously acquire inference knowledge. In this paper, we introduce the Continual Compositional Generalization in Inference (C2Gen NLI) challenge, where a model continuously acquires knowledge of constituting primitive inference tasks as a basis for compositional inferences. We explore how continual learning affects compositional generalization in NLI by designing a continual learning setup for compositional NLI inference tasks. Our experiments demonstrate that models fail to compositionally generalize in a continual scenario. To address this problem, we first benchmark various continual learning algorithms and verify their efficacy. We then further analyze C2Gen, focusing on how to order primitives and compositional inference types, and examining correlations between subtasks. Our analyses show that by learning subtasks continuously while observing their dependencies and increasing degrees of difficulty, continual learning can enhance compositional generalization ability.
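To make the setup concrete, here is a minimal, hypothetical sketch of sequential (continual) training over ordered NLI subtasks with experience replay, one standard continual learning baseline of the kind the abstract refers to. All names (SUBTASKS, load_subtask, train_step, eval_compositional) and the chosen subtask order are illustrative assumptions, not the paper's implementation.

```python
# A minimal, hypothetical sketch of continual training over ordered NLI subtasks
# with experience replay. Illustrative only; not the paper's code or data.
import random

SUBTASKS = ["lexical", "boolean", "quantifier", "composite"]  # assumed difficulty order
REPLAY_CAPACITY = 500  # maximum number of past examples kept for rehearsal

def continual_train(model, load_subtask, train_step, eval_compositional, epochs=3):
    replay_buffer = []  # stores examples from earlier subtasks for rehearsal
    for name in SUBTASKS:
        data = load_subtask(name)  # primitive (or composite) inference examples
        for _ in range(epochs):
            for example in data:
                # mix the current example with a few replayed ones from earlier subtasks
                batch = [example] + random.sample(replay_buffer, min(4, len(replay_buffer)))
                train_step(model, batch)  # one gradient step on current + replayed examples
        # keep a small sample of this subtask for later rehearsal
        replay_buffer.extend(random.sample(data, min(REPLAY_CAPACITY, len(data))))
        replay_buffer = replay_buffer[-REPLAY_CAPACITY:]
        # after each stage, probe compositional generalization on held-out composite inferences
        print(name, eval_compositional(model))
```

In such a loop, the order of SUBTASKS and what the replay buffer retains are the levers the abstract's analyses point to: respecting subtask dependencies and increasing difficulty is what the paper reports as helpful for compositional generalization.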
Anthology ID: 2024.tacl-1.51
Volume: Transactions of the Association for Computational Linguistics, Volume 12
Year: 2024
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 912–932
URL: https://aclanthology.org/2024.tacl-1.51
DOI: 10.1162/tacl_a_00680
Cite (ACL): Xiyan Fu and Anette Frank. 2024. Exploring Continual Learning of Compositional Generalization in NLI. Transactions of the Association for Computational Linguistics, 12:912–932.
Cite (Informal): Exploring Continual Learning of Compositional Generalization in NLI (Fu & Frank, TACL 2024)
PDF: https://aclanthology.org/2024.tacl-1.51.pdf