Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5

Irina Bigoulaeva, Rachneet Singh Sachdeva, Harish Tayyar Madabushi, Aline Villavicencio, Iryna Gurevych


Abstract
We compare sequential fine-tuning with a multi-task learning model in a setting where we aim to boost performance on two tasks, one of which depends on the other. We test these models on the FigLang2022 shared task, which requires participants to predict natural language inference labels on figurative language along with corresponding textual explanations of the inference predictions. Our results show that while the multi-task model can be tuned to perform well on the first of the two target tasks, it performs less well on the second and additionally struggles with overfitting. Our findings show that simple sequential fine-tuning of text-to-text models is an extraordinarily powerful method of achieving cross-task knowledge transfer while simultaneously predicting multiple interdependent targets, so much so that our best model achieved the (tied) highest score on the task.
Anthology ID:
2022.flp-1.8
Volume:
Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Debanjan Ghosh, Beata Beigman Klebanov, Smaranda Muresan, Anna Feldman, Soujanya Poria, Tuhin Chakrabarty
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
54–60
URL:
https://aclanthology.org/2022.flp-1.8
DOI:
10.18653/v1/2022.flp-1.8
Bibkey:
Cite (ACL):
Irina Bigoulaeva, Rachneet Singh Sachdeva, Harish Tayyar Madabushi, Aline Villavicencio, and Iryna Gurevych. 2022. Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5. In Proceedings of the 3rd Workshop on Figurative Language Processing (FLP), pages 54–60, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 (Bigoulaeva et al., Fig-Lang 2022)
PDF:
https://aclanthology.org/2022.flp-1.8.pdf
Video:
https://aclanthology.org/2022.flp-1.8.mp4