Start Simple: Progressive Difficulty Multitask Learning

Yunfei Luo, Yuyang Liu, Rukai Cai, Tauhidur Rahman


Abstract
The opaque nature of neural networks, often described as black boxes, poses significant challenges to understanding their learning mechanisms, limiting our ability to fully optimize and trust these models. Inspired by how humans learn, this paper proposes a novel neural network training strategy that employs multitask learning with progressive-difficulty subtasks, which we believe can shed light on the internal learning mechanisms of neural networks. We implemented this strategy across a range of NLP tasks, datasets, and neural network architectures and observed notable improvements in model performance. This suggests that neural networks may be able to extract common features and internalize shared representations across similar subtasks that differ in difficulty. Analyzing this strategy could lead to more interpretable and robust neural networks, enhancing both their performance and our understanding of their nature.
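The abstract describes the strategy only at a high level, so the PyTorch sketch below is an assumption-based illustration of one way progressive-difficulty multitask learning could be organized: a shared encoder with one head per subtask, where each training stage unlocks a harder subtask while keeping the easier ones active so shared representations are retained. The class names, dimensions, subtask label sizes, and the staged schedule are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class MultitaskModel(nn.Module):
    """Shared encoder with one classification head per subtask
    (illustrative architecture, not the authors' exact setup)."""
    def __init__(self, input_dim=128, hidden_dim=64, n_classes=(2, 4, 8)):
        super().__init__()
        # Shared representation learned across all subtasks.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One head per subtask; index 0 is the easiest subtask.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, c) for c in n_classes)

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

def progressive_train(model, loaders, epochs_per_stage=3, lr=1e-3):
    """Train on subtasks ordered easiest-to-hardest, unlocking one harder
    subtask per stage while keeping earlier subtasks in the objective."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for stage in range(len(loaders)):
        active = loaders[: stage + 1]  # easier subtasks stay active
        for _ in range(epochs_per_stage):
            for task_id, loader in enumerate(active):
                for x, y in loader:
                    opt.zero_grad()
                    loss = loss_fn(model(x, task_id), y)
                    loss.backward()
                    opt.step()

# Synthetic usage: three subtasks with 2-, 4-, and 8-way labels standing in
# for progressively harder versions of one task.
loaders = [
    DataLoader(TensorDataset(torch.randn(256, 128),
                             torch.randint(0, c, (256,))), batch_size=32)
    for c in (2, 4, 8)
]
model = MultitaskModel()
progressive_train(model, loaders)
```

Keeping the easier subtasks in the loss at every stage is one design choice for encouraging the encoder to internalize features shared across difficulty levels; the paper's actual curriculum may differ.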
Anthology ID:
2024.naacl-srw.7
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Yang (Trista) Cao, Isabel Papadimitriou, Anaelia Ovalle
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
48–55
URL:
https://aclanthology.org/2024.naacl-srw.7
Cite (ACL):
Yunfei Luo, Yuyang Liu, Rukai Cai, and Tauhidur Rahman. 2024. Start Simple: Progressive Difficulty Multitask Learning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop), pages 48–55, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Start Simple: Progressive Difficulty Multitask Learning (Luo et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-srw.7.pdf