MetaWeighting: Learning to Weight Tasks in Multi-Task Learning

Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Pengtao Xie


Abstract
Task weighting, which assigns weights to the constituent tasks during training, significantly affects the performance of Multi-Task Learning (MTL); accordingly, it has recently attracted explosive interest. However, existing task weighting methods assign weights based only on the training loss, ignoring the gap between the training loss and the generalization loss, which degrades MTL's performance. To address this issue, this paper proposes a novel task weighting algorithm, referred to as MetaWeighting, which automatically weights the tasks via a learning-to-learn paradigm. Extensive experiments validate the superiority of the proposed method on multi-task text classification.
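To make the learning-to-learn idea concrete, below is a minimal PyTorch sketch of meta-learned task weighting: task weights are updated by differentiating a held-out validation loss through one SGD step on the weighted training loss, so the weights target generalization rather than the training loss alone. This is a generic one-step bilevel scheme under our own assumptions, not the authors' exact MetaWeighting procedure; all names (MultiTaskNet, lr_inner, the toy random batches) are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_tasks, d_in, d_hid, lr_inner = 3, 16, 32, 0.1

class MultiTaskNet(nn.Module):
    """Shared encoder with one classification head per task (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(d_hid, 2) for _ in range(n_tasks)])
    def forward(self, x, t):
        return self.heads[t](self.encoder(x))

model = MultiTaskNet()
params = list(model.parameters())
weight_logits = torch.zeros(n_tasks, requires_grad=True)  # learned task weights
weight_opt = torch.optim.Adam([weight_logits], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def toy_batch(n=8):
    """Random stand-ins for per-task train/validation batches."""
    xs = [torch.randn(n, d_in) for _ in range(n_tasks)]
    ys = [torch.randint(0, 2, (n,)) for _ in range(n_tasks)]
    return xs, ys

for step in range(200):
    x_tr, y_tr = toy_batch()
    x_va, y_va = toy_batch()
    w = torch.softmax(weight_logits, dim=0)  # positive weights summing to 1

    # Per-task training gradients g_t at the current parameters theta.
    losses = [loss_fn(model(x_tr[t], t), y_tr[t]) for t in range(n_tasks)]
    grads = [torch.autograd.grad(l, params, allow_unused=True) for l in losses]

    # Inner SGD step on the weighted training loss:
    # theta' = theta - lr_inner * sum_t w_t * g_t.
    with torch.no_grad():
        for i, p in enumerate(params):
            direction = torch.zeros_like(p)
            for t in range(n_tasks):
                if grads[t][i] is not None:  # other tasks' heads get no grad
                    direction += w[t].item() * grads[t][i]
            p -= lr_inner * direction

    # Validation gradient at theta'. Because g_t does not depend on w, the
    # one-step meta-gradient is exact: dL_val/dw_t = -lr_inner * <g_val, g_t>.
    val_loss = sum(loss_fn(model(x_va[t], t), y_va[t]) for t in range(n_tasks))
    g_val = torch.autograd.grad(val_loss, params, allow_unused=True)
    dots = []
    for t in range(n_tasks):
        d = torch.zeros(())
        for gv, gt in zip(g_val, grads[t]):
            if gv is not None and gt is not None:
                d += (gv * gt).sum()
        dots.append(-lr_inner * d)
    meta_grad = torch.stack(dots)

    # Push the meta-gradient through the softmax to update the logits.
    weight_opt.zero_grad()
    (meta_grad.detach() * w).sum().backward()
    weight_opt.step()

print("learned task weights:", torch.softmax(weight_logits, 0).tolist())
```

A design note on this sketch: since the inner update is a single SGD step and the per-task gradients do not depend on the weights, the meta-gradient with respect to each weight reduces to a dot product between the post-update validation gradient and that task's training gradient, so no second-order terms are needed; multi-step inner loops or other inner optimizers would require differentiating through the full update.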
Anthology ID:
2022.findings-acl.271
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3436–3448
URL:
https://aclanthology.org/2022.findings-acl.271
DOI:
10.18653/v1/2022.findings-acl.271
Cite (ACL):
Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, and Pengtao Xie. 2022. MetaWeighting: Learning to Weight Tasks in Multi-Task Learning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3436–3448, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
MetaWeighting: Learning to Weight Tasks in Multi-Task Learning (Mao et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.271.pdf
Software:
2022.findings-acl.271.software.zip