Kuang-Huei Lee


2021

Learning Task Sampling Policy for Multitask Learning
Dhanasekar Sundararaman | Henry Tsai | Kuang-Huei Lee | Iulia Turc | Lawrence Carin
Findings of the Association for Computational Linguistics: EMNLP 2021

It has been shown that training multi-task models with auxiliary tasks can improve target-task quality through cross-task transfer. However, the importance of each auxiliary task to the target task is typically not known a priori. While the importance weights of auxiliary tasks can be tuned manually, this becomes practically infeasible as the number of tasks grows. To address this, we propose a search method that assigns importance weights automatically. We formulate the search as a reinforcement learning problem and learn a task sampling schedule based on the evaluation accuracy of the multi-task model. Our empirical evaluation on XNLI and GLUE shows that our method outperforms uniform sampling and the corresponding single-task baseline.
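
The abstract does not spell out the policy parameterization or reward, so the following is only a minimal sketch of the general idea: a REINFORCE-style bandit whose logits define a sampling distribution over auxiliary tasks, updated with a reward assumed here to be the change in target-task evaluation accuracy. The class name `TaskSamplingPolicy`, the learning rate, the baseline update, and the toy `true_gain` simulation are all illustrative assumptions, not the paper's method.

```python
import numpy as np

class TaskSamplingPolicy:
    """REINFORCE-style bandit over auxiliary tasks (illustrative sketch,
    not the paper's exact algorithm)."""

    def __init__(self, num_tasks: int, lr: float = 0.1):
        self.logits = np.zeros(num_tasks)   # one learnable logit per task
        self.lr = lr
        self.baseline = 0.0                 # running reward baseline

    def probs(self) -> np.ndarray:
        # Numerically stable softmax over the task logits.
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

    def sample(self) -> int:
        # Draw the next task to train on from the current policy.
        return int(np.random.choice(len(self.logits), p=self.probs()))

    def update(self, task: int, reward: float) -> None:
        # Policy-gradient step: raise the sampled task's probability when
        # the reward (assumed: change in target-task eval accuracy) beats
        # the running baseline, lower it otherwise.
        advantage = reward - self.baseline
        grad = -self.probs()
        grad[task] += 1.0
        self.logits += self.lr * advantage * grad
        self.baseline = 0.9 * self.baseline + 0.1 * reward

if __name__ == "__main__":
    # Hypothetical toy simulation: task 2's batches help the target
    # task most on average, so the policy should learn to prefer it.
    rng = np.random.default_rng(0)
    true_gain = np.array([0.00, 0.01, 0.05, -0.02])
    policy = TaskSamplingPolicy(num_tasks=4)
    for _ in range(2000):
        t = policy.sample()
        reward = float(true_gain[t] + rng.normal(scale=0.02))
        policy.update(t, reward)
    print(np.round(policy.probs(), 3))  # probability mass shifts to task 2
```

In a real multi-task training loop, the reward would come from periodically evaluating the multi-task model on the target task, so the schedule adapts as tasks become more or less useful over the course of training.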