John Kender


2020

Predictive Model Selection for Transfer Learning in Sequence Labeling Tasks
Parul Awasthy | Bishwaranjan Bhattacharjee | John Kender | Radu Florian
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing

Transfer learning is a popular technique to learn a task using less training data and fewer compute resources. However, selecting the correct source model for transfer learning is a challenging task. We demonstrate a novel predictive method that determines which existing source model would minimize error for transfer learning to a given target. This technique does not require learning for prediction, and avoids the computational costs of trial-and-error. We have evaluated this technique on nine datasets across diverse domains, including newswire, user forums, air flight booking, and cybersecurity news. We show that it performs better than existing techniques such as fine-tuning over vanilla BERT, or curriculum learning over the largest dataset on top of BERT, resulting in average F1 score gains in excess of 3%. Moreover, our technique consistently selects the best model using fewer tries.
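The abstract does not spell out the paper's selection criterion. As a rough illustration of the general idea of ranking candidate source models for a target task without trial-and-error fine-tuning, the following minimal Python sketch scores each candidate source corpus against the target corpus by a simple unigram-distribution distance and returns the candidates in ranked order. The corpora, model names, and the divergence-based proxy are hypothetical and are not the method described in the paper.

    # Hypothetical sketch: proxy-based source-model selection.
    # NOT the paper's predictive method; it only shows the general shape of
    # ranking candidate source models for transfer learning without running
    # trial-and-error fine-tuning, using Jensen-Shannon divergence between
    # unigram distributions as a stand-in proxy score.

    from collections import Counter
    import math


    def unigram_dist(sentences):
        """Normalized unigram frequency distribution for a tokenized corpus."""
        counts = Counter(tok.lower() for sent in sentences for tok in sent)
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()}


    def js_divergence(p, q):
        """Jensen-Shannon divergence between two unigram distributions."""
        vocab = set(p) | set(q)
        m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}

        def kl(a):
            return sum(a[t] * math.log2(a[t] / m[t]) for t in vocab if a.get(t, 0.0) > 0)

        return 0.5 * kl(p) + 0.5 * kl(q)


    def rank_source_models(target_corpus, source_corpora):
        """Rank candidate source corpora by closeness to the target corpus.

        source_corpora: dict mapping a source-model name to its training corpus.
        Returns names sorted from most to least similar (lowest divergence first).
        """
        target_dist = unigram_dist(target_corpus)
        scores = {name: js_divergence(target_dist, unigram_dist(corpus))
                  for name, corpus in source_corpora.items()}
        return sorted(scores, key=scores.get)


    # Toy usage with made-up corpora: pick which source model to transfer from.
    target = [["book", "a", "flight", "to", "boston"]]
    sources = {
        "newswire-ner": [["the", "senate", "passed", "the", "bill"]],
        "atis-slots":   [["show", "flights", "from", "denver", "to", "boston"]],
    }
    print(rank_source_models(target, sources))  # ['atis-slots', 'newswire-ner']

In this toy setup the air-travel source corpus ranks above the newswire one for the flight-booking target, which is the kind of decision a predictive selection method makes before any fine-tuning is run.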