Foreseeing the Benefits of Incidental Supervision

Hangfeng He, Mingyuan Zhang, Qiang Ning, Dan Roth


Abstract
Real-world applications often require improving models by leveraging *a range of cheap incidental supervision signals*. These could include partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations – all statistically associated with gold annotations, but not identical to them. However, we currently lack a principled way to measure the benefits of these signals to a given target task, and the common practice of evaluating these benefits is through exhaustive experiments with various models and hyperparameters. This paper studies whether we can, *in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through combinatorial experiments*. We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals. We demonstrate PABI’s effectiveness by quantifying the value added by various types of incidental signals to sequence tagging tasks. Experiments on named entity recognition (NER) and question answering (QA) show that PABI’s predictions correlate well with learning performance, providing a promising way to determine, ahead of learning, which supervision signals would be beneficial.
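To make the notion of "uncertainty reduction" concrete, here is a minimal illustrative sketch. It is *not* the paper's PABI formula (which is PAC-Bayesian motivated; see the paper for the actual definition) — it simply measures, via mutual information, how much a noisy incidental signal S reduces the Shannon entropy of a gold label Y. The toy joint distribution below is a made-up example of a binary tag observed through a 90%-accurate noisy annotator.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def uncertainty_reduction(joint):
    """I(Y; S) = H(Y) - H(Y|S) for a joint distribution {(y, s): p}.

    A larger value means the signal S tells us more about the gold
    label Y; 0 means the signal is uninformative.
    """
    ys = sorted({y for y, _ in joint})
    ss = sorted({s for _, s in joint})
    p_y = [sum(joint.get((y, s), 0.0) for s in ss) for y in ys]
    p_s = [sum(joint.get((y, s), 0.0) for y in ys) for s in ss]
    # H(Y|S) = sum_s p(s) * H(Y | S = s)
    h_y_given_s = 0.0
    for s, ps in zip(ss, p_s):
        if ps > 0:
            cond = [joint.get((y, s), 0.0) / ps for y in ys]
            h_y_given_s += ps * entropy(cond)
    return entropy(p_y) - h_y_given_s

# Toy example: a noisy binary signal that agrees with the gold label
# 90% of the time; Y is uniform, so H(Y) = 1 bit.
joint = {("ent", "ent"): 0.45, ("ent", "o"): 0.05,
         ("o", "ent"): 0.05, ("o", "o"): 0.45}
print(uncertainty_reduction(joint))  # roughly half a bit of the 1-bit uncertainty is removed
```

A perfectly clean signal would recover the full 1 bit, while an independent signal would yield 0; the paper's contribution is a unified measure in this spirit that applies across partial, noisy, constraint-based, and cross-task signals.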
Anthology ID:
2021.emnlp-main.134
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1782–1800
URL:
https://aclanthology.org/2021.emnlp-main.134
DOI:
10.18653/v1/2021.emnlp-main.134
PDF:
https://aclanthology.org/2021.emnlp-main.134.pdf
Code
CogComp/PABI (plus additional community code)
Data
QA-SRL Bank 2.0, SQuAD