%0 Conference Proceedings
%T Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling
%A Lan, Ouyu
%A Huang, Xiao
%A Lin, Bill Yuchen
%A Jiang, He
%A Liu, Liyuan
%A Ren, Xiang
%Y Jurafsky, Dan
%Y Chai, Joyce
%Y Schluter, Natalie
%Y Tetreault, Joel
%S Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
%D 2020
%8 July
%I Association for Computational Linguistics
%C Online
%F lan-etal-2020-learning
%X Sequence labeling is a fundamental task for a range of natural language processing problems. In practice, its performance is largely influenced by annotation quality and quantity, while obtaining ground-truth labels is often costly. In many cases, ground-truth labels do not exist, but noisy annotations or annotations from different domains are accessible. In this paper, we propose a novel framework, Consensus Network (ConNet), that can be trained on annotations from multiple sources (e.g., crowd annotation, cross-domain data). It learns an individual representation for each source and dynamically aggregates source-specific knowledge with a context-aware attention module, leading to a model that reflects the agreement (consensus) among multiple sources. We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation. Extensive experimental results show that our model achieves significant improvements over existing methods in both settings. We also demonstrate that the method applies to various tasks and works with different encoders.
%R 10.18653/v1/2020.acl-main.193
%U https://aclanthology.org/2020.acl-main.193
%U https://doi.org/10.18653/v1/2020.acl-main.193
%P 2134-2146