@inproceedings{yu-etal-2021-fine,
    title = "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach",
    author = "Yu, Yue  and
      Zuo, Simiao  and
      Jiang, Haoming  and
      Ren, Wendi  and
      Zhao, Tuo  and
      Zhang, Chao",
    editor = "Toutanova, Kristina  and
      Rumshisky, Anna  and
      Zettlemoyer, Luke  and
      Hakkani-Tur, Dilek  and
      Beltagy, Iz  and
      Bethard, Steven  and
      Cotterell, Ryan  and
      Chakraborty, Tanmoy  and
      Zhou, Yichao",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.naacl-main.84/",
    doi = "10.18653/v1/2021.naacl-main.84",
    pages = "1063--1077",
}