Yinghui Liu
2022
Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training
Taolin Zhang | Junwei Dong | Jianing Wang | Chengyu Wang | Ang Wang | Yinghui Liu | Jun Huang | Yong Li | Xiaofeng He
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Recently, knowledge-enhanced pre-trained language models (KEPLMs) have improved context-aware representations by learning from structured relations in knowledge bases and/or linguistic knowledge from syntactic or dependency analysis. Unlike English, the natural language processing (NLP) community lacks high-performing open-source Chinese KEPLMs to support various language understanding applications. In this paper, we revisit and advance the development of Chinese natural language understanding with a series of novel Chinese KEPLMs released in various parameter sizes, namely CKBERT (Chinese knowledge-enhanced BERT). Specifically, both relational and linguistic knowledge are effectively injected into CKBERT through two novel pre-training tasks, i.e., linguistic-aware masked language modeling and contrastive multi-hop relation modeling. Based on these two pre-training paradigms and our in-house TorchAccelerator, we have pre-trained base (110M), large (345M) and huge (1.3B) versions of CKBERT efficiently on GPU clusters. Experiments demonstrate that CKBERT consistently outperforms strong baselines for Chinese across various benchmark NLP tasks and model sizes.
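The abstract does not give implementation details for the contrastive multi-hop relation modeling objective; the snippet below is only a minimal sketch of how a contrastive objective of this kind is commonly formulated (an InfoNCE-style loss between an entity representation and sampled relation-path representations). All names, shapes, and the temperature value are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_relation_loss(anchor, positive, negatives, temperature=0.05):
    """InfoNCE-style contrastive loss (illustrative sketch, not the paper's code).

    anchor:    (d,)   representation of an entity mention in the sentence
    positive:  (d,)   representation of a linked (multi-hop) relation path
    negatives: (k, d) representations of k sampled unrelated relation paths
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature  # (1,)
    neg_sim = negatives @ anchor / temperature                         # (k,)

    # The positive pair is placed at index 0, so the target class is 0.
    logits = torch.cat([pos_sim, neg_sim]).unsqueeze(0)                # (1, 1 + k)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

# Toy usage with random vectors (hypothetical dimensions).
d, k = 768, 8
loss = contrastive_relation_loss(torch.randn(d), torch.randn(d), torch.randn(k, d))
```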