Liu Zhuang


2021

A Robustly Optimized BERT Pre-training Approach with Post-training
Liu Zhuang | Lin Wayne | Shi Ya | Zhao Jun
Proceedings of the 20th Chinese National Conference on Computational Linguistics

In this paper we present a ‘pre-training’ + ‘post-training’ + ‘fine-tuning’ three-stage paradigm, a supplementary framework to the standard ‘pre-training’ + ‘fine-tuning’ language model approach. Based on this three-stage paradigm, we present a language model named PPBERT. Compared with the original BERT architecture, which follows the standard two-stage paradigm, we do not fine-tune the pre-trained model directly; rather, we first post-train it on a domain- or task-related dataset, which helps to better incorporate task-aware and domain-aware knowledge into the pre-trained model and to reduce bias from the training dataset. Extensive experimental results indicate that the proposed model improves the performance of the baselines on 24 NLP tasks, including eight GLUE benchmarks, eight SuperGLUE benchmarks, and six extractive question answering benchmarks. More remarkably, the proposed model is flexible and pluggable: the post-training approach can be plugged into other PLMs that are based on BERT. Extensive ablations further validate its effectiveness and state-of-the-art (SOTA) performance. The open-source code, pre-trained models, and post-trained models are publicly available.
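To make the three-stage paradigm concrete, here is a minimal sketch of the general idea using the Hugging Face transformers library: a pre-trained BERT checkpoint is post-trained with the masked-LM objective on a domain/task-related corpus, and the resulting encoder then initializes the downstream fine-tuning model. This is not the authors' PPBERT code; the checkpoint name, corpus file, and hyperparameters are illustrative assumptions.

```python
# Sketch of 'pre-training' + 'post-training' + 'fine-tuning' (assumed setup,
# not the paper's released implementation).
from datasets import load_dataset
from transformers import (
    AutoTokenizer, AutoModelForMaskedLM, AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# --- Stage 2: post-training = continued masked-LM training on domain text ---
# "domain_corpus.txt" is a hypothetical unlabeled domain/task-related corpus.
domain_corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = domain_corpus["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
post_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="post_trained", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    # Standard BERT-style masking objective for the post-training stage.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                  mlm=True,
                                                  mlm_probability=0.15),
)
post_trainer.train()
post_trainer.save_model("post_trained")

# --- Stage 3: fine-tuning starts from the post-trained encoder, not the
# original pre-trained checkpoint ---
clf = AutoModelForSequenceClassification.from_pretrained("post_trained",
                                                         num_labels=2)
# ...fine-tune `clf` on the labeled downstream task with another Trainer.
```

Because the post-training stage only continues the original masked-LM objective on new data, the same recipe can, in principle, be plugged in front of fine-tuning for any BERT-based PLM, which matches the "pluggable" claim in the abstract.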