Jiezhong Qiu
2022
GLM: General Language Model Pretraining with Autoregressive Blank Infilling
Zhengxiao Du | Yujie Qian | Xiao Liu | Ming Ding | Jiezhong Qiu | Zhilin Yang | Jie Tang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (NLU), unconditional generation, and conditional generation. We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25× parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
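The abstract describes the key ingredients of GLM's pretraining objective: spans are blanked out of the text, predicted autoregressively in an arbitrary order, and indexed with 2D positional encodings. The following is a minimal, illustrative sketch of how such a training example might be assembled; the function name build_glm_example, the toy token list, and the [MASK]/[START]/[END] symbols are assumptions for illustration, not the authors' code.

```python
# Sketch: constructing a GLM-style blank-infilling example with 2D positions.
import random

def build_glm_example(tokens, spans):
    """Return (input tokens, 2D positions, Part B targets) for blank infilling.

    tokens: list of source tokens
    spans:  list of (start, end) index pairs to blank out (end exclusive)
    """
    spans = sorted(spans)

    # Part A: the corrupted text, each span replaced by a single [MASK].
    part_a, mask_pos = [], {}
    cursor = 0
    for i, (start, end) in enumerate(spans):
        part_a += tokens[cursor:start]
        mask_pos[i] = len(part_a)          # where this span's [MASK] sits in Part A
        part_a.append("[MASK]")
        cursor = end
    part_a += tokens[cursor:]
    pos1_a = list(range(len(part_a)))      # 1st positional dim: index in corrupted text
    pos2_a = [0] * len(part_a)             # 2nd positional dim is 0 throughout Part A

    # Part B: the blanked spans, generated autoregressively in a shuffled order.
    order = list(range(len(spans)))
    random.shuffle(order)                  # arbitrary span prediction order
    part_b, pos1_b, pos2_b, targets = [], [], [], []
    for i in order:
        start, end = spans[i]
        span_tokens = tokens[start:end]
        part_b += ["[START]"] + span_tokens          # inputs (shifted right)
        targets += span_tokens + ["[END]"]           # loss is computed on Part B only
        pos1_b += [mask_pos[i]] * (len(span_tokens) + 1)   # 1st dim: position of the blank
        pos2_b += list(range(1, len(span_tokens) + 2))     # 2nd dim: intra-span position

    positions_2d = list(zip(pos1_a + pos1_b, pos2_a + pos2_b))
    return part_a + part_b, positions_2d, targets

inp, pos2d, tgt = build_glm_example(["x1", "x2", "x3", "x4", "x5", "x6"], [(1, 2), (3, 5)])
print(inp)
print(pos2d)
print(tgt)
```

Varying the number and lengths of the sampled spans is what lets the same objective cover NLU-style short blanks as well as conditional and unconditional generation with long blanks.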
2020
Blockwise Self-Attention for Long Document Understanding
Jiezhong Qiu | Hao Ma | Omer Levy | Wen-tau Yih | Sinong Wang | Jie Tang
Findings of the Association for Computational Linguistics: EMNLP 2020
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.
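The abstract's core idea is to restrict each attention head to a block-structured subset of the attention matrix, with different block patterns giving short- or long-range heads. Below is a minimal numpy sketch of that idea under stated assumptions: single-head attention, a dense boolean mask used only to visualize the pattern, and a hypothetical per-head block permutation; in the actual model only the permitted blocks would be materialized, which is where the memory and time savings come from.

```python
# Sketch: blockwise sparse attention pattern in the spirit of BlockBERT.
import numpy as np

def block_sparse_mask(seq_len, num_blocks, permutation):
    """Mask where query block i may only attend to key block permutation[i]."""
    assert seq_len % num_blocks == 0
    block = seq_len // num_blocks
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i, j in enumerate(permutation):
        mask[i * block:(i + 1) * block, j * block:(j + 1) * block] = True
    return mask

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to the allowed blocks."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)            # block out disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

seq_len, dim, num_blocks = 8, 4, 2
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((seq_len, dim)) for _ in range(3))

# Identity permutation -> local (short-range) heads; a shifted permutation -> long-range heads.
local = masked_attention(q, k, v, block_sparse_mask(seq_len, num_blocks, [0, 1]))
long_range = masked_attention(q, k, v, block_sparse_mask(seq_len, num_blocks, [1, 0]))
print(local.shape, long_range.shape)
```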