2019
Improving Natural Language Understanding by Reverse Mapping Bytepair Encoding
Chaodong Tong | Huailiang Peng | Qiong Dai | Lei Jiang | Jianghua Huang
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
We propose a method called reverse mapping bytepair encoding, which maps named-entity information and other word-level linguistic features back to subwords during the encoding procedure of bytepair encoding (BPE). We apply this method to the Generative Pre-trained Transformer (OpenAI GPT) by adding a weighted linear layer after the embedding layer. We also propose a new model architecture, the multi-channel separate transformer, which enables training without parameter sharing. Evaluation on the Stories Cloze, RTE, SciTail and SST-2 datasets demonstrates the effectiveness of our approach.
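A minimal sketch of the idea described in the abstract, not the authors' implementation: each word-level feature (e.g. a named-entity tag) is repeated for every subword that BPE produces for that word, and a weighted linear layer after the embedding layer mixes the feature embedding into the subword embedding. All names, the bpe_split helper, and the dimensions are illustrative assumptions.

```python
# Hypothetical sketch of reverse-mapping word-level features onto BPE subwords
# and combining them with subword embeddings via a weighted linear layer.
import torch
import torch.nn as nn

def map_features_to_subwords(words, word_features, bpe_split):
    """Repeat each word-level feature for every subword the word is split into.

    bpe_split is an assumed callable, e.g. "Washington" -> ["Wash", "ington"].
    """
    subwords, subword_features = [], []
    for word, feat in zip(words, word_features):
        pieces = bpe_split(word)
        subwords.extend(pieces)
        subword_features.extend([feat] * len(pieces))
    return subwords, subword_features

class FeatureAugmentedEmbedding(nn.Module):
    """Subword embedding combined with a feature embedding through a linear layer."""
    def __init__(self, vocab_size, n_features, d_model):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.feat_emb = nn.Embedding(n_features, d_model)
        self.mix = nn.Linear(2 * d_model, d_model)  # weighted combination

    def forward(self, token_ids, feature_ids):
        h = torch.cat([self.tok_emb(token_ids), self.feat_emb(feature_ids)], dim=-1)
        return self.mix(h)
```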