Yue Guan
2022
Transkimmer: Transformer Learns to Layer-wise Skim
Yue Guan | Zhengyi Li | Jingwen Leng | Zhouhan Lin | Minyi Guo
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The Transformer architecture has become the de-facto model for many machine learning tasks in natural language processing and computer vision. As such, improving its computational efficiency becomes paramount. One major computational inefficiency of Transformer-based models is that they spend the same amount of computation in every layer. Prior works have proposed augmenting the Transformer model with the capability of skimming tokens to improve its computational efficiency. However, they lack effective, end-to-end optimization of the discrete skimming predictor. To address the above limitations, we propose the Transkimmer architecture, which learns to identify hidden-state tokens that are not required by each layer. The skimmed tokens are then forwarded directly to the final output, thus reducing the computation of the successive layers. The key idea in Transkimmer is to add a parameterized predictor before each layer that learns to make the skimming decision. We also propose adopting the reparameterization trick and adding a skim loss for the end-to-end training of Transkimmer. Transkimmer achieves a 10.97x average speedup on the GLUE benchmark over the vanilla BERT-base baseline with less than 1% accuracy degradation.
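The mechanism described in the abstract can be sketched in a few lines of PyTorch. The sketch below is an illustrative assumption, not the authors' released code: a per-layer predictor produces near-discrete keep/skim decisions through the Gumbel-softmax reparameterization, and a skim loss penalizes the fraction of tokens kept. The names `SkimPredictor` and `skim_loss` are hypothetical.

```python
# Minimal sketch (not the official Transkimmer code) of a per-layer skim
# predictor trained end-to-end with the Gumbel-softmax reparameterization
# trick, plus a skim loss; all module and parameter names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkimPredictor(nn.Module):
    """Predicts, for each token, a keep/skim decision before a Transformer layer."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // 2),
            nn.GELU(),
            nn.Linear(hidden_size // 2, 2),  # logits for [skim, keep]
        )

    def forward(self, hidden_states: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.scorer(hidden_states)                      # (batch, seq, 2)
        # Gumbel-softmax gives a differentiable, near-discrete decision;
        # hard=True forwards a one-hot choice but backpropagates soft gradients.
        decision = F.gumbel_softmax(logits, tau=tau, hard=True)  # (batch, seq, 2)
        return decision[..., 1]                                  # 1 = keep, 0 = skim

def skim_loss(keep_masks: list[torch.Tensor]) -> torch.Tensor:
    """Penalizes the average fraction of tokens kept across layers,
    encouraging the model to skim more tokens."""
    return torch.stack([m.mean() for m in keep_masks]).mean()
```

In training, the total objective would combine the task loss with the skim loss, e.g. `loss = task_loss + lam * skim_loss(masks)`, so the predictor is optimized jointly with the backbone; skimmed tokens would bypass the remaining layers and be forwarded to the output.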
2020
How Far Does BERT Look At: Distance-based Clustering and Analysis of BERT’s Attention
Yue Guan | Jingwen Leng | Chao Li | Quan Chen | Minyi Guo
Proceedings of the 28th International Conference on Computational Linguistics
Recent research on the multi-head attention mechanism, especially in pre-trained models such as BERT, has provided heuristics and clues for analyzing various aspects of the mechanism. As most prior research focuses on probing tasks or hidden states, previous works have identified some primitive patterns of attention-head behavior through heuristic analysis, but a systematic analysis of the attention patterns themselves is still lacking. In this work, we cluster the attention heatmaps into significantly different patterns through unsupervised clustering on top of a set of proposed features, which corroborates previous observations. We further analyze the functions of these patterns. In addition, our proposed features can be used to explain and calibrate different attention heads in Transformer models.
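A rough sketch of such a pipeline is shown below. It uses generic distance-based statistics (mean attended distance and attention entropy) as stand-in features rather than the paper's exact feature set; the model choice and library calls (`transformers`, `scikit-learn`) are assumptions about tooling, not the authors' code.

```python
# Sketch: cluster BERT attention heads by simple distance-based statistics
# of their attention heatmaps (illustrative features, not the paper's).
import torch
from sklearn.cluster import KMeans
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

features = []
for layer_attn in outputs.attentions:                # each: (1, heads, seq, seq)
    attn = layer_attn[0]                             # (heads, seq, seq)
    seq_len = attn.shape[-1]
    positions = torch.arange(seq_len, dtype=torch.float)
    dist = (positions[None, :] - positions[:, None]).abs()       # (seq, seq)
    # Mean attended distance: how far, on average, each head looks.
    mean_dist = (attn * dist).sum(dim=(-2, -1)) / seq_len        # (heads,)
    # Attention entropy: diffuse vs. focused heatmaps.
    entropy = -(attn * (attn + 1e-9).log()).sum(-1).mean(-1)     # (heads,)
    features.append(torch.stack([mean_dist, entropy], dim=-1))

X = torch.cat(features).numpy()                      # (layers * heads, 2)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(X)
print(labels.reshape(len(outputs.attentions), -1))   # cluster id per (layer, head)
```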
Prosody Features of Collaborative Construction in Mandarin Conversation
Yue Guan
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation
Co-authors
- Jingwen Leng 2
- Minyi Guo 2
- Chao Li 1
- Quan Chen 1
- Zhengyi Li 1