Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers
Kamil Bujel | Helen Yannakoudakis | Marek Rei
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. Because transformers contain multiple layers of multi-head self-attention, information in the sentence gets distributed across many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module that explicitly encourages sharpness of attention weights can significantly outperform existing methods.
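As a rough illustration of the idea described in the abstract (a sketch under stated assumptions, not the authors' exact architecture), a soft attention layer over token representations can produce both a sentence-level prediction and per-token evidence scores; a sharpness term then pushes each token's evidence toward 0 or 1, so that individual tokens, rather than diffuse mixtures, end up carrying the label. The module name, the sigmoid-based scoring, and the min/max sharpness regulariser below are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SoftAttentionLabeler(nn.Module):
    """Illustrative sketch: soft attention over token vectors that doubles
    as a zero-shot token labeler. Names and details are assumptions, not
    the paper's exact implementation."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)     # unnormalised token evidence
        self.classify = nn.Linear(hidden_dim, 1)  # sentence-level head

    def forward(self, token_states, mask):
        # token_states: (batch, seq_len, hidden_dim); mask: (batch, seq_len)
        e = torch.sigmoid(self.score(token_states)).squeeze(-1)  # evidence in (0, 1)
        e = e * mask                                             # ignore padding
        a = e / e.sum(dim=1, keepdim=True).clamp(min=1e-8)       # soft attention weights
        sent = (a.unsqueeze(-1) * token_states).sum(dim=1)       # attention-weighted pooling
        sent_logit = self.classify(sent).squeeze(-1)             # sentence score

        # Sharpness regulariser (assumed form, in the spirit of the abstract):
        # push the smallest token evidence toward 0 and the largest toward 1,
        # so attention concentrates on the tokens that carry the label.
        e_for_min = e.masked_fill(mask == 0, 1.0)
        sharpness = (e_for_min.min(dim=1).values ** 2
                     + (e.max(dim=1).values - 1.0) ** 2).mean()

        # e doubles as zero-shot token-level scores at test time
        return sent_logit, e, sharpness
```

Training would then combine a sentence-level classification loss with the sharpness term (e.g. `bce(sent_logit, y) + lam * sharpness`), and at test time the token evidence `e` is thresholded to obtain token labels, even though no token-level supervision was ever provided.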