Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers

Kamil Bujel, Helen Yannakoudakis, Marek Rei


Abstract
We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. As transformers contain multiple layers of multi-head self-attention, information in the sentence gets distributed between many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module that explicitly encourages sharpness of attention weights can significantly outperform existing methods.
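To illustrate the general idea described in the abstract, below is a minimal, hypothetical sketch of a soft attention head over transformer token representations: the attention weights that pool tokens into a sentence vector double as zero-shot token-level scores, and an auxiliary regularizer encourages those weights to be sharp. The module name, hyperparameters, and the exact form of the sharpness term are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class SoftAttentionHead(nn.Module):
    """Hypothetical sketch: sentence classification via soft attention over
    token representations, where the attention weights are reused as
    zero-shot token-level labels."""

    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)           # unnormalized token score
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, token_states, attention_mask):
        # token_states: (batch, seq_len, hidden) from a transformer encoder
        # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
        scores = self.scorer(token_states).squeeze(-1)             # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)     # ignore padding
        weights = torch.softmax(scores, dim=-1)                    # soft attention
        sentence = torch.einsum("bs,bsh->bh", weights, token_states)
        logits = self.classifier(sentence)

        # Assumed sharpness regularizer: push the smallest weight towards 0 and
        # the largest towards 1 so attention concentrates on a few tokens,
        # which makes the weights more useful as token-level predictions.
        min_w = weights.masked_fill(attention_mask == 0, 1.0).min(dim=-1).values
        max_w = weights.max(dim=-1).values
        sharpness_loss = (min_w ** 2 + (max_w - 1.0) ** 2).mean()

        return logits, weights, sharpness_loss
```

In such a setup, only the sentence-level logits receive supervision; the returned per-token weights would be thresholded at inference time to obtain token labels.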
Anthology ID:
2021.repl4nlp-1.20
Volume:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
Venue:
RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
195–205
URL:
https://aclanthology.org/2021.repl4nlp-1.20
DOI:
10.18653/v1/2021.repl4nlp-1.20
Cite (ACL):
Kamil Bujel, Helen Yannakoudakis, and Marek Rei. 2021. Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 195–205, Online. Association for Computational Linguistics.
Cite (Informal):
Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers (Bujel et al., RepL4NLP 2021)
PDF:
https://aclanthology.org/2021.repl4nlp-1.20.pdf
Optional supplementary material:
2021.repl4nlp-1.20.OptionalSupplementaryMaterial.zip
Video:
https://aclanthology.org/2021.repl4nlp-1.20.mp4
Code:
bujol12/bert-seq-interpretability
Data:
FCE