%0 Conference Proceedings
%T Cascaded Semantic and Positional Self-Attention Network for Document Classification
%A Jiang, Juyong
%A Zhang, Jie
%A Zhang, Kai
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Findings of the Association for Computational Linguistics: EMNLP 2020
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F jiang-etal-2020-cascaded
%X Transformers have shown great success in learning representations for language modelling. However, an open challenge remains: how to systematically aggregate semantic information (word embeddings) with positional (or temporal) information (word order). In this work, we propose a new architecture to aggregate the two sources of information, a cascaded semantic and positional self-attention network (CSPAN), in the context of document classification. The CSPAN uses a semantic self-attention layer cascaded with a Bi-LSTM to process the semantic and positional information sequentially, and then adaptively combines them through a residual connection. Compared with commonly used positional encoding schemes, CSPAN can exploit the interaction between semantics and word positions in a more interpretable and adaptive manner, and the classification performance can be notably improved while preserving a compact model size and a high convergence rate. We evaluate the CSPAN model on several benchmark datasets for document classification with careful ablation studies, and demonstrate encouraging results compared with the state of the art.
%R 10.18653/v1/2020.findings-emnlp.59
%U https://aclanthology.org/2020.findings-emnlp.59
%U https://doi.org/10.18653/v1/2020.findings-emnlp.59
%P 669-677