%0 Conference Proceedings
%T Star-Transformer
%A Guo, Qipeng
%A Qiu, Xipeng
%A Liu, Pengfei
%A Shao, Yunfan
%A Xue, Xiangyang
%A Zhang, Zheng
%Y Burstein, Jill
%Y Doran, Christy
%Y Solorio, Thamar
%S Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
%D 2019
%8 June
%I Association for Computational Linguistics
%C Minneapolis, Minnesota
%F guo-etal-2019-star
%X Although Transformer has achieved great successes on many NLP tasks, its heavy structure with fully-connected attention connections leads to dependencies on large training data. In this paper, we present Star-Transformer, a lightweight alternative by careful sparsification. To reduce model complexity, we replace the fully-connected structure with a star-shaped topology, in which every two non-adjacent nodes are connected through a shared relay node. Thus, complexity is reduced from quadratic to linear, while preserving the capacity to capture both local composition and long-range dependency. The experiments on four tasks (22 datasets) show that Star-Transformer achieved significant improvements against the standard Transformer for the modestly sized datasets.
%R 10.18653/v1/N19-1133
%U https://aclanthology.org/N19-1133
%U https://doi.org/10.18653/v1/N19-1133
%P 1315-1325