Joshua Ainslie


RealFormer: Transformer Likes Residual Attention
Ruining He | Anirudh Ravula | Bhargav Kanagal | Joshua Ainslie
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

ReadTwice: Reading Very Large Documents with Memories
Yury Zemlyanskiy | Joshua Ainslie | Michiel de Jong | Philip Pham | Ilya Eckstein | Fei Sha
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Knowledge-intensive tasks such as question answering often require assimilating information from different sections of large inputs such as books or article collections. We propose ReadTwice, a simple and effective technique that combines several strengths of prior approaches to model long-range dependencies with Transformers. The main idea is to read text in small segments, in parallel, summarizing each segment into a memory table to be used in a second read of the text. We show that the method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books.
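A minimal sketch of the two-pass idea described in the abstract: segments are encoded in parallel, each is pooled into a memory entry, and a second read lets every segment attend over the shared memory table. Module names, sizes, and the mean-pooling summary are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of the ReadTwice idea (PyTorch); details are assumptions.
import torch
import torch.nn as nn


class ReadTwiceSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.first_read = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.second_read = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, segments):
        # segments: (num_segments, segment_len) token ids from one long document.
        x = self.embed(segments)                  # (S, L, D)
        first = self.first_read(x)                # first read: segments in parallel
        memory = first.mean(dim=1, keepdim=True)  # one summary vector per segment
        table = memory.transpose(0, 1)            # (1, S, D) shared memory table
        table = table.expand(segments.size(0), -1, -1)
        # Second read: each segment attends over the memory table, so information
        # from distant segments can influence local token representations.
        enriched, _ = self.mem_attn(first, table, table)
        return self.second_read(first + enriched)
```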

Improving Compositional Generalization in Classification Tasks via Structure Annotations
Juyong Kim | Pradeep Ravikumar | Joshua Ainslie | Santiago Ontanon
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Compositional generalization is the ability to generalize systematically to a new data distribution by combining known components. Although humans seem to have a great ability to generalize compositionally, state-of-the-art neural models struggle to do so. In this work, we study compositional generalization in classification tasks and present two main contributions. First, we study ways to convert a natural language sequence-to-sequence dataset to a classification dataset that also requires compositional generalization. Second, we show that providing structural hints (specifically, providing parse trees and entity links as attention masks for a Transformer model) helps compositional generalization.
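A minimal sketch of what "structural hints as attention masks" can look like in practice: a boolean mask built from constituent spans and entity links that restricts which token pairs may attend to each other. The span/link format here is a hypothetical simplification, not the paper's data format.

```python
# Hedged sketch: turning structure annotations into a Transformer attention mask.
import numpy as np


def structural_attention_mask(seq_len, spans, entity_links):
    """spans: list of (start, end) parse-tree constituents (end exclusive).
    entity_links: list of (i, j) token index pairs mentioning the same entity.
    Returns a (seq_len, seq_len) boolean mask; True means attention is allowed."""
    mask = np.eye(seq_len, dtype=bool)      # every token may attend to itself
    for start, end in spans:
        mask[start:end, start:end] = True   # tokens attend within their constituent
    for i, j in entity_links:
        mask[i, j] = mask[j, i] = True      # linked entity mentions attend to each other
    return mask


# Example: a 3-token input with one 2-token constituent and no entity links.
print(structural_attention_mask(3, spans=[(0, 2)], entity_links=[]))
```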


ETC: Encoding Long and Structured Inputs in Transformers
Joshua Ainslie | Santiago Ontanon | Chris Alberti | Vaclav Cvicek | Zachary Fisher | Philip Pham | Anirudh Ravula | Sumit Sanghai | Qifan Wang | Li Yang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Transformer models have advanced the state of the art in many Natural Language Processing (NLP) tasks. In this paper, we present a new Transformer architecture, “Extended Transformer Construction” (ETC), that addresses two key challenges of standard Transformer architectures, namely scaling input length and encoding structured inputs. To scale attention to longer inputs, we introduce a novel global-local attention mechanism between global tokens and regular input tokens. We also show that combining global-local attention with relative position encodings and a “Contrastive Predictive Coding” (CPC) pre-training objective allows ETC to encode structured inputs. We achieve state-of-the-art results on four natural language datasets requiring long and/or structured inputs.
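A minimal sketch of a global-local attention pattern in the spirit of ETC: global tokens attend everywhere, all tokens attend to the global tokens, and long-input tokens otherwise attend only within a local window. The window size and token layout are illustrative assumptions; the actual ETC implementation also factors the attention computation for efficiency rather than materializing a full mask.

```python
# Hedged sketch of a global-local attention mask; parameters are assumptions.
import numpy as np


def global_local_mask(num_global, num_long, local_radius):
    n = num_global + num_long
    mask = np.zeros((n, n), dtype=bool)
    mask[:num_global, :] = True          # global tokens attend to all tokens
    mask[:, :num_global] = True          # all tokens attend to global tokens
    for i in range(num_long):            # long-input tokens: local sliding window
        lo = max(0, i - local_radius)
        hi = min(num_long, i + local_radius + 1)
        mask[num_global + i, num_global + lo:num_global + hi] = True
    return mask


# 2 global tokens, 8 long-input tokens, local window of radius 1.
print(global_local_mask(2, 8, 1).astype(int))
```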