2024
MEMORY-VQ: Compression for Tractable Internet-Scale Memory
Yury Zemlyanskiy | Michiel de Jong | Luke Vilnis | Santiago Ontanon | William Cohen | Sumit Sanghai | Joshua Ainslie
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Retrieval augmentation is a powerful but expensive method to make language models more knowledgeable about the world. Memory-based methods like LUMEN (de Jong et al., 2023a) pre-compute token representations for retrieved passages to drastically speed up inference. However, memory also leads to much greater storage requirements from storing pre-computed representations. We propose MEMORY-VQ, a new method to reduce storage requirements of memory-augmented models without sacrificing performance. Our method uses a vector quantization variational autoencoder (VQ-VAE) to compress token representations. We apply MEMORY-VQ to the LUMEN model to obtain LUMEN-VQ, a memory model that achieves a 16x compression rate with comparable performance on the KILT benchmark. LUMEN-VQ enables practical retrieval augmentation even for extremely large retrieval corpora.
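A minimal NumPy sketch of the vector-quantization idea behind the compression step: each pre-computed memory vector is replaced by the index of its nearest codebook entry and looked up again at inference time. The codebook shape, integer dtype, and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def vq_compress(token_reps, codebook):
    """Replace each memory vector with the index of its nearest codebook entry.

    token_reps: (num_tokens, dim) pre-computed memory representations.
    codebook:   (num_codes, dim) learned VQ-VAE codebook (assumed here).
    Integer codes need far less storage than the original float vectors.
    """
    # Squared distance between every token vector and every code.
    dists = ((token_reps[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).astype(np.uint16)  # one small int per token

def vq_decompress(codes, codebook):
    """Reconstruct approximate memory representations at inference time."""
    return codebook[codes]
```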
2023
FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference
Michiel de Jong | Yury Zemlyanskiy | Joshua Ainslie | Nicholas FitzGerald | Sumit Sanghai | Fei Sha | William Cohen
Findings of the Association for Computational Linguistics: ACL 2023
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, the architecture used for FiD was chosen by making minimal modifications to a standard T5 model, which our analysis shows to be highly suboptimal for a retrieval-augmented model. In particular, FiD allocates the bulk of FLOPs to the encoder, while the majority of inference time results from memory bandwidth constraints in the decoder. We propose two simple changes to the FiD architecture to alleviate memory bandwidth constraints, and speed up inference by 7x. This allows us to use a much larger decoder at modest cost. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
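The encoder/decoder imbalance described above can be made concrete with a rough, back-of-envelope cost model. Every number below (passage count, model size, hidden width, precision) is a placeholder assumption for illustration, not a measurement from the paper.

```python
# Rough, illustrative cost model for Fusion-in-Decoder inference.
n_passages, passage_len, answer_len = 100, 256, 32
d_model = 1024          # assumed hidden size
params = 400e6          # assumed parameter count per stack

encoder_tokens = n_passages * passage_len   # processed in one parallel pass
decoder_steps = answer_len                  # generated one token at a time

# FLOPs scale with tokens * parameters: the encoder sees far more tokens.
print(f"encoder/decoder token ratio: {encoder_tokens / decoder_steps:.0f}x")

# But each decoder step must stream the model weights and the cross-attention
# keys/values for all retrieved tokens from memory, so decoding time is
# dominated by memory bandwidth rather than arithmetic.
bytes_per_step = 2 * (params + encoder_tokens * d_model)   # bf16 reads, rough
print(f"~{bytes_per_step / 1e9:.1f} GB read per generated token")
```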
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
Joshua Ainslie | James Lee-Thorp | Michiel de Jong | Yury Zemlyanskiy | Federico Lebron | Sumit Sanghai
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
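A small sketch of the checkpoint-conversion idea: key and value projection heads are pooled into a smaller number of shared heads, which the query heads then use in groups. Mean-pooling heads within a group follows the uptraining recipe described in the paper, but the weight layout and function name here are assumptions.

```python
import numpy as np

def convert_mha_to_gqa(k_heads, v_heads, num_groups):
    """Convert multi-head KV projections into grouped-query attention form.

    k_heads, v_heads: (num_heads, head_dim, d_model) projection weights.
    Each group of key/value heads is mean-pooled into one shared head;
    at attention time, query head h then uses KV group h // group_size.
    """
    num_heads = k_heads.shape[0]
    assert num_heads % num_groups == 0
    group_size = num_heads // num_groups
    k_gqa = k_heads.reshape(num_groups, group_size, *k_heads.shape[1:]).mean(axis=1)
    v_gqa = v_heads.reshape(num_groups, group_size, *v_heads.shape[1:]).mean(axis=1)
    return k_gqa, v_gqa   # num_groups shared key-value heads
```

With num_groups equal to 1 this reduces to multi-query attention, and with num_groups equal to num_heads it recovers standard multi-head attention, which is the interpolation the abstract describes.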
CoLT5: Faster Long-Range Transformers with Conditional Computation
Joshua Ainslie | Tao Lei | Michiel de Jong | Santiago Ontanon | Siddhartha Brahma | Yury Zemlyanskiy | David Uthus | Mandy Guo | James Lee-Thorp | Yi Tay | Yun-Hsuan Sung | Sumit Sanghai
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive – not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose CoLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that CoLT5 achieves stronger performance than LongT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length.
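A toy sketch of the conditional-computation pattern: every token passes through a light feedforward branch, and only a learned top-k subset of "important" tokens also receives a heavier branch. The router, score scaling, and branch interfaces below are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def conditional_ffn(x, light_ffn, heavy_ffn, router_w, k):
    """Apply a cheap branch to all tokens and an expensive branch to top-k tokens.

    x:          (seq_len, d_model) token representations.
    router_w:   (d_model,) learned routing weights (assumed).
    light_ffn / heavy_ffn: callables returning arrays shaped like their input.
    """
    scores = x @ router_w                       # (seq_len,) routing scores
    top = np.argsort(scores)[-k:]               # indices of "important" tokens
    out = light_ffn(x)                          # cheap path for every token
    # Heavy path only for selected tokens, scaled by the routing score so the
    # selection can stay trainable (normalization details omitted here).
    out[top] += scores[top, None] * heavy_ffn(x[top])
    return out
```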
Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering
Wenhu Chen | Pat Verga | Michiel de Jong | John Wieting | William W. Cohen
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Existing state-of-the-art methods for open-domain question-answering (ODQA) use an open book approach in which information is first retrieved from a large text corpus or knowledge base (KB) and then reasoned over to produce an answer. A recent alternative is to retrieve from a collection of previously-generated question-answer pairs; this has several practical advantages including being more memory and compute-efficient. Question-answer pairs are also appealing in that they can be viewed as an intermediate between text and KB triples: like KB triples, they often concisely express a single relationship, but like text, have much higher coverage than traditional KBs. In this work, we describe a new QA system that augments a text-to-text model with a large memory of question-answer pairs, and a new pre-training task for the latent step of question retrieval. The pre-training task substantially simplifies training and greatly improves performance on smaller QA benchmarks. Unlike prior systems of this sort, our QA system can also answer multi-hop questions that do not explicitly appear in the collection of stored question-answer pairs.
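A hedged sketch of the memory-lookup step: the input question is embedded, the nearest stored questions are found by inner product, and the retrieved question-answer pairs are concatenated into the text-to-text model's input. The scoring function, input format, and function names are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def retrieve_qa_pairs(query_emb, memory_embs, qa_pairs, k=5):
    """Nearest-neighbour lookup into a memory of question-answer pairs.

    query_emb:   (d,) embedding of the input question (encoder assumed).
    memory_embs: (n, d) pre-computed embeddings of the stored questions.
    qa_pairs:    list of (question, answer) strings aligned with memory_embs.
    """
    scores = memory_embs @ query_emb            # inner-product similarity
    top = np.argsort(scores)[-k:][::-1]
    return [qa_pairs[i] for i in top]

def build_model_input(question, retrieved):
    """Concatenate retrieved QA pairs with the question for a text-to-text model."""
    context = " ".join(f"Q: {q} A: {a}" for q, a in retrieved)
    return f"question: {question} context: {context}"
```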
2022
Generate-and-Retrieve: Use Your Predictions to Improve Retrieval for Semantic Parsing
Yury Zemlyanskiy | Michiel de Jong | Joshua Ainslie | Panupong Pasupat | Peter Shaw | Linlu Qiu | Sumit Sanghai | Fei Sha
Proceedings of the 29th International Conference on Computational Linguistics
A common recent approach to semantic parsing augments sequence-to-sequence models by retrieving and appending a set of training samples, called exemplars. The effectiveness of this recipe is limited by the ability to retrieve informative exemplars that help produce the correct parse, which is especially challenging in low-resource settings. Existing retrieval is commonly based on similarity of query and exemplar inputs. We propose GandR, a retrieval procedure that retrieves exemplars for which outputs are also similar. GandR first generates a preliminary prediction with input-based retrieval. Then, it retrieves exemplars with outputs similar to the preliminary prediction which are used to generate a final prediction. GandR sets the state of the art on multiple low-resource semantic parsing tasks.
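The two-stage procedure described above can be sketched directly. The similarity encoders, the seq2seq model interface, and the exemplar format below are assumptions made for illustration.

```python
def gandr_predict(query, exemplars, embed_input, embed_output, model, k=5):
    """Generate-and-Retrieve: retrieve by input, draft, then retrieve by output.

    exemplars:    list of (input_text, output_parse) training samples.
    embed_input / embed_output: encoders returning comparable vectors (assumed).
    model(query, retrieved) -> predicted parse (a seq2seq model is assumed).
    """
    def top_k(query_vec, keys):
        scores = [float(query_vec @ key) for key in keys]
        order = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)
        return [exemplars[i] for i in order[:k]]

    # Stage 1: retrieve by input similarity and make a preliminary prediction.
    first = top_k(embed_input(query), [embed_input(x) for x, _ in exemplars])
    draft = model(query, first)

    # Stage 2: retrieve exemplars whose *outputs* resemble the draft prediction,
    # then generate the final parse from those exemplars.
    second = top_k(embed_output(draft), [embed_output(y) for _, y in exemplars])
    return model(query, second)
```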
2021
ReadTwice: Reading Very Large Documents with Memories
Yury Zemlyanskiy | Joshua Ainslie | Michiel de Jong | Philip Pham | Ilya Eckstein | Fei Sha
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Knowledge-intensive tasks such as question answering often require assimilating information from different sections of large inputs such as books or article collections. We propose ReadTwice, a simple and effective technique that combines several strengths of prior approaches to model long-range dependencies with Transformers. The main idea is to read text in small segments, in parallel, summarizing each segment into a memory table to be used in a second read of the text. We show that the method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books.
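A compact sketch of the two-pass reading pattern: segments are encoded independently, each is summarized into memory entries, and a second pass re-encodes each segment with access to the shared memory table. The function signatures are placeholders, not the paper's API.

```python
def read_twice(document_tokens, segment_len, encode, summarize, encode_with_memory):
    """Two-pass reading over a long document (illustrative sketch).

    encode(segment)                    -> segment representations (first read)
    summarize(representations)         -> a few memory vectors per segment
    encode_with_memory(segment, table) -> second-read representations
    """
    segments = [document_tokens[i:i + segment_len]
                for i in range(0, len(document_tokens), segment_len)]

    # First read: each segment is encoded independently (parallelizable)
    # and compressed into a small set of memory entries.
    first_pass = [encode(seg) for seg in segments]
    memory_table = [summarize(h) for h in first_pass]

    # Second read: every segment can now attend to the document-wide memory.
    return [encode_with_memory(seg, memory_table) for seg in segments]
```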