Don’t Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering

Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel


Abstract
Most approaches to Open-Domain Question Answering consist of a lightweight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer. Previous work has shown that as the number of retrieved passages increases, so does the performance of the reader. However, they assume all retrieved passages are of equal importance and allocate the same amount of computation to them, leading to a substantial increase in computational cost. To reduce this cost, we propose the use of adaptive computation to control the computational budget allocated for the passages to be read. We first introduce a technique operating on individual passages in isolation which relies on anytime prediction and a per-layer estimation of an early exit probability. We then introduce SKYLINEBUILDER, an approach for dynamically deciding on which passage to allocate computation at each step, based on a resource allocation policy trained via reinforcement learning. Our results on SQuAD-Open show that adaptive computation with global prioritisation improves over several strong static and adaptive methods, leading to a 4.3x reduction in computation while retaining 95% of the performance of the full model.
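To make the per-passage early-exit idea from the abstract concrete, here is a minimal sketch (not the authors' implementation) of a reader that estimates an exit probability after every encoder layer and stops reading a passage once that probability crosses a threshold, while a span head can produce answer logits at any layer (anytime prediction). The class name `EarlyExitReader`, the layer count, hidden size, and the 0.9 threshold are all illustrative assumptions.

```python
# Sketch of per-layer early exiting for a single passage, assuming a
# PyTorch transformer encoder. All hyperparameters are illustrative.
import torch
import torch.nn as nn


class EarlyExitReader(nn.Module):
    def __init__(self, hidden=256, n_layers=6, n_heads=4, exit_threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        # One small exit head per layer: estimates P(exit now) from the
        # first-token ([CLS]-like) representation.
        self.exit_heads = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_layers)
        )
        # Anytime span prediction: start/end logits can be read off at any layer.
        self.span_head = nn.Linear(hidden, 2)
        self.exit_threshold = exit_threshold

    def forward(self, x):
        """x: (batch, seq_len, hidden) encoded question+passage pair."""
        layers_used = 0
        for layer, exit_head in zip(self.layers, self.exit_heads):
            x = layer(x)
            layers_used += 1
            p_exit = torch.sigmoid(exit_head(x[:, 0]))  # per-layer exit probability
            if p_exit.mean() > self.exit_threshold:     # confident enough: halt
                break
        start_logits, end_logits = self.span_head(x).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), layers_used


# Toy usage: an "easy" passage may exit after a few layers, a "hard" one
# may run through all of them.
reader = EarlyExitReader()
x = torch.randn(1, 32, 256)  # stand-in for an encoded question+passage
start, end, used = reader(x)
print(f"answer span logits produced after {used} layer(s)")
```

The paper's second contribution, SKYLINEBUILDER, goes further by prioritising computation *across* passages with a learned allocation policy; the sketch above only illustrates the simpler per-passage exit criterion.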
Anthology ID:
2020.sustainlp-1.9
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Editors:
Nafise Sadat Moosavi, Angela Fan, Vered Shwartz, Goran Glavaš, Shafiq Joty, Alex Wang, Thomas Wolf
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
63–72
URL:
https://aclanthology.org/2020.sustainlp-1.9
DOI:
10.18653/v1/2020.sustainlp-1.9
Cite (ACL):
Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2020. Don’t Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 63–72, Online. Association for Computational Linguistics.
Cite (Informal):
Don’t Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering (Wu et al., sustainlp 2020)
PDF:
https://aclanthology.org/2020.sustainlp-1.9.pdf
Optional supplementary material:
2020.sustainlp-1.9.OptionalSupplementaryMaterial.pdf
Video:
https://slideslive.com/38939431