%0 Conference Proceedings
%T Flight of the PEGASUS? Comparing Transformers on Few-shot and Zero-shot Multi-document Abstractive Summarization
%A Goodwin, Travis
%A Savery, Max
%A Demner-Fushman, Dina
%Y Scott, Donia
%Y Bel, Nuria
%Y Zong, Chengqing
%S Proceedings of the 28th International Conference on Computational Linguistics
%D 2020
%8 December
%I International Committee on Computational Linguistics
%C Barcelona, Spain (Online)
%F goodwin-etal-2020-flight
%X Recent work has shown that pre-trained Transformers obtain remarkable performance on many natural language processing tasks, including automatic summarization. However, most work has focused on (relatively) data-rich single-document summarization settings. In this paper, we explore highly abstractive multi-document summarization, where the summary is explicitly conditioned on a user-given topic statement or question. We compare the summarization quality produced by three state-of-the-art Transformer-based models: BART, T5, and PEGASUS. We report performance on four challenging summarization datasets: three from the general domain and one from consumer health, in both zero-shot and few-shot learning settings. While prior work has shown significant differences in performance for these models on standard summarization tasks, our results indicate that with as few as 10 labeled examples there is no statistically significant difference in summary quality, suggesting the need for more abstractive benchmark collections when determining state-of-the-art.
%R 10.18653/v1/2020.coling-main.494
%U https://aclanthology.org/2020.coling-main.494
%U https://doi.org/10.18653/v1/2020.coling-main.494
%P 5640-5646