Extractive Opinion Summarization in Quantized Transformer Spaces

Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, Mirella Lapata


Abstract
We present the Quantized Transformer (QT), an unsupervised system for extractive opinion summarization. QT is inspired by Vector-Quantized Variational Autoencoders, which we repurpose for popularity-driven summarization. It uses a clustering interpretation of the quantized space and a novel extraction algorithm to discover popular opinions among hundreds of reviews, a significant step towards opinion summarization of practical scope. In addition, QT enables controllable summarization without further training, by utilizing properties of the quantized space to extract aspect-specific summaries. We also make publicly available Space, a large-scale evaluation benchmark for opinion summarizers, comprising general and aspect-specific summaries for 50 hotels. Experiments demonstrate the promise of our approach, which is validated by human studies where judges showed clear preference for our method over competitive baselines.
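The Vector-Quantized Variational Autoencoders mentioned in the abstract rest on one core operation: snapping each continuous encoding to its nearest vector in a learned codebook, so that similar inputs share a discrete code. Below is a minimal sketch of that quantization step only; the codebook, dimensions, and data are made up for illustration and this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))    # hypothetical codebook: 8 codes, 4-dim space
encodings = rng.normal(size=(3, 4))   # 3 continuous sentence encodings

# Squared Euclidean distance from every encoding to every codebook vector.
dists = ((encodings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
assignments = dists.argmin(axis=1)    # nearest code index per encoding
quantized = codebook[assignments]     # replace each encoding with its code
```

Because many encodings can map to the same code, the assignments induce a clustering of the input sentences, which is the property a popularity-driven extraction scheme can exploit.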
Anthology ID:
2021.tacl-1.17
Volume:
Transactions of the Association for Computational Linguistics, Volume 9
Month:
Year:
2021
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
277–293
URL:
https://aclanthology.org/2021.tacl-1.17
DOI:
10.1162/tacl_a_00366
Cite (ACL):
Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive Opinion Summarization in Quantized Transformer Spaces. Transactions of the Association for Computational Linguistics, 9:277–293.
Cite (Informal):
Extractive Opinion Summarization in Quantized Transformer Spaces (Angelidis et al., TACL 2021)
PDF:
https://aclanthology.org/2021.tacl-1.17.pdf
Video:
https://aclanthology.org/2021.tacl-1.17.mp4