Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation

ZeFeng Cai, Linlin Wang, Gerard de Melo, Fei Sun, Liang He


Abstract
Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand why they received a particular recommendation. Previous methods mainly focus on improving generation quality, but often produce generic explanations that fail to incorporate user- and item-specific details. To address this problem, we present Multi-Scale Distribution Deep Variational Autoencoders (MVAE): deep hierarchical VAEs with a prior network that eliminates noise while retaining meaningful signals in the input, coupled with a recognition network that serves as the source of information guiding the learning of the prior network. Further, we propose a Multi-Scale Distribution Learning Framework (MLF) along with a Target-Tracking Kullback-Leibler divergence (TKL) mechanism, which employ multiple KL divergences at different scales for more effective learning. Extensive empirical experiments demonstrate that our methods can generate explanations with concrete, input-specific contents.
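The abstract describes a hierarchical VAE in which a recognition network guides a learned prior network and the training objective carries a KL divergence at each latent scale. As a rough illustration of that general setup only, here is a minimal PyTorch sketch of a two-level VAE with one KL term per scale. It is an assumption-based toy, not the authors' MVAE, MLF, or TKL implementation; all module names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def kl_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    # summed over the latent dimensions.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

class TwoLevelVAE(nn.Module):
    """Hypothetical two-level VAE: recognition networks q(z2|x), q(z1|x,z2)
    and a learned conditional prior p(z1|z2); p(z2) is a standard normal.
    The ELBO then carries one KL divergence per latent scale."""
    def __init__(self, x_dim=64, z1_dim=16, z2_dim=8, h_dim=128):
        super().__init__()
        # recognition (inference) side
        self.enc_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.q_z2 = nn.Linear(h_dim, 2 * z2_dim)           # q(z2 | x)
        self.q_z1 = nn.Linear(h_dim + z2_dim, 2 * z1_dim)  # q(z1 | x, z2)
        # learned conditional prior p(z1 | z2)
        self.p_z1 = nn.Sequential(nn.Linear(z2_dim, h_dim), nn.ReLU(),
                                  nn.Linear(h_dim, 2 * z1_dim))
        # decoder p(x | z1)
        self.dec = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    @staticmethod
    def reparam(mu, logvar):
        # reparameterization trick: z = mu + sigma * eps
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):
        h = self.enc_x(x)
        mu_q2, lv_q2 = self.q_z2(h).chunk(2, dim=-1)
        z2 = self.reparam(mu_q2, lv_q2)
        mu_q1, lv_q1 = self.q_z1(torch.cat([h, z2], dim=-1)).chunk(2, dim=-1)
        z1 = self.reparam(mu_q1, lv_q1)
        mu_p1, lv_p1 = self.p_z1(z2).chunk(2, dim=-1)
        x_hat = self.dec(z1)
        # one KL per scale; the paper weights/tracks these toward targets
        # (the TKL mechanism), whereas this toy just sums them uniformly
        kl2 = kl_gaussians(mu_q2, lv_q2,
                           torch.zeros_like(mu_q2), torch.zeros_like(lv_q2))
        kl1 = kl_gaussians(mu_q1, lv_q1, mu_p1, lv_p1)
        recon = F.mse_loss(x_hat, x, reduction="none").sum(-1)
        return (recon + kl1 + kl2).mean()
```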
Anthology ID:
2022.findings-acl.7
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
68–78
URL:
https://aclanthology.org/2022.findings-acl.7
DOI:
10.18653/v1/2022.findings-acl.7
Cite (ACL):
ZeFeng Cai, Linlin Wang, Gerard de Melo, Fei Sun, and Liang He. 2022. Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 68–78, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation (Cai et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.7.pdf