%0 Conference Proceedings
%T Improving Personalized Explanation Generation through Visualization
%A Geng, Shijie
%A Fu, Zuohui
%A Ge, Yingqiang
%A Li, Lei
%A de Melo, Gerard
%A Zhang, Yongfeng
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F geng-etal-2022-improving
%X In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpora, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate the above shortcomings? To this end, we propose a visually-enhanced approach named METER with the help of visualization generation and text–image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.
%R 10.18653/v1/2022.acl-long.20
%U https://aclanthology.org/2022.acl-long.20
%U https://doi.org/10.18653/v1/2022.acl-long.20
%P 244-255