Peng Pu


2025

Personalized Question Answering with User Profile Generation and Compression
Hang Su | Yun Yang | Tianyang Liu | Xin Liu | Peng Pu | Xuesong Lu
Findings of the Association for Computational Linguistics: EMNLP 2025

Large language models (LLMs) offer a novel and convenient avenue for humans to acquire knowledge. However, LLMs are prone to providing "one-size-fits-all" answers regardless of a user's knowledge background, and thus fail to meet each user's personalized needs. To tackle this problem, we propose to generate personalized answers with LLMs based on a user's past question-answering records. We dynamically generate and update the user's domain and global profiles as the user asks questions, and use the latest profiles as the context for generating the answer to a newly asked question. To save tokens, we propose to compress the domain profile into a set of keywords and use these keywords to prompt the LLM. We theoretically analyze the effectiveness of this compression strategy. Experimental results show that our method generates more personalized answers than competing methods. The code and dataset are available at https://github.com/DaSESmartEdu/PQA.
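
As a rough Python sketch of the pipeline the abstract describes (not the authors' actual implementation: the class names, prompts, and the llm() helper are all hypothetical placeholders), the profile-update, compression, and answering loop might look like this:

from __future__ import annotations
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to any LLM API.
    raise NotImplementedError

@dataclass
class UserProfiles:
    global_profile: str = ""
    domain_profiles: dict[str, str] = field(default_factory=dict)

def update_profiles(p: UserProfiles, domain: str, question: str, answer: str) -> None:
    # After each Q&A record, refresh both the domain profile and the global profile.
    record = f"Q: {question}\nA: {answer}"
    p.domain_profiles[domain] = llm(
        "Update this domain profile given the new Q&A record.\n"
        f"Profile: {p.domain_profiles.get(domain, '')}\n{record}"
    )
    p.global_profile = llm(
        "Update this global profile given the new Q&A record.\n"
        f"Profile: {p.global_profile}\n{record}"
    )

def compress_domain_profile(profile: str, k: int = 10) -> list[str]:
    # Compress the free-text domain profile into at most k keywords to save tokens.
    reply = llm(f"Summarize this profile as {k} comma-separated keywords:\n{profile}")
    return [w.strip() for w in reply.split(",") if w.strip()][:k]

def answer_question(p: UserProfiles, domain: str, question: str) -> str:
    # Prompt with the compressed domain profile plus the global profile as context.
    keywords = compress_domain_profile(p.domain_profiles.get(domain, ""))
    return llm(
        f"User keywords: {', '.join(keywords)}\n"
        f"Global profile: {p.global_profile}\n"
        f"Give an answer tailored to this user.\nQ: {question}"
    )

Compressing the domain profile to keywords trades some nuance for a much shorter prompt on every query, which is the token-saving idea the paper analyzes theoretically.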