Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts

Zhaoxuan Tan, Zheyuan Liu, Meng Jiang


Abstract
Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences. While parameter-efficient fine-tuning (PEFT) methods excel in performance and generalization, they are costly to apply per user and limit communal benefits when used individually. To this end, we introduce Personalized Pieces (Per-Pcs), a framework that lets users safely and efficiently share and assemble personalized PEFT through collaborative efforts. Per-Pcs selects sharers, breaks their PEFT modules into pieces, and trains a gate for each piece. These pieces are added to a shared pool, from which target users select and assemble a personalized PEFT module using their history data. This approach preserves privacy and enables fine-grained user modeling without excessive storage or computation demands. Experimental results show that Per-Pcs outperforms non-personalized and PEFT retrieval baselines, achieving performance comparable to OPPU with significantly lower resource use across six tasks. Further analysis highlights Per-Pcs's robustness to sharer count, sharer selection strategy, and piece sharing ratio, as well as its scalability in computation time and storage space. Per-Pcs's modularity promotes safe sharing, making LLM personalization more efficient, effective, and widely accessible through collaborative efforts.
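To make the pipeline described above concrete, below is a minimal PyTorch sketch of the piece-assembly idea, under stated assumptions: each shared "piece" is taken to be a per-layer LoRA update paired with a trained gate vector, and the target user's history embedding scores the gates to select and weight pieces per layer. All names here (PeftPiece, assemble_peft, history_emb, top_k) are illustrative assumptions, not the authors' actual implementation.

```python
import torch

# Minimal sketch of Per-Pcs-style piece assembly (an assumption-laden toy,
# not the authors' code): each shared "piece" is a per-layer LoRA update
# paired with a trained gate vector; a target user's history embedding
# scores the gates, and the top pieces are merged into one delta per layer.

class PeftPiece:
    def __init__(self, layer_name, lora_A, lora_B, gate):
        self.layer_name = layer_name  # which base-model layer this piece patches
        self.lora_A = lora_A          # low-rank factor A, shape (r, d_in)
        self.lora_B = lora_B          # low-rank factor B, shape (d_out, r)
        self.gate = gate              # gate vector, shape (d_hist,)

def assemble_peft(pool, history_emb, top_k=2):
    """Select the top-k pieces per layer by gate score and merge their
    low-rank updates into a single weight delta for that layer."""
    by_layer = {}
    for piece in pool:
        by_layer.setdefault(piece.layer_name, []).append(piece)

    merged = {}
    for layer_name, pieces in by_layer.items():
        # Gate score: dot product between the user's history embedding
        # and each piece's gate, normalized over the layer's pieces.
        scores = torch.stack([p.gate @ history_emb for p in pieces])
        weights = torch.softmax(scores, dim=0)
        top = torch.topk(weights, k=min(top_k, len(pieces)))
        # Weighted sum of the selected pieces' LoRA updates (B @ A).
        delta = sum(w.item() * (pieces[int(i)].lora_B @ pieces[int(i)].lora_A)
                    for w, i in zip(top.values, top.indices))
        merged[layer_name] = delta  # added to the frozen base weight at load time
    return merged

# Toy usage: two sharers contribute pieces for one attention projection.
d_in, d_out, r, d_hist = 16, 16, 4, 8
pool = [PeftPiece("layer0.attn.q_proj",
                  torch.randn(r, d_in), torch.randn(d_out, r),
                  torch.randn(d_hist))
        for _ in range(2)]
user_history = torch.randn(d_hist)  # embedding of the target user's history
deltas = assemble_peft(pool, user_history)
print({name: tuple(d.shape) for name, d in deltas.items()})
```

In the paper's setting, the pool would hold many sharers' pieces across all layers and the gates would be trained by the sharers; here both are random toys purely to show the selection-and-merge flow.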
Anthology ID:
2024.emnlp-main.371
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6459–6475
URL:
https://aclanthology.org/2024.emnlp-main.371
DOI:
10.18653/v1/2024.emnlp-main.371
Cite (ACL):
Zhaoxuan Tan, Zheyuan Liu, and Meng Jiang. 2024. Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6459–6475, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts (Tan et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.371.pdf
Software:
2024.emnlp-main.371.software.zip
Data:
2024.emnlp-main.371.data.zip