Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning

Xinyue Liu, Harshita Diddee, Daphne Ippolito


Abstract
One-size-fits-all large language models (LLMs) are increasingly being used to help people with their writing. However, the style these models are trained to write in may not suit all users or use cases. LLMs would be more useful as writing assistants if their idiolect could be customized to match each user. In this paper, we explore whether parameter-efficient finetuning (PEFT) with Low-Rank Adaptation can effectively guide the style of LLM generations. We use this method to customize LLaMA-2 to ten different authors and show that the generated text has lexical, syntactic, and surface alignment with the target author but struggles with content memorization. Our findings highlight the potential of PEFT to support efficient, user-level customization of LLMs.
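For orientation, a minimal sketch of the kind of Low-Rank Adaptation setup the abstract describes, assuming the Hugging Face transformers and peft libraries; the rank, alpha, dropout, and target modules below are illustrative assumptions, not the configuration reported in the paper:

# Minimal LoRA sketch for style customization of LLaMA-2 (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "meta-llama/Llama-2-7b-hf"  # LLaMA-2 base, as named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank adapter matrices are attached to the attention projections; only
# these small matrices are trained, so one adapter can be kept per target author.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # assumed rank, not the paper's value
    lora_alpha=16,            # assumed scaling factor
    lora_dropout=0.05,        # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed target modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows the small fraction of trainable weights

The wrapped model can then be finetuned on an author's text with a standard causal language modeling objective; the adapter weights capture the author-specific style while the base model stays frozen.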
Anthology ID: 2024.inlg-main.34
Volume: Proceedings of the 17th International Natural Language Generation Conference
Month: September
Year: 2024
Address: Tokyo, Japan
Editors: Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 412–426
URL: https://aclanthology.org/2024.inlg-main.34
Cite (ACL): Xinyue Liu, Harshita Diddee, and Daphne Ippolito. 2024. Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning. In Proceedings of the 17th International Natural Language Generation Conference, pages 412–426, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal): Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning (Liu et al., INLG 2024)
PDF: https://aclanthology.org/2024.inlg-main.34.pdf