Multi-Objective Linguistic Control of Large Language Models

Dang Nguyen, Jiuhai Chen, Tianyi Zhou


Abstract
Large language models (LLMs), despite their breakthroughs on many challenging benchmark tasks, tend to generate verbose responses and lack control over output complexity, a property that human users usually prefer in practice. In this paper, we study how to precisely control multiple linguistic complexities of LLM output by finetuning on off-the-shelf data. To this end, we propose multi-control tuning (MCTune), which includes multiple linguistic complexity values of ground-truth responses as controls in the input for instruction tuning. We finetune LLaMA2-7B on the Alpaca-GPT4 and WizardLM datasets. Evaluations on widely used benchmarks demonstrate that our method not only substantially improves LLMs' multi-complexity controllability but also retains or even enhances response quality as a side benefit.
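The abstract describes the approach only at a high level: complexity values of the ground-truth response are placed in the input as controls during instruction tuning. Below is a minimal illustrative sketch of how such control-augmented training examples could be constructed. The metric names (num_words, avg_word_len, num_sentences) and the prompt format are assumptions for illustration only, not the paper's actual choices.

```python
# Minimal sketch of control-augmented instruction-tuning data construction.
# The complexity metrics and prompt layout below are hypothetical; the paper's
# exact linguistic complexity measures are not specified in this abstract.

def complexity_controls(response: str) -> dict:
    """Compute a few illustrative linguistic complexity values of the target response."""
    words = response.split()
    sentences = [s for s in response.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "num_words": len(words),
        "avg_word_len": round(sum(len(w) for w in words) / max(len(words), 1), 2),
        "num_sentences": len(sentences),
    }

def build_training_example(instruction: str, response: str) -> dict:
    """Prepend the ground-truth response's complexity values to the instruction,
    so the model learns to condition its output on the requested controls."""
    controls = complexity_controls(response)
    control_str = ", ".join(f"{k}={v}" for k, v in controls.items())
    prompt = f"[Controls: {control_str}]\n{instruction}"
    return {"prompt": prompt, "completion": response}

if __name__ == "__main__":
    example = build_training_example(
        "Explain photosynthesis.",
        "Plants convert sunlight, water, and carbon dioxide into glucose and oxygen.",
    )
    print(example["prompt"])
```

At inference time, a user would supply desired complexity values in the same bracketed control slot to steer the generated response.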
Anthology ID: 2024.findings-acl.257
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4336–4347
URL: https://aclanthology.org/2024.findings-acl.257
Cite (ACL): Dang Nguyen, Jiuhai Chen, and Tianyi Zhou. 2024. Multi-Objective Linguistic Control of Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 4336–4347, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Multi-Objective Linguistic Control of Large Language Models (Nguyen et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.257.pdf