PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment

Zequan Liu, Yi Zhao, Ming Tan, Wei Zhu, Aaron Xuxiang Tian


Abstract
In the realm of parameter-efficient fine-tuning (PEFT), methods such as LoRA are widely available, yet industry continues to demand a PEFT approach that excels in both efficiency and performance for single-backbone multi-tenant applications. This paper introduces a new and straightforward PEFT technique, termed Prompt-Aware Representation Adjustment (PARA). The core of our proposal is to integrate a lightweight vector generator within each Transformer layer; the generator produces vectors that are responsive to the input prompt and adjusts the hidden representations accordingly. Extensive experiments across diverse tasks yield promising results. First, PARA outperforms current PEFT baselines while using a comparable number of tunable parameters. Second, it is more efficient than LoRA in the single-backbone multi-tenant scenario, highlighting its significant potential for industrial adoption.
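
The abstract only sketches the mechanism (a lightweight, prompt-conditioned vector generator per Transformer layer that adjusts hidden representations); the paper's exact formulation is not reproduced on this page. As a rough illustration only, a minimal PyTorch sketch is given below, assuming a hypothetical bottleneck-style generator that mean-pools the prompt's hidden states and adds the resulting vector back to every position; module names, pooling choice, and bottleneck size are assumptions, not the authors' PARA design.

```python
import torch
import torch.nn as nn


class PromptAwareVectorGenerator(nn.Module):
    """Hypothetical sketch of a lightweight per-layer vector generator.

    Pools the prompt's hidden states, maps the summary through a small
    bottleneck, and returns prompt-conditioned adjusted representations.
    This is an illustrative guess, not the authors' exact PARA module.
    """

    def __init__(self, hidden_size: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck, bias=False)
        self.up = nn.Linear(bottleneck, hidden_size, bias=False)
        nn.init.zeros_(self.up.weight)  # start with zero adjustment

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from a frozen layer
        prompt_summary = hidden_states.mean(dim=1)  # pool over the prompt tokens
        adjustment = self.up(torch.tanh(self.down(prompt_summary)))
        # broadcast the prompt-conditioned vector over all positions
        return hidden_states + adjustment.unsqueeze(1)


# toy usage: adjust a frozen layer's output with a prompt-aware vector
hidden = torch.randn(2, 10, 768)                 # (batch, seq, hidden)
generator = PromptAwareVectorGenerator(hidden_size=768)
adjusted = generator(hidden)
print(adjusted.shape)                            # torch.Size([2, 10, 768])
```

Because only the small down/up projections would be trained, such a generator keeps the tunable parameter count low, which is consistent with the efficiency claims in the abstract.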
Anthology ID:
2024.emnlp-industry.55
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Month:
November
Year:
2024
Address:
Miami, Florida, US
Editors:
Franck Dernoncourt, Daniel Preoţiuc-Pietro, Anastasia Shimorina
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
728–737
URL:
https://aclanthology.org/2024.emnlp-industry.55
Cite (ACL):
Zequan Liu, Yi Zhao, Ming Tan, Wei Zhu, and Aaron Xuxiang Tian. 2024. PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 728–737, Miami, Florida, US. Association for Computational Linguistics.
Cite (Informal):
PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment (Liu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-industry.55.pdf