Efficient Layer-wise LLM Fine-tuning for Revision Intention Prediction

Zhexiong Liu, Diane Litman


Abstract
Large Language Models (LLMs) have shown extraordinary success across various text generation tasks; however, their potential for simple yet essential text classification remains underexplored, as LLM pre-training tends to emphasize generation over classification. While LLMs with instruction tuning can transform classification into a generation task, they often struggle to categorize nuanced texts. One such example is text revision, which involves nuanced edits between pairs of texts. Although simply fine-tuning LLMs for revision classification seems plausible, it requires a large number of revision annotations, which are exceptionally expensive and scarce in the community. To address this issue, we introduce a plug-and-play layer-wise parameter-efficient fine-tuning (PEFT) framework, IR-Tuning, which fine-tunes a subset of important LLM layers that are dynamically selected based on their gradient norm distribution, while freezing the parameters of redundant layers. Extensive experiments suggest that IR-Tuning surpasses several layer-wise PEFT baselines over diverse text revisions, while achieving fast convergence, low GPU memory consumption, and effectiveness on small revision corpora.
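The paper's full selection procedure is not reproduced on this page, but the mechanism the abstract describes (rank transformer blocks by gradient norm, fine-tune only the top-ranked blocks, freeze the rest) can be sketched briefly. The following is a minimal PyTorch illustration, not the authors' implementation: the toy block stack, the single probe batch, and the choice of k are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a stack of transformer blocks (sizes are illustrative).
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    for _ in range(8)
)

def layer_gradient_norms(layers):
    """L2 gradient norm of each block's parameters after a backward pass."""
    norms = []
    for layer in layers:
        sq = sum(p.grad.pow(2).sum().item()
                 for p in layer.parameters() if p.grad is not None)
        norms.append(sq ** 0.5)
    return norms

def freeze_all_but_top_k(layers, norms, k):
    """Keep the k blocks with the largest gradient norms trainable; freeze the rest."""
    keep = set(sorted(range(len(layers)), key=norms.__getitem__, reverse=True)[:k])
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = i in keep
    return sorted(keep)

# One probe forward/backward pass on a dummy batch to populate gradients,
# then restrict training to the two highest-gradient-norm blocks.
x = torch.randn(2, 16, 64)  # (batch, sequence, hidden)
h = x
for blk in blocks:
    h = blk(h)
h.sum().backward()  # a real run would backpropagate the classification loss
trainable = freeze_all_but_top_k(blocks, layer_gradient_norms(blocks), k=2)
for blk in blocks:
    blk.zero_grad()
print("trainable blocks:", trainable)
```

With a Hugging Face model the same two functions would apply to its decoder blocks (e.g. model.model.layers for Llama-style architectures); and since the abstract describes selection as dynamic, a real training loop would presumably re-rank blocks periodically rather than once.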
Anthology ID:
2025.findings-emnlp.829
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15319–15334
URL:
https://aclanthology.org/2025.findings-emnlp.829/
Cite (ACL):
Zhexiong Liu and Diane Litman. 2025. Efficient Layer-wise LLM Fine-tuning for Revision Intention Prediction. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 15319–15334, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Efficient Layer-wise LLM Fine-tuning for Revision Intention Prediction (Liu & Litman, Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.829.pdf
Checklist:
2025.findings-emnlp.829.checklist.pdf