Feiyu Duan


2024

PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness
Noah Wang | Feiyu Duan | Yibo Zhang | Wangchunshu Zhou | Ke Xu | Wenhao Huang | Jie Fu
Findings of the Association for Computational Linguistics: EMNLP 2024

Large Language Models (LLMs) demonstrate impressive capabilities across various domains, including role-playing, creative writing, mathematical reasoning, and coding. Despite these advancements, LLMs still encounter challenges with length control, frequently failing to adhere to specific length constraints due to their token-level operations and insufficient training on data with strict length limitations. We identify this issue as stemming from a lack of positional awareness and propose novel approaches—PositionID Prompting and PositionID Fine-Tuning—to address it. These methods enhance the model’s ability to continuously monitor and manage text length during generation. Additionally, we introduce PositionID CP Prompting to enable LLMs to perform copy-and-paste operations accurately. Furthermore, we develop two benchmarks for evaluating length-control and copy-paste abilities. Our experiments demonstrate that our methods significantly improve the model’s adherence to length constraints and copy-paste accuracy without compromising response quality.
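To make the core idea concrete, below is a minimal Python sketch of PositionID-style annotation: tagging each generated word with its running index so a model can track output length explicitly. The function names and the `<n>` marker format are illustrative assumptions for this sketch, not the paper's released implementation.

```python
import re

def add_position_ids(text: str) -> str:
    """Append a 1-based position ID after every word.

    Illustrative format (assumed, not the paper's exact scheme):
    'LLMs <1> can <2> control <3> ...'
    """
    words = text.split()
    return " ".join(f"{w} <{i}>" for i, w in enumerate(words, start=1))

def strip_position_ids(text: str) -> str:
    """Remove the '<n>' markers to recover the plain response."""
    return re.sub(r"\s*<\d+>", "", text)

if __name__ == "__main__":
    annotated = add_position_ids("LLMs can control lengths with explicit positional awareness")
    print(annotated)
    # LLMs <1> can <2> control <3> lengths <4> with <5> explicit <6> positional <7> awareness <8>
    print(strip_position_ids(annotated))
    # LLMs can control lengths with explicit positional awareness
```

In the prompting setting described by the abstract, the model would be instructed to emit such markers itself while generating, then the markers would be stripped from the final response; the markers give the model a continuous, explicit count of how many words it has produced so far.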