Less is More—Achieving SOTA at PolEval 2025 Task 2a: Gender-inclusive LLMs for Polish (Proofreading) with LoRA and Qwen3-8B

Adam Majczyk


Abstract
This paper presents the winning solution to PolEval 2025 Task 2a. The approach uses LoRA fine-tuning of the Qwen3-8B model; multiple LoRA matrix ranks are explored, and versions with and without the system prompt included in the loss calculation are evaluated. A new SOTA of F1=0.6039 was established, beating the previously best model's F1=0.5985. After the task's conclusion, the solution was further improved, reaching F1=0.6283±0.0056.
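The core idea behind the approach, low-rank adaptation (LoRA), can be sketched in a few lines. This is an illustrative toy example with made-up dimensions, not the paper's actual training setup (which fine-tuned Qwen3-8B): the frozen base weight W is augmented with a trainable low-rank update (alpha/r)·B·A, so the number of trainable parameters scales with the rank r rather than with the full weight matrix.

```python
import numpy as np

# Toy LoRA sketch (illustrative dimensions, not the paper's setup).
# The effective weight is W + (alpha / r) * B @ A, where only A and B
# are trained and W stays frozen.

def lora_forward(x, W, A, B, alpha):
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

d_in, d_out, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

x = rng.standard_normal((1, d_in))
y = lora_forward(x, W, A, B, alpha)

# Zero-initializing B makes the adapter a no-op before training begins.
assert np.allclose(y, x @ W.T)

# Trainable parameter count grows linearly with rank r,
# versus d_in * d_out for full fine-tuning.
lora_params = A.size + B.size   # r * (d_in + d_out) = 8 * 128 = 1024
full_params = W.size            # 64 * 64 = 4096
```

Varying r trades adapter capacity against parameter count, which is the design axis the paper's rank ablation explores.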
Anthology ID:
2025.poleval-main.7
Volume:
Proceedings of the PolEval 2025 Workshop
Month:
November
Year:
2025
Address:
Warsaw
Editors:
Łukasz Kobyliński, Alina Wróblewska, Maciej Ogrodniczuk
Venues:
PolEval | WS
Publisher:
Institute of Computer Science PAS and Association for Computational Linguistics
Pages:
48–53
URL:
https://aclanthology.org/2025.poleval-main.7/
Cite (ACL):
Adam Majczyk. 2025. Less is More—Achieving SOTA at PolEval 2025 Task 2a: Gender-inclusive LLMs for Polish (Proofreading) with LoRA and Qwen3-8B. In Proceedings of the PolEval 2025 Workshop, pages 48–53, Warsaw. Institute of Computer Science PAS and Association for Computational Linguistics.
Cite (Informal):
Less is More—Achieving SOTA at PolEval 2025 Task 2a: Gender-inclusive LLMs for Polish (Proofreading) with LoRA and Qwen3-8B (Majczyk, PolEval 2025)
PDF:
https://aclanthology.org/2025.poleval-main.7.pdf