Efficient Financial Fraud Detection on Mobile Devices Using Lightweight Large Language Models

Lakpriya Senevirathna, Deshan Koshala Sumanathilaka


Abstract
The growth of mobile financial transactions presents new challenges for fraud detection, where traditional and machine learning (ML) methods often miss emerging patterns. While Large Language Models (LLMs) offer advanced language understanding, they are typically too resource-intensive for mobile deployment and raise privacy concerns due to cloud reliance. This paper proposes a lightweight, privacy-preserving approach by fine-tuning and quantizing compact LLMs for on-device fraud detection from textual data. Models were optimized using Open Neural Network Exchange (ONNX) conversion and quantization to ensure efficiency. The fine-tuned quantized Llama-160M-Chat-v1 (bnb4) achieved 99.47% accuracy with a 168MB footprint, while the fine-tuned quantized Qwen1.5-0.5B-Chat (bnb4) reached 99.50% accuracy at 797MB. These results demonstrate that optimized LLMs can deliver accurate, real-time fraud detection on mobile devices without compromising user privacy.
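The optimization pipeline named in the abstract (ONNX conversion followed by quantization) can be sketched roughly as below. This is a minimal illustration, not the authors' exact pipeline: the checkpoint name and output paths are placeholders, and ONNX Runtime dynamic INT8 quantization is used here as a stand-in for the paper's bnb4 (bitsandbytes 4-bit) variants.

# Minimal sketch, assuming a fine-tuned compact classifier checkpoint:
# export it to ONNX with Optimum, then apply post-training dynamic
# quantization with ONNX Runtime. Names and paths are hypothetical.
from optimum.onnxruntime import ORTModelForSequenceClassification
from onnxruntime.quantization import quantize_dynamic, QuantType
from transformers import AutoTokenizer

model_id = "path/to/fine-tuned-llama-160m-fraud-classifier"  # placeholder
save_dir = "llama160m-onnx"

# Export the Transformers checkpoint to an ONNX graph via Optimum.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Dynamic quantization stores the weights in INT8, shrinking the
# on-device footprint at a small accuracy cost.
quantize_dynamic(
    model_input=f"{save_dir}/model.onnx",
    model_output=f"{save_dir}/model.int8.onnx",
    weight_type=QuantType.QInt8,
)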
Anthology ID:
2025.ranlp-1.126
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1090–1098
URL:
https://aclanthology.org/2025.ranlp-1.126/
Cite (ACL):
Lakpriya Senevirathna and Deshan Koshala Sumanathilaka. 2025. Efficient Financial Fraud Detection on Mobile Devices Using Lightweight Large Language Models. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 1090–1098, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Efficient Financial Fraud Detection on Mobile Devices Using Lightweight Large Language Models (Senevirathna & Sumanathilaka, RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.126.pdf