Pengwei Yan
2024
Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration
Weikang Yuan | Junjie Cao | Zhuoren Jiang | Yangyang Kang | Jun Lin | Kaisong Song | Tianqianjin Lin | Pengwei Yan | Changlong Sun | Xiaozhong Liu
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) may struggle to fully understand legal theories and perform complex legal reasoning tasks. In this study, we introduce a challenging task (confusing charge prediction) to better evaluate LLMs' understanding of legal theories and reasoning capabilities. We also propose a novel framework: Multi-Agent framework for improving complex Legal Reasoning capability (MALR). MALR employs non-parametric learning, encouraging LLMs to automatically decompose complex legal tasks and mimic the human learning process to extract insights from legal rules, helping LLMs better understand legal theories and enhance their legal reasoning abilities. Extensive experiments on multiple real-world datasets demonstrate that the proposed framework effectively addresses complex reasoning issues in practical scenarios, paving the way for more reliable applications in the legal domain.
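As a rough illustration only (not the authors' released implementation), the abstract's non-parametric, multi-agent loop, decomposing a confusing-charge case, answering sub-questions, and distilling reusable insights from legal rules rather than updating model weights, might be sketched as below. Every name here (`call_llm`, the prompt wording, the insight memory) is a hypothetical placeholder.

```python
# Hypothetical sketch of a MALR-style loop: task decomposition plus
# non-parametric "insight" accumulation. All names and prompts are
# illustrative placeholders, not the authors' implementation.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (e.g., a chat-completion client)."""
    raise NotImplementedError("plug in an actual LLM client here")

def decompose(case_facts: str) -> list[str]:
    # A "decomposer" agent splits confusing-charge prediction into
    # sub-questions (e.g., which legal elements separate the candidate charges).
    plan = call_llm(f"Break this legal case into sub-questions:\n{case_facts}")
    return [q.strip() for q in plan.split("\n") if q.strip()]

def solve_with_insights(question: str, insights: list[str]) -> str:
    # Sub-agents answer each sub-question conditioned on previously
    # extracted insights, i.e., learning without any parameter update.
    memory = "\n".join(insights)
    return call_llm(f"Known insights:\n{memory}\n\nAnswer: {question}")

def reflect(question: str, answer: str, legal_rule: str) -> str:
    # A "reflector" agent checks the answer against the relevant legal rule
    # and distills one reusable insight, mimicking how humans learn from rules.
    return call_llm(
        f"Rule: {legal_rule}\nQ: {question}\nA: {answer}\n"
        "State one reusable insight for distinguishing confusing charges."
    )

def predict_charge(case_facts: str, legal_rule: str) -> str:
    insights: list[str] = []
    for sub_q in decompose(case_facts):
        answer = solve_with_insights(sub_q, insights)
        insights.append(reflect(sub_q, answer, legal_rule))
    # The final prediction aggregates the accumulated insights.
    return call_llm("Given these insights:\n" + "\n".join(insights)
                    + f"\nPredict the charge for:\n{case_facts}")
```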