Enhancing Legal Expertise in Large Language Models through Composite Model Integration: The Development and Evaluation of Law-Neo

Zhihao Liu, Yanzhen Zhu, Mengyuan Lu


Abstract
Although large language models (LLMs) such as ChatGPT have demonstrated considerable capabilities in general domains, they often lack proficiency in specialized fields. Enhancing a model's performance in a specific domain, such as law, while keeping costs low remains a significant challenge. Existing methods, such as fine-tuning or building mixture-of-experts (MoE) models, often struggle to balance model parameters, training costs, and domain-specific performance. Inspired by composition to augment language models, we developed Law-Neo, a novel model designed to enhance legal LLMs. Law-Neo significantly improves legal-domain expertise at minimal training cost while retaining the logical capabilities of a large-scale anchor model. Law-Neo outperformed other models in comprehensive experiments on multiple legal task benchmarks, demonstrating the effectiveness of this approach.
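The composition approach the abstract alludes to is typically realized by keeping a large anchor model and a smaller domain-specific model frozen and training only a few cross-attention "bridge" layers that let the anchor attend to the domain model's hidden states. The sketch below illustrates that general idea; the class name, dimensions, and layer choices are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn

class CrossAttentionBridge(nn.Module):
    """Trainable cross-attention from an anchor-model layer to an
    augmenting (domain) model layer; both base models stay frozen."""
    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)   # map domain states into anchor space
        self.attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(anchor_dim)

    def forward(self, anchor_h: torch.Tensor, aug_h: torch.Tensor) -> torch.Tensor:
        # anchor_h: (batch, seq, anchor_dim); aug_h: (batch, seq, aug_dim)
        kv = self.proj(aug_h)
        attended, _ = self.attn(query=anchor_h, key=kv, value=kv)
        return self.norm(anchor_h + attended)        # residual composition

# Illustrative usage with assumed hidden sizes: run both frozen models on the
# same input and fuse selected hidden states through the trainable bridge.
bridge = CrossAttentionBridge(anchor_dim=4096, aug_dim=2048)
anchor_h = torch.randn(1, 16, 4096)                  # anchor-layer hidden states
aug_h = torch.randn(1, 16, 2048)                     # legal-domain-model hidden states
fused = bridge(anchor_h, aug_h)                      # (1, 16, 4096)

Because only the bridge parameters are updated, this style of composition keeps training cost low while the anchor model's general reasoning ability is preserved, which is the trade-off the abstract emphasizes.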
Anthology ID: 2024.nllp-1.3
Volume: Proceedings of the Natural Legal Language Processing Workshop 2024
Month: November
Year: 2024
Address: Miami, FL, USA
Editors: Nikolaos Aletras, Ilias Chalkidis, Leslie Barrett, Cătălina Goanță, Daniel Preoțiuc-Pietro, Gerasimos Spanakis
Venue: NLLP
Publisher: Association for Computational Linguistics
Pages: 33–41
URL: https://aclanthology.org/2024.nllp-1.3
Cite (ACL): Zhihao Liu, Yanzhen Zhu, and Mengyuan Lu. 2024. Enhancing Legal Expertise in Large Language Models through Composite Model Integration: The Development and Evaluation of Law-Neo. In Proceedings of the Natural Legal Language Processing Workshop 2024, pages 33–41, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal): Enhancing Legal Expertise in Large Language Models through Composite Model Integration: The Development and Evaluation of Law-Neo (Liu et al., NLLP 2024)
PDF: https://aclanthology.org/2024.nllp-1.3.pdf