Exploration of Open Large Language Models for eDiscovery
Sumit Pai | Sounak Lahiri | Ujjwal Kumar | Krishanu Baksi | Elijah Soba | Michael Suesserman | Nirmala Pudota | Jon Foster | Edward Bowen | Sanmitra Bhattacharya
Proceedings of the Natural Legal Language Processing Workshop 2023
The rapid advancement of Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), has led to their widespread adoption for various natural language processing (NLP) tasks. One crucial domain ripe for innovation is the Technology-Assisted Review (TAR) process in electronic discovery (eDiscovery). Traditionally, TAR involves manual review and classification of documents for relevance over large document collections for litigations and investigations. This process is aided by machine learning and NLP tools which require extensive training and fine-tuning. In this paper, we explore the application of LLMs to TAR, specifically for predictive coding. We experiment with out-of-the-box prompting and fine-tuning of LLMs using parameter-efficient techniques. We conduct experiments using open LLMs and compare them to commercially-licensed ones. Our experiments demonstrate that open LLMs lag behind commercially-licensed models in relevance classification using out-of-the-box prompting. However, topic-specific instruction tuning not only improves the effectiveness of open LLMs but often enables them to outperform their commercially-licensed counterparts in performance evaluations. Additionally, we conduct a user study to gauge the preferences of our eDiscovery Subject Matter Specialists (SMS) regarding human-authored versus model-generated reasoning. We demonstrate that instruction-tuned open LLMs can generate high-quality reasoning that is comparable to that of commercial LLMs.
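As a rough illustration of the out-of-the-box prompting approach described in the abstract, the sketch below frames predictive coding as a zero-shot relevance question posed to an open LLM. The model name, prompt wording, and review topic are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of zero-shot relevance classification for predictive coding
# with an open LLM. Model choice, prompt text, and topic are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open instruction-tuned LLM
    device_map="auto",
)

# Hypothetical review topic and document; a real TAR workflow would iterate
# over a large document collection.
TOPIC = "communications discussing the proposed merger"
DOCUMENT = "Email: Please find attached the draft term sheet for the acquisition."

prompt = (
    "You are assisting with a legal document review.\n"
    f"Topic: {TOPIC}\n"
    f"Document: {DOCUMENT}\n"
    "Question: Is this document relevant to the topic? "
    "Answer 'Relevant' or 'Not relevant' and give a one-sentence reason.\n"
    "Answer:"
)

output = generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
print(output[len(prompt):].strip())
```

The same prompt format could serve as the template for topic-specific instruction tuning with parameter-efficient methods, where relevance labels and reviewer-authored reasoning supply the target completions.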