Suchetha Siddagangappa
2025
LAW: Legal Agentic Workflows for Custody and Fund Services Contracts
William Watson | Nicole Cho | Nishan Srishankar | Zhen Zeng | Lucas Cecchi | Daniel Scott | Suchetha Siddagangappa | Rachneet Kaur | Tucker Balch | Manuela Veloso
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Legal contracts in the custody and fund services domain govern critical aspects such as key provider responsibilities, fee schedules, and indemnification rights. However, it is challenging for an off-the-shelf Large Language Model (LLM) to ingest these contracts due to the lengthy unstructured streams of text, limited LLM context windows, and complex legal jargon. To address these challenges, we introduce LAW (Legal Agentic Workflows for Custody and Fund Services Contracts). LAW features a modular design that responds to user queries by orchestrating a suite of domain-specific tools and text agents. Our experiments demonstrate that LAW, by integrating multiple specialized agents and tools, significantly outperforms the baseline. LAW excels particularly in complex tasks such as calculating a contract's termination date, surpassing the baseline by 92.9 percentage points. Furthermore, LAW offers a cost-effective alternative to traditional fine-tuned legal LLMs by leveraging reusable, domain-specific tools.
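The abstract describes LAW as routing user queries through domain-specific tools and text agents. The sketch below is a minimal, hypothetical illustration of such a tool-routing loop; the tool names (compute_termination_date, lookup_fee_schedule), the keyword-based router, and the toy contract fields are assumptions for illustration, not the system described in the paper.

```python
# Hypothetical sketch of an agentic tool-routing loop in the spirit of LAW.
# Tool names, contract fields, and the keyword router are illustrative
# assumptions, not the paper's implementation.
from datetime import date, timedelta
from typing import Callable, Dict


def compute_termination_date(contract: Dict) -> str:
    """Toy tool: effective date plus the notice period stated in the contract."""
    start = date.fromisoformat(contract["effective_date"])
    return str(start + timedelta(days=contract["notice_period_days"]))


def lookup_fee_schedule(contract: Dict) -> str:
    """Toy tool: return the fee schedule clause verbatim."""
    return contract["fee_schedule"]


# Registry of domain-specific tools keyed by a query keyword.
TOOLS: Dict[str, Callable[[Dict], str]] = {
    "termination": compute_termination_date,
    "fee": lookup_fee_schedule,
}


def answer(query: str, contract: Dict) -> str:
    """Route the query to the first matching tool; fall back to the raw text."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(contract)
    return contract.get("full_text", "No matching tool for this query.")


if __name__ == "__main__":
    contract = {
        "effective_date": "2024-01-15",
        "notice_period_days": 90,
        "fee_schedule": "0.02% of assets under custody, billed quarterly.",
    }
    print(answer("When is the termination date?", contract))
    print(answer("What is the fee schedule?", contract))
```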
2024
Large Language Models as Financial Data Annotators: A Study on Effectiveness and Efficiency
Toyin D. Aguda | Suchetha Siddagangappa | Elena Kochkina | Simerjot Kaur | Dongsheng Wang | Charese Smiley
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Collecting labeled datasets in finance is challenging due to the scarcity of domain experts and the high cost of employing them. While Large Language Models (LLMs) have demonstrated remarkable performance in data annotation tasks on general-domain datasets, their effectiveness on domain-specific datasets remains under-explored. To address this gap, we investigate the potential of LLMs as efficient data annotators for extracting relations in financial documents. We compare the annotations produced by three LLMs (GPT-4, PaLM 2, and MPT Instruct) against those of expert annotators and crowdworkers. We demonstrate that the current state-of-the-art LLMs can be sufficient alternatives to non-expert crowdworkers. We analyze models using various prompts and parameter settings and find that customizing the prompts for each relation group by providing specific examples belonging to those groups is paramount. Furthermore, we introduce a reliability index (LLM-RelIndex) used to identify outputs that may require expert attention. Finally, we perform an extensive time, cost, and error analysis and provide recommendations for the collection and usage of automated annotations in domain-specific settings.
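The abstract mentions flagging annotations that may need expert attention via a reliability index. As a hedged illustration only, the sketch below computes a simple self-agreement score over repeated LLM annotations and flags low-agreement items for expert review; this is not the paper's LLM-RelIndex definition, which is not reproduced here.

```python
# Illustrative only: a simple self-agreement score over repeated LLM
# annotations of the same item, flagging low-agreement items for expert
# review. This is NOT the paper's LLM-RelIndex.
from collections import Counter
from typing import List, Tuple


def agreement_score(labels: List[str]) -> Tuple[str, float]:
    """Return the majority label and its share among repeated annotations."""
    counts = Counter(labels)
    label, freq = counts.most_common(1)[0]
    return label, freq / len(labels)


def flag_for_review(annotations: List[List[str]], threshold: float = 0.8) -> List[int]:
    """Indices of items whose majority-label agreement falls below threshold."""
    return [
        i for i, labels in enumerate(annotations)
        if agreement_score(labels)[1] < threshold
    ]


if __name__ == "__main__":
    # Each inner list holds labels from repeated runs (or multiple models)
    # for one financial-relation instance.
    runs = [
        ["acquired_by", "acquired_by", "acquired_by"],
        ["subsidiary_of", "acquired_by", "employee_of"],
    ]
    print(flag_for_review(runs))  # -> [1]: low agreement, route to an expert
```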