Nicole Cho
2025
LAW: Legal Agentic Workflows for Custody and Fund Services Contracts
William Watson | Nicole Cho | Nishan Srishankar | Zhen Zeng | Lucas Cecchi | Daniel Scott | Suchetha Siddagangappa | Rachneet Kaur | Tucker Balch | Manuela Veloso
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Legal contracts in the custody and fund services domain govern critical aspects such as key provider responsibilities, fee schedules, and indemnification rights. However, it is challenging for an off-the-shelf Large Language Model (LLM) to ingest these contracts due to the lengthy unstructured streams of text, limited LLM context windows, and complex legal jargon. To address these challenges, we introduce LAW (Legal Agentic Workflows for Custody and Fund Services Contracts). LAW features a modular design that responds to user queries by orchestrating a suite of domain-specific tools and text agents. Our experiments demonstrate that LAW, by integrating multiple specialized agents and tools, significantly outperforms the baseline. LAW excels particularly in complex tasks such as calculating a contract’s termination date, surpassing the baseline by 92.9 percentage points. Furthermore, LAW offers a cost-effective alternative to traditional fine-tuned legal LLMs by leveraging reusable, domain-specific tools.
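The abstract describes LAW only at a high level, so the sketch below is a purely illustrative, hedged example of what a modular workflow that routes a user query to domain-specific tools could look like. Every name here (Tool, route_query, extract_termination_date, and so on) is invented for the example and is not taken from the paper or its implementation.

```python
# Illustrative sketch of a modular agentic workflow that routes a user query
# to domain-specific tools. Names and structure are hypothetical assumptions,
# not the LAW system described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    """A domain-specific tool: a name, a description, and a callable."""
    name: str
    description: str
    run: Callable[[str], str]


def extract_termination_date(contract_text: str) -> str:
    # Placeholder: a real tool might combine clause retrieval with date math.
    return "termination date: <effective date + notice period>"


def extract_fee_schedule(contract_text: str) -> str:
    # Placeholder: a real tool might retrieve and normalize the fee table.
    return "fee schedule: <retrieved fee table>"


TOOLS: Dict[str, Tool] = {
    "termination": Tool("termination", "Compute a contract's termination date",
                        extract_termination_date),
    "fee": Tool("fee", "Retrieve the fee schedule", extract_fee_schedule),
}


def route_query(query: str, contract_text: str) -> str:
    """Naive keyword router standing in for an LLM-based orchestrator."""
    for key, tool in TOOLS.items():
        if key in query.lower():
            return tool.run(contract_text)
    return "No matching tool; fall back to direct LLM answering."


if __name__ == "__main__":
    print(route_query("When is the termination date?", "<contract text>"))
```

In a real agentic setup the keyword router would be replaced by an LLM that selects and sequences tools, but the separation of a routing step from reusable, domain-specific tools is the point the sketch illustrates.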
2023
HiddenTables and PyQTax: A Cooperative Game and Dataset For TableQA to Ensure Scale and Data Privacy Across a Myriad of Taxonomies
William Watson | Nicole Cho | Tucker Balch | Manuela Veloso
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
A myriad of different Large Language Models (LLMs) face a common challenge in contextually analyzing table question-answering tasks. These challenges are engendered from (1) finite context windows for large tables, (2) multi-faceted discrepancies amongst tokenization patterns against cell boundaries, and (3) various limitations stemming from data confidentiality in the process of using external models such as gpt-3.5-turbo. We propose a cooperative game dubbed “HiddenTables” as a potential resolution to this challenge. In essence, “HiddenTables” is played between the code-generating LLM “Solver” and the “Oracle” which evaluates the ability of the LLM agents to solve TableQA tasks. This game is based on natural language schemas and importantly, ensures the security of the underlying data. We provide evidential experiments on a diverse set of tables that demonstrate an LLM’s collective inability to generalize and perform on complex queries, handle compositional dependencies, and align natural language to programmatic commands when concrete table schemas are provided. Unlike encoder-based models, we have pushed the boundaries of “HiddenTables” not to be limited by the number of rows; therefore we exhibit improved efficiency in prompt and completion tokens. Our infrastructure has spawned a new dataset “PyQTax” that spans 116,671 question-table-answer triplets and provides additional fine-grained breakdowns and labels for varying question taxonomies. Therefore, in tandem with our academic contributions regarding LLMs’ deficiency in TableQA tasks, “HiddenTables” is a tactile manifestation of how LLMs can interact with massive datasets while ensuring data security and minimizing generation costs.
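As a rough illustration of the Solver/Oracle split described above, where the Solver sees only a natural-language schema and emits code while the Oracle executes that code against the hidden table and returns only the result, here is a minimal, self-contained sketch. The class and function names, the schema format, and the toy table are all assumptions made for the example; they are not the paper's PyQTax data, prompts, or implementation.

```python
# Illustrative sketch of a Solver/Oracle loop for TableQA with hidden data.
# The Oracle holds the table; the Solver only ever sees the schema and the
# question, and returns Python code that the Oracle executes locally.
# All names and the toy table are hypothetical, not from the paper.
from typing import Any, Dict, List


class Oracle:
    def __init__(self, table: List[Dict[str, Any]]):
        self._table = table  # never exposed to the Solver

    def schema(self) -> str:
        """Natural-language schema: column names only, no cell values."""
        return "columns: " + ", ".join(self._table[0].keys())

    def evaluate(self, code: str) -> Any:
        """Execute Solver-generated code against the hidden table."""
        scope: Dict[str, Any] = {"rows": self._table}
        exec(code, scope)  # trusted toy example only
        return scope.get("answer")


def solver(question: str, schema: str) -> str:
    """Stand-in for a code-generating LLM; returns Python over `rows`."""
    # A real Solver would prompt an LLM with the question and the schema.
    return "answer = sum(r['amount'] for r in rows)"


if __name__ == "__main__":
    oracle = Oracle([{"year": 2021, "amount": 10}, {"year": 2022, "amount": 15}])
    code = solver("What is the total amount?", oracle.schema())
    print(oracle.evaluate(code))  # -> 25
```

Because only generated code and its result cross the boundary, the cell values stay with the Oracle, which is the data-confidentiality property the abstract emphasizes.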