Jonathan Herr
2019
The Extent of Repetition in Contract Language
Dan Simonson | Daniel Broderick | Jonathan Herr
Proceedings of the Natural Legal Language Processing Workshop 2019
Contract language is repetitive (Anderson and Manns, 2017), but so is all language (Zipf, 1949). In this paper, we measure the extent to which contract language in English is repetitive compared with the language of other English language corpora. Contracts have much smaller vocabulary sizes than similarly sized non-contract corpora across multiple contract types, contain one-fifth as many hapax legomena, pattern differently on a log-log plot, use fewer pronouns, and contain sentences that are about 20% more similar to one another than in other corpora. These results suggest that the study of contracts in natural language processing controls for some linguistic phenomena and allows for more in-depth study of others.
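As a rough illustration of the kind of corpus measurement the abstract describes (vocabulary size and hapax legomena counts), a minimal sketch is shown below; this is not the paper's actual pipeline, and the toy token samples are invented for demonstration only.

```python
from collections import Counter

def vocabulary_stats(tokens):
    """Return the vocabulary size (distinct types) and hapax legomena count
    (types occurring exactly once) for a list of word tokens."""
    counts = Counter(tokens)
    vocab_size = len(counts)                              # number of distinct types
    hapaxes = sum(1 for c in counts.values() if c == 1)   # types seen only once
    return vocab_size, hapaxes

# Hypothetical same-sized samples, only to show how the two statistics differ.
contract_tokens = "the party shall indemnify the party and the other party".split()
news_tokens = "the senator spoke briefly before leaving the crowded city hall".split()

for name, toks in [("contract", contract_tokens), ("news", news_tokens)]:
    v, h = vocabulary_stats(toks)
    print(f"{name}: {len(toks)} tokens, {v} types, {h} hapax legomena")
```

Comparing such statistics across equally sized samples is one simple way to quantify how much more repetitive one genre is than another.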
2010
The DARPA Machine Reading Program - Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks
Stephanie Strassel | Dan Adams | Henry Goldberg | Jonathan Herr | Ron Keesing | Daniel Oblinger | Heather Simpson | Robert Schrag | Jonathan Wright
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
The goal of DARPA's Machine Reading (MR) program is nothing less than making the world's natural language corpora available for formal processing. Most text processing research has focused on locating mission-relevant text (information retrieval) and on techniques for enriching text by transforming it into other forms of text (translation, summarization), always for use by humans. In contrast, MR will make knowledge contained in text available in forms that machines can use for automated processing. This will be done with little human intervention. Machines will learn to read from a few examples, and they will read to learn what they need in order to answer questions or perform some reasoning task. Three independent Reading Teams are building universal text engines which will capture knowledge from naturally occurring text and transform it into the formal representations used by Artificial Intelligence. An Evaluation Team is selecting and annotating text corpora with task domain concepts, creating model reasoning systems with which the reading systems will interact, and establishing question-answer sets and evaluation protocols to measure progress toward this goal. We describe development of the MR evaluation framework, including test protocols, linguistic resources, and technical infrastructure.