Jim White


2013

The Pedantic Javadoc Corpus: Comments and Code as Bitext
Jim White
Proceedings of the Workshop on Twenty Years of Bitext

2010

Evaluation of Document Citations in Phase 2 Gale Distillation
Olga Babko-Malaya | Dan Hunter | Connie Fournelle | Jim White
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The focus of information retrieval evaluations, such as NIST's TREC evaluations (e.g., Voorhees 2003), is on evaluating the information content of system responses. Retrieval tasks, however, usually involve two distinct dimensions: reporting relevant information and providing sources for that information, including corroborating evidence and alternative documents. Under the DARPA Global Autonomous Language Exploitation (GALE) program, Distillation provides succinct, direct responses to formatted queries using the outputs of automated transcription and translation technologies. These responses are evaluated along two dimensions: information content, which measures the amount of relevant and non-redundant information, and document support, which measures the number of alternative sources provided in support of reported information. The final metric in the overall GALE Distillation evaluation combines the scores for query responses and document citations. In this paper, we describe our evaluation framework, with emphasis on the scoring of document citations, and present an analysis of how well systems perform at providing sources of information.