Mark Przybocki

Also published as: Mark A. Przybocki


2010

Document Image Collection Using Amazon’s Mechanical Turk
Audrey Le | Jerome Ajot | Mark Przybocki | Stephanie Strassel
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk

Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation
Chris Callison-Burch | Philipp Koehn | Christof Monz | Kay Peterson | Mark Przybocki | Omar Zaidan
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

2008

Translation Adequacy and Preference Evaluation Tool (TAP-ET)
Mark Przybocki | Kay Peterson | Sébastien Bronsart
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08)

Evaluation of Machine Translation (MT) technology is often tied to the requirement for tedious manual judgments of translation quality. While automated MT metrology continues to be an active area of research, a well-known and often-accepted standard metric is the manual human assessment of adequacy and fluency. There are several software packages that have been used to facilitate these judgments, but for the 2008 NIST Open MT Evaluation, NIST’s Speech Group created an online software tool to accommodate the requirement for centralized data and distributed judges. This paper introduces the NIST TAP-ET application and reviews the reasoning underlying its design. Where available, analysis of data sets judged for Adequacy and Preference using the TAP-ET application is presented. TAP-ET is freely available for download and contains a variety of customizable features.

Linguistic Resources and Evaluation Techniques for Evaluation of Cross-Document Automatic Content Extraction
Stephanie Strassel | Mark Przybocki | Kay Peterson | Zhiyi Song | Kazuaki Maeda
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08)

The NIST Automatic Content Extraction (ACE) Evaluation expands its focus in 2008 to encompass the challenge of cross-document and cross-language global integration and reconciliation of information. While past ACE evaluations have been limited to local (within-document) detection and disambiguation of entities, relations and events, the current evaluation adds global (cross-document and cross-language) entity disambiguation tasks for Arabic and English. This paper presents the 2008 ACE XDoc evaluation task and associated infrastructure. We describe the linguistic resources created by LDC to support the evaluation, focusing on new approaches required for data selection, data processing, annotation task definitions and annotation software, and we conclude with a discussion of the metrics developed by NIST to support the evaluation.

2006

Edit Distance: A Metric for Machine Translation Evaluation
Mark Przybocki | Gregory Sanders | Audrey Le
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

NIST has coordinated machine translation (MT) evaluations for several years using an automatic and repeatable evaluation measure. Under the Global Autonomous Language Exploitation (GALE) program, NIST is tasked with implementing an edit-distance-based evaluation of MT. Here “edit distance” is defined to be the number of modifications a human editor is required to make to a system translation such that the resulting edited translation contains the complete meaning in easily understandable English, as does a single high-quality human reference translation. In preparation for this change in evaluation paradigm, NIST conducted two proof-of-concept exercises specifically designed to probe the data space, to answer questions related to editor agreement, and to establish protocols for the formal GALE evaluations. We report here our experimental design, the data used, and our findings for these exercises.
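
The abstract above defines edit distance as a raw count of editor modifications. As an illustrative point of reference only (not NIST's GALE implementation, which relied on human post-editors rather than automatic alignment), the Python sketch below computes a standard word-level Levenshtein edit distance between a system translation and a single reference; the whitespace tokenization and function name are assumptions made for the example.

# Illustrative sketch: standard word-level edit distance (insertions,
# deletions, substitutions) via dynamic programming. This is NOT NIST's
# GALE protocol, which counted edits made by human post-editors.

def word_edit_distance(hypothesis: str, reference: str) -> int:
    """Minimum number of word insertions, deletions, and substitutions
    needed to turn `hypothesis` into `reference` (whitespace tokens)."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = edit distance between hyp[:i] and ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(hyp)][len(ref)]

if __name__ == "__main__":
    print(word_edit_distance("the cat sat on mat",
                             "the cat sat on the mat"))  # 1 (one insertion)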

The Mixer and Transcript Reading Corpora: Resources for Multilingual, Crosschannel Speaker Recognition Research
Christopher Cieri | Walt Andrews | Joseph P. Campbell | George Doddington | Jack Godfrey | Shudong Huang | Mark Liberman | Alvin Martin | Hirotaka Nakasone | Mark Przybocki | Kevin Walker
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper describes the planning and creation of the Mixer and Transcript Reading corpora, their properties and yields, and reports on the lessons learned during their development.

2004

The Automatic Content Extraction (ACE) Program – Tasks, Data, and Evaluation
George Doddington | Alexis Mitchell | Mark Przybocki | Lance Ramshaw | Stephanie Strassel | Ralph Weischedel
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

NIST Language Technology Evaluation Cookbook
Alvin F. Martin | John S. Garofolo | Jonathan C. Fiscus | Audrey N. Le | David S. Pallett | Mark A. Przybocki | Gregory A. Sanders
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Conversational Telephone Speech Corpus Collection for the NIST Speaker Recognition Evaluation 2004
Alvin Martin | David Miller | Mark Przybocki | Joseph Campbell | Hirotaka Nakasone
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2002

NIST Rich Transcription 2002 Evaluation: A Preview
John Garofolo | Jonathan G. Fiscus | Alvin Martin | David Pallett | Mark Przybocki
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2000

Design Issues in Text-Independent Speaker Recognition Evaluation
Alvin Martin | Mark Przybocki
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1994

1993 Benchmark Tests for the ARPA Spoken Language Program
David S. Pallett | Jonathan G. Fiscus | William M. Fisher | John S. Garofolo | Bruce A. Lund | Mark A. Przybocki
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994