Jon Phillips

Georgetown, MITRE

Also published as: John Phillips

Other people with similar names: John Phillips (Univ. of Manchester)


2008

Applying Automated Metrics to Speech Translation Dialogs
Sherri Condon | Jon Phillips | Christy Doran | John Aberdeen | Dan Parvaz | Beatrice Oshika | Greg Sanders | Craig Schlenoff
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Over the past five years, the Defense Advanced Research Projects Agency (DARPA) has funded the development of speech translation systems for tactical applications. A key component of the research program has been extensive system evaluation, with the dual objectives of assessing overall progress and comparing systems. This paper describes the methods used to obtain BLEU, TER, and METEOR scores for two-way English-Iraqi Arabic systems. We compare the scores with measures based on human judgments and demonstrate the effects of normalization operations on BLEU scores. Issues highlighted include the quality of test data and the differential results of applying automated metrics to Arabic vs. English.

Performance Evaluation of Speech Translation Systems
Brian Weiss | Craig Schlenoff | Greg Sanders | Michelle Steves | Sherri Condon | Jon Phillips | Dan Parvaz
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

One of the most challenging tasks for uniformed service personnel serving in foreign countries is effective verbal communication with the local population. To remedy this problem, several companies and academic institutions have been funded to develop machine translation systems as part of the DARPA TRANSTAC (Spoken Language Communication and Translation System for Tactical Use) program. The goal of this program is to demonstrate capabilities to rapidly develop and field free-form, two-way translation systems that would enable speakers of different languages to communicate with one another in real-world tactical situations. DARPA has mandated that each TRANSTAC technology be evaluated numerous times throughout the life of the program and has tasked the National Institute of Standards and Technology (NIST) to lead this effort. This paper describes the experimental design methodology and test procedures from the most recent evaluation, conducted in July 2007, which focused on English to/from Iraqi Arabic.

2005

Automating Temporal Annotation with TARSQI
Marc Verhagen | Inderjeet Mani | Roser Sauri | Jessica Littman | Robert Knippen | Seok B. Jang | Anna Rumshisky | John Phillips | James Pustejovsky
Proceedings of the ACL Interactive Poster and Demonstration Sessions