Matthew Snover

Also published as: Matthew G. Snover


2011

Cross-lingual Slot Filling from Comparable Corpora
Matthew Snover | Xiang Li | Wen-Pin Lin | Zheng Chen | Suzanne Tamang | Mingmin Ge | Adam Lee | Qi Li | Hao Li | Sam Anzaroot | Heng Ji
Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web

Unsupervised Language-Independent Name Translation Mining from Wikipedia Infoboxes
Wen-Pin Lin | Matthew Snover | Heng Ji
Proceedings of the First workshop on Unsupervised Learning in NLP

2009

Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric
Matthew Snover | Nitin Madnani | Bonnie Dorr | Richard Schwartz
Proceedings of the Fourth Workshop on Statistical Machine Translation

2008

Language and Translation Model Adaptation using Comparable Corpora
Matthew Snover | Bonnie Dorr | Richard Schwartz
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2006

PCFGs with Syntactic and Prosodic Indicators of Speech Repairs
John Hale | Izhak Shafran | Lisa Yung | Bonnie J. Dorr | Mary Harper | Anna Krasnyanskaya | Matthew Lease | Yang Liu | Brian Roark | Matthew Snover | Robin Stewart
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

A Study of Translation Edit Rate with Targeted Human Annotation
Matthew Snover | Bonnie Dorr | Rich Schwartz | Linnea Micciulla | John Makhoul
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

We examine a new, intuitive measure for evaluating machine translation output that avoids the knowledge-intensiveness of more meaning-based approaches and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU—even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as—or better than—a second human judgment does.
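The core of TER as described in the abstract can be sketched as word-level edit distance normalized by reference length. This is a minimal illustration only: the full TER metric additionally allows block shifts of word sequences, which are omitted here, and the function name is ours, not from the paper.

```python
def ter_sketch(hypothesis: str, reference: str) -> float:
    """Simplified TER: insertions, deletions, and substitutions needed to
    turn the hypothesis into the reference, divided by reference length.
    (Full TER also permits phrase shifts, not modeled here.)"""
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    # d[i][j] = minimum edits to turn hyp[:i] into ref[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    return d[m][n] / n

# 1 substitution ("a" -> "the") over 3 reference words, i.e. 1/3
score = ter_sketch("a cat sat", "the cat sat")
```

A lower score means less post-editing effort; an exact match scores 0.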

SParseval: Evaluation Metrics for Parsing Speech
Brian Roark | Mary Harper | Eugene Charniak | Bonnie Dorr | Mark Johnson | Jeremy Kahn | Yang Liu | Mari Ostendorf | John Hale | Anna Krasnyanskaya | Matthew Lease | Izhak Shafran | Matthew Snover | Robin Stewart | Lisa Yung
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

While both spoken and written language processing stand to benefit from parsing, the standard Parseval metrics (Black et al., 1991) and their canonical implementation (Sekine and Collins, 1997) are only useful for text. The Parseval metrics are undefined when the words input to the parser do not match the words in the gold standard parse tree exactly, and word errors are unavoidable with automatic speech recognition (ASR) systems. To fill this gap, we have developed a publicly available tool for scoring parses that implements a variety of metrics that can handle mismatches in words and segmentations, including: alignment-based bracket evaluation, alignment-based dependency evaluation, and a dependency evaluation that does not require alignment. We describe the different metrics, how to use the tool, and the outcome of an extensive set of experiments exploring the sensitivity of these metrics.
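The standard Parseval bracket scoring that the abstract contrasts with can be sketched, for the case where parser and gold words match exactly, as multiset overlap over labeled spans. This sketch is ours, not from the SParseval tool; the span tuples and function name are illustrative.

```python
from collections import Counter

def bracket_prf(gold_spans, test_spans):
    """Parseval-style labeled bracket precision, recall, and F1.
    Each span is a (label, start, end) tuple over a shared word sequence;
    this is only defined when the two word sequences are identical."""
    gold, test = Counter(gold_spans), Counter(test_spans)
    matched = sum((gold & test).values())  # brackets present in both
    p = matched / sum(test.values()) if test else 0.0
    r = matched / sum(gold.values()) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = [("NP", 0, 2), ("VP", 2, 4), ("S", 0, 4)]
test = [("NP", 0, 2), ("S", 0, 4), ("NP", 2, 4)]  # mislabeled VP as NP
p, r, f = bracket_prf(gold, test)
```

When ASR inserts or deletes words, the spans no longer index a shared word sequence, which is exactly the failure mode the alignment-based SParseval metrics address.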

2004

A Lexically-Driven Algorithm for Disfluency Detection
Matthew Snover | Bonnie Dorr | Richard Schwartz
Proceedings of HLT-NAACL 2004: Short Papers

2002

Unsupervised Learning of Morphology Using a Novel Directed Search Algorithm: Taking the First Step
Matthew G. Snover | Gaja E. Jarosz | Michael R. Brent
Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning

2001

A Bayesian Model For Morpheme and Paradigm Identification
Matthew G. Snover | Michael R. Brent
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics