2024
To Diverge or Not to Diverge: A Morphosyntactic Perspective on Machine Translation vs Human Translation
Jiaming Luo | Colin Cherry | George Foster
Transactions of the Association for Computational Linguistics, Volume 12
We conduct a large-scale fine-grained comparative analysis of machine translations (MTs) against human translations (HTs) through the lens of morphosyntactic divergence. Across three language pairs and two types of divergence, defined as structural differences between the source and the target, MT is consistently more conservative than HT, with less morphosyntactic diversity, more convergent patterns, and more one-to-one alignments. Through analysis of different decoding algorithms, we attribute this discrepancy to the use of beam search, which biases MT towards more convergent patterns. This bias is most amplified when the convergent pattern appears around 50% of the time in training data. Lastly, we show that for a majority of morphosyntactic divergences, their presence in HT is correlated with decreased MT performance, presenting a greater challenge for MT systems.
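Not part of the paper, but a minimal sketch of the beam-search convergence effect the abstract describes: decode the same input under beam search versus sampling and compare output diversity. The checkpoint name and the entropy proxy are assumptions for illustration.

```python
import math
from collections import Counter
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def pattern_entropy(outputs):
    """Shannon entropy over distinct output strings: a crude diversity proxy."""
    counts = Counter(outputs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

name = "Helsinki-NLP/opus-mt-en-de"  # assumed checkpoint; any MT model would do
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

batch = tok(["The committee has approved the proposal."] * 20,
            return_tensors="pt", padding=True)
beam = model.generate(**batch, num_beams=4)                  # deterministic
sample = model.generate(**batch, do_sample=True, top_p=0.9)  # stochastic

print("beam:  ", pattern_entropy(tok.batch_decode(beam, skip_special_tokens=True)))
print("sample:", pattern_entropy(tok.batch_decode(sample, skip_special_tokens=True)))
```

Beam search maps identical inputs to identical outputs (entropy 0 here), while sampling surfaces the divergent alternatives the model also supports.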
Barriers to Effective Evaluation of Simultaneous Interpretation
Shira Wein | Te I | Colin Cherry | Juraj Juraska | Dirk Padfield | Wolfgang Macherey
Findings of the Association for Computational Linguistics: EACL 2024
Simultaneous interpretation is an especially challenging form of translation because it requires converting speech from one language to another in real-time. Though prior work has relied on out-of-the-box machine translation metrics to evaluate interpretation data, we hypothesize that strategies common in high-quality human interpretations, such as summarization, may not be handled well by standard machine translation metrics. In this work, we examine both qualitatively and quantitatively four potential barriers to evaluation of interpretation: disfluency, summarization, paraphrasing, and segmentation. Our experiments reveal that, while some machine translation metrics correlate fairly well with human judgments of interpretation quality, much work is still needed to account for strategies of interpretation during evaluation. As a first step to address this, we develop a fine-tuned model for interpretation evaluation, and achieve better correlation with human judgments than the state-of-the-art machine translation metrics.
Translating Step-by-Step: Decomposing the Translation Process for Improved Translation Quality of Long-Form Texts
Eleftheria Briakou | Jiaming Luo | Colin Cherry | Markus Freitag
Proceedings of the Ninth Conference on Machine Translation
In this paper, we present a step-by-step approach to long-form text translation, drawing on established processes in translation studies. Instead of viewing machine translation as a single, monolithic task, we propose a framework that engages language models in a multi-turn interaction, encompassing pre-translation research, drafting, refining, and proofreading, resulting in progressively improved translations. Extensive automatic evaluations using Gemini 1.5 Pro across ten language pairs show that translating step-by-step yields large translation quality improvements over conventional zero-shot prompting approaches and earlier human-like baseline strategies, resulting in state-of-the-art results on WMT 2024.
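As a rough illustration of the multi-turn framing (not the paper's exact prompts), the sketch below chains research, drafting, refining, and proofreading turns; `call_llm` is a hypothetical stand-in for any chat-style LLM client.

```python
def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-style LLM client here")

def translate_step_by_step(text: str, src: str, tgt: str) -> str:
    history = [{"role": "user", "content":
                f"Before translating this {src} text to {tgt}, note named "
                f"entities, idioms, and terminology:\n\n{text}"}]
    for instruction in ("Now produce a first draft translation.",
                        "Refine the draft for accuracy and fluency.",
                        "Proofread and return only the final translation."):
        history.append({"role": "assistant", "content": call_llm(history)})
        history.append({"role": "user", "content": instruction})
    return call_llm(history)
```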
Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model
Christian Tomani | David Vilar | Markus Freitag | Colin Cherry | Subhajit Naskar | Mara Finkelstein | Xavier Garcia | Daniel Cremers
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Maximum-a-posteriori (MAP) decoding is the most widely used decoding strategy for neural machine translation (NMT) models. The underlying assumption is that model probability correlates well with human judgment, with better translations getting assigned a higher score by the model. However, research has shown that this assumption does not always hold, and generation quality can be improved by decoding to optimize a utility function backed by a metric or quality-estimation signal, as is done by Minimum Bayes Risk (MBR) or quality-aware decoding. The main disadvantage of these approaches is that they require an additional model to calculate the utility function during decoding, significantly increasing the computational cost. In this paper, we propose to make the NMT models themselves quality-aware by training them to estimate the quality of their own output. Using this approach for MBR decoding, we can drastically reduce the size of the candidate list, resulting in a speed-up of two orders of magnitude. When applying our method to MAP decoding, we obtain quality gains similar to or even superior to quality-reranking approaches, but with the efficiency of single-pass decoding.
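For readers unfamiliar with MBR, here is a minimal reference implementation of the candidate-list decoding the abstract refers to; the utility function is a placeholder (e.g., a sentence-level metric), and nothing here is specific to the paper's quality-aware model.

```python
def mbr_decode(candidates: list[str], utility) -> str:
    """Return the candidate with the highest expected utility, treating the
    other candidates as pseudo-references drawn from the model."""
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best
```

The quadratic cost in the candidate-list size is exactly what makes shrinking that list, as the paper's self-estimated quality scores allow, pay off.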
2023
Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM’s Translation Capability
Eleftheria Briakou | Colin Cherry | George Foster
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large, multilingual language models exhibit surprisingly good zero- or few-shot machine translation capabilities, despite having never seen the intentionally-included translation examples provided to typical neural translation systems. We investigate the role of incidental bilingualism—the unintentional consumption of bilingual signals, including translation examples—in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method approach to measure and understand incidental bilingualism at scale. We show that PaLM is exposed to over 30 million translation pairs across at least 44 languages. Furthermore, the amount of incidental bilingual content is highly correlated with the amount of monolingual in-language content for non-English languages. We relate incidental bilingual content to zero-shot prompts and show that it can be used to mine new prompts to improve PaLM’s out-of-English zero-shot translation quality. Finally, in a series of small-scale ablations, we show that its presence has a substantial impact on translation capabilities, although this impact diminishes with model scale.
Prompting PaLM for Translation: Assessing Strategies and Performance
David Vilar | Markus Freitag | Colin Cherry | Jiaming Luo | Viresh Ratnakar | George Foster
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the Pathways Language Model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM’s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM’s MT output which reveals some interesting properties and prospects for future work.
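A hypothetical illustration of the few-shot prompt format such studies use; the template and the single example pair are assumptions, not the paper's exact setup.

```python
def build_prompt(examples, source, src_lang="English", tgt_lang="German"):
    """Assemble a few-shot translation prompt from (source, target) pairs."""
    blocks = [f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in examples]
    blocks.append(f"{src_lang}: {source}\n{tgt_lang}:")  # model completes this
    return "\n\n".join(blocks)

print(build_prompt(
    examples=[("The weather is nice today.", "Das Wetter ist heute schön.")],
    source="Where is the train station?",
))
```

Per the paper's finding, what matters most is that the example pairs be high quality; how they are selected matters less.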
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages
Sebastian Ruder | Jonathan Clark | Alexander Gutkin | Mihir Kale | Min Ma | Massimo Nicosia | Shruti Rijhwani | Parker Riley | Jean-Michel Sarr | Xinyi Wang | John Wieting | Nitish Gupta | Anna Katanova | Christo Kirov | Dana Dickinson | Brian Roark | Bidisha Samanta | Connie Tao | David Adelani | Vera Axelrod | Isaac Caswell | Colin Cherry | Dan Garrette | Reeve Ingle | Melvin Johnson | Dmitry Panteleev | Partha Talukdar
Findings of the Association for Computational Linguistics: EMNLP 2023
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) — languages for which NLP research is particularly far behind in meeting user needs — it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks — tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.
2022
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Colin Cherry | Angela Fan | George Foster | Gholamreza (Reza) Haffari | Shahram Khadivi | Nanyun (Violet) Peng | Xiang Ren | Ehsan Shareghi | Swabha Swayamdipta
A Natural Diet: Towards Improving Naturalness of Machine Translation Output
Markus Freitag | David Vilar | David Grangier | Colin Cherry | George Foster
Findings of the Association for Computational Linguistics: ACL 2022
Machine translation (MT) evaluation often focuses on accuracy and fluency, without paying much attention to translation style. This means that, even when considered accurate and fluent, MT output can still sound less natural than high-quality human translations or text originally written in the target language. Machine translation output notably exhibits lower lexical diversity, and employs constructs that mirror those in the source sentence. In this work we propose a method for training MT systems to achieve a more natural style, i.e., mirroring the style of text originally written in the target language. Our method tags parallel training data according to the naturalness of the target side by contrasting language models trained on natural and translated data. Tagging data allows us to put greater emphasis on target sentences originally written in the target language. Automatic metrics show that the resulting models achieve lexical richness on par with human translations, mimicking a style much closer to sentences originally written in the target language. Furthermore, we find that their output is preferred by human experts when compared to the baseline translations.
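A minimal sketch of the tagging scheme described above, assuming you already have two language models (one trained on natural target-language text, one on translated text) exposed as log-probability functions; all names are illustrative.

```python
def tag_pair(src: str, tgt: str, natural_lm, translated_lm):
    """Prepend a style tag to the source based on which LM prefers the target."""
    tag = "<natural>" if natural_lm(tgt) > translated_lm(tgt) else "<translated>"
    return f"{tag} {src}", tgt

def tag_corpus(pairs, natural_lm, translated_lm):
    for src, tgt in pairs:
        yield tag_pair(src, tgt, natural_lm, translated_lm)
```

At inference time, prompting the trained model with the `<natural>` tag requests the more natural output style.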
Exploring the Benefits and Limitations of Multilinguality for Non-autoregressive Machine Translation
Sweta Agrawal | Julia Kreutzer | Colin Cherry
Proceedings of the Seventh Conference on Machine Translation (WMT)
Non-autoregressive (NAR) machine translation has recently seen significant development and now achieves quality comparable to autoregressive (AR) models on some benchmarks, while providing an efficient alternative to AR inference. However, while AR translation is often used to implement multilingual models that benefit from transfer between languages and from improved serving efficiency, multilingual NAR models remain relatively unexplored. Taking Connectionist Temporal Classification as an example NAR model and IMPUTER as a semi-NAR model, we present a comprehensive empirical study of multilingual NAR. We test its capabilities with respect to positive transfer between related languages and negative transfer under capacity constraints. As NAR models require distilled training sets, we carefully study the impact of bilingual versus multilingual teachers. Finally, we fit a scaling law for multilingual NAR, which quantifies its performance relative to the AR model as the model scale increases, to determine capacity bottlenecks.
2021
Assessing Reference-Free Peer Evaluation for Machine Translation
Sweta Agrawal | George Foster | Markus Freitag | Colin Cherry
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains. It has recently been shown that the probabilities given by a large, multilingual model can achieve state-of-the-art results when used as a reference-free metric. We experiment with various modifications to this model, and demonstrate that by scaling it up we can match the performance of BLEU. We analyze various potential weaknesses of the approach, and find that it is surprisingly robust and likely to offer reasonable performance across a broad spectrum of domains and different system qualities.
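The scoring mechanics can be sketched as force-decoding the system output with a translation model and reading off its log-probability; the checkpoint below is a small stand-in for illustration, not the large multilingual model the paper scales up.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Helsinki-NLP/opus-mt-en-de"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

def score(source: str, hypothesis: str) -> float:
    """Mean per-token log-probability of `hypothesis` force-decoded from `source`."""
    enc = tok(source, return_tensors="pt")
    labels = tok(text_target=hypothesis, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean negative log-likelihood
    return -loss.item()

print(score("The cat sat on the mat.", "Die Katze saß auf der Matte."))
```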
Inverted Projection for Robust Speech Translation
Dirk Padfield | Colin Cherry
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)
Traditional translation systems trained on written documents perform well for text-based translation but not as well for speech-based applications. We aim to adapt translation models to speech by introducing actual lexical errors from ASR and segmentation errors from automatic punctuation into our translation training data. We introduce an inverted projection approach that projects automatically detected system segments onto human transcripts and then re-segments the gold translations to align with the projected human transcripts. We demonstrate that this overcomes the train-test mismatch present in other training approaches. The new projection approach achieves gains of over 1 BLEU point over a baseline that is exposed to the human transcripts and segmentations, and these gains hold for both IWSLT data and YouTube data.
Proceedings of the Second Workshop on Automatic Simultaneous Translation
Hua Wu | Colin Cherry | Liang Huang | Zhongjun He | Qun Liu | Maha Elbayad | Mark Liberman | Haifeng Wang | Mingbo Ma | Ruiqing Zhang
2020
Human-Paraphrased References Improve Neural Machine Translation
Markus Freitag | George Foster | David Grangier | Colin Cherry
Proceedings of the Fifth Conference on Machine Translation
Automatic evaluation comparing candidate translations to human-generated paraphrases of reference translations has recently been proposed by Freitag et al. (2020). When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment. This effect holds for a variety of different automatic metrics, and tends to favor natural formulations over more literal (translationese) ones. In this paper we compare the results of performing end-to-end system development using standard and paraphrased references. With state-of-the-art English-German NMT components, we show that tuning to paraphrased references produces a system that is significantly better according to human judgment, but 5 BLEU points worse when tested on standard references. Our work confirms the finding that paraphrased references yield metric scores that correlate better with human judgment, and demonstrates for the first time that using these scores for system development can lead to significant improvements.
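The evaluation contrast is easy to reproduce with sacrebleu: score the same system output against standard references and against their human paraphrases. The example strings are invented; `corpus_bleu` is sacrebleu's real API.

```python
import sacrebleu

hyps = ["Der Ausschuss hat den Vorschlag gebilligt."]
standard_refs = ["Der Ausschuss hat den Vorschlag angenommen."]
paraphrased_refs = ["Das Komitee stimmte dem Vorschlag zu."]

print("standard:   ", sacrebleu.corpus_bleu(hyps, [standard_refs]).score)
print("paraphrased:", sacrebleu.corpus_bleu(hyps, [paraphrased_refs]).score)
```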
Proceedings of the First Workshop on Automatic Simultaneous Translation
Hua Wu | Colin Cherry | Liang Huang | Zhongjun He | Mark Liberman | James Cross | Yang Liu
Re-translation versus Streaming for Simultaneous Translation
Naveen Arivazhagan | Colin Cherry | Wolfgang Macherey | George Foster
Proceedings of the 17th International Conference on Spoken Language Translation
There has been great progress in improving streaming machine translation, a simultaneous paradigm where the system appends to a growing hypothesis as more source content becomes available. We study a related problem in which revisions to the hypothesis beyond strictly appending words are permitted. This is suitable for applications such as live captioning an audio feed. In this setting, we compare custom streaming approaches to re-translation, a straightforward strategy where each new source token triggers a distinct translation from scratch. We find re-translation to be as good or better than state-of-the-art streaming systems, even when operating under constraints that allow very few revisions. We attribute much of this success to a previously proposed data-augmentation technique that adds prefix-pairs to the training data, which alongside wait-k inference forms a strong baseline for streaming translation. We also highlight re-translation’s ability to wrap arbitrarily powerful MT systems with an experiment showing large improvements from an upgrade to its base model.
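A minimal sketch of the re-translation loop described above; `translate` is a placeholder for any MT system, which is precisely the point: the strategy wraps arbitrary models.

```python
def translate(source_prefix: str) -> str:
    raise NotImplementedError("wrap any MT system here")

def retranslate_stream(source_tokens):
    """Re-translate the full source prefix each time a new token arrives."""
    prefix = []
    for token in source_tokens:            # e.g., streaming ASR output
        prefix.append(token)
        yield translate(" ".join(prefix))  # caller may revise the displayed caption
```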
Inference Strategies for Machine Translation with Conditional Masking
Julia Kreutzer | George Foster | Colin Cherry
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Conditional masked language model (CMLM) training has proven successful for non-autoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard “mask-predict” algorithm, and provide analyses of its behavior on machine translation tasks.
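A sketch of the thresholding strategy the paper identifies, as a variant of mask-predict: keep re-masking positions whose confidence falls below a threshold instead of a fixed per-iteration count. `cmlm` is a placeholder returning per-position token probabilities.

```python
import numpy as np

MASK_ID = 0  # assumed mask token id

def threshold_decode(cmlm, source, length, tau=0.9, max_iters=10):
    tokens = np.full(length, MASK_ID)        # start fully masked
    for _ in range(max_iters):
        probs = cmlm(source, tokens)         # shape: (length, vocab_size)
        tokens = probs.argmax(axis=-1)
        low = probs.max(axis=-1) < tau       # unconfident positions
        if not low.any():
            break
        tokens[low] = MASK_ID                # re-mask and predict again
    return tokens
```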
Simultaneous Translation
Liang Huang | Colin Cherry | Mingbo Ma | Naveen Arivazhagan | Zhongjun He
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Simultaneous translation, which performs translation concurrently with the source speech, is widely useful in many scenarios such as international conferences, negotiations, press releases, legal proceedings, and medicine. This problem has long been considered one of the hardest problems in AI and one of its holy grails. Recently, with rapid improvements in machine translation, speech recognition, and speech synthesis, there has been exciting progress towards simultaneous translation. This tutorial will focus on the design and evaluation of policies for simultaneous translation, to leave attendees with a deep technical understanding of the history, the recent advances, and the remaining challenges in this field.
2019
Monotonic Infinite Lookback Attention for Simultaneous Machine Translation
Naveen Arivazhagan | Colin Cherry | Wolfgang Macherey | Chung-Cheng Chiu | Semih Yavuz | Ruoming Pang | Wei Li | Colin Raffel
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.
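For contrast with MILk's learned schedule, here is the fixed wait-k read/write policy it is compared against, in schematic form; `decode_next` is a placeholder that emits one target token given the source read so far.

```python
def wait_k_policy(source_tokens, k, decode_next):
    """Read k source tokens, then alternate one write with one read."""
    read, written = min(k, len(source_tokens)), []
    while True:
        token = decode_next(source_tokens[:read], written)
        if token == "</s>":
            return written
        written.append(token)
        read = min(read + 1, len(source_tokens))
```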
Reinforcement Learning based Curriculum Optimization for Neural Machine Translation
Gaurav Kumar | George Foster | Colin Cherry | Maxim Krikun
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
We consider the problem of making efficient use of heterogeneous training data in neural machine translation (NMT). Specifically, given a training dataset with a sentence-level feature such as noise, we seek an optimal curriculum, or order for presenting examples to the system during training. Our curriculum framework allows examples to appear an arbitrary number of times, and thus generalizes data weighting, filtering, and fine-tuning schemes. Rather than relying on prior knowledge to design a curriculum, we use reinforcement learning to learn one automatically, jointly with the NMT system, in the course of a single training run. We show that this approach can beat uniform baselines on Paracrawl and WMT English-to-French datasets by +3.4 and +1.3 BLEU respectively. Additionally, we match the performance of strong filtering baselines and hand-designed, state-of-the-art curricula.
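The idea can be caricatured as a multi-armed bandit over data buckets, rewarded by the change in a dev-set score after each step; the paper's actual RL formulation differs in its details, so treat this purely as a sketch.

```python
import random

def bandit_curriculum(buckets, train_step, dev_score, steps=1000, eps=0.1, lr=0.1):
    """buckets: list of example lists, e.g., grouped by a noise feature."""
    q = [0.0] * len(buckets)                    # value estimate per bucket
    prev = dev_score()
    for _ in range(steps):
        arm = (random.randrange(len(buckets)) if random.random() < eps
               else max(range(len(buckets)), key=q.__getitem__))
        train_step(random.choice(buckets[arm]))
        cur = dev_score()
        q[arm] += lr * ((cur - prev) - q[arm])  # reward = score improvement
        prev = cur
    return q
```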
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
Colin Cherry | Greg Durrett | George Foster | Reza Haffari | Shahram Khadivi | Nanyun Peng | Xiang Ren | Swabha Swayamdipta
2018
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)
Colin Cherry | Graham Neubig
Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP
Reza Haffari | Colin Cherry | George Foster | Shahram Khadivi | Bahar Salehi
Revisiting Character-Based Neural Machine Translation with Capacity and Compression
Colin Cherry | George Foster | Ankur Bapna | Orhan Firat | Wolfgang Macherey
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Translating characters instead of words or word-fragments has the potential to simplify the processing pipeline for neural machine translation (NMT), and improve results by eliminating hyper-parameters and manual feature engineering. However, it results in longer sequences in which each symbol contains less information, creating both modeling and computational challenges. In this paper, we show that the modeling problem can be solved by standard sequence-to-sequence architectures of sufficient depth, and that deep models operating at the character level outperform identical models operating over word fragments. This result implies that alternative architectures for handling character input are better viewed as methods for reducing computation time than as improved ways of modeling longer sequences. From this perspective, we evaluate several techniques for character-level NMT, verify that they do not match the performance of our deep character baseline model, and evaluate the performance versus computation time tradeoffs they offer. Within this framework, we also perform the first evaluation for NMT of conditional computation over time, in which the model learns which timesteps can be skipped, rather than having them be dictated by a fixed schedule specified before training begins.
2017
A Challenge Set Approach to Evaluating Machine Translation
Pierre Isabelle | Colin Cherry | George Foster
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Neural machine translation represents an exciting leap forward in translation quality. But what longstanding weaknesses does it resolve, and which remain? We address these questions with a challenge set approach to translation evaluation and error analysis. A challenge set consists of a small set of sentences, each hand-designed to probe a system’s capacity to bridge a particular structural divergence between languages. To exemplify this approach, we present an English-French challenge set, and use it to analyze phrase-based and neural systems. The resulting analysis provides not only a more fine-grained picture of the strengths of neural systems, but also insight into which linguistic phenomena remain out of reach.
Cost Weighting for Neural Machine Translation Domain Adaptation
Boxing Chen | Colin Cherry | George Foster | Samuel Larkin
Proceedings of the First Workshop on Neural Machine Translation
In this paper, we propose a new domain adaptation technique for neural machine translation called cost weighting, which is appropriate for adaptation scenarios in which a small in-domain data set and a large general-domain data set are available. Cost weighting incorporates a domain classifier into the neural machine translation training algorithm, using features derived from the encoder representation in order to distinguish in-domain from out-of-domain data. Classifier probabilities are used to weight sentences according to their domain similarity when updating the parameters of the neural translation model. We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting. Experiments on two large-data tasks show that both the traditional techniques and our novel proposal lead to significant gains, with cost weighting outperforming the traditional methods.
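The weighting step itself is simple; assuming per-sentence NMT losses and a domain classifier's in-domain probabilities as tensors, a sketch of the weighted objective looks like this (shapes and names are illustrative).

```python
import torch

def cost_weighted_loss(token_nll: torch.Tensor, in_domain_prob: torch.Tensor):
    """
    token_nll:      (batch, tgt_len) per-token negative log-likelihoods
    in_domain_prob: (batch,) classifier probability that each pair is in-domain
    """
    sentence_nll = token_nll.sum(dim=1)
    return (in_domain_prob * sentence_nll).mean()  # down-weights out-of-domain pairs
```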
NRC Machine Translation System for WMT 2017
Chi-kiu Lo | Boxing Chen | Colin Cherry | George Foster | Samuel Larkin | Darlene Stewart | Roland Kuhn
Proceedings of the Second Conference on Machine Translation
2016
SemEval-2016 Task 6: Detecting Stance in Tweets
Saif Mohammad | Svetlana Kiritchenko | Parinaz Sobhani | Xiaodan Zhu | Colin Cherry
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
Bilingual Methods for Adaptive Training Data Selection for Machine Translation
Boxing Chen | Roland Kuhn | George Foster | Colin Cherry | Fei Huang
Conferences of the Association for Machine Translation in the Americas: MT Researchers' Track
In this paper, we propose a new data selection method which uses semi-supervised convolutional neural networks based on bitokens (Bi-SSCNNs) for training machine translation systems from a large bilingual corpus. In earlier work, we devised a data selection method based on semi-supervised convolutional neural networks (SSCNNs). The new method, Bi-SSCNN, is based on bitokens, which use bilingual information. When the new methods are tested on two translation tasks (Chinese-to-English and Arabic-to-English), they significantly outperform the other three data selection methods in the experiments. We also show that the Bi-SSCNN method is much more effective than other methods in preventing noisy sentence pairs from being chosen for training. More interestingly, this method only needs a tiny amount of in-domain data to train the selection model, which makes fine-grained topic-dependent translation adaptation possible. In follow-up experiments, we find that neural machine translation (NMT) is more sensitive to noisy data than statistical machine translation (SMT). Therefore, Bi-SSCNN, which can effectively screen out noisy sentence pairs, can benefit NMT much more than SMT. We observed a BLEU improvement of over 3 points on an English-to-French WMT task when Bi-SSCNNs were used.
An Empirical Evaluation of Noise Contrastive Estimation for the Neural Network Joint Model of Translation
Colin Cherry
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Integrating Morphological Desegmentation into Phrase-based Decoding
Mohammad Salameh | Colin Cherry | Grzegorz Kondrak
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
NRC Russian-English Machine Translation System for WMT 2016
Chi-kiu Lo | Colin Cherry | George Foster | Darlene Stewart | Rabib Islam | Anna Kazantseva | Roland Kuhn
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers
A Dataset for Detecting Stance in Tweets
Saif Mohammad | Svetlana Kiritchenko | Parinaz Sobhani | Xiaodan Zhu | Colin Cherry
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
We can often detect from a person’s utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest, i.e., their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.
2015
The Unreasonable Effectiveness of Word Representations for Twitter Named Entity Recognition
Colin Cherry | Hongyu Guo
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Inflection Generation as Discriminative String Transduction
Garrett Nicolai | Colin Cherry | Grzegorz Kondrak
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
What Matters Most in Morphologically Segmented SMT Models?
Mohammad Salameh | Colin Cherry | Grzegorz Kondrak
Proceedings of the Ninth Workshop on Syntax, Semantics and Structure in Statistical Translation
Morpho-syntactic Regularities in Continuous Word Representations: A multilingual study.
Garrett Nicolai | Colin Cherry | Grzegorz Kondrak
Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing
NRC: Infused Phrase Vectors for Named Entity Recognition in Twitter
Colin Cherry | Hongyu Guo | Chengbi Dai
Proceedings of the Workshop on Noisy User-generated Text
2014
NRC-Canada-2014: Detecting Aspects and Sentiment in Customer Reviews
Svetlana Kiritchenko | Xiaodan Zhu | Colin Cherry | Saif Mohammad
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)
A Systematic Comparison of Smoothing Techniques for Sentence-Level BLEU
Boxing Chen | Colin Cherry
Proceedings of the Ninth Workshop on Statistical Machine Translation
Lattice Desegmentation for Statistical Machine Translation
Mohammad Salameh | Colin Cherry | Grzegorz Kondrak
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2013
Regularized Minimum Error Rate Training
Michel Galley | Chris Quirk | Colin Cherry | Kristina Toutanova
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
Improved Reordering for Phrase-Based Translation using Sparse Features
Colin Cherry
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Reversing Morphological Tokenization in English-to-Arabic SMT
Mohammad Salameh | Colin Cherry | Grzegorz Kondrak
Proceedings of the 2013 NAACL HLT Student Research Workshop
2012
Batch Tuning Strategies for Statistical Machine Translation
Colin Cherry | George Foster
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
MSR SPLAT, a language analysis toolkit
Chris Quirk | Pallavi Choudhury | Jianfeng Gao | Hisami Suzuki | Kristina Toutanova | Michael Gamon | Wen-tau Yih | Colin Cherry | Lucy Vanderwende
Proceedings of the Demonstration Session at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
On Hierarchical Re-ordering and Permutation Parsing for Phrase-based Decoding
Colin Cherry | Robert C. Moore | Chris Quirk
Proceedings of the Seventh Workshop on Statistical Machine Translation
Paraphrasing for Style
Wei Xu | Alan Ritter | Bill Dolan | Ralph Grishman | Colin Cherry
Proceedings of COLING 2012
2011
Lexically-Triggered Hidden Markov Models for Clinical Document Coding
Svetlana Kiritchenko | Colin Cherry
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
Joint Training of Dependency Parsing Filters through Latent Support Vector Machines
Colin Cherry | Shane Bergsma
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
Indexing Spoken Documents with Hierarchical Semantic Structures: Semantic Tree-to-string Alignment Models
Xiaodan Zhu | Colin Cherry | Gerald Penn
Proceedings of 5th International Joint Conference on Natural Language Processing
Data-Driven Response Generation in Social Media
Alan Ritter | Colin Cherry | William B. Dolan
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing
2010
Unsupervised Modeling of Twitter Conversations
Alan Ritter | Colin Cherry | Bill Dolan
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Integrating Joint n-gram Features into a Discriminative Training Framework
Sittichai Jiampojamarn | Colin Cherry | Grzegorz Kondrak
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Book Review: Statistical Machine Translation by Philipp Koehn
Colin Cherry
Computational Linguistics, Volume 36, Issue 4 - December 2010
Fast and Accurate Arc Filtering for Dependency Parsing
Shane Bergsma | Colin Cherry
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
Imposing Hierarchical Browsing Structures onto Spoken Documents
Xiaodan Zhu | Colin Cherry | Gerald Penn
Coling 2010: Posters
2009
Unsupervised Morphological Segmentation with Log-Linear Models
Hoifung Poon | Colin Cherry | Kristina Toutanova
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
On the Syllabification of Phonemes
Susan Bartlett | Grzegorz Kondrak | Colin Cherry
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Cohesive Constraints in A Beam Search Phrase-based Decoder
Nguyen Bach | Stephan Vogel | Colin Cherry
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
Discriminative Substring Decoding for Transliteration
Colin Cherry | Hisami Suzuki
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing
NEWS 2009 Machine Transliteration Shared Task System Description: Transliteration with Letter-to-Phoneme Technology
Colin Cherry | Hisami Suzuki
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)
A global model for joint lemmatization and part-of-speech prediction
Kristina Toutanova | Colin Cherry
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP
2008
Cohesive Phrase-Based Decoding for Statistical Machine Translation
Colin Cherry
Proceedings of ACL-08: HLT
Automatic Syllabification with Structured SVMs for Letter-to-Phoneme Conversion
Susan Bartlett | Grzegorz Kondrak | Colin Cherry
Proceedings of ACL-08: HLT
Joint Processing and Discriminative Training for Letter-to-Phoneme Conversion
Sittichai Jiampojamarn | Colin Cherry | Grzegorz Kondrak
Proceedings of ACL-08: HLT
Discriminative, Syntactic Language Modeling through Latent SVMs
Colin Cherry | Chris Quirk
Proceedings of the 8th Conference of the Association for Machine Translation in the Americas: Research Papers
We construct a discriminative, syntactic language model (LM) by using a latent support vector machine (SVM) to train an unlexicalized parser to judge sentences. That is, the parser is optimized so that correct sentences receive high-scoring trees, while incorrect sentences do not. Because of this alternative objective, the parser can be trained with only a part-of-speech dictionary and binary-labeled sentences. We follow the paradigm of discriminative language modeling with pseudo-negative examples (Okanohara and Tsujii, 2007), and demonstrate significant improvements in distinguishing real sentences from pseudo-negatives. We also investigate the related task of separating machine-translation (MT) outputs from reference translations, again showing large improvements. Finally, we test our LM in MT reranking, and investigate the language-modeling parser in the context of unsupervised parsing.
2007
Inversion Transduction Grammar for Joint Phrasal Translation Modeling
Colin Cherry | Dekang Lin
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation
2006
Soft Syntactic Constraints for Word Alignment through Discriminative Training
Colin Cherry | Dekang Lin
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions
A Comparison of Syntactically Motivated Word Alignment Spaces
Colin Cherry | Dekang Lin
11th Conference of the European Chapter of the Association for Computational Linguistics
Improved Large Margin Dependency Parsing via Local Constraints and Laplacian Regularization
Qin Iris Wang | Colin Cherry | Dan Lizotte | Dale Schuurmans
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)
Biomedical Term Recognition with the Perceptron HMM Algorithm
Sittichai Jiampojamarn | Grzegorz Kondrak | Colin Cherry
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology
2005
An Expectation Maximization Approach to Pronoun Resolution
Colin Cherry | Shane Bergsma
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)
Dependency Treelet Translation: Syntactically Informed Phrasal SMT
Chris Quirk | Arul Menezes | Colin Cherry
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)
2003
Word Alignment with Cohesion Constraint
Dekang Lin | Colin Cherry
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers
A Probability Model to Improve Word Alignment
Colin Cherry | Dekang Lin
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics
ProAlign: Shared Task System Description
Dekang Lin | Colin Cherry
Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond