Bill Byrne

University of Cambridge

Also published as: W. Byrne, William Byrne, William J. Byrne

Other people with similar names: Bill Byrne (UCSD Ph.D.; https://www.linkedin.com/in/billb/)


2024

Retrieving Contextual Information for Long-Form Question Answering using Weak Supervision
Philipp Christmann | Svitlana Vakulenko | Ionut Teodor Sorodoc | Bill Byrne | Adrià de Gispert
Findings of the Association for Computational Linguistics: EMNLP 2024

Long-form question answering (LFQA) aims at generating in-depth answers to end-user questions, providing relevant information beyond the direct answer. However, existing retrievers are typically optimized towards information that directly targets the question, missing out on such contextual information. Furthermore, there is a lack of training data for relevant context. To this end, we propose and compare different weak supervision techniques to optimize retrieval for contextual information. Experiments demonstrate improvements in end-to-end QA performance on ASQA, a dataset for long-form question answering. Importantly, as more contextual information is retrieved, we improve the relevant page recall for LFQA by 14.7% and the groundedness of generated long-form answers by 12.5%. Finally, we show that long-form answers often anticipate likely follow-up questions, via experiments on a conversational QA dataset.

A Preference-driven Paradigm for Enhanced Translation with Large Language Models
Dawei Zhu | Sony Trenous | Xiaoyu Shen | Dietrich Klakow | Bill Byrne | Eva Hasler
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once the LLMs have achieved a certain level of translation capability, and further increasing the size of parallel data does not provide additional benefits. To overcome this plateau associated with imitation-based SFT, we propose a preference-based approach built upon the Plackett-Luce model. The objective is to steer LLMs towards a more nuanced understanding of translation preferences from a holistic view, while also being more resilient in the absence of gold translations. We further build a dataset named MAPLE to verify the effectiveness of our approach, which includes multiple translations of varying quality for each source sentence. Extensive experiments demonstrate the superiority of our approach in “breaking the plateau” across diverse LLMs and test settings. Our in-depth analysis underscores the pivotal role of diverse translations and accurate preference scores in the success of our approach.
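
For reference, the Plackett-Luce likelihood over a preference ranking of k candidate translations y_1 ≻ ... ≻ y_k takes the form below; the abstract names the model but not its parameterisation, so the scoring function s_θ is an assumption here:

    P_\theta(y_1 \succ y_2 \succ \cdots \succ y_k \mid x)
      = \prod_{i=1}^{k} \frac{\exp\big(s_\theta(y_i \mid x)\big)}{\sum_{j=i}^{k} \exp\big(s_\theta(y_j \mid x)\big)}

In such a preference-based setup, training maximises the log of this probability over ranked candidates of varying quality, so the model learns from relative preferences rather than token-level imitation of a single reference.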

Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding
Guangyu Yang | Jinghong Chen | Weizhe Lin | Bill Byrne
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Minimum Bayes Risk (MBR) decoding can significantly improve translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. We show how the recently developed Reinforcement Learning technique, Direct Preference Optimization (DPO), can fine-tune MLLMs to get the gains of MBR without any additional computation at inference time. Our method uses only a small monolingual fine-tuning set and yields significantly improved performance on multiple NMT test sets compared to MLLMs without DPO.
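
As a rough illustration of the MBR procedure whose gains DPO is used to capture (a minimal sketch: the candidate sampling and the utility function, e.g. BLEU or COMET, are placeholders rather than the paper's exact setup):

    import math

    def mbr_decode(candidates, utility):
        # Sampling-based MBR: return the candidate with the highest expected
        # utility against the whole sample set (uniform sample weights).
        best, best_score = None, -math.inf
        for y in candidates:
            expected = sum(utility(y, y_ref) for y_ref in candidates) / len(candidates)
            if expected > best_score:
                best, best_score = y, expected
        return best

The quadratic number of utility calls over the sample set is what makes MBR expensive at inference time, and what fine-tuning on MBR-derived preferences avoids.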

Control-DAG: Constrained Decoding for Non-Autoregressive Directed Acyclic T5 using Weighted Finite State Automata
Jinghong Chen | Weizhe Lin | Jingbiao Mei | Bill Byrne
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that performs well in Neural Machine Translation. Two issues prevent its application to general Natural Language Generation (NLG) tasks: frequent Out-Of-Vocabulary (OOV) errors and the inability to faithfully generate entity names. We introduce Control-DAG, a constrained decoding algorithm for our Directed Acyclic T5 (DA-T5) model which offers lexical, vocabulary and length control. We show that Control-DAG significantly enhances DA-T5 on the Schema Guided Dialogue and the DART datasets, establishing strong NAR results for Task-Oriented Dialogue and Data-to-Text NLG.

PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers
Weizhe Lin | Jingbiao Mei | Jinghong Chen | Bill Byrne
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We use M2KR to develop PreFLMR, a pre-trained version of the recently developed Fine-grained Late-interaction Multi-modal Retriever (FLMR) approach to KB-VQA, and we report new state-of-the-art results across a range of tasks. We also present investigations into the scaling behaviors of PreFLMR, intended to inform future development of general-purpose multi-modal retrievers.

Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning
Jingbiao Mei | Jinghong Chen | Weizhe Lin | Bill Byrne | Marcus Tomalin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for correct hatefulness classification. We propose constructing a hatefulness-aware embedding space through retrieval-guided contrastive training. Our approach achieves state-of-the-art performance on the HatefulMemes dataset with an AUROC of 87.0, outperforming much larger fine-tuned large multimodal models. We demonstrate a retrieval-based hateful memes detection system, which is capable of identifying hatefulness based on data unseen in training. This allows developers to update the hateful memes detection system by simply adding new examples without retraining — a desirable feature for real services in the constantly evolving landscape of hateful memes on the Internet.
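
A minimal sketch of a retrieval-guided contrastive objective of the kind described, assuming an InfoNCE-style loss in which retrieved memes with the same hatefulness label act as positives and those with the opposite label as negatives (the paper's exact loss and retrieval pipeline may differ):

    import torch

    def retrieval_contrastive_loss(anchor, positives, negatives, tau=0.07):
        # anchor: (d,) embedding of the query meme; positives: (P, d) retrieved
        # memes with the same hatefulness label; negatives: (N, d) retrieved
        # memes with the opposite label. All embeddings assumed L2-normalised.
        pos_sim = positives @ anchor / tau                        # (P,)
        all_sim = torch.cat([pos_sim, negatives @ anchor / tau])  # (P+N,)
        # Score each positive against the full candidate set.
        log_prob = pos_sim - torch.logsumexp(all_sim, dim=0)
        return -log_prob.mean()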

The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing LLM Abilities
David Stap | Eva Hasler | Bill Byrne | Christof Monz | Ke Tran
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Fine-tuning large language models (LLMs) for machine translation has shown improvements in overall translation quality. However, it is unclear what impact fine-tuning has on desirable LLM behaviors that are not present in neural machine translation models, such as steerability, inherent document-level translation abilities, and the ability to produce less literal translations. We perform an extensive translation evaluation on the LLaMA and Falcon family of models with model sizes ranging from 7 billion up to 65 billion parameters. Our results show that while fine-tuning improves the general translation quality of LLMs, several abilities degrade. In particular, we observe a decline in the ability to perform formality steering, to produce technical translations through few-shot examples, and to perform document-level translation. On the other hand, we observe that the model produces less literal translations after fine-tuning on parallel data. We show that by including monolingual data as part of the fine-tuning data we can maintain the abilities while simultaneously enhancing overall translation quality. Our findings emphasize the need for fine-tuning strategies that preserve the benefits of LLMs for machine translation.

2023

An Inner Table Retriever for Robust Table Question Answering
Weizhe Lin | Rexhina Blloshmi | Bill Byrne | Adria de Gispert | Gonzalo Iglesias
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Recent years have witnessed the thriving of pretrained Transformer-based language models for understanding semi-structured tables, with several applications, such as Table Question Answering (TableQA). These models are typically trained on joint tables and surrounding natural language text, by linearizing table content into sequences comprising special tokens and cell information. This yields very long sequences which increase system inefficiency, and moreover, simply truncating long sequences results in information loss for downstream tasks. We propose Inner Table Retriever (ITR), a general-purpose approach for handling long tables in TableQA that extracts sub-tables to preserve the most relevant information for a question. We show that ITR can be easily integrated into existing systems to improve their accuracy by 1.3-4.8% and achieve state-of-the-art results on two benchmarks: 63.4% on WikiTableQuestions and 92.1% on WikiSQL. Additionally, we show that ITR makes TableQA systems more robust to reduced model capacity and to different ordering of columns and rows. We make our code available at: https://github.com/amazon-science/robust-tableqa.
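
The sub-table idea can be sketched as follows; this is a hypothetical illustration in which the text encoder, the similarity scoring, and the token budget are assumptions rather than the actual ITR components:

    def select_subtable(question, header, rows, encode, budget=512):
        # Rank rows by similarity to the question, keep the highest-scoring
        # rows that fit the sequence budget, and preserve original row order.
        q = encode(question)
        ranked = sorted(range(len(rows)),
                        key=lambda i: -float(q @ encode(" | ".join(rows[i]))))
        kept, used = [], len(" | ".join(header).split())
        for i in ranked:
            cost = len(" | ".join(rows[i]).split())
            if used + cost <= budget:
                kept.append(i)
                used += cost
        return header, [rows[i] for i in sorted(kept)]

A similar budgeted ranking can in principle be applied along columns as well.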

LI-RAGE: Late Interaction Retrieval Augmented Generation with Explicit Signals for Open-Domain Table Question Answering
Weizhe Lin | Rexhina Blloshmi | Bill Byrne | Adria de Gispert | Gonzalo Iglesias
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Recent open-domain TableQA models are typically implemented as retriever-reader pipelines. The retriever component is usually a variant of the Dense Passage Retriever, which computes the similarities between questions and tables based on a single representation of each. These fixed vectors can be insufficient to capture fine-grained features of potentially very big tables with heterogeneous row/column information. We address this limitation by 1) applying late interaction models which enforce a finer-grained interaction between question and table embeddings at retrieval time. In addition, we 2) incorporate a joint training scheme of the retriever and reader with explicit table-level signals, and 3) embed a binary relevance token as a prefix to the answer generated by the reader, so we can determine at inference time whether the table used to answer the question is reliable and filter accordingly. The combined strategies set a new state-of-the-art performance on two public open-domain TableQA datasets.
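
Late interaction scoring of the kind applied here keeps one embedding per token rather than a single vector per table; a ColBERT-style MaxSim sketch (not necessarily the paper's exact formulation):

    import torch

    def late_interaction_score(q_emb, t_emb):
        # q_emb: (Lq, d) question token embeddings; t_emb: (Lt, d) table token
        # embeddings. For each question token, take its maximum similarity
        # over all table tokens, then sum over the question tokens.
        sim = q_emb @ t_emb.T            # (Lq, Lt) token-level similarities
        return sim.max(dim=1).values.sum()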

xPQA: Cross-Lingual Product Question Answering in 12 Languages
Xiaoyu Shen | Akari Asai | Bill Byrne | Adria De Gispert
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)

Product Question Answering (PQA) systems are key in e-commerce applications, providing responses to customers’ questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is a need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding non-English answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that in-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task, and that translation-based approaches are most effective for candidate ranking while multilingual finetuning works best for answer generation. Still, there remains a significant performance gap between the English and the cross-lingual test sets.

FVQA 2.0: Introducing Adversarial Samples into Fact-based Visual Question Answering
Weizhe Lin | Zhilin Wang | Bill Byrne
Findings of the Association for Computational Linguistics: EACL 2023

The widely used Fact-based Visual Question Answering (FVQA) dataset contains visually-grounded questions that require information retrieval using common sense knowledge graphs to answer. It has been observed that the original dataset is highly imbalanced and concentrated on a small portion of its associated knowledge graph. We introduce FVQA 2.0 which contains adversarial variants of test questions to address this imbalance. We show that systems trained with the original FVQA train sets can be vulnerable to adversarial samples and we demonstrate an augmentation scheme to reduce this vulnerability without human annotations.

More Robust Schema-Guided Dialogue State Tracking via Tree-Based Paraphrase Ranking
Alexandru Coca | Bo-Hsiang Tseng | Weizhe Lin | Bill Byrne
Findings of the Association for Computational Linguistics: EACL 2023

The schema-guided paradigm overcomes scalability issues inherent in building task-oriented dialogue (TOD) agents with static ontologies. Rather than operating on dialogue context alone, agents have access to hierarchical schemas containing task-relevant natural language descriptions. Fine-tuned language models excel at schema-guided dialogue state tracking (DST) but are sensitive to the writing style of the schemas. We explore methods for improving the robustness of DST models. We propose a framework for generating synthetic schemas which uses tree-based ranking to jointly optimise lexical diversity and semantic faithfulness. The robust generalisation of strong baselines is improved when augmenting their training data with prompts generated by our framework, as demonstrated by marked improvements in average Joint Goal Accuracy (JGA) and schema sensitivity (SS) on the SGD-X benchmark.

Neural Ranking with Weak Supervision for Open-Domain Question Answering: A Survey
Xiaoyu Shen | Svitlana Vakulenko | Marco del Tredici | Gianni Barlacchi | Bill Byrne | Adria de Gispert
Findings of the Association for Computational Linguistics: EACL 2023

Neural ranking (NR) has become a key component for open-domain question-answering in order to access external knowledge. However, training a good NR model requires substantial amounts of relevance annotations, which are very costly to scale. To address this, a growing body of work proposes to reduce the annotation cost by training the NR model with weak supervision (WS) instead. These works differ in what resources they require and employ a diverse set of WS signals to train the model. Understanding such differences is crucial for choosing the right WS technique. To facilitate this understanding, we provide a structured overview of standard WS signals used for training an NR model. Based on their required resources, we divide them into three main categories: (1) only documents are needed; (2) documents and questions are needed; and (3) documents and question-answer pairs are needed. For every WS signal, we review its general idea and design choices. Promising directions are outlined for future research.

Grounding Description-Driven Dialogue State Trackers with Knowledge-Seeking Turns
Alexandru Coca | Bo-Hsiang Tseng | Jinghong Chen | Weizhe Lin | Weixuan Zhang | Tisha Anders | Bill Byrne
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Schema-guided dialogue state trackers can generalise to new domains without further training, yet they are sensitive to the writing style of the schemata. Augmenting the training set with human or synthetic schema paraphrases improves the model robustness to these variations but can be either costly or difficult to control. We propose to circumvent these issues by grounding the state tracking model in knowledge-seeking turns collected from the dialogue corpus as well as the schema. Including these turns in prompts during finetuning and inference leads to marked improvements in model robustness, as demonstrated by large average joint goal accuracy and schema sensitivity improvements on SGD and SGD-X.

2022

The Devil is in the Details: On the Pitfalls of Vocabulary Selection in Neural Machine Translation
Tobias Domhan | Eva Hasler | Ke Tran | Sony Trenous | Bill Byrne | Felix Hieber
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Vocabulary selection, or lexical shortlisting, is a well-known technique to improve latency of Neural Machine Translation models by constraining the set of allowed output words during inference. The chosen set is typically determined by separately trained alignment model parameters, independent of the source-sentence context at inference time. While vocabulary selection appears competitive with respect to automatic quality metrics in prior work, we show that it can fail to select the right set of output words, particularly for semantically non-compositional linguistic phenomena such as idiomatic expressions, leading to reduced translation quality as perceived by humans. Trading off latency for quality by increasing the size of the allowed set is often not an option in real-world scenarios. We propose a model of vocabulary selection, integrated into the neural translation model, that predicts the set of allowed output words from contextualized encoder representations. This restores translation quality of an unconstrained system, as measured by human evaluations on WMT newstest2020 and idiomatic expressions, at an inference latency competitive with alignment-based selection using aggressive thresholds, thereby removing the dependency on separately trained alignment models.
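
One way to realise the contextualized selection described above (a sketch under assumed components; the integrated predictor in the paper will differ in detail) is to pool the encoder states and predict, per target word, whether it may appear in the output:

    import torch

    def predict_allowed_vocab(enc_states, proj, threshold=0.5):
        # enc_states: (L, d) contextualized encoder representations;
        # proj: torch.nn.Linear(d, V) mapping to target-vocabulary logits.
        pooled, _ = enc_states.max(dim=0)        # (d,) max-pool over source
        probs = torch.sigmoid(proj(pooled))      # (V,) per-word probabilities
        return (probs > threshold).nonzero(as_tuple=True)[0]  # allowed ids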

Retrieval Augmented Visual Question Answering with Outside Knowledge
Weizhe Lin | Bill Byrne
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Outside-Knowledge Visual Question Answering (OK-VQA) is a challenging VQA task that requires retrieval of external knowledge to answer questions about images. Recent OK-VQA systems use Dense Passage Retrieval (DPR) to retrieve documents from external knowledge bases, such as Wikipedia, but with DPR trained separately from answer generation, introducing a potential limit on the overall system performance. Instead, we propose a joint training scheme which includes differentiable DPR integrated with answer generation so that the system can be trained in an end-to-end fashion. Our experiments show that our scheme outperforms recent OK-VQA systems with strong DPR for retrieval. We also introduce new diagnostic metrics to analyze how retrieval and generation interact. The strong retrieval ability of our model significantly reduces the number of retrieved documents needed in training, yielding significant benefits in answer quality and computation required for training.
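
A common way to make retrieval differentiable end to end, consistent with the joint scheme described here (though the paper's exact formulation may differ), is RAG-style marginalisation of the answer likelihood over the top-K retrieved documents z_k:

    \mathcal{L}(x, y) = -\log \sum_{k=1}^{K} p_\eta(z_k \mid x)\, p_\theta(y \mid x, z_k)

Because the retrieval scores p_η(z_k | x) appear inside the loss, the retriever receives gradient from the answer-generation signal instead of being trained separately.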

uFACT: Unfaithful Alien-Corpora Training for Semantically Consistent Data-to-Text Generation
Tisha Anders | Alexandru Coca | Bill Byrne
Findings of the Association for Computational Linguistics: ACL 2022

We propose uFACT (Un-Faithful Alien Corpora Training), a training corpus construction method for data-to-text (d2t) generation models. We show that d2t models trained on uFACT datasets generate utterances which represent the semantic content of the data sources more accurately compared to models trained on the target corpus alone. Our approach is to augment the training set of a given target corpus with alien corpora which have different semantic representations. We show that while it is important to have faithful data from the target corpus, the faithfulness of additional corpora only plays a minor role. Consequently, uFACT datasets can be constructed with large quantities of unfaithful data. We show how uFACT can be leveraged to obtain state-of-the-art results on the WebNLG benchmark using METEOR as our performance metric. Furthermore, we investigate the sensitivity of the generation faithfulness to the training corpus structure using the PARENT metric, and provide a baseline for this metric on the WebNLG (Gardent et al., 2017) benchmark to facilitate comparisons with future work.

First the Worst: Finding Better Gender Translations During Beam Search
Danielle Saunders | Rosie Sallis | Bill Byrne
Findings of the Association for Computational Linguistics: ACL 2022

Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure. We constrain beam search to improve gender diversity in n-best lists, and rerank n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach’s utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary.

FocusQA: Open-Domain Question Answering with a Context in Focus
Gianni Barlacchi | Ivano Lauriola | Alessandro Moschitti | Marco Del Tredici | Xiaoyu Shen | Thuy Vu | Bill Byrne | Adrià de Gispert
Findings of the Association for Computational Linguistics: EMNLP 2022

We introduce question answering with a context in focus, a task that simulates free interaction with a QA system. The user reads on a screen some information about a topic, and they can follow up with questions that may or may not be related to the topic; the answer can be found in the document containing the screen content or in other pages. We call such information the context. To study the task, we construct FocusQA, a dataset for answer sentence selection (AS2) with 12,165 unique question/context pairs, and a total of 109,940 answers. To build the dataset, we developed a novel methodology that takes existing questions and pairs them with relevant contexts. To show the benefits of this approach, we present a comparative analysis with a set of questions written by humans after reading the context, showing that our approach greatly helps in eliciting more realistic question/context pairs. Finally, we show that the task poses several challenges for incorporating contextual information. In this respect, we introduce strong baselines for answer sentence selection that outperform the precision of state-of-the-art AS2 models by up to 21.3 absolute points.

From Rewriting to Remembering: Common Ground for Conversational QA Models
Marco Del Tredici | Xiaoyu Shen | Gianni Barlacchi | Bill Byrne | Adrià de Gispert
Proceedings of the 4th Workshop on NLP for Conversational AI

In conversational QA, models have to leverage information in previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract relevant information as the conversation unfolds. We introduce the Common Ground (CG), an approach to accumulate conversational information as it emerges and select the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information compared to existing approaches, leading to improvements on Open Domain Conversational QA.

Product Answer Generation from Heterogeneous Sources: A New Benchmark and Best Practices
Xiaoyu Shen | Gianni Barlacchi | Marco Del Tredici | Weiwei Cheng | Bill Byrne | Adrià de Gispert
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)

It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semi-structured attributes, text descriptions, user-provided content, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. Handling all sources within one single model can produce comparable confidence scores across sources, and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.

2021

GCDF1: A Goal- and Context- Driven F-Score for Evaluating User Models
Alexandru Coca | Bo-Hsiang Tseng | Bill Byrne
The First Workshop on Evaluations and Assessments of Neural Conversation Systems

The evaluation of dialogue systems in interaction with simulated users has been proposed to improve upon turn-level, corpus-based metrics, which can only evaluate test cases encountered in a corpus and cannot measure a system’s ability to sustain multi-turn interactions. However, little emphasis has been put on automatically assessing the quality of the user model itself, so unless correlations with human studies are measured, the reliability of user-model-based evaluation is unknown. We propose GCDF1, a simple but effective measure of the quality of semantic-level conversations between a goal-driven user agent and a system agent. In contrast with previous approaches, we measure the F-score at dialogue level and consider user and system behaviours to improve recall and precision estimation. We facilitate score interpretation by providing a rich hierarchical structure with information about conversational patterns present in the test data and tools to efficiently query the conversations generated. We apply our framework to assess the performance and weaknesses of a Convlab2 user model.

Knowledge-Aware Graph-Enhanced GPT-2 for Dialogue State Tracking
Weizhe Lin | Bo-Hsiang Tseng | Bill Byrne
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Dialogue State Tracking is central to multi-domain task-oriented dialogue systems, responsible for extracting information from user utterances. We present a novel hybrid architecture that augments GPT-2 with representations derived from Graph Attention Networks in such a way as to allow causal, sequential prediction of slot values. The model architecture captures inter-slot relationships and dependencies across domains that otherwise can be lost in sequential prediction. We report improvements in state tracking performance on MultiWOZ 2.0 against a strong GPT-2 baseline and investigate a simplified sparse training scenario in which DST models are trained only on session-level annotations but evaluated at the turn level. We further report detailed analyses to demonstrate the effectiveness of graph models in DST by showing that the proposed graph modules capture inter-slot dependencies and improve the predictions of values that are common to multiple domains.

Improving the Quality Trade-Off for Neural Machine Translation Multi-Domain Adaptation
Eva Hasler | Tobias Domhan | Jonay Trenous | Ke Tran | Bill Byrne | Felix Hieber
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Building neural machine translation systems to perform well on a specific target domain is a well-studied problem. Optimizing system performance for multiple, diverse target domains however remains a challenge. We study this problem in an adaptation setting where the goal is to preserve the existing system quality while incorporating data for domains that were not the focus of the original translation system. We find that we can improve over the performance trade-off offered by Elastic Weight Consolidation with a relatively simple data mixing strategy. At comparable performance on the new domains, catastrophic forgetting is mitigated significantly on strong WMT baselines. Combining both approaches improves the Pareto frontier on this task.

2020

Addressing Exposure Bias With Document Minimum Risk Training: Cambridge at the WMT20 Biomedical Translation Task
Danielle Saunders | Bill Byrne
Proceedings of the Fifth Conference on Machine Translation

The 2020 WMT Biomedical translation task evaluated Medline abstract translations. This is a small-domain translation task, meaning limited relevant training data with very distinct style and vocabulary. Models trained on such data are susceptible to exposure bias effects, particularly when training sentence pairs are imperfect translations of each other. This can result in poor behaviour during inference if the model learns to neglect the source sentence. The UNICAM entry addresses this problem during fine-tuning using a robust variant of Minimum Risk Training. We contrast this approach with data-filtering to remove ‘problem’ training examples. Under MRT fine-tuning we obtain good results for both directions of English-German and English-Spanish biomedical translation. In particular we achieve the best English-to-Spanish translation result and second-best Spanish-to-English result, despite using only single models with no ensembling.

Neural Machine Translation Doesn’t Translate Gender Coreference Right Unless You Make It
Danielle Saunders | Rosie Sallis | Bill Byrne
Proceedings of the Second Workshop on Gender Bias in Natural Language Processing

Neural Machine Translation (NMT) has been shown to struggle with grammatical gender that is dependent on the gender of human referents, which can cause gender bias effects. Many existing approaches to this problem seek to control gender inflection in the target language by explicitly or implicitly adding a gender feature to the source sentence, usually at the sentence level. In this paper we propose schemes for incorporating explicit word-level gender inflection tags into NMT. We explore the potential of this gender-inflection controlled translation when the gender feature can be determined from a human reference, or when a test sentence can be automatically gender-tagged, assessing on English-to-Spanish and English-to-German translation. We find that simple existing approaches can over-generalize a gender-feature to multiple entities in a sentence, and suggest effective alternatives in the form of tagged coreference adaptation data. We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention, such as a non-binary inflection, in the target language.

Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
Danielle Saunders | Bill Byrne
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men. In Neural Machine Translation (NMT) gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender. The recent WinoMT challenge set allows us to measure this effect directly (Stanovsky et al., 2019). Ideally we would reduce system bias by simply debiasing all data prior to training, but achieving this effectively is itself a challenge. Rather than attempt to create a ‘balanced’ dataset, we use transfer learning on a small set of trusted, gender-balanced examples. This approach gives strong and consistent improvements in gender debiasing with much less computational cost than training from scratch. A known pitfall of transfer learning on new domains is ‘catastrophic forgetting’, which we address at adaptation and inference time. During adaptation we show that Elastic Weight Consolidation allows a performance trade-off between general translation quality and bias reduction. At inference time we propose a lattice-rescoring scheme which outperforms all systems evaluated in Stanovsky et al. (2019) on WinoMT with no degradation of general test set BLEU. We demonstrate our approach translating from English into three languages with varied linguistic properties and data availability.

Using Context in Neural Machine Translation Training Objectives
Danielle Saunders | Felix Stahlberg | Bill Byrne
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents. Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU which do not correspond to the desired evaluation metric, typically document BLEU. Meanwhile research into document-level NMT training focuses on data or model architecture rather than training procedure. We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective. We first sample pseudo-documents from sentence samples. We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT). This two-level sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training. We demonstrate that training is more robust with document-level metrics than with sequence metrics. We further demonstrate improvements on NMT with TER, and on Grammatical Error Correction (GEC) with GLEU, both metrics applied at the document level for evaluation.
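
The two-level sampling procedure amounts to a Monte Carlo estimate of the expected document-level risk; the notation below is a reconstruction from the description rather than the paper's own:

    \mathcal{L}_{\text{MRT}}(\theta)
      = \mathbb{E}_{D \sim p_\theta(\cdot \mid X)}\big[\, 1 - \mathrm{BLEU}(D, D^{*}) \,\big]
      \approx \frac{1}{S} \sum_{s=1}^{S} \big( 1 - \mathrm{BLEU}(D^{(s)}, D^{*}) \big)

where each pseudo-document D^(s) is assembled from per-sentence samples and D^* is the reference document; the gradient follows from the usual score-function (REINFORCE) identity applied to the sampled documents.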

Inference-only sub-character decomposition improves translation of unseen logographic characters
Danielle Saunders | Weston Feely | Bill Byrne
Proceedings of the 7th Workshop on Asian Translation

Neural Machine Translation (NMT) on logographic source languages struggles when translating ‘unseen’ characters, which never appear in the training data. One possible approach to this problem uses sub-character decomposition for training and test sentences. However, this approach involves complete retraining, and its effectiveness for unseen character translation to non-logographic languages has not been fully explored. We investigate existing ideograph-based sub-character decomposition approaches for Chinese-to-English and Japanese-to-English NMT, for both high-resource and low-resource domains. For each language pair and domain we construct a test set where all source sentences contain at least one unseen logographic character. We find that complete sub-character decomposition often harms unseen character translation, and gives inconsistent results generally. We offer a simple alternative based on decomposition before inference for unseen characters only. Our approach allows flexible application, achieving translation adequacy improvements and requiring no additional models or training.

The Teacher-Student Chatroom Corpus
Andrew Caines | Helen Yannakoudakis | Helena Edmondson | Helen Allen | Pascual Pérez-Paredes | Bill Byrne | Paula Buttery
Proceedings of the 9th Workshop on NLP for Computer Assisted Language Learning

2019

Domain Adaptive Inference for Neural Machine Translation
Danielle Saunders | Felix Stahlberg | Adrià de Gispert | Bill Byrne
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We investigate adaptive ensemble weighting for Neural Machine Translation, addressing the case of improving performance on a new and potentially unknown domain without sacrificing performance on the original domain. We adapt sequentially across two Spanish-English and three English-German tasks, comparing unregularized fine-tuning, L2 and Elastic Weight Consolidation. We then report a novel scheme for adaptive NMT ensemble decoding by extending Bayesian Interpolation with source information, and report strong improvements across test domains without access to the domain label.

Neural Grammatical Error Correction with Finite State Transducers
Felix Stahlberg | Christopher Bryant | Bill Byrne
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Grammatical error correction (GEC) is one of the areas in natural language processing in which purely neural models have not yet superseded more traditional symbolic models. Hybrid systems combining phrase-based statistical machine translation (SMT) and neural sequence models are currently among the most effective approaches to GEC. However, both SMT and neural sequence-to-sequence models require large amounts of annotated data. Language model based GEC (LM-GEC) is a promising alternative which does not rely on annotated training data. We show how to improve LM-GEC by applying modelling techniques based on finite state transducers. We report further gains by rescoring with neural language models. We show that our methods developed for LM-GEC can also be used with SMT systems if annotated training data is available. Our best system outperforms the best published result on the CoNLL-2014 test set, and achieves far better relative improvements over the SMT baselines than previous hybrid systems.

Semi-Supervised Bootstrapping of Dialogue State Trackers for Task-Oriented Modelling
Bo-Hsiang Tseng | Marek Rei | Paweł Budzianowski | Richard Turner | Bill Byrne | Anna Korhonen
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels. However, collecting these annotations is expensive and time-consuming, holding back development in the area of dialogue modelling. In this paper, we investigate semi-supervised learning methods that are able to reduce the amount of required intermediate labelling. We find that by leveraging un-annotated data instead, the amount of turn-level annotations of dialogue state can be significantly reduced when building a neural dialogue system. Our analysis on the MultiWOZ corpus, covering a range of domains and topics, finds that annotations can be reduced by up to 30% while maintaining equivalent system performance. We also describe and evaluate the first end-to-end dialogue model created for the MultiWOZ corpus.

On NMT Search Errors and Model Errors: Cat Got Your Tongue?
Felix Stahlberg | Bill Byrne
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We report on search errors and model errors in neural machine translation (NMT). We present an exact inference procedure for neural sequence models based on a combination of beam search and depth-first search. We use our exact search to find the global best model scores under a Transformer base model for the entire WMT15 English-German test set. Surprisingly, beam search fails to find these global best model scores in most cases, even with a very large beam size of 100. For more than 50% of the sentences, the model in fact assigns its global best score to the empty translation, revealing a massive failure of neural models in properly accounting for adequacy. We show by constraining search with a minimum translation length that at the root of the problem of empty translations lies an inherent bias towards shorter translations. We conclude that vanilla NMT in its current form requires just the right amount of beam search errors, which, from a modelling perspective, is a highly unsatisfactory conclusion indeed, as the model often prefers an empty translation.
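
The exactness of such a procedure rests on a simple admissible bound: under a locally normalised model every continuation has log-probability at most 0, so any prefix already scoring below the best complete hypothesis can be discarded. A minimal sketch, in which score_next stands in for the model's next-token distribution and the bound would in practice be seeded with a beam-search result:

    import math

    def exact_best(score_next, bos, eos, lower_bound=-math.inf, max_len=200):
        # score_next(prefix) -> {token: logprob} under the sequence model.
        best = (lower_bound, None)

        def dfs(prefix, logp):
            nonlocal best
            if logp <= best[0]:          # extensions can only lower the score
                return
            if prefix[-1] == eos:
                best = (logp, prefix)    # new global best complete hypothesis
                return
            if len(prefix) >= max_len:
                return
            # Expand the most promising continuations first to tighten the bound.
            for tok, lp in sorted(score_next(prefix).items(), key=lambda kv: -kv[1]):
                dfs(prefix + [tok], logp + lp)

        dfs([bos], 0.0)
        return best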

The CUED’s Grammatical Error Correction Systems for BEA-2019
Felix Stahlberg | Bill Byrne
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

We describe two entries from the Cambridge University Engineering Department to the BEA 2019 Shared Task on grammatical error correction. Our submission to the low-resource track is based on prior work on using finite state transducers together with strong neural language models. Our system for the restricted track is a purely neural system consisting of neural language models and neural machine translation models trained with back-translation and a combination of checkpoint averaging and fine-tuning – without the help of any additional tools like spell checkers. The latter system has been used inside a separate system combination entry in cooperation with the Cambridge University Computer Lab.

Neural and FST-based approaches to grammatical error correction
Zheng Yuan | Felix Stahlberg | Marek Rei | Bill Byrne | Helen Yannakoudakis
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

In this paper, we describe our submission to the BEA 2019 shared task on grammatical error correction. We present a system pipeline that utilises both error detection and correction models. The input text is first corrected by two complementary neural machine translation systems: one using convolutional networks and multi-task learning, and another using a neural Transformer-based system. Training is performed on publicly available data, along with artificial examples generated through back-translation. The n-best lists of these two machine translation systems are then combined and scored using a finite state transducer (FST). Finally, an unsupervised re-ranking system is applied to the n-best output of the FST. The re-ranker uses a number of error detection features to re-rank the FST n-best list and identify the final 1-best correction hypothesis. Our system achieves 66.75% F0.5 on error correction (ranking 4th), and 82.52% F0.5 on token-level error detection (ranking 2nd) in the restricted track of the shared task.

CUED@WMT19:EWC&LMs
Felix Stahlberg | Danielle Saunders | Adrià de Gispert | Bill Byrne
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)

Two techniques provide the fabric of the Cambridge University Engineering Department’s (CUED) entry to the WMT19 evaluation campaign: elastic weight consolidation (EWC) and different forms of language modelling (LMs). We report substantial gains by fine-tuning very strong baselines on former WMT test sets using a combination of checkpoint averaging and EWC. A sentence-level Transformer LM and a document-level LM based on a modified Transformer architecture yield further gains. As in previous years, we also extract n-gram probabilities from SMT lattices which can be seen as a source-conditioned n-gram LM.

UCAM Biomedical Translation at WMT19: Transfer Learning Multi-domain Ensembles
Danielle Saunders | Felix Stahlberg | Bill Byrne
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)

The 2019 WMT Biomedical translation task involved translating Medline abstracts. We approached this using transfer learning to obtain a series of strong neural models on distinct domains, and combining them into multi-domain ensembles. We further experimented with an adaptive language-model ensemble weighting scheme. Our submission achieved the best submitted results on both directions of English-Spanish.

2018

Neural Machine Translation Decoding with Terminology Constraints
Eva Hasler | Adrià de Gispert | Gonzalo Iglesias | Bill Byrne
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Despite the impressive quality improvements yielded by neural machine translation (NMT) systems, controlling their translation output to adhere to user-provided terminology constraints remains an open problem. We describe our approach to constrained neural decoding based on finite-state machines and multi-stack decoding which supports target-side constraints as well as constraints with corresponding aligned input text spans. We demonstrate the performance of our framework on multiple translation tasks and motivate the need for constrained decoding with attentions as a means of reducing misplacement and duplication when translating user constraints.

Accelerating NMT Batched Beam Decoding with LMBR Posteriors for Deployment
Gonzalo Iglesias | William Tambellini | Adrià De Gispert | Eva Hasler | Bill Byrne
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)

We describe a batched beam decoding algorithm for NMT with LMBR n-gram posteriors, showing that LMBR techniques still yield gains on top of the best recently reported results with Transformers. We also discuss acceleration strategies for deployment, and the effect of the beam size and batching on memory and speed.

Multi-representation ensembles and delayed SGD updates improve syntax-based NMT
Danielle Saunders | Felix Stahlberg | Adrià de Gispert | Bill Byrne
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We explore strategies for incorporating target syntax into Neural Machine Translation. We specifically focus on syntax in ensembles containing multiple sentence representations. We formulate beam search over such ensembles using WFSTs, and describe a delayed SGD update training procedure that is especially effective for long representations like linearized syntax. Our approach gives state-of-the-art performance on a difficult Japanese-English task.

Why not be Versatile? Applications of the SGNMT Decoder for Machine Translation
Felix Stahlberg | Danielle Saunders | Gonzalo Iglesias | Bill Byrne
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

An Operation Sequence Model for Explainable Neural Machine Translation
Felix Stahlberg | Danielle Saunders | Bill Byrne
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and moving a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on Japanese-English and Portuguese-English, and is within 0.5 BLEU difference on Spanish-English.

The University of Cambridge’s Machine Translation Systems for WMT18
Felix Stahlberg | Adrià de Gispert | Bill Byrne
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

The University of Cambridge submission to the WMT18 news translation task focuses on the combination of diverse models of translation. We compare recurrent, convolutional, and self-attention-based neural models on German-English, English-German, and Chinese-English. Our final system combines all neural models together with a phrase-based SMT system in an MBR-based scheme. We report small but consistent gains on top of strong Transformer ensembles.

2017

Unfolding and Shrinking Neural Machine Translation Ensembles
Felix Stahlberg | Bill Byrne
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Ensembling is a well-known technique in neural machine translation (NMT) to improve system performance. Instead of a single neural net, multiple neural nets with the same topology are trained separately, and the decoder generates predictions by averaging over the individual models. Ensembling often improves the quality of the generated translations drastically. However, it is not suitable for production systems because it is cumbersome and slow. This work aims to reduce the runtime to be on par with a single system without compromising the translation quality. First, we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system. We show that unfolding can already improve the runtime in practice since more work can be done on the GPU. We proceed by describing a set of techniques to shrink the unfolded network by reducing the dimensionality of layers. On Japanese-English we report that the resulting network has the size and decoding speed of a single NMT network but performs on the level of a 3-ensemble system.
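
The unfolding construction can be pictured schematically (a reconstruction of the idea as described, not the paper's notation): the hidden layers of the M ensemble members are stacked, the corresponding weight matrices become block-diagonal, and only the output distributions are averaged:

    h = \begin{bmatrix} h^{(1)} \\ \vdots \\ h^{(M)} \end{bmatrix}, \qquad
    W = \begin{bmatrix} W^{(1)} & & \\ & \ddots & \\ & & W^{(M)} \end{bmatrix}, \qquad
    P(y_t \mid \cdot) = \frac{1}{M} \sum_{m=1}^{M} \mathrm{softmax}\big( W_o^{(m)} h^{(m)} \big)

Shrinking then reduces the dimensionality of these stacked layers so that the unfolded network approaches the size and decoding speed of a single system.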

Break it Down for Me: A Study in Automated Lyric Annotation
Lucas Sterckx | Jason Naradowsky | Bill Byrne | Thomas Demeester | Chris Develder
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.

SGNMT – A Flexible NMT Decoding Platform for Quick Prototyping of New Models and Search Strategies
Felix Stahlberg | Eva Hasler | Danielle Saunders | Bill Byrne
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

This paper introduces SGNMT, our experimental platform for machine translation research. SGNMT provides a generic interface to neural and symbolic scoring modules (predictors) with left-to-right semantics, such as translation models like NMT, language models, translation lattices, n-best lists, or other kinds of scores and constraints. Predictors can be combined with other predictors to form complex decoding tasks. SGNMT implements a number of search strategies for traversing the space spanned by the predictors which are appropriate for different predictor constellations. Adding new predictors or decoding strategies is particularly easy, making it a very efficient tool for prototyping new research ideas. SGNMT is actively being used by students in the MPhil program in Machine Learning, Speech and Language Technology at the University of Cambridge for course work and theses, as well as for most of the research work in our group.

Neural Machine Translation by Minimising the Bayes-risk with Respect to Syntactic Translation Lattices
Felix Stahlberg | Adrià de Gispert | Eva Hasler | Bill Byrne
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

We present a novel scheme to combine neural machine translation (NMT) with traditional statistical machine translation (SMT). Our approach borrows ideas from linearised lattice minimum Bayes-risk decoding for SMT. The NMT score is combined with the Bayes-risk of the translation according to the SMT lattice. This makes our approach much more flexible than n-best list or lattice rescoring as the neural decoder is not restricted to the SMT search space. We show an efficient and simple way to integrate risk estimation into the NMT decoder which is suitable for word-level as well as subword-unit-level NMT. We test our method on English-German and Japanese-English and report significant gains over lattice rescoring on several data sets for both single and ensembled NMT. The MBR decoder produces entirely new hypotheses far beyond simply rescoring the SMT search space or fixing UNKs in the NMT output.
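
Following the linearised lattice-MBR decision rule the abstract builds on, the combined decoder score can be written as below; the interpolation weight λ and the exact weighting are assumptions here:

    \hat{y} = \operatorname*{argmax}_{y} \Big\{ \lambda \log P_{\text{NMT}}(y \mid x)
      + \theta_0 |y| + \sum_{u \in \mathcal{N}} \theta_{|u|}\, \#_u(y)\, P(u \mid \mathcal{E}) \Big\}

where N is the set of n-grams with path posteriors P(u | E) computed over the SMT lattice E, and #_u(y) counts the occurrences of u in y.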

A Comparison of Neural Models for Word Ordering
Eva Hasler | Felix Stahlberg | Marcus Tomalin | Adrià de Gispert | Bill Byrne
Proceedings of the 10th International Conference on Natural Language Generation

We compare several language models for the word-ordering task and propose a new bag-to-sequence neural model based on attention-based sequence-to-sequence models. We evaluate the model on a large German WMT data set where it significantly outperforms existing models. We also describe a novel search strategy for LM-based word ordering and report results on the English Penn Treebank. Our best model setup outperforms prior work both in terms of speed and quality.

2016

Speed-Constrained Tuning for Statistical Machine Translation Using Bayesian Optimization
Daniel Beck | Adrià de Gispert | Gonzalo Iglesias | Aurelien Waite | Bill Byrne
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16
Felix Stahlberg | Eva Hasler | Bill Byrne
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

Syntactically Guided Neural Machine Translation
Felix Stahlberg | Eva Hasler | Aurelien Waite | Bill Byrne
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2015

The Geometry of Statistical Machine Translation
Aurelien Waite | Bill Byrne
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Fast and Accurate Preordering for SMT using Neural Networks
Adrià de Gispert | Gonzalo Iglesias | Bill Byrne
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Transducer Disambiguation with Sparse Topological Features
Gonzalo Iglesias | Adrià de Gispert | Bill Byrne
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Hierarchical Statistical Semantic Realization for Minimal Recursion Semantics
Matic Horvat | Ann Copestake | Bill Byrne
Proceedings of the 11th International Conference on Computational Semantics

2014

Proceedings of the ACL 2014 Student Research Workshop
Ekaterina Kochmar | Annie Louis | Svitlana Volkova | Jordan Boyd-Graber | Bill Byrne
Proceedings of the ACL 2014 Student Research Workshop

pdf bib
Effective Incorporation of Source Syntax into Hierarchical Phrase-based Translation
Tong Xiao | Adrià de Gispert | Jingbo Zhu | Bill Byrne
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf bib
Source-side Preordering for Translation using Logistic Regression and Depth-first Branch-and-Bound Search
Laura Jehl | Adrià de Gispert | Mark Hopkins | Bill Byrne
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf bib
Word Ordering with Phrase-Based Grammars
Adrià de Gispert | Marcus Tomalin | Bill Byrne
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf bib
A Graph-Based Approach to String Regeneration
Matic Horvat | William Byrne
Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf bib
Pushdown Automata in Statistical Machine Translation
Cyril Allauzen | Bill Byrne | Adrià de Gispert | Gonzalo Iglesias | Michael Riley
Computational Linguistics, Volume 40, Issue 3 - September 2014

2013

pdf bib
The University of Cambridge Russian-English System at WMT13
Juan Pino | Aurelien Waite | Tong Xiao | Adrià de Gispert | Federico Flego | William Byrne
Proceedings of the Eighth Workshop on Statistical Machine Translation

pdf bib
FAUST: Feedback Analysis for User Adaptive Statistical Translation
William Byrne | Lluis Marquez
Proceedings of Machine Translation Summit XIV: European projects

2012

pdf bib
Lattice-Based Minimum Error Rate Training Using Weighted Finite-State Transducers with Tropical Polynomial Weights
Aurelien Waite | Graeme Blackwood | William Byrne
Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing

2011

pdf bib
Hierarchical Phrase-based Translation Representations
Gonzalo Iglesias | Cyril Allauzen | William Byrne | Adrià de Gispert | Michael Riley
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

pdf bib
Efficient Path Counting Transducers for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices
Graeme Blackwood | Adrià de Gispert | William Byrne
Proceedings of the ACL 2010 Conference Short Papers

pdf bib
Personalising Speech-To-Speech Translation in the EMIME Project
Mikko Kurimo | William Byrne | John Dines | Philip N. Garner | Matthew Gibson | Yong Guan | Teemu Hirsimäki | Reima Karhila | Simon King | Hui Liang | Keiichiro Oura | Lakshmi Saheer | Matt Shannon | Sayaki Shiota | Jilei Tian
Proceedings of the ACL 2010 System Demonstrations

pdf bib
Hierarchical Phrase-Based Translation Grammars Extracted from Alignment Posterior Probabilities
Adrià de Gispert | Juan Pino | William Byrne
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
Hierarchical phrase-based translation with weighted finite state transducers
William Byrne
Proceedings of the 7th International Workshop on Spoken Language Translation: Plenaries

pdf bib
The CUED HiFST System for the WMT10 Translation Shared Task
Juan Pino | Gonzalo Iglesias | Adrià de Gispert | Graeme Blackwood | Jamie Brunning | William Byrne
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

pdf bib
Hierarchical Phrase-Based Translation with Weighted Finite-State Transducers and Shallow-n Grammars
Adrià de Gispert | Gonzalo Iglesias | Graeme Blackwood | Eduardo R. Banga | William Byrne
Computational Linguistics, Volume 36, Issue 3 - September 2010

pdf bib
Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices
Graeme Blackwood | Adrià de Gispert | William Byrne
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

pdf bib
Context-Dependent Alignment Models for Statistical Machine Translation
Jamie Brunning | Adrià de Gispert | William Byrne
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
Hierarchical Phrase-Based Translation with Weighted Finite State Transducers
Gonzalo Iglesias | Adrià de Gispert | Eduardo R. Banga | William Byrne
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
Minimum Bayes Risk Combination of Translation Hypotheses from Alternative Morphological Decompositions
Adrià de Gispert | Sami Virpioja | Mikko Kurimo | William Byrne
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

pdf bib
Rule Filtering by Pattern for Efficient Hierarchical Translation
Gonzalo Iglesias | Adrià de Gispert | Eduardo R. Banga | William Byrne
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

2008

pdf bib
European Language Translation with Weighted Finite State Transducers: The CUED MT System for the 2008 ACL Workshop on SMT
Graeme Blackwood | Adrià de Gispert | Jamie Brunning | William Byrne
Proceedings of the Third Workshop on Statistical Machine Translation

pdf bib
Phrasal Segmentation Models for Statistical Machine Translation
Graeme Blackwood | Adrià de Gispert | William Byrne
Coling 2008: Companion volume: Posters

2006

pdf bib
MTTK: An Alignment Toolkit for Statistical Machine Translation
Yonggang Deng | William Byrne
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Demonstrations

2005

pdf bib
Local Phrase Reordering Models for Statistical Machine Translation
Shankar Kumar | William Byrne
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

pdf bib
HMM Word and Phrase Alignment for Statistical Machine Translation
Yonggang Deng | William Byrne
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

pdf bib
Issues in Annotation of the Czech Spontaneous Speech Corpus in the MALACH project
Josef Psutka | Pavel Ircing | Jan Hajič | Vlasta Radová | Josef V. Psutka | William J. Byrne | Samuel Gustman
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

pdf bib
Minimum Bayes-Risk Decoding for Statistical Machine Translation
Shankar Kumar | William Byrne
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

2003

pdf bib
A Generative Probabilistic OCR Model for NLP Applications
Okan Kolak | William Byrne | Philip Resnik
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
A Weighted Finite State Transducer Implementation of the Alignment Template Model for Statistical Machine Translation
Shankar Kumar | William Byrne
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
Desperately Seeking Cebuano
Douglas W. Oard | David Doermann | Bonnie Dorr | Daqing He | Philip Resnik | Amy Weinberg | William Byrne | Sanjeev Khudanpur | David Yarowsky | Anton Leuski | Philipp Koehn | Kevin Knight
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers

pdf bib
The Johns Hopkins University 2003 Chinese-English machine translation system
W. Byrne | S. Khudanpur | W. Kim | S. Kumar | P. Pecina | P. Virga | P. Xu | D. Yarowsky
Proceedings of Machine Translation Summit IX: System Presentations

We describe a Chinese-to-English machine translation system developed at the Johns Hopkins University for the NIST 2003 MT evaluation. The system is based on a Weighted Finite State Transducer implementation of the alignment template translation model for statistical machine translation. The baseline MT system was trained using 100,000 sentence pairs selected from a static bitext training collection. Information retrieval techniques were then used to create specific training collections for each document to be translated. This document-specific training set included bitext and named entities that were then added to the baseline system by augmenting the library of alignment templates. We report translation performance of baseline and IR-based systems on two NIST MT evaluation test sets.
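The IR-based data selection step can be sketched as follows: a minimal TF-IDF/cosine-similarity illustration of the idea, assuming scikit-learn for convenience. The helper name, the retrieval model, and the cutoff k are assumptions for illustration, not the system's actual retrieval pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_training_bitext(test_doc, bitext_pairs, k=1000):
    """Rank (source, target) sentence pairs by TF-IDF cosine similarity of
    the source side to the test document; keep the top-k pairs as
    document-specific training data."""
    sources = [src for src, _ in bitext_pairs]
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(sources + [test_doc])
    n = len(sources)
    sims = cosine_similarity(matrix[n], matrix[:n]).ravel()
    top = sims.argsort()[::-1][:k]
    return [bitext_pairs[i] for i in top]

The selected pairs would then augment the library of alignment templates for that document, as the abstract describes.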

2002

pdf bib
Minimum Bayes-Risk Word Alignments of Bilingual Texts
Shankar Kumar | William Byrne
Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002)

2001

pdf bib
Robust Knowledge Discovery from Parallel Speech and Text Sources
F. Jelinek | W. Byrne | S. Khudanpur | B. Hladká | H. Ney | F. J. Och | J. Cuřín | J. Psutka
Proceedings of the First International Conference on Human Language Technology Research

1989

pdf bib
The Auditory Processing and Recognition of Speech
William Byrne | John Robinson | Shihab Shamma
Speech and Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989
