Anoop Sarkar


2024

Towards Fast Cognate Alignment on Imbalanced Data
Logan Born | M. Willis Monroe | Kathryn Kelley | Anoop Sarkar
Proceedings of the Second Workshop on Computation and Written Language (CAWL) @ LREC-COLING 2024

Cognate alignment models purport to enable decipherment, but their speed and need for clean data can make them unsuitable for realistic decipherment problems. We seek to draw attention to these shortcomings in the hopes that future work may avoid them, and we outline two techniques which begin to overcome the described problems.

Unified Examination of Entity Linking in Absence of Candidate Sets
Nicolas Ong | Hassan Shavarani | Anoop Sarkar
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Despite remarkable strides made in the development of entity linking systems in recent years, a comprehensive comparative analysis of these systems using a unified framework is notably absent. This paper addresses this oversight by introducing a new black-box benchmark and conducting a comprehensive evaluation of all state-of-the-art entity linking methods. We use an ablation study to investigate the impact of candidate sets on the performance of entity linking. Our findings uncover exactly how much such entity linking systems depend on candidate sets, and how much this limits the general applicability of each system. We present an alternative approach to candidate sets, demonstrating that leveraging the entire in-domain candidate set can serve as a viable substitute for certain models. We also characterize the trade-off some models face between less restrictive candidate sets on the one hand and increased inference time and memory footprint on the other.

2023

Learning Nearest Neighbour Informed Latent Word Embeddings to Improve Zero-Shot Machine Translation
Nishant Kambhatla | Logan Born | Anoop Sarkar
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

Multilingual neural translation models exploit cross-lingual transfer to perform zero-shot translation between unseen language pairs. Past efforts to improve cross-lingual transfer have focused on aligning contextual sentence-level representations. This paper introduces three novel contributions that exploit nearest neighbours at the token level during training: (i) an efficient, gradient-friendly way to share representations between neighboring tokens; (ii) an attentional semantic layer which extracts latent features from shared embeddings; and (iii) an agreement loss to harmonize predictions across different sentence representations. Experiments on two multilingual datasets demonstrate consistent gains in zero-shot translation over strong baselines.

Language Model Based Target Token Importance Rescaling for Simultaneous Neural Machine Translation
Aditi Jain | Nishant Kambhatla | Anoop Sarkar
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

The decoder in simultaneous neural machine translation receives limited information from the source while having to balance the opposing requirements of latency versus translation quality. In this paper, we use an auxiliary target-side language model to augment the training of the decoder model. Under this notion of target-adaptive training, generating rare or difficult tokens is rewarded, which improves translation quality while reducing latency. The predictions made by the language model are combined with the traditional cross-entropy loss, freeing the decoder to focus on the source-side context. Our experimental results over multiple language pairs show that, compared to previous state-of-the-art methods in simultaneous translation, using an augmented target-side context significantly improves BLEU scores. We show improvements over the state of the art in the low-latency range, with lower average lagging values (faster output).
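
As a rough illustration of this idea (not the paper's exact formulation), the per-token cross-entropy can be up-weighted by how unlikely each reference token is under an auxiliary language model; the weighting function and the `alpha` exponent below are assumptions:

```python
import torch
import torch.nn.functional as F

def rescaled_ce_loss(decoder_logits, lm_logits, targets, alpha=1.0, pad_id=0):
    # Per-token cross-entropy: (batch, tgt_len)
    ce = F.cross_entropy(decoder_logits.transpose(1, 2), targets,
                         ignore_index=pad_id, reduction="none")
    with torch.no_grad():
        # Probability the auxiliary LM assigns to each reference token.
        lm_logp = F.log_softmax(lm_logits, dim=-1)
        tok_p = lm_logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1).exp()
        # Rare or difficult tokens (low LM probability) get weights near 1.
        weight = (1.0 - tok_p).pow(alpha)
    mask = targets.ne(pad_id).float()
    return (weight * ce * mask).sum() / mask.sum()
```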

Decipherment as Regression: Solving Historical Substitution Ciphers by Learning Symbol Recurrence Relations
Nishant Kambhatla | Logan Born | Anoop Sarkar
Findings of the Association for Computational Linguistics: EACL 2023

Solving substitution ciphers involves mapping sequences of cipher symbols to fluent text in a target language. This has conventionally been formulated as a search problem, to find the decipherment key using a character-level language model to constrain the search space. This work instead frames decipherment as a sequence prediction task, using a Transformer-based causal language model to learn recurrences between characters in a ciphertext. We introduce a novel technique for transcribing arbitrary substitution ciphers into a common recurrence encoding. By leveraging this technique, we (i) create a large synthetic dataset of homophonic ciphers using random keys, and (ii) train a decipherment model that predicts the plaintext sequence given a recurrence-encoded ciphertext. Our method achieves strong results on synthetic 1:1 and homophonic ciphers, and cracks several real historic homophonic ciphers. Our analysis shows that the model learns recurrence relations between cipher symbols and recovers decipherment keys in its self-attention.
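
The recurrence idea can be illustrated with a short sketch: replace each cipher symbol by the distance back to its previous occurrence, an encoding that is identical for any two 1:1 ciphers of the same plaintext. The exact transcription scheme in the paper may differ:

```python
def recurrence_encode(symbols):
    """Replace each symbol by the distance back to its previous occurrence
    (0 = first occurrence). Any two 1:1 ciphers of the same plaintext map
    to the same encoding, regardless of the substitution key."""
    last_seen, encoding = {}, []
    for i, s in enumerate(symbols):
        encoding.append(i - last_seen[s] if s in last_seen else 0)
        last_seen[s] = i
    return encoding

# The same plaintext under two different keys yields one shared encoding:
assert recurrence_encode("XQXXZ") == recurrence_encode("ABAAC") == [0, 0, 2, 1, 0]
```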

SpEL: Structured Prediction for Entity Linking
Hassan Shavarani | Anoop Sarkar
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Entity linking is a prominent thread of research focused on structured data creation by linking spans of text to an ontology or knowledge source. We revisit the use of structured prediction for entity linking, which classifies each individual input token as an entity and aggregates the token predictions. Our system, called SpEL (Structured prediction for Entity Linking), is a state-of-the-art entity linking system that applies several new ideas to the task: two refined fine-tuning steps, a context-sensitive prediction aggregation strategy, a reduction in the size of the model’s output vocabulary, and a fix for a common problem in entity-linking systems where tokenization differs between training and inference. Our experiments show that we can outperform the state of the art on the commonly used AIDA benchmark dataset for entity linking to Wikipedia. Our method is also highly compute-efficient in terms of parameter count and inference speed.
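
As a minimal sketch of the token-level view (not SpEL's actual context-sensitive aggregation strategy; the labels below are invented), per-token entity predictions can be merged into spans like this:

```python
def aggregate_token_predictions(tokens, labels, null_label="O"):
    """Merge adjacent tokens that share a predicted entity into spans."""
    spans, start = [], 0
    for i, lab in enumerate(labels + [null_label]):  # sentinel flushes the end
        prev = labels[i - 1] if i > 0 else null_label
        if lab != prev:
            if prev != null_label:
                spans.append((" ".join(tokens[start:i]), prev))
            start = i
    return spans

tokens = ["Michael", "Jordan", "played", "for", "Chicago", "Bulls"]
labels = ["Michael_Jordan", "Michael_Jordan", "O", "O",
          "Chicago_Bulls", "Chicago_Bulls"]
print(aggregate_token_predictions(tokens, labels))
# [('Michael Jordan', 'Michael_Jordan'), ('Chicago Bulls', 'Chicago_Bulls')]
```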

Disambiguating Numeral Sequences to Decipher Ancient Accounting Corpora
Logan Born | M. Willis Monroe | Kathryn Kelley | Anoop Sarkar
Proceedings of the Workshop on Computation and Written Language (CAWL 2023)

A numeration system encodes abstract numeric quantities as concrete strings of written characters. The numeration systems used by modern scripts tend to be precise and unambiguous, but this was not so for the ancient and partially-deciphered proto-Elamite (PE) script, where written numerals can have up to four distinct readings depending on the system that is used to read them. We consider the task of disambiguating between these readings in order to determine the values of the numeric quantities recorded in this corpus. We algorithmically extract a list of possible readings for each PE numeral notation, and contribute two disambiguation techniques based on structural properties of the original documents and classifiers learned with the bootstrapping algorithm. We also contribute a test set for evaluating disambiguation techniques, as well as a novel approach to cautious rule selection for bootstrapped classifiers. Our analysis confirms existing intuitions about this script and reveals previously-unknown correlations between tablet content and numeral magnitude. This work is crucial to understanding and deciphering PE, as the corpus is heavily accounting-focused and contains many more numeric tokens than tokens of text.
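
The ambiguity can be sketched as follows: each candidate numeral system assigns its own values to the same signs, so a single notation admits several readings. The sign names echo the conventional N-numbered labels, but the systems and values below are invented for illustration:

```python
# Invented sign values; the real proto-Elamite systems assign different
# values (and validity) to the same signs, which creates the ambiguity.
SYSTEMS = {
    "system_A": {"N14": 10, "N1": 1},
    "system_B": {"N14": 6, "N1": 1},
}

def candidate_readings(notation):
    """All values of a numeral notation, one per system that can read it."""
    readings = {}
    for name, values in SYSTEMS.items():
        if all(sign in values for sign in notation):
            readings[name] = sum(values[sign] for sign in notation)
    return readings

print(candidate_readings(["N14", "N14", "N1"]))
# {'system_A': 21, 'system_B': 13}
```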

Learning the Character Inventories of Undeciphered Scripts Using Unsupervised Deep Clustering
Logan Born | M. Willis Monroe | Kathryn Kelley | Anoop Sarkar
Proceedings of the Workshop on Computation and Written Language (CAWL 2023)

A crucial step in deciphering a text is to identify the set of characters that was used to write it. This requires grouping character tokens according to visual and contextual features, which can be challenging for human analysts when the number of tokens or underlying types is large. Prior work has shown that this process can be automated by clustering dense representations of character images, in a task which we call “script clustering”. In this work, we present novel architectures which exploit varying degrees of contextual and visual information to learn representations for use in script clustering. We evaluate on a range of modern and ancient scripts, and find that our models produce representations which are more effective for script recovery than the current state of the art, despite using just ~2% as many parameters. We also fruitfully apply these models to assess hypotheses about the character inventory of the partially-deciphered proto-Elamite script.

2022

CipherDAug: Ciphertext based Data Augmentation for Neural Machine Translation
Nishant Kambhatla | Logan Born | Anoop Sarkar
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose a novel data-augmentation technique for neural machine translation based on ROT-k ciphertexts. ROT-k is a simple letter substitution cipher that replaces a letter in the plaintext with the kth letter after it in the alphabet. We first generate multiple ROT-k ciphertexts using different values of k for the plaintext which is the source side of the parallel data. We then leverage this enciphered training data along with the original parallel data via multi-source training to improve neural machine translation. Our method, CipherDAug, uses a co-regularization-inspired training procedure, requires no external data sources other than the original training data, and uses a standard Transformer to outperform strong data augmentation techniques on several datasets by a significant margin. This technique combines easily with existing approaches to data augmentation, and yields particularly strong results in low-resource settings.
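
The enciphering step itself is a few lines of code. A minimal sketch of ROT-k and of generating multiple enciphered "source languages" for multi-source training:

```python
def rot_k(text, k):
    """Replace each lowercase letter with the k-th letter after it,
    wrapping around the alphabet; other characters pass through."""
    return "".join(
        chr((ord(c) - ord("a") + k) % 26 + ord("a")) if c.islower() else c
        for c in text
    )

source = "the cat sat"
# Enciphered copies act as extra "source languages" for multi-source training:
for k in (1, 3):
    print(f"rot-{k}:", rot_k(source, k))
# rot-1: uif dbu tbu
# rot-3: wkh fdw vdw
```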

Sequence Models for Document Structure Identification in an Undeciphered Script
Logan Born | M. Monroe | Kathryn Kelley | Anoop Sarkar
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

This work describes the first thorough analysis of “header” signs in proto-Elamite, an undeciphered script from 3100-2900 BCE. Headers are a category of signs which have been provisionally identified through painstaking manual analysis of this script by domain experts. We use unsupervised neural and statistical sequence modeling techniques to provide new and independent evidence for the existence of headers, without supervision from domain experts. Having affirmed the existence of headers as a legitimate structural feature, we next arrive at a richer understanding of their possible meaning and purpose by (i) examining which features predict their presence; (ii) identifying correlations between these features and other document properties; and (iii) examining cases where these features predict the presence of a header in texts where domain experts do not expect one (or vice versa). We provide more concrete processes for labeling headers in this corpus and a clearer justification for existing intuitions about document structure in proto-Elamite.

Auxiliary Subword Segmentations as Related Languages for Low Resource Multilingual Translation
Nishant Kambhatla | Logan Born | Anoop Sarkar
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation

We propose a novel technique that combines alternative subword tokenizations of a single source-target language pair, allowing us to leverage multilingual neural translation training methods. These alternate segmentations function like related languages in multilingual translation. Overall this improves translation accuracy for low-resource languages and produces translations that are lexically diverse and morphologically rich. We also introduce a cross-teaching technique which yields further improvements in translation accuracy and cross-lingual transfer between high- and low-resource language pairs. Compared to other strong multilingual baselines, our approach yields average gains of +1.7 BLEU across the four low-resource datasets from the multilingual TED-talks dataset. Our technique does not require additional training data and is a drop-in improvement for any existing neural translation system.
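
As a toy illustration of treating segmentations as languages (the paper combines alternative subword tokenizations, not the word/character split used here), each segmentation of the same sentence can be tagged as its own source language, as in multilingual NMT:

```python
def word_segmentation(sentence):
    return sentence.split()

def char_segmentation(sentence):
    return [("▁" if ch == " " else ch) for ch in sentence]

sentence = "low resource translation"
# Tag each segmentation as its own "source language" for multilingual training:
multi_source = [
    ["<seg1>"] + word_segmentation(sentence),
    ["<seg2>"] + char_segmentation(sentence),
]
for line in multi_source:
    print(" ".join(line))
```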

2021

Better Neural Machine Translation by Extracting Linguistic Information from BERT
Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Work on adding linguistic information (syntax or semantics) to neural machine translation (NMT) has mostly focused on using point estimates from pre-trained models. Directly using the capacity of massive pre-trained contextual word embedding models such as BERT (Devlin et al., 2019) has been only marginally useful in NMT, because effective fine-tuning is difficult to obtain for NMT without making training brittle and unreliable. We augment NMT by extracting dense, fine-tuned vector-based linguistic information from BERT instead of using point estimates. Experimental results show that our method of incorporating linguistic information helps NMT generalize better in a variety of training contexts and is no more difficult to train than conventional Transformer-based NMT.

Measuring and Improving Faithfulness of Attention in Neural Machine Translation
Pooya Moradi | Nishant Kambhatla | Anoop Sarkar
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

While the attention heatmaps produced by neural machine translation (NMT) models seem insightful, there is little evidence that they reflect a model’s true internal reasoning. We provide a measure of faithfulness for NMT based on a variety of stress tests in which attention weights that are crucial for prediction are perturbed; if the learned weights are a faithful explanation of the predictions, the model should alter its predictions accordingly. We show that our proposed faithfulness measure for NMT models can be improved using a novel differentiable objective that rewards faithful behaviour by the model through probability divergence. Our experimental results on multiple language pairs show that our objective function is effective in increasing faithfulness and can lead to a useful analysis of NMT model behaviour and more trustworthy attention heatmaps. Our proposed objective improves faithfulness without reducing translation quality, has a useful regularization effect on the NMT model, and can even improve translation quality in some cases.
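
One stress test of this kind can be sketched as: zero out the attention weight the prediction relies on most, renormalize, and check whether the argmax output changes. The single-head, single-step setup below is a simplification of the paper's measure:

```python
import torch

def top_attention_masked_flips_prediction(attn, values, out_proj):
    base = out_proj(attn @ values).argmax()
    perturbed = attn.clone()
    perturbed[attn.argmax()] = 0.0        # remove the most crucial weight
    perturbed = perturbed / perturbed.sum()
    flipped = out_proj(perturbed @ values).argmax() != base
    return bool(flipped)                  # a faithful model should flip often

torch.manual_seed(0)
attn = torch.softmax(torch.randn(5), dim=0)  # one decoding step's attention
values = torch.randn(5, 8)                   # encoder states being attended
out_proj = torch.nn.Linear(8, 100)           # context vector -> vocab logits
print(top_attention_masked_flips_prediction(attn, values, out_proj))
```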

Compositionality of Complex Graphemes in the Undeciphered Proto-Elamite Script using Image and Text Embedding Models
Logan Born | Kathryn Kelley | M. Willis Monroe | Anoop Sarkar
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Translation-based Supervision for Policy Generation in Simultaneous Neural Machine Translation
Ashkan Alinejad | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

In simultaneous machine translation, finding an agent with the optimal action sequence of reads and writes that maintains a high level of translation quality while minimizing the average lag in producing target tokens remains an extremely challenging problem. We propose a novel supervised learning approach for training an agent that can detect the minimum number of reads required for generating each target token, by comparing simultaneous translations against full-sentence translations during training to generate oracle action sequences. These oracle sequences can then be used to train a supervised model for action generation at inference time. Our approach provides an alternative to current heuristic methods in simultaneous translation by introducing a new training objective, which is easier to train than previous attempts that used reinforcement learning techniques for this task. Our experimental results show that our novel training method for action generation produces much higher-quality translations while minimizing the average lag in simultaneous translation.
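
A greedy version of such an oracle can be sketched as: READ until translating the current source prefix already agrees with the full-sentence translation on the next target token, then WRITE it. The paper's oracle construction may differ, and `prefix_translate` is a stand-in for the underlying MT model:

```python
def oracle_actions(src_tokens, full_translation, prefix_translate):
    actions, read, written = [], 0, 0
    while written < len(full_translation):
        prefix_out = prefix_translate(src_tokens[:read])
        next_ok = (len(prefix_out) > written
                   and prefix_out[written] == full_translation[written])
        if next_ok or read == len(src_tokens):
            actions.append("WRITE")   # the prefix already supports this token
            written += 1
        else:
            actions.append("READ")    # need more source context
            read += 1
    return actions

# Toy stand-in for an MT model that just copies what it has seen:
copy_mt = lambda prefix: list(prefix)
print(oracle_actions(["a", "b"], ["a", "b"], copy_mt))
# ['READ', 'WRITE', 'READ', 'WRITE']
```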

2020

Training with Adversaries to Improve Faithfulness of Attention in Neural Machine Translation
Pooya Moradi | Nishant Kambhatla | Anoop Sarkar
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop

Can we trust that the attention heatmaps produced by a neural machine translation (NMT) model reflect its true internal reasoning? We isolate and examine in detail the notion of faithfulness in NMT models. We provide a measure of faithfulness for NMT based on a variety of stress tests in which model parameters are perturbed, measuring faithfulness by how often the model output changes. We show that our proposed faithfulness measure for NMT models can be improved using a novel differentiable objective that rewards faithful behaviour by the model through probability divergence. Our experimental results on multiple language pairs show that our objective function is effective in increasing faithfulness and can lead to a useful analysis of NMT model behaviour and more trustworthy attention heatmaps. Our proposed objective improves faithfulness without reducing translation quality; it also appears to have a useful regularization effect on the NMT model and can even improve translation quality in some cases.

Effectively pretraining a speech translation decoder with Machine Translation data
Ashkan Alinejad | Anoop Sarkar
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Directly translating from speech to text using an end-to-end approach is still challenging for many language pairs due to insufficient data. Although pretraining the encoder parameters using the Automatic Speech Recognition (ASR) task improves the results in low-resource settings, attempts to use pretrained parameters from the Neural Machine Translation (NMT) task have been largely unsuccessful in previous work. In this paper, we show that by using an adversarial regularizer we can bring the encoder representations of the ASR and NMT tasks closer even though they are in different modalities, and we show how this helps us effectively use a pretrained NMT decoder for speech translation.
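
A common way to implement this kind of adversarial regularizer (a plausible reading of the abstract, though the paper's details may differ) is a modality discriminator trained through a gradient-reversal layer, so that the two encoders learn representations the discriminator cannot tell apart:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # encoders receive reversed gradients: fool the critic

class ModalityDiscriminator(nn.Module):
    """Guesses whether a hidden state came from the speech or text encoder."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 2))
    def forward(self, h):
        return self.net(GradReverse.apply(h))

# Pooled encoder states for a batch of 8 utterances and 8 transcripts:
speech_h = torch.randn(8, 256, requires_grad=True)
text_h = torch.randn(8, 256, requires_grad=True)
disc = ModalityDiscriminator(256)
labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
adv_loss = F.cross_entropy(disc(torch.cat([speech_h, text_h])), labels)
adv_loss.backward()  # pushes the two encoders toward a shared space
```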

2019

Deconstructing Supertagging into Multi-Task Sequence Prediction
Zhenqi Zhu | Anoop Sarkar
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Supertagging is a sequence prediction task where each word is assigned a piece of complex syntactic structure called a supertag. We provide a novel approach to multi-task learning for Tree Adjoining Grammar (TAG) supertagging by deconstructing these complex supertags in order to define a set of related but auxiliary sequence prediction tasks. Our multi-task prediction framework is trained over exactly the same training data used to train the original supertagger, where each auxiliary task provides an alternative view on the original prediction task. Our experimental results show that our multi-task approach significantly improves TAG supertagging with a new state-of-the-art accuracy score of 91.39% on the Penn Treebank supertagging dataset.

Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials
Anoop Sarkar | Michael Strube
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials

Interrogating the Explanatory Power of Attention in Neural Machine Translation
Pooya Moradi | Nishant Kambhatla | Anoop Sarkar
Proceedings of the 3rd Workshop on Neural Generation and Translation

Attention models have become a crucial component in neural machine translation (NMT). They are often implicitly or explicitly used to justify the model’s decision in generating a specific token, but it has not yet been rigorously established to what extent attention is a reliable source of information in NMT. To evaluate the explanatory power of attention for NMT, we examine the possibility of yielding the same prediction with counterfactual attention models that modify crucial aspects of the trained attention model. Using these counterfactual attention mechanisms, we assess the extent to which they still preserve the generation of function and content words in the translation process. Compared to a state-of-the-art attention model, our counterfactual attention models preserve the generation of 68% of function words and 21% of content words in our German-English dataset. Our experiments demonstrate that attention models by themselves cannot reliably explain the decisions made by an NMT model.

Sign Clustering and Topic Extraction in Proto-Elamite
Logan Born | Kate Kelley | Nishant Kambhatla | Carolyn Chen | Anoop Sarkar
Proceedings of the 3rd Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

We describe a first attempt at using techniques from computational linguistics to analyze the undeciphered proto-Elamite script. Using hierarchical clustering, n-gram frequencies, and LDA topic models, we both replicate results obtained by manual decipherment and reveal previously-unobserved relationships between signs. This demonstrates the utility of these techniques as an aid to manual decipherment.

2018

Prefix Lexicalization of Synchronous CFGs using Synchronous TAG
Logan Born | Anoop Sarkar
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We show that an epsilon-free, chain-free synchronous context-free grammar (SCFG) can be converted into a weakly equivalent synchronous tree-adjoining grammar (STAG) which is prefix lexicalized. This transformation at most doubles the grammar’s rank and cubes its size, but we show that in practice the size increase is only quadratic. Our results extend Greibach normal form from CFGs to SCFGs and prove new formal properties about SCFG, a formalism with many applications in natural language processing.

Simultaneous Translation using Optimized Segmentation
Maryam Siahbani | Hassan Shavarani | Ashkan Alinejad | Anoop Sarkar
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

Decipherment for Adversarial Offensive Language Detection
Zhelun Wu | Nishant Kambhatla | Anoop Sarkar
Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)

Automated filters are commonly used by online services to stop users from sending age-inappropriate or bullying messages, or from asking others to expose personal information. Previous work has focused on rules or classifiers to detect and filter offensive messages, but these are vulnerable to cleverly disguised plaintext and unseen expressions, especially in an adversarial setting where users can repeatedly try to bypass the filter. In this paper, we model the disguised messages as if they were produced by encrypting the original message using an invented cipher. We apply automatic decipherment techniques to decode the disguised malicious text, which can then be filtered using rules or classifiers. We provide experimental results on three different datasets and show that decipherment is an effective tool for this task.

In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition
Golnar Sheikhshabbafghi | Inanc Birol | Anoop Sarkar
Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis

The rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult to evaluate the latest literature in a timely fashion. That, along with a push for automated evaluation of clinical reports, presents opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings into the LSTM-CRF model used by AllenNLP for named entity recognition. We show that these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.

Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing
Jetic Gū | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

The addition of syntax-aware decoding in Neural Machine Translation (NMT) systems requires an effective tree-structured neural network, a syntax-aware attention model and a language generation model that is sensitive to sentence structure. Recent approaches resort to sequential decoding by adding additional neural network units to capture bottom-up structural information, or by serialising structured data into sequences. We exploit a top-down tree-structured model called DRNN (Doubly-Recurrent Neural Networks), first proposed by Alvarez-Melis and Jaakkola (2017), to create an NMT model called Seq2DRNN that combines a sequential encoder with tree-structured decoding augmented with a syntax-aware attention model. Unlike previous approaches to syntax-based NMT, which use dependency parsing models, our method uses constituency parsing, which we argue provides useful information for translation. In addition, we use the syntactic structure of the sentence to add new connections to the tree-structured decoder neural network (Seq2DRNN+SynC). We compare our NMT model with sequential and state-of-the-art syntax-based NMT models and show that our model produces more fluent translations with better reordering. Since our model is capable of doing translation and constituency parsing at the same time, we also compare our parsing accuracy against other neural parsing models.

Decipherment of Substitution Ciphers with Neural Language Models
Nishant Kambhatla | Anahita Mansouri Bigvand | Anoop Sarkar
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Decipherment of homophonic substitution ciphers using language models is a well-studied task in NLP. Previous work on this topic scores short local spans of possible plaintext decipherments using n-gram language models. The most widely used technique is beam search with n-gram language models, proposed by Nuhn et al. (2013). We propose a beam search algorithm that scores the entire candidate plaintext at each step of the decipherment using a neural language model. We augment beam search with a novel rest-cost estimation that exploits the prediction power of a neural language model. We compare against the state-of-the-art n-gram based methods on many different decipherment tasks. On challenging ciphers such as the Beale cipher we provide significantly better error rates with much smaller beam sizes.
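
The central idea, scoring the entire candidate plaintext at each step, can be sketched as a beam search over partial substitution keys. Here `lm_score` stands in for the neural language model, and the paper's rest-cost estimation for still-unmapped positions (shown as '?') is not reproduced:

```python
import heapq

def beam_search_decipher(cipher, alphabet, lm_score, beam_size=4):
    symbols = sorted(set(cipher))   # cipher symbols, extended in a fixed order
    beam = [({}, 0.0)]              # partial keys with their scores
    for sym in symbols:
        candidates = []
        for key, _ in beam:
            for plain in alphabet:
                if plain in key.values():
                    continue        # keep the key injective (1:1 cipher)
                new_key = {**key, sym: plain}
                # Score the ENTIRE candidate plaintext, not a local span;
                # unmapped symbols stay as '?' placeholders.
                text = "".join(new_key.get(c, "?") for c in cipher)
                candidates.append((new_key, lm_score(text)))
        beam = heapq.nlargest(beam_size, candidates, key=lambda kv: kv[1])
    return beam[0][0]               # highest-scoring complete key
```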

Prediction Improves Simultaneous Neural Machine Translation
Ashkan Alinejad | Maryam Siahbani | Anoop Sarkar
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Simultaneous speech translation aims to maintain translation quality while minimizing the delay between reading input and incrementally producing the output. We propose a new general-purpose prediction action which predicts future words in the input to improve quality and minimize delay in simultaneous translation. We train this agent using reinforcement learning with a novel reward function. Our agent with prediction has better translation quality and less delay compared to an agent-based simultaneous translation system without prediction.

2017

Lexicalized Reordering for Left-to-Right Hierarchical Phrase-based Translation
Maryam Siahbani | Anoop Sarkar
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Phrase-based and hierarchical phrase-based (Hiero) translation models differ radically in the way reordering is modeled. Lexicalized reordering models play an important role in phrase-based MT, and such models have been added to CKY-based decoders for Hiero. Watanabe et al. (2006) proposed a promising decoding algorithm for Hiero (LR-Hiero) that visits input spans in arbitrary order and produces the translation in left-to-right (LR) order, which results in far fewer language model calls and a considerable speedup in decoding. We introduce a novel shift-reduce algorithm for LR-Hiero to decode with our lexicalized reordering model (LRM), and show that it improves translation quality for Czech-English, Chinese-English and German-English.

Joint Prediction of Word Alignment with Alignment Types
Anahita Mansouri Bigvand | Te Bu | Anoop Sarkar
Transactions of the Association for Computational Linguistics, Volume 5

Current word alignment models do not distinguish between different types of alignment links. In this paper, we provide a new probabilistic model for word alignment where word alignments are associated with linguistically motivated alignment types. We propose a novel task of joint prediction of word alignment and alignment types and propose novel semi-supervised learning algorithms for this task. We also solve a sub-task of predicting the alignment type given an aligned word pair. In our experimental results, the generative models we introduce to model alignment types significantly outperform the models without alignment types.

Coordination in TAG without the Conjoin Operation
Chung-hye Han | Anoop Sarkar
Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms

2016

The Challenge of Simultaneous Speech Translation
Anoop Sarkar
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Keynote Speeches and Invited Talks

Graph-based Semi-supervised Gene Mention Tagging
Golnar Sheikhshab | Elizabeth Starks | Aly Karsan | Anoop Sarkar | Inanc Birol
Proceedings of the 15th Workshop on Biomedical Natural Language Processing

2015

Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Rada Mihalcea | Joyce Chai | Anoop Sarkar
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Improving Statistical Machine Translation with a Multilingual Paraphrase Database
Ramtin Mehdizadeh Seraj | Maryam Siahbani | Anoop Sarkar
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Learning segmentations that balance latency versus quality in spoken language translation
Hassan Shavarani | Maryam Siahbani | Ramtin Mehdizadeh Seraj | Anoop Sarkar
Proceedings of the 12th International Workshop on Spoken Language Translation: Papers

2014

Two Improvements to Left-to-Right Decoding for Hierarchical Phrase-based Machine Translation
Maryam Siahbani | Anoop Sarkar
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Expressive hierarchical rule extraction for left-to-right translation
Maryam Siahbani | Anoop Sarkar
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

Left-to-right (LR) decoding (Watanabe et al., 2006) is a promising decoding algorithm for hierarchical phrase-based translation (Hiero) that visits input spans in arbitrary order while producing the output translation in left-to-right order, which leads to far fewer language model calls. But the constrained SCFG used in LR-Hiero, a Greibach Normal Form (GNF) grammar with at most two non-terminals, is unable to account for some complex phrasal reordering. Allowing more non-terminals in the rules results in a more expressive grammar. LR decoding can be used to decode with SCFGs with more than two non-terminals, but the CKY decoders used for Hiero systems cannot deal with such expressive grammars due to a blowup in computational complexity. In this paper we present a dynamic programming algorithm for GNF rule extraction which efficiently extracts sentence-level SCFG rule sets with an arbitrary number of non-terminals. We analyze the performance of the obtained grammar for statistical machine translation on three language pairs.

Bayesian iterative-cascade framework for hierarchical phrase-based translation
Baskaran Sankaran | Anoop Sarkar
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

The typical training of a hierarchical phrase-based machine translation system involves a pipeline of multiple steps, where mistakes in early steps of the pipeline are propagated without any scope for rectifying them. Additionally, the alignments are trained independently of, and without being informed by, the end goal, and hence are not optimized for translation. We introduce a novel Bayesian iterative-cascade framework for training Hiero-style models that learns the alignments together with the synchronous translation grammar in an iterative setting. Our framework addresses the above-mentioned issues and provides an elegant and principled alternative to the existing training pipeline. Based on validation experiments involving two language pairs, our proposed iterative-cascade framework shows consistent gains over the traditional training pipeline for hierarchical translation.

Pivot-based triangulation for low-resource languages
Rohit Dholakia | Anoop Sarkar
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

This paper conducts a comprehensive study on the use of triangulation for four very low-resource languages: Mawukakan, Maninkakan, Haitian Kreyol and Malagasy. To the best of our knowledge, ours is the first effective translation system for the first two of these languages. We improve translation quality by adding data using pivot languages and experimentally compare previously proposed triangulation design options. Furthermore, since the low-resource language pair and pivot language pair data typically come from very different domains, we use insights from domain adaptation to tune the weighted mixture of direct and pivot-based phrase pairs to improve translation quality.

2013

Efficient Left-to-Right Hierarchical Phrase-Based Translation with Improved Reordering
Maryam Siahbani | Baskaran Sankaran | Anoop Sarkar
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation
Majid Razmara | Maryam Siahbani | Gholamreza Haffari | Anoop Sarkar
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Stacking for Statistical Machine Translation
Majid Razmara | Anoop Sarkar
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Multi-Metric Optimization Using Ensemble Tuning
Baskaran Sankaran | Anoop Sarkar | Kevin Duh
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Ensemble Triangulation for Statistical Machine Translation
Majid Razmara | Anoop Sarkar
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Scalable Variational Inference for Extracting Hierarchical Phrase-based Translation Rules
Baskaran Sankaran | Gholamreza Haffari | Anoop Sarkar
Proceedings of the Sixth International Joint Conference on Natural Language Processing

An Online Algorithm for Learning over Constrained Latent Representations using Multiple Views
Ann Clifton | Max Whitney | Anoop Sarkar
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Improved Reordering for Shallow-n Grammar based Hierarchical Phrase-based Translation
Baskaran Sankaran | Anoop Sarkar
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Bootstrapping via Graph Propagation
Max Whitney | Anoop Sarkar
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Mixing Multiple Translation Models in Statistical Machine Translation
Majid Razmara | George Foster | Baskaran Sankaran | Anoop Sarkar
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Kriya - The SFU System for Translation Task at WMT-12
Majid Razmara | Baskaran Sankaran | Ann Clifton | Anoop Sarkar
Proceedings of the Seventh Workshop on Statistical Machine Translation

Compact Rule Extraction for Hierarchical Phrase-based Translation
Baskaran Sankaran | Gholamreza Haffari | Anoop Sarkar
Proceedings of the 10th Conference of the Association for Machine Translation in the Americas: Research Papers

This paper introduces two novel approaches for extracting compact grammars for hierarchical phrase-based translation. The first is a combinatorial optimization approach and the second is a Bayesian model over Hiero grammars using Variational Bayes for inference. In contrast to the conventional Hiero (Chiang, 2007) rule extraction algorithm, our methods extract compact models, reducing model size by 17.8% to 57.6% without impacting translation quality across several language pairs. The Bayesian model is particularly effective for resource-poor languages, with evidence from Korean-English translation. To our knowledge, this is the first alternative to Hiero-style rule extraction that finds a more compact synchronous grammar without hurting translation performance.

2011

Combining Morpheme-based Machine Translation with Post-processing Morpheme Prediction
Ann Clifton | Anoop Sarkar
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing
Gholamreza Haffari | Marzieh Razavi | Anoop Sarkar
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Bayesian Extraction of Minimal SCFG Rules for Hierarchical Phrase-based Translation
Baskaran Sankaran | Gholamreza Haffari | Anoop Sarkar
Proceedings of the Sixth Workshop on Statistical Machine Translation

Book Reviews: Parsing Schemata for Practical Text Analysis by Carlos Gómez Rodríguez
Anoop Sarkar
Computational Linguistics, Volume 37, Issue 4 - December 2011

2010

Incremental Decoding for Phrase-Based Statistical Machine Translation
Baskaran Sankaran | Ajeet Grewal | Anoop Sarkar
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

Proceedings of the Fourteenth Conference on Computational Natural Language Learning
Mirella Lapata | Anoop Sarkar
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

2009

Active Learning for Statistical Phrase-based Machine Translation
Gholamreza Haffari | Maxim Roy | Anoop Sarkar
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium
Ulrich Germann | Chirag Shah | Svetlana Stoyanchev | Carolyn Penstein Rosé | Anoop Sarkar
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium

Exploration of the LTAG-Spinal Formalism and Treebank for Semantic Role Labeling
Yudong Liu | Anoop Sarkar
Proceedings of the 2009 Workshop on Grammar Engineering Across Frameworks (GEAF 2009)

Active Learning for Multilingual Statistical Machine Translation
Gholamreza Haffari | Anoop Sarkar
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2008

Proceedings of the Ninth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+9)
Claire Gardent | Anoop Sarkar
Proceedings of the Ninth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+9)

Training a Perceptron with Global and Local Features for Chinese Word Segmentation
Dong Song | Anoop Sarkar
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing

Homotopy-Based Semi-Supervised Hidden Markov Models for Sequence Labeling
Gholamreza Haffari | Anoop Sarkar
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Exploiting Rich Syntactic Information for Relationship Extraction from Biomedical Articles
Yudong Liu | Zhongmin Shi | Anoop Sarkar
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Simultaneous Identification of Biomedical Named-Entity and Functional Relation Using Statistical Parsing Techniques
Zhongmin Shi | Anoop Sarkar | Fred Popowich
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Transductive learning for statistical machine translation
Nicola Ueffing | Gholamreza Haffari | Anoop Sarkar
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

Active Learning for the Identification of Nonliteral Language
Julia Birke | Anoop Sarkar
Proceedings of the Workshop on Computational Approaches to Figurative Language

Experimental Evaluation of LTAG-Based Features for Semantic Role Labeling
Yudong Liu | Anoop Sarkar
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

A Clustering Approach for Nearly Unsupervised Recognition of Nonliteral Language
Julia Birke | Anoop Sarkar
11th Conference of the European Chapter of the Association for Computational Linguistics

Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing
Anoop Sarkar | Gholamreza Haffari
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Tutorial Abstracts

Voting between Dictionary-Based and Subword Tagging Models for Chinese Word Segmentation
Dong Song | Anoop Sarkar
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing

Using LTAG-Based Features for Semantic Role Labeling
Yudong Liu | Anoop Sarkar
Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms

2004

A Smorgasbord of Features for Statistical Machine Translation
Franz Josef Och | Daniel Gildea | Sanjeev Khudanpur | Anoop Sarkar | Kenji Yamada | Alex Fraser | Shankar Kumar | Libin Shen | David Smith | Katherine Eng | Viren Jain | Zhen Jin | Dragomir Radev
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

Discriminative Reranking for Machine Translation
Libin Shen | Anoop Sarkar | Franz Josef Och
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

2003

Example Selection for Bootstrapping Statistical Parsers
Mark Steedman | Rebecca Hwa | Stephen Clark | Miles Osborne | Anoop Sarkar | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics

Bootstrapping statistical parsers from small datasets
Mark Steedman | Miles Osborne | Anoop Sarkar | Stephen Clark | Rebecca Hwa | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
10th Conference of the European Chapter of the Association for Computational Linguistics

Using LTAG Based Features in Parse Reranking
Libin Shen | Anoop Sarkar | Aravind Joshi
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing

2002

Statistical Morphological Tagging and Parsing of Korean with an LTAG Grammar
Anoop Sarkar | Chung-Hye Han
Proceedings of the Sixth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+6)

Learning Verb Argument Structure from Minimally Annotated Corpora
Anoop Sarkar | Woottiporn Tripasai
COLING 2002: The 19th International Conference on Computational Linguistics

Squibs and Discussions: A Note on Typing Feature Structures
Shuly Wintner | Anoop Sarkar
Computational Linguistics, Volume 28, Number 3, September 2002

2001

Applying Co-Training Methods to Statistical Parsing
Anoop Sarkar
Second Meeting of the North American Chapter of the Association for Computational Linguistics

2000

Some Experiments on Indicators of Parsing Complexity for Lexicalized Grammars
Anoop Sarkar | Fei Xia | Aravind Joshi
Proceedings of the COLING-2000 Workshop on Efficiency In Large-Scale Parsing Systems

Practical experiments in parsing using Tree Adjoining Grammars
Anoop Sarkar
Proceedings of the Fifth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+5)

Learning Verb Subcategorization from Corpora: Counting Frame Subsets
Daniel Zeman | Anoop Sarkar
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

Automatic Extraction of Subcategorization Frames for Czech
Anoop Sarkar | Daniel Zeman
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics

1998

Prefix Probabilities from Stochastic Free Adjoining Grammars
Mark-Jan Nederhof | Anoop Sarkar | Giorgio Satta
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

Conditions on Consistency of Probabilistic Tree Adjoining Grammars
Anoop Sarkar
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

Prefix Probabilities from Stochastic Tree Adjoining Grammars
Mark-Jan Nederhof | Anoop Sarkar | Giorgio Satta
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

Conditions on Consistency of Probabilistic Tree Adjoining Grammars
Anoop Sarkar
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

Prefix probabilities for linear indexed grammars
Mark-Jan Nederhof | Anoop Sarkar | Giorgio Satta
Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4)

1997

Maintaining the Forest and Burning out the Underbrush in XTAG
Christine Doran | Beth Hockey | Philip Hopely | Joseph Rosenzweig | Anoop Sarkar | B. Srinivas | Fei Xia
Computational Environments for Grammar Development and Linguistic Engineering

1996

Incremental Parser Generation for Tree Adjoining Grammars
Anoop Sarkar
34th Annual Meeting of the Association for Computational Linguistics

Coordination in Tree Adjoining Grammars: Formalization and Implementation
Anoop Sarkar | Aravind Joshi
COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics

1995

University of Pennsylvania: Description of the University of Pennsylvania System Used for MUC-6
Breck Baldwin | Jeff Reynar | Mike Collins | Jason Eisner | Adwait Ratnaparkhi | Joseph Rosenzweig | Anoop Sarkar | Srinivas
Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995

1993

Extending Kimmo’s Two-Level Model of Morphology
Anoop Sarkar
31st Annual Meeting of the Association for Computational Linguistics
