Sriram Venkatapathy


2023

Adversarial Robustness for Large Language NER models using Disentanglement and Word Attributions
Xiaomeng Jin | Bhanukiran Vinzamuri | Sriram Venkatapathy | Heng Ji | Pradeep Natarajan
Findings of the Association for Computational Linguistics: EMNLP 2023

Large language models (LLMs) have been widely used for several applications such as question answering, text classification, and clustering. While the preliminary results across the aforementioned tasks look promising, recent work has shown that LLMs perform poorly on complex Named Entity Recognition (NER) tasks in comparison to fine-tuned pre-trained language models (PLMs). To encourage wider adoption of LLMs, our paper investigates the robustness of such LLM NER models and their instruction fine-tuned variants to adversarial attacks. In particular, we propose a novel attack that relies on disentanglement and word attribution techniques, where the former aids in learning an embedding that captures entity and non-entity influences separately, and the latter aids in identifying important words across both components. This is in stark contrast to most techniques, which primarily leverage non-entity words for perturbations, limiting the space explored to synthesize effective adversarial examples. Adversarial training based on our method improves the F1 score over the original LLM NER model by 8% and 18% on the CoNLL-2003 and OntoNotes 5.0 datasets, respectively.
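
As a rough illustration of the selection step this attack relies on, the sketch below scores the tokens of a toy NER tagger with gradient-x-input attributions and picks a high-importance word from each of the entity and non-entity components. The tagger, vocabulary sizes, and attribution choice are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of word-attribution-guided token selection for an
# adversarial NER attack. The toy tagger, vocabulary, and tag scheme are
# illustrative stand-ins, not the paper's actual models or code.
import torch
import torch.nn as nn

VOCAB, TAGS = 100, 5  # toy sizes

class ToyTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.out = nn.Linear(32, TAGS)

    def forward(self, ids):
        e = self.emb(ids)          # (seq_len, 32)
        return self.out(e), e      # per-token tag logits and embeddings

def attribution_scores(model, ids, gold):
    """Gradient-x-input attributions per token, a common proxy for word
    importance (the paper's attribution technique may differ)."""
    logits, emb = model(ids)
    emb.retain_grad()
    nn.functional.cross_entropy(logits, gold).backward()
    return (emb.grad * emb).sum(-1).abs()  # one importance score per token

model = ToyTagger()
ids = torch.randint(0, VOCAB, (8,))
gold = torch.randint(0, TAGS, (8,))
is_entity = torch.tensor([0, 1, 1, 0, 0, 1, 0, 0], dtype=torch.bool)

scores = attribution_scores(model, ids, gold)
# Unlike attacks that perturb only non-entity words, rank candidates from
# BOTH components, mirroring the disentangled view the abstract describes.
ent_pos = scores.masked_fill(~is_entity, -1.0).argmax()
non_pos = scores.masked_fill(is_entity, -1.0).argmax()
print("perturb positions:", int(ent_pos), int(non_pos))
```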

2022

Improving Large-Scale Conversational Assistants using Model Interpretation based Training Sample Selection
Stefan Schroedl | Manoj Kumar | Kiana Hajebi | Morteza Ziyadi | Sriram Venkatapathy | Anil Ramakrishna | Rahul Gupta | Pradeep Natarajan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track

This paper presents an approach to identify samples from live traffic where the customer implicitly communicated satisfaction with Alexa’s responses, by leveraging interpretations of model behavior. Such customer signals are noisy, and adding a large number of samples from live traffic to the training set makes re-training infeasible. Our work addresses these challenges by identifying a small number of samples that grow the training set by only ~0.05% while producing statistically significant improvements in both offline and online tests.
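
The sketch below illustrates only the budgeting idea: rank noisy live-traffic candidates with an interpretation-based score and keep just enough to grow the training set by ~0.05%. The scoring function and all sizes here are hypothetical placeholders, not the paper's method.

```python
# Minimal sketch of interpretation-driven sample selection under stated
# assumptions: score noisy live-traffic candidates and keep only a tiny
# top fraction. The scoring rule is a placeholder, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
train_size = 1_000_000                # hypothetical existing training set
candidates = rng.random((5000, 2))    # e.g. (model confidence, signal agreement)

def interpretation_score(feats):
    # Placeholder combination of signals; the paper derives its scores
    # from interpretations of model behavior on implicit-feedback traffic.
    conf, agree = feats[:, 0], feats[:, 1]
    return conf * agree

budget = int(train_size * 0.0005)     # grow the training set by ~0.05%
keep = np.argsort(-interpretation_score(candidates))[:budget]
print(f"selected {len(keep)} of {len(candidates)} candidates")
```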

Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations
Kuan-Hao Huang | Varun Iyer | Anoop Kumar | Sriram Venkatapathy | Kai-Wei Chang | Aram Galstyan
Findings of the Association for Computational Linguistics: EMNLP 2022

Syntactically controlled paraphrase generation has become an emerging research direction in recent years. Most existing approaches require annotated paraphrase pairs for training and are thus costly to extend to new domains. Unsupervised approaches, on the other hand, do not need paraphrase pairs but suffer from relatively poor performance in terms of syntactic control and quality of generated paraphrases. In this paper, we demonstrate that leveraging Abstract Meaning Representations (AMR) can greatly improve the performance of unsupervised syntactically controlled paraphrase generation. Our proposed model, AMR-enhanced Paraphrase Generator (AMRPG), separately encodes the AMR graph and the constituency parse of the input sentence into two disentangled semantic and syntactic embeddings. A decoder is then learned to reconstruct the input sentence from the semantic and syntactic embeddings. Our experiments show that AMRPG generates more accurate syntactically controlled paraphrases, both quantitatively and qualitatively, compared to existing unsupervised approaches. We also demonstrate that the paraphrases generated by AMRPG can be used for data augmentation to improve the robustness of NLP models.
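
A minimal PyTorch skeleton of the two-encoder/one-decoder layout the abstract describes follows. The encoder types, dimensions, and the treatment of the AMR graph and constituency parse as linearized token sequences are assumptions for illustration, not the AMRPG implementation.

```python
# Hypothetical skeleton of AMRPG's design as summarized above: one
# embedding from the AMR graph (semantics), one from the constituency
# parse (syntax), concatenated to condition a reconstruction decoder.
import torch
import torch.nn as nn

class AMRPGSketch(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.sem_enc = nn.GRU(dim, dim, batch_first=True)  # over AMR tokens
        self.syn_enc = nn.GRU(dim, dim, batch_first=True)  # over parse tokens
        self.dec = nn.GRU(dim, 2 * dim, batch_first=True)
        self.out = nn.Linear(2 * dim, vocab)

    def forward(self, amr_ids, parse_ids, tgt_ids):
        _, sem = self.sem_enc(self.emb(amr_ids))    # semantic embedding
        _, syn = self.syn_enc(self.emb(parse_ids))  # syntactic embedding
        h0 = torch.cat([sem, syn], dim=-1)          # disentangled pair
        dec_out, _ = self.dec(self.emb(tgt_ids), h0)
        return self.out(dec_out)                    # reconstruction logits

model = AMRPGSketch()
amr = torch.randint(0, 1000, (2, 12))    # linearized AMR graphs
parse = torch.randint(0, 1000, (2, 15))  # linearized constituency parses
tgt = torch.randint(0, 1000, (2, 10))    # sentences to reconstruct
print(model(amr, parse, tgt).shape)      # torch.Size([2, 10, 1000])
```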

2020

Evaluating the Effectiveness of Efficient Neural Architecture Search for Sentence-Pair Tasks
Ansel MacLaughlin | Jwala Dhamala | Anoop Kumar | Sriram Venkatapathy | Ragav Venkatesan | Rahul Gupta
Proceedings of the First Workshop on Insights from Negative Results in NLP

Neural Architecture Search (NAS) methods, which automatically learn entire neural models or individual neural cell architectures, have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks, including language modeling, natural language inference, and image classification. In this work, we explore the applicability of a SOTA NAS algorithm, Efficient Neural Architecture Search (ENAS) (Pham et al., 2018), to two sentence-pair tasks, paraphrase detection and semantic textual similarity. We use ENAS to perform a micro-level search and learn a task-optimized RNN cell architecture as a drop-in replacement for an LSTM. We explore the effectiveness of ENAS through experiments on three datasets (MRPC, SICK, STS-B), with two different models (ESIM, BiLSTM-Max) and two sets of embeddings (GloVe, BERT). In contrast to prior work applying ENAS to NLP tasks, our results are mixed: we find that ENAS architectures sometimes, but not always, outperform LSTMs and perform similarly to random architecture search.
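
To make the micro-level search space concrete, the toy sketch below samples one RNN cell wiring (a predecessor node and an activation per node) and runs a single step, averaging the cell's unused nodes as ENAS-style cells do. It renders only the flavor of the search space and is not Pham et al.'s code.

```python
# Toy sample from an ENAS-like RNN-cell micro search space. Node count,
# weight shapes, and initialization are illustrative assumptions.
import random
import torch

ACTS = {"tanh": torch.tanh, "relu": torch.relu,
        "sigmoid": torch.sigmoid, "identity": lambda x: x}

def sample_cell(num_nodes=4, seed=0):
    """Each node picks a previous node to read from and an activation."""
    rng = random.Random(seed)
    return [(rng.randrange(i + 1), rng.choice(list(ACTS)))
            for i in range(num_nodes)]

def run_cell(arch, x, h, weights):
    """One step of the sampled cell; the output averages the loose ends."""
    nodes = [torch.tanh(x @ weights[0] + h @ weights[1])]
    used = set()
    for i, (prev, act) in enumerate(arch):
        nodes.append(ACTS[act](nodes[prev] @ weights[i + 2]))
        used.add(prev)
    loose = [n for i, n in enumerate(nodes) if i not in used]
    return torch.stack(loose).mean(0)  # next hidden state

dim = 16
weights = [torch.randn(dim, dim) * 0.1 for _ in range(6)]
arch = sample_cell()
h = run_cell(arch, torch.randn(1, dim), torch.zeros(1, dim), weights)
print(arch, h.shape)
```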

2015

Reversibility reconsidered: finite-state factors for efficient probabilistic sampling in parsing and generation
Marc Dymetman | Sriram Venkatapathy | Chunyang Xiao
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

Fast Domain Adaptation of SMT models without in-Domain Parallel Data
Prashant Mathur | Sriram Venkatapathy | Nicola Cancedda
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

SORT: An Interactive Source-Rewriting Tool for Improved Translation
Shachar Mirkin | Sriram Venkatapathy | Marc Dymetman | Ioan Calapodescu
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Investigations in Exact Inference for Hierarchical Translation
Wilker Aziz | Marc Dymetman | Sriram Venkatapathy
Proceedings of the Eighth Workshop on Statistical Machine Translation

Confidence-driven Rewriting for Improved Translation
Shachar Mirkin | Sriram Venkatapathy | Marc Dymetman
Proceedings of Machine Translation Summit XIV: Posters

2012

Prediction of Learning Curves in Machine Translation
Prasanth Kolachina | Nicola Cancedda | Marc Dymetman | Sriram Venkatapathy
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

An SMT-driven Authoring Tool
Sriram Venkatapathy | Shachar Mirkin
Proceedings of COLING 2012: Demonstration Papers

2010

Phrase Based Decoding using a Discriminative Model
Prasanth Kolachina | Sriram Venkatapathy | Srinivas Bangalore | Sudheer Kolachina | Avinesh PVS
Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation

A Discriminative Approach for Dependency Based Statistical Machine Translation
Sriram Venkatapathy | Rajeev Sangal | Aravind Joshi | Karthik Gali
Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation

2009

Sentence Realisation from Bag of Words with Dependency Constraints
Karthik Gali | Sriram Venkatapathy
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium

2007

Discriminative word alignment by learning the alignment structure and syntactic divergence between a language pair
Sriram Venkatapathy | Aravind Joshi
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation

Three models for discriminative machine translation using Global Lexical Selection and Sentence Reconstruction
Sriram Venkatapathy | Srinivas Bangalore
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation

Detecting Compositionality of Verb-Object Combinations using Selectional Preferences
Diana McCarthy | Sriram Venkatapathy | Aravind Joshi
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

Using Information about Multi-word Expressions for the Word-Alignment Task
Sriram Venkatapathy | Aravind K. Joshi
Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties

2005

Relative Compositionality of Multi-word Expressions: A Study of Verb-Noun (V-N) Collocations
Sriram Venkatapathy | Aravind K. Joshi
Second International Joint Conference on Natural Language Processing: Full Papers

Inferring Semantic Roles Using Sub-Categorization Frames and Maximum Entropy Model
Akshar Bharati | Sriram Venkatapathy | Prashanth Reddy
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

Measuring the Relative Compositionality of Verb-Noun (V-N) Collocations by Integrating Features
Sriram Venkatapathy | Aravind Joshi
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing