Karthik Radhakrishnan


2023

Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning
Genta Winata | Lingjue Xie | Karthik Radhakrishnan | Shijie Wu | Xisen Jin | Pengxiang Cheng | Mayank Kulkarni | Daniel Preotiuc-Pietro
Findings of the Association for Computational Linguistics: ACL 2023

Real-life multilingual systems should be able to efficiently incorporate new languages as the data distributions fed to the system evolve and shift over time. To do this, systems need to handle the issue of catastrophic forgetting, where model performance drops for languages or tasks seen earlier in training. In this paper, we study catastrophic forgetting, as well as methods to minimize it, in a massively multilingual continual learning framework involving up to 51 languages and covering both classification and sequence labeling tasks. We present LR ADJUST, a learning rate scheduling method that is simple yet effective in preserving new information without strongly overwriting past knowledge. Furthermore, we show that this method is effective across multiple continual learning approaches. Finally, we provide further insights into the dynamics of catastrophic forgetting in this massively multilingual setup.
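
The abstract does not spell out the LR ADJUST schedule, so the snippet below is only a minimal sketch of the underlying idea: scale the learning rate down each time a new language enters the training stream, so that later updates perturb earlier knowledge less. The decay factor and exponential schedule are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of the LR ADJUST idea: shrink the learning rate as
# new languages arrive so updates on a new language do not strongly
# overwrite knowledge from previously seen ones. The decay schedule here
# is an assumption for illustration, not the schedule from the paper.

def lr_adjust(base_lr: float, num_languages_seen: int, decay: float = 0.9) -> float:
    """Return a learning rate scaled by how many languages have already
    been trained on (illustrative exponential schedule)."""
    return base_lr * (decay ** num_languages_seen)

base_lr = 5e-5
for i, lang in enumerate(["en", "de", "hi", "sw"]):
    lr = lr_adjust(base_lr, i)
    print(f"language {lang}: fine-tune with lr={lr:.2e}")
    # ... continue fine-tuning the multilingual model on `lang` at this lr ...
```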

Towards a Unified Multi-Domain Multilingual Named Entity Recognition Model
Mayank Kulkarni | Daniel Preotiuc-Pietro | Karthik Radhakrishnan | Genta Indra Winata | Shijie Wu | Lingjue Xie | Shaohua Yang
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Named Entity Recognition is a key Natural Language Processing task whose performance is sensitive to the choice of genre and language. A unified NER model across multiple genres and languages is more practical and efficient because it can leverage commonalities across genres and languages. In this paper, we propose a novel setup for NER which includes multi-domain and multilingual training and evaluation across 13 domains and 4 languages. We explore a range of approaches to building a unified model using domain and language adaptation techniques. Our experiments highlight multiple nuances to consider when building a unified model: naive data pooling fails to obtain good performance, domain-specific adaptations matter more than language-specific ones, and a unified model with domain-specific adaptations approaches the performance of multiple dedicated monolingual models at a fraction of their parameter count.
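
The abstract names domain-specific adaptations as the key ingredient without detailing them; one common way to realize such adaptation in a shared encoder is a small per-domain bottleneck adapter. The module below is a generic sketch under that assumption (names and sizes are invented), not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DomainAdapter(nn.Module):
    """Bottleneck adapter: a small residual MLP specialized to one domain."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))  # residual connection

class UnifiedNERHead(nn.Module):
    """Shared encoder states -> per-domain adapter -> shared tag classifier."""
    def __init__(self, hidden_size: int, num_tags: int, domains: list):
        super().__init__()
        self.adapters = nn.ModuleDict({d: DomainAdapter(hidden_size) for d in domains})
        self.classifier = nn.Linear(hidden_size, num_tags)

    def forward(self, encoder_states: torch.Tensor, domain: str) -> torch.Tensor:
        return self.classifier(self.adapters[domain](encoder_states))

head = UnifiedNERHead(hidden_size=768, num_tags=9, domains=["news", "web", "social"])
states = torch.randn(2, 16, 768)      # (batch, tokens, hidden) from a shared encoder
logits = head(states, domain="news")  # (2, 16, 9) tag scores
```

Because only the adapters are domain-specific, the parameter overhead per added domain is a small fraction of a full dedicated model, consistent with the parameter-count argument in the abstract.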

Efficient Zero-Shot Cross-lingual Inference via Retrieval
Genta Winata | Lingjue Xie | Karthik Radhakrishnan | Yifan Gao | Daniel Preotiuc-Pietro
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)

2021

Detecting Community Sensitive Norm Violations in Online Conversations
Chan Young Park | Julia Mendelsohn | Karthik Radhakrishnan | Kinjal Jain | Tushar Kanakagiri | David Jurgens | Yulia Tsvetkov
Findings of the Association for Computational Linguistics: EMNLP 2021

Online platforms and communities establish their own norms governing what behavior is acceptable within the community. Substantial effort in NLP has focused on identifying unacceptable behaviors and, more recently, on forecasting them before they occur. However, these efforts have largely treated toxicity as the sole form of community norm violation, overlooking the much larger set of rules that moderators enforce. Here, we introduce a new dataset covering a more complete spectrum of community norms and their violations in both the local conversational and global community contexts. We introduce a series of models that use this data for context- and community-sensitive norm violation detection, showing that conditioning on context and community yields strong performance.
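
The abstract does not describe the model inputs; purely as a hypothetical illustration, a standard text classifier can be made context- and community-sensitive by prepending the community identifier and the preceding turns to the target comment. The marker tokens below are invented for this sketch.

```python
# Hypothetical input construction for a context- and community-sensitive
# classifier: prepend the community and preceding conversation turns so a
# standard sequence classifier can condition on both. The marker tokens
# and field order are illustrative assumptions, not the paper's format.

def build_input(community: str, context_turns: list, comment: str) -> str:
    context = " [SEP] ".join(context_turns)
    return f"[COMMUNITY] {community} [CONTEXT] {context} [COMMENT] {comment}"

text = build_input(
    community="r/AskHistorians",
    context_turns=["What caused the fall of Rome?", "Mostly economics, imo."],
    comment="source???",
)
print(text)  # feed to any classifier fine-tuned on the norm-violation labels
```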

Task-Oriented Dialog Systems for Dravidian Languages
Tushar Kanakagiri | Karthik Radhakrishnan
Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages

Task-oriented dialog systems help a user achieve a particular goal by parsing user requests to execute a particular action. These systems typically require copious amounts of training data to effectively understand the user intent and its corresponding slots. Acquiring large training corpora requires significant manual annotation effort, rendering corpus construction infeasible for low-resource languages. In this paper, we present a two-step approach for automatically constructing task-oriented dialog data in such languages by making use of annotated data from high-resource languages. First, we use a machine translation (MT) system to translate the utterance and slot information to the target language. Second, we use token prefix matching and mBERT-based semantic matching to align the slot tokens to the corresponding tokens in the utterance. We hand-curate a new test dataset in two low-resource Dravidian languages and show the significance and impact of our training dataset construction using a state-of-the-art mBERT model, achieving a Slot F1 of 81.51 (Kannada) and 78.82 (Tamil) on our test sets.
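
As a rough sketch of the first alignment step, the snippet below matches each translated slot token to the utterance token sharing its longest common prefix; the mBERT-based semantic matching the paper uses as a second step is omitted, and the romanized toy tokens are invented for illustration.

```python
# Illustrative sketch of slot-to-utterance alignment by token prefix
# matching. The paper's second step (mBERT semantic matching for tokens
# the prefix heuristic misses) is not shown here.

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def align_slot(slot_tokens: list, utterance_tokens: list) -> list:
    """For each slot token, return the index of the utterance token with
    the longest shared prefix, or -1 when nothing matches at all."""
    indices = []
    for s in slot_tokens:
        scores = [common_prefix_len(s, u) for u in utterance_tokens]
        best = max(range(len(scores)), key=scores.__getitem__)
        indices.append(best if scores[best] > 0 else -1)
    return indices

# Toy romanized example; real inputs would be Kannada or Tamil script.
utterance = ["nanage", "bengaluralli", "hotel", "beku"]
print(align_slot(["bengaluru"], utterance))  # -> [1]
```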

2020

CiteQA@CLSciSumm 2020
Anjana Umapathy | Karthik Radhakrishnan | Kinjal Jain | Rahul Singh
Proceedings of the First Workshop on Scholarly Document Processing

In academic publications, citations build context for a concept by highlighting relevant aspects of reference papers. Automatically identifying the referenced snippets can help researchers swiftly isolate the principal contributions of scientific works. In this paper, we exploit the underlying structure of scientific articles to predict the reference paper spans and facets corresponding to a citation. We propose two methods to detect citation spans: keyphrase overlap, and BERT combined with structural priors. We fine-tune FastText embeddings and leverage textual and positional features to predict citation facets.
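
A minimal sketch of the keyphrase-overlap method could score each candidate sentence in the reference paper by its content-word overlap with the citation text; the stopword filter below is a simplified stand-in for real keyphrase extraction, and the example sentences are invented.

```python
# Simplified keyphrase-overlap sketch: rank reference sentences by how
# many content words they share with the citing sentence. A real system
# would use proper keyphrase extraction rather than a stopword filter.

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "we", "our", "is", "on", "for"}

def content_words(text: str) -> set:
    return {w.strip(".,") for w in text.lower().split() if w not in STOPWORDS}

def best_span(citation: str, reference_sentences: list) -> str:
    cite_words = content_words(citation)
    return max(reference_sentences,
               key=lambda s: len(cite_words & content_words(s)))

citation = "Their parser achieves state-of-the-art accuracy on dependency parsing."
sentences = [
    "We introduce a new dataset of annotated dialogues.",
    "Our parser obtains state-of-the-art accuracy on dependency parsing benchmarks.",
]
print(best_span(citation, sentences))  # -> the second sentence
```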

“A Little Birdie Told Me ... ” - Inductive Biases for Rumour Stance Detection on Social Media
Karthik Radhakrishnan | Tushar Kanakagiri | Sharanya Chakravarthy | Vidhisha Balachandran
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)

The rise in the usage of social media has placed it in a central position for news dissemination and consumption, greatly increasing the potential for the proliferation of rumours and misinformation. In an effort to mitigate the spread of rumours, we tackle the related task of identifying the stance (Support, Deny, Query, Comment) of a social media post. Unlike previous works, we impose inductive biases that capture platform-specific user behavior. These biases, coupled with social-media fine-tuning of BERT, allow for better language understanding, yielding an F1 score of 58.7 on the SemEval 2019 task on rumour stance detection.
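
The abstract does not enumerate the inductive biases; purely as a hypothetical illustration, platform-specific cues such as question marks, shared links, or thread position could be extracted as auxiliary signals alongside BERT. The specific features below are assumptions, not the paper's.

```python
# Hypothetical platform-specific cues one might encode as inductive
# biases for stance detection; the paper's actual biases may differ.
# Each feature could be appended to the BERT input or used as an
# auxiliary classifier signal.

def platform_features(post: str, is_reply_to_source: bool) -> dict:
    words = post.lower().split()
    return {
        "has_question_mark": "?" in post,         # querying behaviour
        "has_url": "http" in post,                # evidence sharing
        "replies_to_source": is_reply_to_source,  # position in the thread
        "has_negation": any(w in words for w in ("no", "not", "fake")),
    }

print(platform_features("Is there a source for this claim?", is_reply_to_source=True))
```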

ColloQL: Robust Text-to-SQL Over Search Queries
Karthik Radhakrishnan | Arvind Srikantan | Xi Victoria Lin
Proceedings of the First Workshop on Interactive and Executable Semantic Parsing

Translating natural language utterances to executable queries is a helpful technique for making the vast amount of data stored in relational databases accessible to a wider range of non-tech-savvy end users. Prior work in this area has largely focused on textual input that is linguistically correct and semantically unambiguous. However, real-world user queries are often succinct, colloquial, and noisy, resembling the input to a search engine. In this work, we introduce data augmentation techniques and a sampling-based content-aware BERT model (ColloQL) to achieve robust text-to-SQL modeling over natural language search (NLS) questions. Due to the lack of evaluation data, we curate a new dataset of NLS questions and demonstrate the efficacy of our approach. ColloQL’s superior performance extends to well-formed text, achieving 84.9% logical form accuracy and 90.7% execution accuracy on the WikiSQL dataset, making it, to the best of our knowledge, the highest performing model that does not use execution-guided decoding.
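
The augmentation techniques are not detailed in the abstract; one augmentation in this spirit, sketched below under that assumption, degrades a well-formed question into a terse search-style query by stripping function words and randomly deleting tokens.

```python
import random

# Illustrative augmentation: turn a well-formed question into a noisy,
# search-engine-style query. The function-word list, deletion rate, and
# overall recipe are assumptions, not necessarily ColloQL's.

FUNCTION_WORDS = {"what", "is", "the", "of", "a", "an", "are", "which", "in", "for"}

def to_search_query(question: str, drop_prob: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    kept = [w for w in question.lower().rstrip("?").split()
            if w not in FUNCTION_WORDS and rng.random() > drop_prob]
    return " ".join(kept)

print(to_search_query("What is the population of New York in 2019?"))
# -> "population new york 2019"
```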

Domino at FinCausal 2020, Task 1 and 2: Causal Extraction System
Sharanya Chakravarthy | Tushar Kanakagiri | Karthik Radhakrishnan | Anjana Umapathy
Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation

Automatic identification of cause-effect relationships from data is a challenging but important problem in artificial intelligence, and identifying such semantic relationships has become increasingly important for downstream applications like Question Answering, Information Retrieval, and Event Prediction. In this work, we address causal relationship extraction from financial news using the FinCausal 2020 dataset, tackling two tasks: 1) detecting the presence of causal relationships and 2) extracting the segments corresponding to cause and effect from news snippets. We propose Transformer-based sequence and token classification models with post-processing rules, which achieve F1 scores of 96.12 and 79.60 on Tasks 1 and 2, respectively.
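
As a generic sketch of the token-classification formulation for Task 2, every token is labeled with a BIO tag over Cause and Effect spans; the tiny randomly initialized encoder below is a placeholder for the pretrained Transformer the paper fine-tunes, and the post-processing rules are not shown.

```python
import torch
import torch.nn as nn

# Generic BIO token-classification sketch for cause/effect extraction.
# The small encoder stands in for a fine-tuned pretrained Transformer;
# tag names follow the usual BIO convention.

TAGS = ["O", "B-CAUSE", "I-CAUSE", "B-EFFECT", "I-EFFECT"]

class CauseEffectTagger(nn.Module):
    def __init__(self, vocab_size: int = 30522, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, len(TAGS))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(token_ids)))  # (batch, seq, tags)

model = CauseEffectTagger()
logits = model(torch.randint(0, 30522, (1, 12)))        # 12 toy token ids
pred = [TAGS[i] for i in logits.argmax(-1)[0].tolist()]
print(pred)  # one BIO tag per token; post-processing would clean up spans
```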