2023
PhraseSumm: Abstractive Short Phrase Summarization
Kasturi Bhattacharjee | Kathleen McKeown | Rashmi Gangadharaiah
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)
2022
What Do Users Care About? Detecting Actionable Insights from User Feedback
Kasturi Bhattacharjee | Rashmi Gangadharaiah | Kathleen McKeown | Dan Roth
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Users often leave feedback on a myriad of aspects of a product which, if leveraged successfully, can yield useful insights that lead to further improvements down the line. Detecting actionable insights can be challenging owing to large amounts of data as well as the absence of labels in real-world scenarios. In this work, we present an aggregation and graph-based ranking strategy for unsupervised detection of these insights from real-world, noisy, user-generated feedback. Our proposed approach significantly outperforms strong baselines on two real-world user feedback datasets and one academic dataset.
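The abstract does not spell out the paper's exact aggregation and ranking pipeline, but the general pattern of graph-based ranking over user feedback can be sketched as follows; the TF-IDF features, similarity threshold, and PageRank centrality below are illustrative assumptions, not the authors' method.

```python
# Hypothetical illustration: rank user-feedback snippets by graph centrality.
# This is NOT the paper's pipeline; it only sketches the generic idea of
# "build a similarity graph over feedback, then rank the nodes".
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

feedback = [
    "App crashes when I upload a photo",
    "Crashes every time I attach an image",
    "Love the new dark mode",
    "Please add an export-to-CSV option",
]

# Embed each snippet and connect pairs above an (arbitrary) similarity threshold.
tfidf = TfidfVectorizer().fit_transform(feedback)
sim = cosine_similarity(tfidf)

graph = nx.Graph()
graph.add_nodes_from(range(len(feedback)))
for i in range(len(feedback)):
    for j in range(i + 1, len(feedback)):
        if sim[i, j] > 0.2:
            graph.add_edge(i, j, weight=float(sim[i, j]))

# Centrality as a rough proxy for "widely echoed, hence actionable" feedback.
scores = nx.pagerank(graph, weight="weight")
for idx in sorted(scores, key=scores.get, reverse=True):
    print(f"{scores[idx]:.3f}  {feedback[idx]}")
```

Feedback that many other snippets resemble (here, the two crash reports) floats to the top, which is the intuition behind ranking aggregated feedback by graph structure rather than by individual labels.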
PerKGQA: Question Answering over Personalized Knowledge Graphs
Ritam Dutt | Kasturi Bhattacharjee | Rashmi Gangadharaiah | Dan Roth | Carolyn Rose
Findings of the Association for Computational Linguistics: NAACL 2022
Previous studies on question answering over knowledge graphs have typically operated over a single knowledge graph (KG). This KG is assumed to be known a priori and is leveraged similarly for all users’ queries during inference. However, such an assumption is not applicable to real-world settings, such as healthcare, where one needs to handle queries of new users over unseen KGs during inference. Furthermore, privacy concerns and high computational costs render it infeasible to query the single KG that has information about all users while answering a specific user’s query. The above concerns motivate our question answering setting over personalized knowledge graphs (PerKGQA), where each user has restricted access to their KG. We observe that current state-of-the-art KGQA methods that require learning prior node representations fare poorly. We propose two complementary approaches, PathCBR and PathRGCN, for PerKGQA. The former is a simple non-parametric technique that employs case-based reasoning, while the latter is a parametric approach using graph neural networks. Our proposed methods circumvent learning prior representations, can generalize to unseen KGs, and outperform strong baselines on an academic and an internal dataset by 6.5% and 10.5%.
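As a rough illustration of the case-based-reasoning flavour described here (the toy case base, questions, and healthcare-style KG below are invented, and the real method's retrieval and path scoring are more involved), one can reuse the relation path of the most similar previously answered question on a new user's personalized graph:

```python
# Hypothetical sketch of case-based reasoning over a per-user KG:
# retrieve the most similar solved question, then replay its relation path
# on the current user's graph. Not the paper's PathCBR implementation.
from difflib import SequenceMatcher

# Toy "case base": past questions paired with the relation path that answered them.
case_base = [
    ("Who is my primary physician?", ["has_physician"]),
    ("What medication was I prescribed?", ["has_prescription", "medication"]),
]

# A new user's personalized KG as nested dicts: entity -> relation -> entity.
user_kg = {
    "user_42": {"has_prescription": "rx_7"},
    "rx_7": {"medication": "ibuprofen"},
}

def most_similar_case(question):
    # Crude string similarity stands in for a learned question encoder.
    return max(case_base,
               key=lambda c: SequenceMatcher(None, question.lower(), c[0].lower()).ratio())

def follow_path(kg, start, path):
    node = start
    for relation in path:
        node = kg.get(node, {}).get(relation)
        if node is None:
            return None
    return node

question = "Which medicine am I prescribed?"
_, path = most_similar_case(question)
print(follow_path(user_kg, "user_42", path))  # -> ibuprofen
```

Because the answer is produced by walking the user's own graph at query time, nothing about that graph needs to be seen during training, which is the property the abstract highlights for unseen KGs.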
Towards Cross-Domain Transferability of Text Generation Models for Legal Text
Vinayshekhar Bannihatti Kumar | Kasturi Bhattacharjee | Rashmi Gangadharaiah
Proceedings of the Natural Legal Language Processing Workshop 2022
Legalese can often be filled with verbose, domain-specific jargon, which can make it challenging for non-experts to understand and use. Creating succinct summaries of legal documents often aids user comprehension. However, obtaining labeled data for every domain of legal text is challenging, which makes cross-domain transferability of text generation models for legal text an important area of research. In this paper, we explore the ability of existing state-of-the-art T5- and BART-based summarization models to transfer across legal domains. We leverage publicly available datasets across four domains for this task, one of which is a new resource for summarizing privacy policies that we curate and release for academic research. Our experiments demonstrate the low cross-domain transferability of these models, while also highlighting the benefits of combining different domains. Further, we compare the effectiveness of standard metrics for this task and illustrate the vast differences in their performance.
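A minimal sketch of this kind of cross-domain check, assuming an off-the-shelf BART checkpoint (facebook/bart-large-cnn), the Hugging Face transformers pipeline, and the rouge_score package rather than the paper's actual models, datasets, or training setup:

```python
# Hypothetical cross-domain probe: summarize an out-of-domain legal clause with a
# generic news-trained BART summarizer and score it against a reference with ROUGE.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Invented contract clause and reference summary, for illustration only.
document = (
    "The Licensee shall indemnify and hold harmless the Licensor against any "
    "claims, damages, or liabilities arising out of the Licensee's use of the "
    "Software, except to the extent caused by the Licensor's gross negligence."
)
reference = ("The licensee must cover the licensor's losses unless the licensor "
             "was grossly negligent.")

prediction = summarizer(document, max_length=40, min_length=10)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(prediction)
print(scorer.score(reference, prediction))
```

Running the same loop with models fine-tuned on one legal domain and test sets from another is the basic recipe for quantifying the cross-domain gap the abstract refers to.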
2021
Domain and Task-Informed Sample Selection for Cross-Domain Target-based Sentiment Analysis
Kasturi Bhattacharjee | Rashmi Gangadharaiah | Smaranda Muresan
Proceedings of the 4th International Conference on Natural Language and Speech Processing (ICNLSP 2021)
Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction
Elsbeth Turcan | Shuai Wang | Rishita Anubhai | Kasturi Bhattacharjee | Yaser Al-Onaizan | Smaranda Muresan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging
Kasturi Bhattacharjee | Miguel Ballesteros | Rishita Anubhai | Smaranda Muresan | Jie Ma | Faisal Ladhak | Yaser Al-Onaizan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Leveraging large amounts of unlabeled data with Transformer-based architectures such as BERT has gained popularity in recent times, owing to their effectiveness in learning general representations that can then be fine-tuned for downstream tasks with much success. However, training these models can be costly from both an economic and an environmental standpoint. In this work, we investigate how to effectively use unlabeled data by exploring the task-specific semi-supervised approach Cross-View Training (CVT) and comparing it with task-agnostic BERT in multiple settings that include domain- and task-relevant English data. CVT uses a much lighter model architecture, and we show that it achieves performance similar to BERT on a set of sequence tagging tasks, with a smaller financial and environmental impact.
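A much-simplified sketch of the cross-view consistency idea behind CVT for sequence tagging (not the paper's or the original CVT architecture; the tiny BiLSTM tagger, prediction heads, and random batch below are illustrative only): auxiliary predictors that see restricted views of the input are trained to match the full-view predictor on unlabeled text.

```python
# Simplified cross-view training sketch for sequence tagging.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVTTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.full_head = nn.Linear(2 * hidden, num_tags)  # sees both directions
        self.fwd_head = nn.Linear(hidden, num_tags)       # forward-only view
        self.bwd_head = nn.Linear(hidden, num_tags)       # backward-only view

    def forward(self, tokens):
        states, _ = self.bilstm(self.embed(tokens))
        half = states.size(-1) // 2
        fwd, bwd = states[..., :half], states[..., half:]
        return self.full_head(states), self.fwd_head(fwd), self.bwd_head(bwd)

model = CVTTagger(vocab_size=100)
unlabeled = torch.randint(0, 100, (8, 12))        # a batch of unlabeled token ids

full_logits, fwd_logits, bwd_logits = model(unlabeled)
targets = F.softmax(full_logits.detach(), dim=-1)  # full view acts as the teacher
cvt_loss = sum(
    F.kl_div(F.log_softmax(view, dim=-1), targets, reduction="batchmean")
    for view in (fwd_logits, bwd_logits)
)
cvt_loss.backward()  # combined with the usual supervised tagging loss in practice
```

The point of the comparison in the abstract is that this kind of lightweight, task-specific model can exploit unlabeled data at a fraction of the compute required to pretrain or fine-tune a large task-agnostic model like BERT.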
2019
Neural Word Decomposition Models for Abusive Language Detection
Sravan Bodapati | Spandana Gella | Kasturi Bhattacharjee | Yaser Al-Onaizan
Proceedings of the Third Workshop on Abusive Language Online
The text we see on social media suffers from many undesired characteristics such as hate speech, abusive language, and insults. Its nature is also very different from the traditional text we see in news, with many obfuscated words and intentional typos. This poses several robustness challenges for natural language processing (NLP) techniques developed for traditional text. Techniques proposed in recent times, such as character encoding models, subword models, and byte pair encoding for extracting subwords, can help deal with some of these nuances. In our work, we analyze the effectiveness of each of the above techniques and compare and contrast various word decomposition techniques when used in combination with others. We experiment with recent advances in fine-tuning pretrained language models and demonstrate their robustness to domain shift. We also show that our approaches achieve state-of-the-art performance on the Wikipedia attack and toxicity datasets and a Twitter hate speech dataset.
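To illustrate why subword decomposition helps with obfuscated social-media text (the tokenizers and examples below are arbitrary stand-ins, not the models evaluated in the paper), compare how byte-level BPE and WordPiece split a deliberately misspelled insult:

```python
# Hypothetical illustration: a word-level vocabulary would treat "id!ot" as a
# single unknown token, while subword tokenizers still expose recognizable pieces.
from transformers import AutoTokenizer

bpe = AutoTokenizer.from_pretrained("gpt2")                 # byte-level BPE
wordpiece = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece

for text in ["you idiot", "you id!ot", "you i d i o t"]:
    print(text)
    print("  BPE      :", bpe.tokenize(text))
    print("  WordPiece:", wordpiece.tokenize(text))
```

Because the obfuscated forms still decompose into fragments seen during training, downstream classifiers built on such decompositions degrade more gracefully than purely word-level models when users intentionally mangle abusive terms.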