Siddhartha Mukherjee


2022

Efficient Dialog State Tracking Using Gated-Intent based Slot Operation Prediction for On-device Dialog Systems
Pranamya Patil | Hyungtak Choi | Ranjan Samal | Gurpreet Kaur | Manisha Jhawar | Aniruddha Tammewar | Siddhartha Mukherjee
Proceedings of the 19th International Conference on Natural Language Processing (ICON)

Conversational agents on smart devices must respond with low latency to provide a good user experience and real-time utility. This calls for on-device processing, which is faster but constrains available resources such as memory and compute. Most state-of-the-art Dialog State Tracking (DST) systems rely on large pre-trained language models that demand heavy computation, typically available only on high-end servers. On-device systems, by contrast, are memory efficient, reduce latency, preserve privacy, and do not depend on the network. A recent approach reduces latency by splitting slot prediction into two subtasks: State Operation Prediction (SOP), which selects an action for each slot, and Slot Value Generation (SVG), which produces values for the identified slots. Since SVG is computationally expensive, it is performed only for the small subset of slots whose predicted operation requires it. Motivated by this optimization, we build a similar system and use multi-task learning to achieve significant improvements in DST performance while optimizing resource consumption. We propose a quadruplet (Domain, Intent, Slot, and Slot Value) based DST, which significantly boosts performance. We experiment with different techniques for fusing representations from the intent and slot prediction tasks. We obtain a best joint accuracy of 53.3% on the publicly available MultiWOZ 2.2 dataset using BERT-medium together with a gating mechanism. We also compare the cost efficiency of our system with larger models and find that it is well suited for an on-device production environment.
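
A minimal sketch of the kind of gated intent-slot fusion and operation prediction the abstract describes, not the authors' implementation: the class name, hidden size, label counts, and operation set below are illustrative assumptions, and the encoder producing the intent and slot vectors (e.g. BERT-medium) is omitted.

```python
import torch
import torch.nn as nn

# Assumed State Operation Prediction label set; SVG runs only for UPDATE.
OPERATIONS = ["CARRYOVER", "DELETE", "DONTCARE", "UPDATE"]

class GatedIntentSlotFusion(nn.Module):
    """Fuse an utterance-level intent vector into per-slot states via a sigmoid gate."""

    def __init__(self, hidden_size: int = 512, num_intents: int = 10,
                 num_slots: int = 49, num_ops: int = len(OPERATIONS)):
        super().__init__()
        self.intent_head = nn.Linear(hidden_size, num_intents)
        # Gate decides, per dimension, how much intent context flows into each slot state.
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.op_head = nn.Linear(hidden_size, num_ops)

    def forward(self, intent_repr: torch.Tensor, slot_reprs: torch.Tensor):
        # intent_repr: (batch, hidden), e.g. the [CLS] vector of a BERT-medium encoder
        # slot_reprs:  (batch, num_slots, hidden), one vector per tracked slot
        intent_logits = self.intent_head(intent_repr)
        expanded = intent_repr.unsqueeze(1).expand_as(slot_reprs)
        g = torch.sigmoid(self.gate(torch.cat([slot_reprs, expanded], dim=-1)))
        fused = g * slot_reprs + (1.0 - g) * expanded
        op_logits = self.op_head(fused)  # (batch, num_slots, num_ops)
        return intent_logits, op_logits

# Toy usage with random encoder outputs.
model = GatedIntentSlotFusion()
intent_vec = torch.randn(2, 512)
slot_vecs = torch.randn(2, 49, 512)
intent_logits, op_logits = model(intent_vec, slot_vecs)
# Expensive Slot Value Generation would be invoked only where UPDATE is predicted.
update_mask = op_logits.argmax(-1) == OPERATIONS.index("UPDATE")
```

The gate is the point of the sketch: it lets intent information modulate each slot's representation before operation prediction, so value generation can be skipped for slots whose operation does not require it.
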

2020

Proceedings of the Workshop on Joint NLP Modelling for Conversational AI @ ICON 2020
Praveen Kumar G S | Siddhartha Mukherjee | Ranjan Samal
Proceedings of the Workshop on Joint NLP Modelling for Conversational AI @ ICON 2020

2019

Robust Deep Learning Based Sentiment Classification of Code-Mixed Text
Siddhartha Mukherjee | Vinuthkumar Prasan | Anish Nediyanchath | Manan Shah | Nikhil Kumar
Proceedings of the 16th International Conference on Natural Language Processing

India is one of the few countries in the world with such a legacy of linguistic diversity. Most of these languages are influenced by English, which leads to a large presence of code-mixed text on social media. The enormous volume of this code-mixed text offers an important research area for Natural Language Processing (NLP). This paper proposes a novel attention-based deep learning technique for Sentiment Classification on Code-Mixed Text (ACCMT) of Hindi-English. The proposed architecture fuses character and word features. The non-availability of suitable word embeddings to represent code-mixed text is another important hurdle for this class of NLP tasks. This paper therefore also proposes a novel technique for preparing word embeddings for code-mixed text, built from two separately trained word embeddings on Romanized Hindi and English, respectively. This embedding is then used in the proposed deep learning architecture for robust classification. The proposed technique achieves 71.97% accuracy, which exceeds the baseline accuracy.
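
A minimal sketch, under assumptions, of how two separately trained embeddings could be combined into a single code-mixed representation as the abstract describes; the toy vocabularies, dimension, and concatenation strategy below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

DIM = 100  # assumed per-language embedding dimension

# Toy stand-ins for two separately trained embedding tables.
hindi_emb = {"accha": np.random.rand(DIM), "nahi": np.random.rand(DIM)}    # Romanized Hindi
english_emb = {"good": np.random.rand(DIM), "movie": np.random.rand(DIM)}  # English

def code_mixed_vector(token: str) -> np.ndarray:
    """Concatenate the Hindi and English views of a token; an unseen side is zeros."""
    h = hindi_emb.get(token, np.zeros(DIM))
    e = english_emb.get(token, np.zeros(DIM))
    return np.concatenate([h, e])  # 2*DIM vector fed to the downstream classifier

# A Hindi-English code-mixed utterance ("movie good not" in Romanized Hindi-English).
sentence = ["movie", "accha", "nahi"]
matrix = np.stack([code_mixed_vector(t) for t in sentence])
print(matrix.shape)  # (3, 200)
```

In such a scheme, each token carries whichever language view exists for it, so the downstream character-and-word-level classifier can handle tokens from either language without a shared bilingual training corpus.
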