Priyank Chhipa


2020

Unified Multi Intent Order and Slot Prediction using Selective Learning Propagation
Bharatram Natarajan | Priyank Chhipa | Kritika Yadav | Divya Verma Gogoi
Proceedings of the Workshop on Joint NLP Modelling for Conversational AI @ ICON 2020

Natural Language Understanding (NLU) involves two important tasks, namely Intent Determination (ID) and Slot Filling (SF). With recent advancements in both tasks, exploration of handling multiple intents in a single utterance is increasing, making NLU more conversation-based rather than command-execution-based. Prior work has tackled this task using large amounts of multi-intent training data, and much of the research has addressed the multi-intent problem alone. The multi-intent problem also poses the challenge of determining the order in which the detected intents should be executed. Hence, we propose a unified architecture that addresses multi-intent detection, associated slot detection, and the order of execution of the detected intents using only a low proportion of multi-intent data in the training corpus. The architecture consists of a Multi Word Importance relation propagator using a Multi-Head GRU and an Importance learner propagator module using self-attention. This architecture beats the state of the art by 2.58% on the MultiIntentData dataset.
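To make the joint setup concrete, here is a minimal sketch (not the authors' released code) of a model that shares a GRU-plus-self-attention encoder across three heads: multi-label intent detection, per-token slot filling, and intent execution order. All layer names, dimensions, and the pooling scheme are illustrative assumptions based on the abstract.

```python
# Hypothetical sketch of a joint multi-intent / slot / intent-order model.
# Sizes, head design, and pooling are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class JointMultiIntentModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128,
                 n_intents=10, n_slots=20, max_intents=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU propagates word-to-word relations across the utterance.
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Self-attention learns which words matter for each prediction head.
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # multi-label intents
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # per-token slot labels
        # Scores an execution position for up to max_intents detected intents.
        self.order_head = nn.Linear(2 * hidden, max_intents)

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (B, T, E)
        h, _ = self.gru(x)                         # (B, T, 2H)
        a, _ = self.attn(h, h, h)                  # self-attention over tokens
        pooled = a.mean(dim=1)                     # utterance-level representation
        intent_logits = self.intent_head(pooled)   # sigmoid -> multiple intents
        slot_logits = self.slot_head(a)            # softmax per token -> slots
        order_logits = self.order_head(pooled)     # execution-order scores
        return intent_logits, slot_logits, order_logits

model = JointMultiIntentModel(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 12))           # batch of 2 utterances
intents, slots, order = model(tokens)
print(intents.shape, slots.shape, order.shape)     # (2,10) (2,12,20) (2,3)
```

Sharing one encoder across the three heads is what lets the model learn the order and slot tasks even when multi-intent examples form only a small fraction of the training data.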

2019

Robust Text Classification using Sub-Word Information in Input Word Representations
Bhanu Prakash Mahanti | Priyank Chhipa | Vivek Sridhar | Vinuthkumar Prasan
Proceedings of the 16th International Conference on Natural Language Processing

Word-based deep learning approaches have recently been used with increasing success to solve Natural Language Processing problems such as Machine Translation, Language Modelling, and Text Classification. However, the performance of these word-based models is limited by the vocabulary of the training corpus. Alternate approaches using character-based models have been proposed to overcome the unseen-word problem, which arises for a variety of reasons. However, character-based models fail to capture the sequential relationship of words inherently present in text. Hence, there is scope for improvement by addressing the unseen-word problem while also maintaining the sequential context through word-based models. In this work, we propose a method where the input embedding vector incorporates sub-word information but is also suitable for use with models that successfully capture the sequential nature of text. We further attempt to establish that using such a word representation as input makes the model robust to unseen words, particularly those arising from tokenization and spelling errors, which are common in systems where a typing interface is one of the input modalities.
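A minimal sketch of the general idea, in the style of fastText-like subword embeddings: a word's vector is composed from its character n-gram vectors, so unseen or misspelled words still receive a meaningful representation that can feed a sequential model. The hashing scheme, n-gram range, and dimensions below are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical sketch: word vectors built from hashed character n-grams,
# so out-of-vocabulary and misspelled words still map near known forms.
import torch
import torch.nn as nn

class SubwordEmbedding(nn.Module):
    def __init__(self, n_buckets=100_000, emb_dim=128, ngram_range=(3, 6)):
        super().__init__()
        self.table = nn.Embedding(n_buckets, emb_dim)  # shared n-gram vectors
        self.n_buckets = n_buckets
        self.ngram_range = ngram_range

    def _ngram_ids(self, word):
        # Add boundary markers, then hash each character n-gram to a bucket.
        w = f"<{word}>"
        lo, hi = self.ngram_range
        grams = [w[i:i + n] for n in range(lo, hi + 1)
                 for i in range(len(w) - n + 1)]
        return torch.tensor([hash(g) % self.n_buckets for g in grams])

    def forward(self, words):
        # A word's embedding is the mean of its n-gram vectors, so spelling
        # variants ("recieve") land near the correct form ("receive").
        return torch.stack([self.table(self._ngram_ids(w)).mean(dim=0)
                            for w in words])

emb = SubwordEmbedding()
vecs = emb(["receive", "recieve", "receiving"])  # unseen variants still embed
print(vecs.shape)                                 # (3, 128)
```

Because the output is still one vector per word, these embeddings can be passed to any sequence model (e.g. an LSTM or GRU) that preserves word order, which is exactly what purely character-level classifiers give up.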