Proceedings of the 18th International Conference on Natural Language Processing (ICON)

Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya (Editors)


Anthology ID:
2021.icon-main
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
URL:
https://aclanthology.org/2021.icon-main
PDF:
https://aclanthology.org/2021.icon-main.pdf

pdf bib
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Sivaji Bandyopadhyay | Sobha Lalitha Devi | Pushpak Bhattacharyya

pdf bib
Constrained Decoding for Technical Term Retention in English-Hindi MT
Niyati Bafna | Martin Vastl | Ondřej Bojar

Technical terms may require special handling when the target audience is bilingual, depending on the cultural and educational norms of the society in question. In particular, certain translation scenarios may require “term retention”, i.e., preserving the source-language technical terms in the target-language output to produce a fluent and comprehensible code-switched sentence. We show that a standard transformer-based machine translation model can be adapted easily to perform this task with little or no damage to the general quality of its output. We present an English-to-Hindi model that is trained to obey a “retain” signal, i.e., it can perform the required code-mixing on a list of terms, possibly unseen, provided at runtime. We perform automatic evaluation using BLEU as well as F1 metrics on the list of retained terms; we also collect manual judgments on the quality of the output sentences.
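
As a concrete illustration of the decoding-time idea (a generic sketch, not the authors' exact system), the snippet below uses Hugging Face Transformers' lexically constrained beam search to force a list of source terms to appear in the output; the model name and terms are placeholders.

```python
# Sketch: lexically constrained decoding with Hugging Face Transformers.
# Illustrates forcing retention of source-language terms in the output;
# this is not the paper's trained "retain"-signal model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hi"  # any en->hi seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

source = "The algorithm computes the gradient of the loss function."
retain_terms = ["gradient", "loss function"]  # terms to keep in English

# Each retained term becomes a hard lexical constraint for beam search.
force_words_ids = [
    tokenizer(term, add_special_tokens=False).input_ids
    for term in retain_terms
]

inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(
    **inputs,
    force_words_ids=force_words_ids,
    num_beams=5,          # constrained decoding requires beam search
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```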

pdf bib
Named Entity-Factored Transformer for Proper Noun Translation
Kohichi Takai | Gen Hattori | Akio Yoneyama | Keiji Yasuda | Katsuhito Sudoh | Satoshi Nakamura

Subword-based neural machine translation decreases the number of out-of-vocabulary (OOV) words and maintains translation quality even when input sentences include OOV words. Subword-based NMT decomposes a word into shorter units to solve the OOV problem, but it does not work well for non-compositional proper nouns because the shorter units are constructed from words. Furthermore, omissions also occur in proper noun translation. The proposed method applies a Named Entity (NE) feature vector to the Factored Transformer for accurate proper noun translation. It uses two features: the input sentence in subword units and a feature obtained from Named Entity Recognition (NER). The proposed method mitigates the problem of translating non-compositional proper nouns that include a low-frequency word. According to the experiments, the proposed method using the best NE feature vector outperformed the baseline subword-based transformer model by more than 9.6 points in proper noun accuracy and 2.5 points in BLEU score.
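
A minimal sketch of the factored-input idea follows, assuming per-subword NE tags and a concatenation strategy; the dimensions and the combination choice are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a "factored" input layer: each subword carries an extra
# named-entity feature that is embedded and combined with the token
# embedding before entering the Transformer encoder. Dimensions and
# the concatenation are illustrative assumptions.
import torch
import torch.nn as nn

class NEFactoredEmbedding(nn.Module):
    def __init__(self, vocab_size, num_ne_tags, d_token=480, d_feat=32):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_token)
        self.ne_emb = nn.Embedding(num_ne_tags, d_feat)
        # concatenated size d_token + d_feat feeds the encoder

    def forward(self, token_ids, ne_tag_ids):
        # token_ids, ne_tag_ids: (batch, seq_len), aligned per subword
        return torch.cat(
            [self.tok_emb(token_ids), self.ne_emb(ne_tag_ids)], dim=-1)

emb = NEFactoredEmbedding(vocab_size=32000, num_ne_tags=9)
tokens = torch.randint(0, 32000, (2, 10))
ne_tags = torch.randint(0, 9, (2, 10))
print(emb(tokens, ne_tags).shape)  # torch.Size([2, 10, 512])
```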

pdf bib
Multi-Task Learning for Improving Gender Accuracy in Neural Machine Translation
Carlos Escolano | Graciela Ojeda | Christine Basta | Marta R. Costa-jussa

Machine Translation is highly impacted by social biases present in data sets, indicating that it reflects and amplifies stereotypes. In this work, we study mitigating gender bias by jointly learning the translation, the part-of-speech, and the gender of the target language, across target languages with different morphological complexity. This approach has shown improvements of up to 6.8 points in gender accuracy without significantly impacting the translation quality.
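
A hedged sketch of what a joint objective of this kind can look like: the translation loss plus auxiliary POS and gender prediction losses. The per-token formulation and the loss weights are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of a multi-task loss: translation + POS + gender prediction.
# Weights w_pos and w_gen are illustrative hyper-parameters.
import torch
import torch.nn.functional as F

def multitask_loss(trans_logits, trans_gold,
                   pos_logits, pos_gold,
                   gen_logits, gen_gold,
                   w_pos=0.3, w_gen=0.3):
    # logits: (batch, seq, classes); gold: (batch, seq)
    l_trans = F.cross_entropy(trans_logits.transpose(1, 2), trans_gold)
    l_pos = F.cross_entropy(pos_logits.transpose(1, 2), pos_gold)
    l_gen = F.cross_entropy(gen_logits.transpose(1, 2), gen_gold)
    return l_trans + w_pos * l_pos + w_gen * l_gen

B, L, V, P, G = 2, 7, 1000, 17, 3  # toy sizes: vocab, POS tags, genders
loss = multitask_loss(torch.randn(B, L, V), torch.randint(0, V, (B, L)),
                      torch.randn(B, L, P), torch.randint(0, P, (B, L)),
                      torch.randn(B, L, G), torch.randint(0, G, (B, L)))
print(loss)
```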

pdf bib
Small Batch Sizes Improve Training of Low-Resource Neural MT
Àlex Atrio | Andrei Popescu-Belis

We study the role of an essential hyper-parameter that governs the training of Transformers for neural machine translation in a low-resource setting: the batch size. Using theoretical insights and experimental evidence, we argue against the widespread belief that batch size should be set as large as allowed by the memory of the GPUs. We show that in a low-resource setting, a smaller batch size leads to higher scores in a shorter training time, and argue that this is due to better regularization of the gradients during training.

pdf bib
lakṣyārtha (Indicated Meaning) of Śabdavyāpāra (Function of a Word) framework from kāvyaśāstra (The Science of Literary Studies) in Samskṛtam : Its application to Literary Machine Translation and other NLP tasks
Sripathi Sripada | Anupama Ryali | Raghuram Sheshadri

A key challenge in Literary Machine Translation is that the meaning of a sentence can be different from the sum of the meanings of all the words it possesses. This poses the problem of requiring large amounts of consistently labelled training data across a variety of usages and languages. In this paper, we propose that we can economically train machine translation models to identify and paraphrase such sentences by leveraging the language-independent framework of Śabdavyāpāra (Function of a Word), from Literary Sciences in Saṃskṛtam, and its definition of lakṣyārtha (‘Indicated’ meaning). An Indicated meaning exists where there is incompatibility among the literal meanings of the words in a sentence (irrespective of language). The framework defines seven categories of Indicated meaning and their characteristics. As a pilot, we identified 300 such sentences from literary and regular usage, labelled them, trained a 2D Convolutional Neural Network to categorise a sentence based on the category of Indicated meaning, and finetuned a T5 to paraphrase them. We compared these paraphrased sentences with those paraphrased by a T5 finetuned on the Quora Paraphrase dataset of 400,000 sentence pairs. The T5 finetuned on the Indicated-meaning examples performed consistently better. Moreover, Google Translate translates these paraphrased sentences accurately and consistently across languages.

pdf bib
EduMT: Developing Machine Translation System for Educational Content in Indian Languages
Ramakrishna Appicharla | Asif Ekbal | Pushpak Bhattacharyya

In this paper, we explore various approaches to building Hindi-to-Bengali Neural Machine Translation (NMT) systems for the educational domain. Translation of educational content poses several challenges, such as the unavailability of gold-standard data for model building, extensive use of domain-specific terms, as well as the presence of noise in the form of spontaneous speech (as the corpus is prepared from subtitle data) and noise due to the process of corpus creation through back-translation. We create an educational parallel corpus by crawling lecture subtitles and translating them into Hindi and Bengali using Google Translate. We also create a clean parallel corpus by post-editing the synthetic corpus via annotation and crowd-sourcing. We build NMT systems on the prepared corpus with domain adaptation objectives. We also explore data augmentation methods by automatically cleaning the synthetic corpus and using it to further train the models. We experiment with combining the domain adaptation objective with multilingual NMT. We report BLEU and TER scores of all the models on a manually created Hindi-Bengali educational test set. Our experiments show that the multilingual domain adaptation model outperforms all the other models, achieving 34.8 BLEU and 0.466 TER scores.

pdf bib
Assessing Post-editing Effort in the English-Hindi Direction
Arafat Ahsan | Vandan Mujadia | Dipti Misra Sharma

We present findings from a first in-depth post-editing effort estimation study in the English-Hindi direction along multiple effort indicators. We conduct a controlled experiment involving professional translators, who complete assigned tasks alternately, in a translation from scratch and a post-edit condition. We find that post-editing reduces translation time (by 63%), utilizes fewer keystrokes (by 59%), and decreases the number of pauses (by 63%) when compared to translating from scratch. We further verify the quality of translations thus produced via a human evaluation task in which we do not detect any discernible quality differences.

pdf bib
An Experiment on Speech-to-Text Translation Systems for Manipuri to English on Low Resource Setting
Loitongbam Sanayai Meetei | Laishram Rahul | Alok Singh | Salam Michael Singh | Thoudam Doren Singh | Sivaji Bandyopadhyay

In this paper, we report the experimental findings of building Speech-to-Text translation systems for Manipuri-English in a low-resource setting, the first of its kind for this language pair. For this purpose, a new dataset consisting of a Manipuri-English parallel corpus along with the corresponding audio version of the Manipuri text is built. Based on this dataset, a benchmark evaluation is reported for Manipuri-English Speech-to-Text translation using two approaches: 1) a pipeline model consisting of ASR (Automatic Speech Recognition) and machine translation, and 2) an end-to-end Speech-to-Text translation model. Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) and Time Delay Neural Network (TDNN) acoustic models are used to build two different pipeline systems that share an MT system. Experimental results show that the TDNN model outperforms the GMM-HMM model significantly, by a margin of 2.53% WER. However, their Speech-to-Text translation scores differ by a small margin of 0.1 BLEU. Both pipeline translation models outperform the end-to-end translation model by a margin of 2.6 BLEU points.

pdf bib
On the Transferability of Massively Multilingual Pretrained Models in the Pretext of the Indo-Aryan and Tibeto-Burman Languages
Salam Michael Singh | Loitongbam Sanayai Meetei | Alok Singh | Thoudam Doren Singh | Sivaji Bandyopadhyay

In recent times, machine translation models have been shown to learn implicit bridging between language pairs never seen explicitly during training, showing that transfer learning helps languages with constrained resources. This work investigates low-resource machine translation via transfer learning from the multilingual pre-trained models mBART-50 and mT5-base, in the context of the Indo-Aryan (Assamese and Bengali) and Tibeto-Burman (Manipuri) languages, via finetuning as a downstream task. Assamese and Manipuri were absent from the pretraining of both the mBART-50 and mT5 models. However, the experimental results attest that finetuning from these pre-trained models surpasses a multilingual model trained from scratch.

pdf bib
Generating Slogans with Linguistic Features using Sequence-to-Sequence Transformer
Yeoun Yi | Hyopil Shin

Previous work generating slogans depended on templates or summaries of company descriptions, making it difficult to generate slogans with linguistic features. We present LexPOS, a sequence-to-sequence transformer model that generates slogans given phonetic and structural information. Our model searches for phonetically similar words given user keywords. Both the sound-alike words and user keywords become lexical constraints for generation. For structural repetition, we use POS constraints. Users can specify any repeated phrase structure by POS tags. Our model-generated slogans are more relevant to the original slogans than those of baseline models. They also show phonetic and structural repetition during inference, representative features of memorable slogans.

pdf bib
Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
Anmol Nayak | Hari Prasad Timmapathini

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical. It has applications in several use cases like Question-Answering, Natural Language Generation, and Neural Machine Translation, where grammatical correctness is crucial. In this paper we aim to understand the decision-making process of BERT (Devlin et al., 2019) in distinguishing between Linguistically Acceptable sentences (LA) and Linguistically Unacceptable sentences (LUA). We leverage Layer Integrated Gradients Attribution Scores (LIG) to explain the Linguistic Acceptability criteria that are learnt by BERT on the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2018) benchmark dataset. Our experiments on 5 categories of sentences lead to the following interesting findings: 1) LIG for LA are significantly smaller in comparison to LUA, 2) there are specific subtrees of the Constituency Parse Tree (CPT) for LA and LUA which contribute larger LIG, 3) across the different categories of sentences, we observed that around 88% to 100% of the correctly classified sentences had positive LIG, indicating a strong positive relationship to the prediction confidence of the model, and 4) around 43% of the misclassified sentences had negative LIG, which we believe can become correctly classified sentences if the LIG are parameterized in the loss function of the model.
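
One plausible way to compute Layer Integrated Gradients over BERT's embedding layer is via Captum, sketched below; the classifier checkpoint, baseline construction, and target class index are generic assumptions, not the paper's setup.

```python
# Sketch: Layer Integrated Gradients over BERT's embeddings with Captum.
# Produces one attribution score per subword for an assumed target class.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def forward(input_ids):
    return model(input_ids).logits

text = "The cat sat on the mat."
ids = tok(text, return_tensors="pt").input_ids
baseline = torch.full_like(ids, tok.pad_token_id)  # all-PAD reference

lig = LayerIntegratedGradients(forward, model.bert.embeddings)
# target=1: attribution w.r.t. the "acceptable" class (assumed index)
attrs = lig.attribute(inputs=ids, baselines=baseline, target=1)
token_scores = attrs.sum(dim=-1).squeeze(0)  # one score per subword
for t, s in zip(tok.convert_ids_to_tokens(ids[0]), token_scores):
    print(f"{t:>12s}  {s.item():+.4f}")
```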

pdf bib
The Importance of Context in Very Low Resource Language Modeling
Lukas Edman | Antonio Toral | Gertjan van Noord

This paper investigates very low resource language model pretraining, when less than 100 thousand sentences are available. We find that, in very low-resource scenarios, statistical n-gram language models outperform state-of-the-art neural models. Our experiments show that this is mainly due to the focus of the former on a local context. As such, we introduce three methods to improve a neural model’s performance in the low-resource setting, finding that limiting the model’s self-attention is the most effective one, improving on downstream tasks such as NLI and POS tagging by up to 5% for the languages we test on: English, Hindi, and Turkish.
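
A minimal sketch of one way to limit self-attention to a local window, in the spirit of the modification the paper finds most effective; the window size and the additive-mask conversion are illustrative choices.

```python
# Sketch: restrict self-attention to a local window so each token only
# attends to neighbours within `window` positions, mimicking the local
# focus of n-gram models. Window size is an assumption.
import torch

def local_attention_mask(seq_len, window=2):
    # True = position may be attended to
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window

mask = local_attention_mask(8, window=2)
# Convert to an additive mask for torch.nn.MultiheadAttention:
attn_mask = torch.zeros(8, 8).masked_fill(~mask, float("-inf"))
print(mask.int())
```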

pdf bib
Stylistic MR-to-Text Generation Using Pre-trained Language Models
Kunal Pagarey | Kanika Kalra | Abhay Garg | Saumajit Saha | Mayur Patidar | Shirish Karande

We explore the ability of the pre-trained language models BART (an encoder-decoder model) and GPT-2 and GPT-Neo (both decoder-only models) to generate sentences from structured MR tags as input. We observe the best results on several metrics for the YelpNLG and E2E datasets. Style-based implicit tags such as emotion, sentiment, length, etc., allow for controlled generation but are typically not present in MR. We present an analysis on YelpNLG showing that BART can express the content with stylistic variations in the structure of the sentence. Motivated by these results, we define a new task of emotional situation generation from various POS tags and emotion label values as MR, using the EmpatheticDialogues dataset, and report a baseline. Encoder-decoder attention analysis shows that BART learns different aspects of the MR at various layers and heads.

pdf bib
Deep Learning Based Approach For Detecting Suicidal Ideation in Hindi-English Code-Mixed Text: Baseline and Corpus
Kaustubh Agarwal | Bhavya Dhingra

Suicide rates are rising among the youth, and the high association with expressions of suicidal ideation on social media necessitates further research into models for detecting suicidal ideation in text, such as tweets, to enable mitigation. Existing research has proven the feasibility of detecting suicidal ideation on social media in a particular language. However, studies have shown that bilingual and multilingual speakers tend to use code-mixed text on social media, making the detection of suicidal ideation on code-mixed data crucial, even more so with the increasing number of bilingual and multilingual speakers. In this study, we create a code-mixed Hindi-English (Hinglish) dataset for the detection of suicidal ideation and evaluate the performance of traditional classifiers, deep learning architectures, and transformers on it. Among the tested classifier architectures, Indic BERT gave the best results, with an accuracy of 98.54%.

pdf bib
On the Universality of Deep Contextual Language Models
Shaily Bhatt | Poonam Goyal | Sandipan Dandapat | Monojit Choudhury | Sunayana Sitaram

Deep Contextual Language Models (LMs) like ELMO, BERT, and their successors dominate the landscape of Natural Language Processing due to their ability to scale across multiple tasks rapidly by pre-training a single model, followed by task-specific fine-tuning. Furthermore, multilingual versions of such models like XLM-R and mBERT have given promising results in zero-shot cross-lingual transfer, potentially enabling NLP applications in many under-served and under-resourced languages. Due to this initial success, pre-trained models are being used as ‘Universal Language Models’ as the starting point across diverse tasks, domains, and languages. This work explores the notion of ‘Universality’ by identifying seven dimensions across which a universal model should be able to scale, that is, perform equally well or reasonably well, to be useful across diverse settings. We outline the current theoretical and empirical results that support model performance across these dimensions, along with extensions that may help address some of their current limitations. Through this survey, we lay the foundation for understanding the capabilities and limitations of massive contextual language models and help discern research gaps and directions for future work to make these LMs inclusive and fair to diverse applications, users, and linguistic phenomena.

pdf bib
Towards Explainable Dialogue System: Explaining Intent Classification using Saliency Techniques
Ratnesh Joshi | Arindam Chatterjee | Asif Ekbal

Deep learning based methods have shown tremendous success in several Natural Language Processing (NLP) tasks. The recent trend of using deep learning based models for natural language tasks has produced incredible performance in several application areas. However, one major problem most of these models face is the lack of transparency, i.e. the actual decision process of the underlying model is not explainable. In this paper, we first address a fundamental problem of Natural Language Understanding (NLU), i.e. intent detection, using a Bi-directional Long Short Term Memory (BiLSTM) network. In order to determine the defining features that lead to a specific intent class, we use the Layerwise Relevance Propagation (LRP) algorithm. In the process, we conclude that the saliency method of eLRP (epsilon Layerwise Relevance Propagation) is a prominent technique for highlighting the important features of the input responsible for the classification, yielding significant insights into the inner workings of the black-box model, such as the reasons for misclassification.

pdf bib
Comparing in context: Improving cosine similarity measures with a metric tensor
Isa M. Apallius de Vos | Ghislaine L. van den Boogerd | Mara D. Fennema | Adriana Correia

Cosine similarity is a widely used measure of the relatedness of pre-trained word embeddings, trained on a language modeling goal. Datasets such as WordSim-353 and SimLex-999 rate how similar words are according to human annotators, and as such are often used to evaluate the performance of language models. Thus, any improvement on the word similarity task requires an improved word representation. In this paper, we propose instead the use of an extended cosine similarity measure to improve performance on that task, with gains in interpretability. We explore the hypothesis that this approach is particularly useful if the word-similarity pairs share the same context, for which distinct contextualized similarity measures can be learned. We first use the dataset of Richie et al. (2020) to learn contextualized metrics and compare the results with the baseline values obtained using the standard cosine similarity measure, which consistently shows improvement. We also train a contextualized similarity measure for both SimLex-999 and WordSim-353, comparing the results with the corresponding baselines, and using these datasets as independent test sets for the all-context similarity measure learned on the contextualized dataset, obtaining positive results for a number of tests.
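
The extended similarity can be written as sim_M(u, v) = u^T M v / sqrt((u^T M u)(v^T M v)) with a learned metric M. Below is a sketch under the common assumption M = W^T W (which keeps M positive semi-definite); the parameterization is illustrative, not necessarily the paper's.

```python
# Sketch: cosine similarity under a learned metric tensor M = W^T W,
# equivalent to mapping vectors through W and taking ordinary cosine.
import torch

def metric_cosine(u, v, W):
    Mu, Mv = W @ u, W @ v                       # map into metric space
    return (Mu @ Mv) / (Mu.norm() * Mv.norm())  # ordinary cosine there

d = 300
W = torch.randn(d, d) / d ** 0.5  # would be learned per context in practice
u, v = torch.randn(d), torch.randn(d)
print(metric_cosine(u, v, W))
print(metric_cosine(u, v, torch.eye(d)))  # reduces to standard cosine
```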

pdf bib
Context Matters in Semantically Controlled Language Generation for Task-oriented Dialogue Systems
Ye Liu | Wolfgang Maier | Wolfgang Minker | Stefan Ultes

This work combines information about the dialogue history encoded by a pre-trained model with a meaning representation of the current system utterance to realise contextual language generation in task-oriented dialogues. We utilise the pre-trained multi-context ConveRT model for context representation in a model trained from scratch, and leverage the immediately preceding user utterance for context generation in a model adapted from the pre-trained GPT-2. Both experiments with the MultiWOZ dataset show that contextual information encoded by a pre-trained model improves the performance of response generation in both automatic metrics and human evaluation. Our contextual generator enables a higher variety of generated responses that fit better to the ongoing dialogue. Analysis of the context size shows that a longer context does not automatically lead to better performance, but the immediately preceding user utterance plays an essential role in contextual generation. In addition, we propose a re-ranker for the GPT-based generation model. The experiments show that the response selected by the re-ranker yields a significant improvement on automatic metrics.

pdf bib
Data Augmentation for Mental Health Classification on Social Media
Gunjan Ansari | Muskan Garg | Chandni Saxena

Mental disorders of online users can be determined from their social media posts. The major challenge in this domain is obtaining ethical clearance for using user-generated text from social media platforms. Academic researchers have identified the problem of insufficient and unlabeled data for mental health classification. To handle this issue, we study the effect of data augmentation techniques on domain-specific user-generated text for mental health classification. Among the existing well-established data augmentation techniques, we identify Easy Data Augmentation (EDA), conditional BERT, and Back-Translation (BT) as potential techniques for generating additional text to improve the performance of classifiers. Further, three different classifiers, Random Forest (RF), Support Vector Machine (SVM) and Logistic Regression (LR), are employed for analyzing the impact of data augmentation on two publicly available social media datasets. The experimental results show significant improvements in classifiers’ performance when trained on the augmented data.
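
A small sketch of the back-translation (BT) step, round-tripping a post through a pivot language with off-the-shelf translation models; the model names and the pivot language are illustrative choices, not the paper's exact setup.

```python
# Sketch: back-translation augmentation via an English->German->English
# round trip to create paraphrases of training posts.
from transformers import pipeline

en_to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def back_translate(text):
    pivot = en_to_de(text, max_length=256)[0]["translation_text"]
    return de_to_en(pivot, max_length=256)[0]["translation_text"]

post = "I have been feeling hopeless and can't focus on anything."
augmented = back_translate(post)
print(augmented)  # a paraphrase to add to the training set
```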

pdf bib
VAE based Text Style Transfer with Pivot Words Enhancement Learning
Haoran Xu | Sixing Lu | Zhongkai Sun | Chengyuan Ma | Chenlei Guo

Text Style Transfer (TST) aims to alter the underlying style of the source text to another specific style while keeping the same content. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method which utilizes Variational AutoEncoder (VAE) and external style embeddings to learn semantics and style distribution jointly. Additionally, we introduce pivot words learning, which is applied to learn decisive words for a specific style and thereby further improve the overall performance of the style transfer. The proposed VT-STOWER can be scaled to different TST scenarios given very limited and non-parallel training data with a novel and flexible style strength control mechanism. Experiments demonstrate that the VT-STOWER outperforms the state-of-the-art on sentiment, formality, and code-switching TST tasks.

pdf bib
MRE : Multi Relationship Extractor for Persona based Empathetic Conversational Model
Bharatram Natarajan | Abhijit Nargund

Artificial intelligence (AI) has come a long way in meeting user requirements in many fields and domains. However, current AI systems do not generate human-like responses to user queries. Research in these areas has started gaining traction recently, with explorations of persona- or empathy-based response selection. But the combination of both parameters in an open domain has not been explored in detail by the research community. The current work highlights the effect of persona on empathetic response. This paper concentrates on improving the response selection model for the PEC dataset, which contains both persona information and empathetic responses. This is achieved using an enhanced multi-relationship extractor and phrase-based information for the selection of responses.

pdf bib
An End-to-End Speech Recognition for the Nepali Language
Sunil Regmi | Bal Krishna Bal

In this era of AI and Deep Learning, Speech Recognition has achieved fairly good levels of accuracy and is bound to change the way humans interact with computers, which today happens mostly through text. Most speech recognition systems for the Nepali language to date use conventional approaches which involve separately trained acoustic, pronunciation, and language model components. Creating a pronunciation lexicon from scratch and defining phoneme sets for the language requires expert knowledge and is, at the same time, time-consuming. In this work, we present an end-to-end ASR approach, which uses a joint CTC-attention-based encoder-decoder and Recurrent Neural Network based language modeling, eliminating the need to create a pronunciation lexicon from scratch. The ESPnet toolkit, which uses Kaldi-style data preparation, is the framework used for this work. The speech and transcription data used for this research are freely available on Open Speech and Language Resources (OpenSLR). We use about 159k transcribed speech samples to train the speech recognition model, which currently recognizes speech input with a CER of 10.3%.

pdf bib
Impact of Microphone position Measurement Error on Multi Channel Distant Speech Recognition & Intelligibility
Karan Nathwani | Sunil Kumar Kopparapu

It was shown in (Raikar et al., 2020) that measurement error in the microphone position affects the room impulse response (RIR), which in turn affects single-channel speech recognition. In this paper, we extend this to study the more complex and realistic scenario of multi-channel distant speech recognition. Specifically, we simulate m speakers in a given room with n microphones, speaking without overlap. The n-channel audio is then beamformed and passed through a speech-to-text (s2t) engine. We compare the s2t accuracy when the microphone locations are known exactly (ground truth) with the s2t accuracy when there is a measurement error in the location of the microphones. We report the performance of an end-to-end s2t system on beamformed input in terms of character error rate (CER), and also speech intelligibility and quality in terms of STOI and PESQ, respectively.

pdf bib
IE-CPS Lexicon: An Automatic Speech Recognition Oriented Indian-English Pronunciation Dictionary
Shelly Jain | Aditya Yadavalli | Ganesh Mirishkar | Chiranjeevi Yarra | Anil Kumar Vuppala

Indian English (IE), on the surface, seems quite similar to standard English. However, closer observation shows that it has actually been influenced by the surrounding vernacular languages at several levels from phonology to vocabulary and syntax. Due to this, automatic speech recognition (ASR) systems developed for American or British varieties of English result in poor performance on Indian English data. The most prominent feature of Indian English is the characteristic pronunciation of the speakers. The systems are unable to learn these acoustic variations while modelling and cannot parse the non-standard articulation of non-native speakers. For this purpose, we propose a new phone dictionary developed based on the Indian language Common Phone Set (CPS). The dictionary maps the phone set of American English to existing Indian phones based on perceptual similarity. This dictionary is named Indian English Common Phone Set (IE-CPS). Using this, we build an Indian English ASR system and compare its performance with an American English ASR system on speech data of both varieties of English. Our experiments on the IE-CPS show that it is quite effective at modelling the pronunciation of the average speaker of Indian English. ASR systems trained on Indian English data perform much better when modelled using IE-CPS, achieving a reduction in the word error rate (WER) of up to 3.95% when used in place of CMUdict. This shows the need for a different lexicon for Indian English.

pdf bib
An Investigation of Hybrid architectures for Low Resource Multilingual Speech Recognition system in Indian context
Ganesh Mirishkar | Aditya Yadavalli | Anil Kumar Vuppala

India is a land of language diversity. Approximately 2000 languages are spoken there, of which 23 are officially registered. Among these, very few have Automatic Speech Recognition (ASR) capability. The reason for this is the fact that building an ASR system requires thousands of hours of annotated speech data, a vast amount of text, and a lexicon that can span all the words in the language. At the same time, it is observed that Indian languages share a common phonetic base. In this work, we build a multilingual speech recognition system for low-resource languages by leveraging the shared phonetic space. Deep neural architectures play a vital role in improving the performance of low-resource ASR systems. The typical strategy used to train a multilingual acoustic model is to merge the various languages into a unified group. In this paper, the speech recognition system is built using six Indian languages, namely Gujarati, Hindi, Marathi, Odia, Tamil, and Telugu. Various state-of-the-art experiments were performed using different acoustic modeling and language modeling techniques.

pdf bib
Improve Sinhala Speech Recognition Through e2e LF-MMI Model
Buddhi Gamage | Randil Pushpananda | Thilini Nadungodage | Ruwan Weerasinghe

Automatic speech recognition (ASR) has experienced several paradigm shifts over the years, from template-based approaches and statistical modeling to the popular GMM-HMM approach and then to the deep learning hybrid model DNN-HMM. The latest shift is to end-to-end (e2e) DNN architectures. We present a study to build an e2e ASR system using state-of-the-art deep learning models to verify the applicability of e2e ASR models to the highly inflected and yet low-resource Sinhala language. We evaluated the e2e Lattice-Free Maximum Mutual Information (e2e LF-MMI) model against baseline statistical models, with 40 hours of training data. We used the same corpus for creating language models and the lexicon as in our previous study, which resulted in the best accuracy for the Sinhala language. We were able to achieve a word error rate (WER) of 28.55% for Sinhala, only slightly worse than the existing best hybrid model. Our model, however, is more context-independent and faster for Sinhala speech recognition, and thus more suitable for general-purpose speech-to-text translation.

pdf bib
Towards Multimodal Vision-Language Models Generating Non-Generic Text
Wes Robbins | Zanyar Zohourianshahzadi | Jugal Kalita

Vision-language models can assess visual context in an image and generate descriptive text. While the generated text may be accurate and syntactically correct, it is often overly general. To address this, recent work has used optical character recognition to supplement visual information with text extracted from an image. In this work, we contend that vision-language models can benefit from information that can be extracted from an image, but are not used by current models. We modify previous multimodal frameworks to accept relevant information from any number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset to facilitate captioning with person names. The dataset, Politicians and Athletes in Captions (PAC), consists of captioned images of well-known people in context. By fine-tuning pretrained models with this dataset, we demonstrate a model that can naturally integrate facial recognition tokens into generated text by training on limited data. For the PAC dataset, we provide a discussion on collection and baseline benchmark scores.

pdf bib
Image Caption Generation Framework for Assamese News using Attention Mechanism
Ringki Das | Thoudam Doren Singh

Automatic caption generation is an artificial intelligence problem that falls at the intersection of computer vision and natural language processing. Although significant works have been reported in image captioning, the contributions are limited to English and a few major languages with sufficient resources. No work on image captioning has been reported for a resource-constrained language like Assamese. With this inspiration, we propose an encoder-decoder based framework for image caption generation in the Assamese news domain. A pre-trained VGG-16 model is employed on the encoder side, and an LSTM with an attention mechanism on the decoder side, to generate the Assamese caption. We train the proposed model on an in-house dataset consisting of 10,000 images with a single caption for each image. We describe our experimental methodology and the quantitative and qualitative results which validate the effectiveness of our model for caption generation. The proposed model achieves a BLEU score of 12.1, outperforming the baseline model.

pdf bib
An Efficient Keyframes Selection Based Framework for Video Captioning
Alok Singh | Loitongbam Sanayai Meetei | Salam Michael Singh | Thoudam Doren Singh | Sivaji Bandyopadhyay

Describing a video is a challenging yet attractive task since it falls at the intersection of computer vision and natural language generation. Attention-based models have reported the best performance. However, all these models follow similar procedures, such as segmenting videos into chunks of frames or sampling frames at equal intervals for visual encoding. Segmenting the video into chunks or sampling frames at equal intervals encodes redundant visual information and incurs additional computational cost, since a video consists of sequences of similar frames and suffers from inescapable noise such as uneven illumination, occlusion, and motion effects. In this paper, a boundary-based keyframe selection approach for video description is proposed that allows the system to select a compact subset of keyframes to encode the visual information and generate a description for a video without much degradation. The proposed approach uses 3 to 4 frames per video and yields competitive performance on two benchmark datasets, MSVD and MSR-VTT (in both English and Hindi).

pdf bib
A Scaled Encoder Decoder Network for Image Captioning in Hindi
Santosh Kumar Mishra | Sriparna Saha | Pushpak Bhattacharyya

Image captioning is a prominent research area in computer vision and natural language processing, which automatically generates natural language descriptions for images. Most of the existing works have focused on developing models for image captioning in the English language. The current paper introduces a novel deep learning architecture based on encoder-decoder with an attention mechanism for image captioning in the Hindi language. For encoder, decoder, and attention, several deep learning-based architectures have been explored. Hindi, the fourth-most spoken language globally, is widely spoken in India and South Asia and is one of India’s official languages. The proposed encoder-decoder architecture utilizes scaling in convolution neural networks to achieve better accuracy than state-of-the-art image captioning methods in Hindi. The proposed method’s performance is compared with state-of-the-art methods in terms of BLEU scores and manual evaluation (in terms of adequacy and fluency). The obtained results demonstrate the efficacy of the proposed method.

pdf bib
Co-attention based Multimodal Factorized Bilinear Pooling for Internet Memes Analysis
Gitanjali Kumari | Amitava Das | Asif Ekbal

Social media platforms like Facebook, Twitter, and Instagram have a significant impact on several aspects of society. Memes are a new type of social media communication found on social platforms. Even though memes are primarily used to distribute humorous content, certain memes propagate hate speech through dark humor. It is critical to properly analyze and filter out these toxic memes from social media. But the implicit presence of sarcasm and humor makes analyzing memes more challenging. This paper proposes an end-to-end neural network architecture that learns the complex association between the text and image of a meme. For this purpose, we use the recent SemEval-2020 Task-8 multimodal dataset. We propose an end-to-end CNN-based deep neural network architecture with two sub-modules, viz. (i) a co-attention based sub-module and (ii) a Multimodal Factorized Bilinear Pooling (MFB) sub-module, to represent the textual and visual features of a meme in a more fine-grained way. We demonstrate the effectiveness of our proposed work through extensive experiments. The experimental results show that our proposed model achieves a 36.81% macro F1-score, outperforming all the baseline models.

pdf bib
How effective is incongruity? Implications for code-mixed sarcasm detection
Aditya Shah | Chandresh Maurya

The presence of sarcasm in conversational systems and on social media like chatbots, Facebook, Twitter, etc. poses several challenges for downstream NLP tasks. This is attributed to the fact that the intended meaning of a sarcastic text is contrary to what is expressed. Further, the use of code-mixed language to express sarcasm is increasing day by day. Current NLP techniques for code-mixed data have limited success due to the use of different lexicons, different syntax, and the scarcity of labeled corpora. To solve the joint problem of code-mixing and sarcasm detection, we propose the idea of capturing incongruity through sub-word level embeddings learned via fastText. Empirical results show that our proposed model achieves an F1-score on a code-mixed Hinglish dataset comparable to pretrained multilingual models while training 10x faster and using a lower memory footprint.
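
A brief sketch of learning sub-word level embeddings with fastText (here via Gensim); the toy corpus, hyper-parameters, and character n-gram range are placeholders, not the paper's configuration.

```python
# Sketch: fastText sub-word embeddings for code-mixed text. Character
# n-grams let even misspelled or unseen Hinglish tokens get vectors.
from gensim.models import FastText

corpus = [
    ["bahut", "funny", "hai", "yaar"],
    ["kya", "mast", "joke", "tha"],
]  # tokenized code-mixed Hinglish sentences (toy examples)
model = FastText(vector_size=100, window=3, min_count=1,
                 min_n=2, max_n=5)  # n-gram range captures sub-words
model.build_vocab(corpus)
model.train(corpus, total_examples=len(corpus), epochs=10)

vec = model.wv["funnyyy"]  # OOV spelling still gets a vector via n-grams
print(vec.shape)
```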

pdf bib
Contrastive Learning of Sentence Representations
Hefei Qiu | Wei Ding | Ping Chen

Learning sentence representations which capture rich semantic meanings has been crucial for many NLP tasks. Pre-trained language models such as BERT have achieved great success in NLP, but sentence embeddings extracted directly from these models do not perform well without fine-tuning. We propose Contrastive Learning of Sentence Representations (CLSR), a novel approach which applies contrastive learning to learn universal sentence representations on top of pre-trained language models. CLSR utilizes the semantic similarity of two sentences to construct positive instances for contrastive learning. Semantic information that has been captured by the pre-trained models is kept by extracting sentence embeddings from these models with a proper pooling strategy. An encoder followed by a linear projection takes these embeddings as inputs and is trained under a contrastive objective. To evaluate the performance of CLSR, we run experiments on a range of pre-trained language models and their variants on a series of Semantic Contextual Similarity tasks. Results show that CLSR gains significant performance improvements over existing SOTA language models.
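
A minimal sketch of a contrastive objective over paired sentence embeddings, using in-batch negatives (InfoNCE-style); the temperature and the way positives are produced here are assumptions, not necessarily CLSR's exact formulation.

```python
# Sketch: in-batch InfoNCE over sentence embeddings. Each row's positive
# is its paired semantically similar sentence; all other rows serve as
# negatives. Temperature is an illustrative hyper-parameter.
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.05):
    # anchors, positives: (batch, dim) sentence embeddings
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature          # (batch, batch) similarities
    labels = torch.arange(a.size(0))        # diagonal = positive pairs
    return F.cross_entropy(logits, labels)

emb = torch.randn(16, 768)
pos = emb + 0.01 * torch.randn(16, 768)     # stand-in for similar pairs
print(info_nce(emb, pos))
```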

pdf bib
Classifying Verses of the Quran using Doc2vec
Menwa Alshammeri | Eric Atwell | Mohammad Alsalka

The Quran, as a significant religious text, bears important spiritual and linguistic values. Understanding the text and inferring the underlying meanings entails semantic similarity analysis. We classified the verses of the Quran into 15 pre-defined categories, or concepts, based on the Qurany corpus, using Doc2Vec and Logistic Regression. Our classifier scored 70% accuracy and a 60% F1-score using the distributed bag-of-words architecture. We then measured how semantically similar the documents within the same category are to each other and used this information to evaluate our model. We calculated the mean difference and average similarity values for each category to indicate how well our model describes that category.
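
A compact sketch of the described pipeline, Doc2Vec in the distributed bag-of-words (PV-DBOW) setting followed by Logistic Regression; the toy verses, labels, and hyper-parameters are placeholders.

```python
# Sketch: Doc2Vec (PV-DBOW) vectors fed into Logistic Regression for
# verse-to-concept classification. Corpus and labels are toy stand-ins.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

verses = [("in the name of god the merciful", 3),
          ("indeed we have given you abundance", 7)]  # (text, concept id)
docs = [TaggedDocument(t.split(), [i]) for i, (t, _) in enumerate(verses)]

# dm=0 selects the distributed bag-of-words (DBOW) architecture
d2v = Doc2Vec(docs, dm=0, vector_size=100, min_count=1, epochs=40)

X = [d2v.dv[i] for i in range(len(verses))]
y = [label for _, label in verses]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([d2v.infer_vector("he is the most merciful".split())]))
```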

pdf bib
ABB-BERT: A BERT model for disambiguating abbreviations and contractions
Prateek Kacker | Andi Cupallari | Aswin Subramanian | Nimit Jain

Abbreviations and contractions are commonly found in text across different domains. For example, doctors’ notes contain many contractions that can be personalized based on their choices. Existing spelling correction models are not suitable for handling expansions because of the many reductions of characters in words. In this work, we propose ABB-BERT, a BERT-based model, which deals with ambiguous language containing abbreviations and contractions. ABB-BERT can rank expansions from thousands of options and is designed for scale. It is trained on Wikipedia text, and the algorithm allows it to be fine-tuned with little compute to get better performance for a domain or person. We are publicly releasing the training dataset for abbreviations and contractions derived from Wikipedia.

pdf bib
Training data reduction for multilingual Spoken Language Understanding systems
Anmol Bansal | Anjali Shenoy | Krishna Chaitanya Pappu | Kay Rottmann | Anurag Dwarakanath

Fine-tuning self-supervised pre-trained language models such as BERT has significantly improved state-of-the-art performance on natural language processing tasks. Similar fine-tuning setups can also be used in commercial large-scale Spoken Language Understanding (SLU) systems to perform intent classification and slot tagging on user queries. Fine-tuning such powerful models for use in commercial systems requires large amounts of training data and compute resources to achieve high performance. This paper is a study of different empirical methods for identifying training-data redundancies in the fine-tuning paradigm. In particular, we explore rule-based and semantic techniques to reduce data in a multilingual fine-tuning setting and report our results on key SLU metrics. Through our experiments, we show that we can achieve on-par or better performance when fine-tuning on a reduced data set compared to a model fine-tuned on the entire data set.

pdf bib
Leveraging Expectation Maximization for Identifying Claims in Low Resource Indian Languages
Rudra Dhar | Dipankar Das

Identification of checkable claims is one of the important prior tasks when dealing with the infinite amount of data streaming from the social web, and the task becomes compulsory when we analyze such data on behalf of a multilingual country like India, home to more than 1 billion people. In the present work, we describe our system for detecting check-worthy claim sentences in resource-scarce Indian languages (e.g., Bengali and Hindi). Firstly, we collected sentences from various sources in Bengali and Hindi and vectorized them with several NLP features. We manually labeled a small portion of them for check-worthy claims. In order to label the rest of the data in a semi-supervised fashion, we employed the Expectation Maximization (EM) algorithm tuned with a Multivariate Gaussian Mixture Model (GMM) to assign weak labels. The optimal number of Gaussians in this algorithm is determined by using Logistic Regression. Furthermore, we used different ratios of manually labeled data and weakly labeled data to train our various machine learning models. We tabulated and plotted the performance of the models along with the stepwise decrement in the proportion of manually labeled data. The experimental results were at par with our theoretical understanding, and we conclude that weak labeling of check-worthy claim sentences in low-resource languages with the EM algorithm has true potential.
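
A sketch of the weak-labeling step with scikit-learn's EM-fitted Gaussian mixture; the synthetic features, component count, and confidence filter are illustrative assumptions (the paper selects the number of Gaussians using Logistic Regression).

```python
# Sketch: fit a GMM via EM on feature vectors of unlabeled sentences,
# then treat confident component assignments as weak labels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(500, 20))   # stand-in NLP feature vectors

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X_unlabeled)
weak_labels = gmm.predict(X_unlabeled)       # cluster id per sentence
confidence = gmm.predict_proba(X_unlabeled).max(axis=1)

# Keep only confidently weak-labeled sentences for classifier training:
keep = confidence > 0.9
print(weak_labels[keep].shape)
```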

pdf bib
Performance of BERT on Persuasion for Good
Saumajit Saha | Kanika Kalra | Manasi Patwardhan | Shirish Karande

We consider the task of automatically classifying the persuasion strategy employed by an utterance in a dialog. We base our work on the PERSUASION-FOR-GOOD dataset, which is composed of conversations between crowdworkers trying to convince each other to make donations to a charity. Currently, the best known performance on this dataset, for classification of the persuader’s strategy, is not derived by employing pretrained language models like BERT. We observe that a straightforward fine-tuning of BERT does not provide significant performance gain. Nevertheless, non-uniform sampling to account for the class imbalance, together with a cost function enforcing a hierarchical probabilistic structure on the classes, provides an absolute improvement of 10.79% F1 over the previously reported results. On the same dataset, we replicate the framework for classifying the persuadee’s response.

pdf bib
Multi-Turn Target-Guided Topic Prediction with Monte Carlo Tree Search
Jingxuan Yang | Si Li | Jun Guo

This paper concerns the problem of topic prediction in target-guided conversation, which requires the system to proactively and naturally guide the topic thread of the conversation, ending up with achieving a designated target subject. Existing studies usually resolve the task with a sequence of single-turn topic predictions. A greedy decision is made at each turn, since it is impossible to explore the topics of future turns under the single-turn topic prediction mechanism. As a result, these methods often suffer from generating sub-optimal topic threads. In this paper, we formulate target-guided conversation as a problem of multi-turn topic prediction and model it under the framework of a Markov decision process (MDP). To alleviate the problem of generating sub-optimal topic threads, Monte Carlo tree search (MCTS) is employed to improve the topic prediction by conducting long-term planning. At online topic prediction, given a target and a start utterance, our proposed MM-TP (MCTS-enhanced MDP for Topic Prediction) first performs MCTS to enhance the policy for predicting the topic at each turn. Then, two retrieval models are respectively used to generate the responses of the agent and the user. Quantitative evaluation and a qualitative study showed that MM-TP significantly improved over state-of-the-art baselines.
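
At the core of MCTS is a selection rule such as UCT. A minimal sketch of UCT over candidate next topics follows; the tree bookkeeping and rollout policy are omitted, and the exploration constant is an assumption.

```python
# Sketch: UCT selection for MCTS-based topic planning. Picks the child
# topic maximizing exploitation (mean value) + exploration bonus.
import math

def uct_select(children, c=1.4):
    # children: list of dicts with visit count n and total value w
    total = sum(ch["n"] for ch in children)
    def uct(ch):
        if ch["n"] == 0:
            return float("inf")  # expand unvisited topics first
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(total) / ch["n"])
    return max(children, key=uct)

topics = [{"topic": "travel", "n": 10, "w": 6.0},
          {"topic": "food", "n": 3, "w": 2.5},
          {"topic": "sports", "n": 0, "w": 0.0}]
print(uct_select(topics)["topic"])  # 'sports' (unvisited)
```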

pdf bib
Resolving Prepositional Phrase Attachment Ambiguities with Contextualized Word Embeddings
Adwait Ratnaparkhi | Atul Kumar

This paper applies contextualized word embedding models to a long-standing problem in the natural language parsing community, namely prepositional phrase attachment. Following past formulations of this problem, we use data sets in which the attachment decision is both a binary-valued choice as well as a multi-valued choice. We present a deep learning architecture that fine-tunes the output of a contextualized word embedding model for the purpose of predicting attachment decisions. We present experiments on two commonly used datasets that outperform the previous best results, using only the original training data and the unannotated full sentence context.

pdf bib
Multi-Source Cross-Lingual Constituency Parsing
Hour Kaing | Chenchen Ding | Katsuhito Sudoh | Masao Utiyama | Eiichiro Sumita | Satoshi Nakamura

Pretrained multilingual language models have become a key part of cross-lingual transfer for many natural language processing tasks, even those without bilingual information. This work further investigates the cross-lingual transfer ability of these models for constituency parsing and focuses on multi-source transfer. Addressing structure and label set diversity problems, we propose the integration of typological features into the parsing model and treebank normalization. We train the model on eight languages with diverse structures and apply transfer parsing to an additional six low-resource languages. The experimental results show that treebank normalization is essential for cross-lingual transfer performance and that the typological features introduce further improvement. As a result, our approach improves the baseline F1 of multi-source transfer by 5 points on average.

pdf bib
Kannada Sandhi Generator for Lopa and Adesha Sandhi
Musica Supriya | Dinesh U. Acharya | Ashalatha Nayak | Arjuna S. R

Kannada is one of the major spoken classical languages of India. It is morphologically rich and highly agglutinative in nature. One of its important grammatical aspects is the concept of sandhi (euphonic change). There has not been a sandhi generator for Kannada, and this work aims at basic sandhi generation. In this paper, we present algorithms for lopa and Adesha sandhi using a rule-based approach. The proposed method generates the sandhied word and the corresponding sandhi without the help of a dictionary. This work is significant for agglutinative languages, especially Dravidian languages, and can be used to enhance the vocabulary for language-related tasks.

pdf bib
Data Augmentation for Low-Resource Named Entity Recognition Using Backtranslation
Usama Yaseen | Stefan Langer

State-of-the-art natural language processing systems rely on sizable training datasets to achieve high performance. The lack of such datasets in specialized low-resource domains leads to suboptimal performance. In this work, we adapt backtranslation to generate high-quality and linguistically diverse synthetic data for low-resource named entity recognition. We perform experiments on two datasets from the materials science (MaSciP) and biomedical (S800) domains. The empirical results demonstrate the effectiveness of our proposed augmentation strategy, particularly in the low-resource scenario.

pdf bib
Semantics of Spatio-Directional Geometric Terms of Indian Languages
Sukhada Sukhada | Paul Soma | Rahul Kumar | Karthik Puranik

This paper examines widely prevalent yet little-studied expressions in Indian languages which are known as geometrical terms because “they engage locations along the axes of the reference object”. In Hindi, these terms are andara (inside), bāhara (outside), āge (in front of), sāmane (in front of), pīche (back), ūpara (above/over), nīce (under/below), dāyeṃ (right), bāyeṃ (left), pāsa (near), and dūra (away/far). The way these terms have been interpreted by scholars of the Hindi language and handled in the Hindi Dependency treebank is misleading. This paper proposes an alternative analysis of these terms focusing on their triple functions (nominal, modifier, and relational) and presents abstract semantic representations of these terms following the proposed analysis. The semantic representation is explicit, unambiguous, and abstract, and therefore universal in nature. The correspondences of these terms in Bangla and Kannada are also identified. Disambiguation of geometric terms will facilitate parsing and machine translation, especially from Indian languages to English, because these geometric terms of Indian languages are translated variously into English depending on context.

pdf bib
Morpheme boundary Detection & Grammatical feature Prediction for Gujarati : Dataset & Model
Jatayu Baxi | Brijesh Bhatt

Developing Natural Language Processing resources for a low resource language is a challenging but essential task. In this paper, we present a Morphological Analyzer for Gujarati. We have used a Bi-Directional LSTM based approach to perform morpheme boundary detection and grammatical feature tagging. We have created a data set of Gujarati words with lemma and grammatical features. The Bi-LSTM based model of Morph Analyzer discussed in the paper handles the language morphology effectively without the knowledge of any hand-crafted suffix rules. To the best of our knowledge, this is the first dataset and morph analyzer model for the Gujarati language which performs both grammatical feature tagging and morpheme boundary detection tasks.

pdf bib
Auditing Keyword Queries Over Text Documents
Bharath Kumar Reddy Apparreddy | Sailaja Rajanala | Manish Singh

Data security and privacy is an issue of growing importance in the healthcare domain. In this paper, we present an auditing system to detect privacy violations for unstructured text documents such as healthcare records. Given a sensitive document, we present an anomaly detection algorithm that can find the top-k suspicious keyword queries that may have accessed the sensitive document. Since unstructured healthcare data, such as medical reports and query logs, are not easily available for public research, in this paper, we show how one can use the publicly available DBLP data to create an equivalent healthcare data and query log, which can then be used for experimental evaluation.

pdf bib
A Method to Disambiguate a Word by Using Restricted Boltzmann Machine
Nazreena Rahman | Bhogeswar Borah

Finding the correct sense of a word is of great importance in many textual-data-related applications such as information retrieval, text mining, and natural language processing. We propose a novel Word Sense Disambiguation (WSD) method that disambiguates a word according to its context. Based on a collocation extraction score, the proposed method extracts three different features for each sense definition of a target word. These features create a feature vector, and all the feature vectors together create a sense matrix. A Restricted Boltzmann Machine (RBM) is used to enhance the sense matrix. The proposed WSD method is compared with current state-of-the-art systems using the SENSEVAL and SemEval datasets. We also demonstrate a practical application of the proposed WSD method to query-based text summarization; for this evaluation, the method uses DUC datasets containing newswire articles. Finally, the experimental analysis shows that our proposed WSD method performs better than current systems.
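
A sketch of the RBM-enhancement step using scikit-learn's BernoulliRBM on a toy sense matrix; the feature values, RBM settings, and the simple activation-sum scoring are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: enhance a sense matrix with a Restricted Boltzmann Machine.
# Rows = candidate senses of the target word; columns = collocation-
# based features scaled to [0, 1], as BernoulliRBM expects.
import numpy as np
from sklearn.neural_network import BernoulliRBM

sense_matrix = np.array([[0.8, 0.1, 0.3],
                         [0.2, 0.7, 0.6],
                         [0.1, 0.2, 0.9]])
rbm = BernoulliRBM(n_components=3, learning_rate=0.05,
                   n_iter=200, random_state=0)
enhanced = rbm.fit_transform(sense_matrix)  # hidden-unit activations

# One simple way to score senses from the enhanced matrix (a heuristic
# for illustration; the paper's scoring is richer):
best_sense = enhanced.sum(axis=1).argmax()
print(enhanced.round(3), "-> predicted sense:", best_sense)
```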

pdf bib
Encoder Decoder Approach to Automated Essay Scoring For Deeper Semantic Analysis
Priyatam Naravajhula | Sreedeep Rayavarapu | Srujana Inturi

Descriptive or essay-type answers have always played a major role in education. They clearly capture a student’s grasp of knowledge and presentation skills. Manual essay scoring can be a daunting process for human evaluators; assessing descriptive answers presents a huge overhead owing to the limited number of evaluators and a disproportionate number of essays to be graded, leading to inefficient or inaccurate scoring. There has been a major paradigm shift from traditional classroom education to online education, engendered by the COVID-19 pandemic; it seems plausible to infer that future assessment of education shall be online, making an automatic essay scorer not only relevant but of paramount importance. We explore several neural architectures for the task of automated essay scoring. Results and experimental analysis show that our model, based on a recurrent encoder-decoder, provides a deeper semantic analysis and hence outperforms a strong baseline in terms of quadratic weighted kappa score.

pdf bib
Temporal Question Generation from History Text
Harsimran Bedi | Sangameshwar Patil | Girish Palshikar

Temporal analysis of history text has always held special significance for students, historians and the Social Sciences community in general. We observe from experimental data that the existing deep learning (DL) models ProphetNet and UniLM for the question generation (QG) task do not perform satisfactorily when used directly for temporal QG from history text. We propose linguistically motivated templates for generating temporal questions that probe different aspects of history text and show that finetuning the DL models using the temporal questions significantly improves their performance on the temporal QG task. Using automated metrics as well as human expert evaluation, we show that the performance of the DL models finetuned with the template-based questions is better than finetuning done with temporal questions from SQuAD.

pdf bib
CAWESumm: A Contextual and Anonymous Walk Embedding Based Extractive Summarization of Legal Bills
Deepali Jain | Malaya Dutta Borah | Anupam Biswas

Extractive summarization of lengthy legal documents requires an appropriate sentence scoring mechanism. This mechanism should capture both the local semantics of a sentence as well as its global document-level context. The search for an appropriate sentence embedding that can enable an effective scoring mechanism has been the focus of several research works in this domain. In this work, we propose an improved sentence embedding approach that combines a Legal-BERT-based local embedding of the sentence with an anonymous random walk-based embedding of the entire document. Such combined features help effectively capture the local and global information present in a sentence. The experimental results suggest that the proposed sentence embedding approach can be very beneficial for the appropriate representation of sentences in legal documents, improving the sentence scoring mechanism required for extractive summarization of these documents.

pdf bib
Multi-document Text Summarization using Semantic Word and Sentence Similarity: A Combined Approach
Rajendra Roul

The exponential growth in the number of text documents produced daily on the web poses several difficulties for people responsible for collecting, organizing, and searching textual content related to a particular topic. Automatic text summarization works well in this direction, as it can review many documents and pull out the relevant information. However, the limitations associated with automatic text summarization need to be addressed by finding efficient workarounds. Although current research works have focused on this direction, they still face many challenges. This paper proposes a combined semantic-based word and sentence similarity approach to summarize a corpus of text documents. To arrange the sentences in the final summary, the KL-divergence technique is used. The experimental work is conducted using DUC datasets, and the obtained results are promising.
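One common way to realize KL-divergence-based selection (in the spirit of KLSum, not necessarily the authors' exact formulation) is a greedy loop that adds whichever sentence keeps the summary's unigram distribution closest to the document's:

```python
# Sketch: greedy KL-divergence sentence selection. At each step, add the
# sentence minimizing KL(document distribution || summary distribution).
from collections import Counter
import math

def unigram_dist(words, vocab, smooth=1e-6):
    c = Counter(words)
    total = sum(c.values()) + smooth * len(vocab)
    return {w: (c[w] + smooth) / total for w in vocab}

def kl(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

doc_sents = [s.split() for s in [
    "the court ruled on the appeal",
    "the weather was pleasant",
    "the appeal was dismissed by the court"]]
vocab = {w for s in doc_sents for w in s}
doc_p = unigram_dist([w for s in doc_sents for w in s], vocab)

summary, remaining = [], list(doc_sents)
while remaining and len(summary) < 2:
    best = min(remaining,
               key=lambda s: kl(doc_p, unigram_dist(
                   [w for t in summary for w in t] + s, vocab)))
    summary.append(best)
    remaining.remove(best)
print([" ".join(s) for s in summary])
```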

pdf bib
#covid is war and #vaccine is weapon? COVID-19 metaphors in India
Mohammed Khaliq | Rohan Joseph | Sunny Rai

Metaphors are creative cognitive constructs that are employed in everyday conversation to describe abstract concepts and feelings. Prevalent conceptual metaphors such as WAR, MONSTER, and DARKNESS in COVID-19 online discourse sparked a multi-faceted debate over their efficacy in communication, their resultant psychological impact on listeners, and their appropriateness in social discourse. In this work, we investigate metaphors used in discussions around COVID-19 on Indian Twitter. We observe subtle transitions in metaphorical mappings as the pandemic progressed. Our experiments, however, did not indicate any affective impact of WAR metaphors on the COVID-19 discourse.

pdf bib
Studies Towards Language Independent Fake News Detection
Soumayan Majumder | Dipankar Das

Fake news is currently a trending topic and causes problems for many people and organizations. We study fake news detection in the COVID-19 domain across 7 languages, collecting our data from Twitter. We build two types of models: one language-dependent and one language-independent. The language-independent model gives better results for English, Hindi and Bengali, while the results for European languages such as German, Italian, French and Spanish are comparable across both the language-dependent and language-independent models.

pdf bib
Wikipedia Current Events Summarization using Particle Swarm Optimization
Santosh Kumar Mishra | Darsh Kaushik | Sriparna Saha | Pushpak Bhattacharyya

This paper proposes a method to summarize news events from multiple sources. We pose event summarization as a clustering-based optimization problem and solve it using particle swarm optimization. The proposed methodology uses the search capability of particle swarm optimization, detecting the number of clusters automatically. Experiments are conducted with the Wikipedia Current Events Portal dataset and evaluated using the well-known ROUGE-1, ROUGE-2, and ROUGE-L scores. The obtained results show the efficacy of the proposed methodology over the state-of-the-art methods. It attained improvements of 33.42%, 81.75%, and 57.58% in terms of ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
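For readers unfamiliar with PSO, a minimal sketch of the velocity/position update over candidate cluster centroids follows; the within-cluster-distance fitness and fixed cluster count are illustrative stand-ins, since the paper's objective and its automatic detection of the number of clusters are not reproduced here.

```python
# Sketch: particle swarm optimization over cluster centroids. Each particle
# holds k candidate centroids; fitness is the within-cluster sum of
# distances (a stand-in objective, with k fixed for simplicity).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                 # dummy sentence embeddings
k, n_particles, dim = 3, 10, X.shape[1]

pos = rng.normal(size=(n_particles, k, dim))
vel = np.zeros_like(pos)

def fitness(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.min(axis=1).sum()               # assign each point to nearest centroid

pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and attraction weights
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best within-cluster distance:", pbest_f.min())
```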

pdf bib
Automated Evidence Collection for Fake News Detection
Mrinal Rawat | Diptesh Kanojia

Fake news, misinformation, and unverifiable facts on social media platforms propagate disharmony and affect society, especially when dealing with an epidemic like COVID-19. The task of Fake News Detection aims to tackle the effects of such misinformation by classifying news items as fake or real. In this paper, we propose a novel approach that improves over the current automatic fake news detection approaches by automatically gathering evidence for each claim. Our approach extracts supporting evidence from the web articles and then selects appropriate text to be treated as evidence sets. We use a pre-trained summarizer on these evidence sets and then use the extracted summary as supporting evidence to aid the classification task. Our experiments, using both machine learning and deep learning-based methods, help perform an extensive evaluation of our approach. The results show that our approach outperforms the state-of-the-art methods in fake news detection to achieve an F1-score of 99.25 over the dataset provided for the CONSTRAINT-2021 Shared Task. We also release the augmented dataset, our code and models for any further research.

pdf bib
Prediction of Video Game Development Problems Based on Postmortems using Different Word Embedding Techniques
Anirudh A | Aman RAJ Singh | Anjali Goyal | Lov Kumar | N L Bhanu Murthy

The interactive entertainment industry has been actively involved in the development, marketing and sale of video games over the past decade. The increasing interest in video games has led to an increase in video game development techniques and methods. It has emerged as an immensely large sector, now larger than the movie and music industries combined. The postmortem of a game outlines and analyzes the game’s history, team goals, what went right, and what went wrong with the game. Despite its significance, there is little understanding of the challenges encountered by programmers; postmortems are not properly maintained and are informally written, leading to a lack of trustworthiness. In this study, we perform a systematic analysis of the different problems faced in video game development. The need for automation and ML techniques arises because they could help game developers easily identify the exact problem from its description, and hence more easily find a solution. This work could also help developers identify frequent mistakes that could be avoided, and provides researchers a starting point for further considering game development in the context of software engineering.

pdf bib
Multi-task pre-finetuning for zero-shot cross lingual transfer
Moukthika Yerramilli | Pritam Varma | Anurag Dwarakanath

Building machine learning models for low resource languages is extremely challenging due to the lack of available training data (either un-annotated or annotated). To support such scenarios, zero-shot cross lingual transfer is used, where the machine learning model is trained on a resource-rich language and is directly tested on the resource-poor language. In this paper, we present a technique which improves the performance of zero-shot cross lingual transfer. Our method performs multi-task pre-finetuning on a resource-rich language using a multilingual pre-trained model. The pre-finetuned model is then tested in a zero-shot manner on the resource-poor languages. We test the performance of our method on 8 languages and for two tasks, namely, Intent Classification (IC) and Named Entity Recognition (NER), using the MultiAtis++ dataset. The results show that our method improves IC performance in 7 out of 8 languages and NER performance in 4 languages. Our method also leads to faster convergence during finetuning. The usage of pre-finetuning demonstrates a data-efficient way of supporting new languages and geographies across the world.

pdf bib
Sentiment Analysis For Bengali Using Transformer Based Models
Anirban Bhowmick | Abhik Jana

Sentiment analysis is one of the key Natural Language Processing (NLP) tasks that has been attempted extensively by researchers for resource-rich languages like English. But for low resource languages like Bengali, very few attempts have been made, for various reasons including the lack of corpora to train machine learning models and the lack of gold standard datasets for evaluation. However, with the emergence of transformer models pre-trained in several languages, researchers are showing interest in investigating the applicability of these models to several NLP tasks, especially for low resource languages. In this paper, we investigate the usefulness of two pre-trained transformer models, namely multilingual BERT and XLM-RoBERTa (with fine-tuning), for sentiment analysis for the Bengali language. We use three Bengali datasets for evaluation and produce state-of-the-art performance, even reaching a maximum of 95% accuracy for a two-class sentiment classification task. We believe this work can serve as a good benchmark as far as sentiment analysis for the Bengali language is concerned.
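Fine-tuning such a model typically follows the standard Hugging Face recipe; a minimal sketch with XLM-RoBERTa and two placeholder Bengali examples (not the paper's datasets or hyperparameters) might look like this:

```python
# Sketch: fine-tuning XLM-RoBERTa for two-class Bengali sentiment with the
# standard Hugging Face Trainer API. The two-example dataset is a placeholder.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

texts = ["খুব ভালো সিনেমা", "একদম ভালো লাগেনি"]   # placeholder examples
labels = [1, 0]
enc = tok(texts, truncation=True, padding=True)

class DS(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=DS())
trainer.train()
```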

pdf bib
IndicFed: A Federated Approach for Sentiment Analysis in Indic Languages
Jash Mehta | Deep Gandhi | Naitik Rathod | Sudhir Bagul

The task of sentiment analysis has been extensively studied in high-resource languages. Even though sentiment analysis has been studied for some resource-constrained languages, the corpora and datasets available in other low resource languages are scarce and fragmented. This prevents further research on resource-constrained languages and also inhibits model performance for these languages. Privacy concerns may also be raised when aggregating some datasets for training central models. Our work tries to steer the research of sentiment analysis for resource-constrained languages in the direction of Federated Learning. We conduct various experiments to compare server-based and federated approaches for 4 Indic languages - Marathi, Hindi, Bengali, and Telugu. Specifically, we show that Federated Learning, a privacy-preserving approach, surpasses a traditional server-trained LSTM model and exhibits performance comparable to other server-side transformer models.
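A minimal sketch of one Federated Averaging (FedAvg) round, the canonical federated training scheme: each client trains locally and the server averages their parameters. The linear model and random data below are placeholders for the paper's LSTM and sentiment corpora.

```python
# Sketch: one FedAvg round -- clients train locally on private data, the
# server averages their weights element-wise. Model and data are placeholders.
import copy
import torch
import torch.nn as nn

def local_step(model, x, y):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

global_model = nn.Linear(8, 2)                     # stand-in for the LSTM
clients = [(torch.randn(16, 8), torch.randint(0, 2, (16,))) for _ in range(4)]

# Each client starts from the current global weights and trains locally.
states = [local_step(copy.deepcopy(global_model), x, y) for x, y in clients]

# Server: element-wise average of the clients' parameters.
avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
global_model.load_state_dict(avg)
```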

pdf bib
An Efficient BERT Based Approach to Detect Aggression and Misogyny
Sandip Dutta | Utso Majumder | Sudip Naskar

Social media is bustling with ever-growing cases of trolling, aggression and hate. A huge amount of social media data is generated each day, which is insurmountable for manual inspection. In this work, we propose an efficient and fast method to detect aggression and misogyny in social media texts. We use data from the Second Workshop on Trolling, Aggression and Cyber Bullying for our task. We employ a BERT based model to augment our data. Next, we employ Tf-Idf and XGBoost for detecting aggression and misogyny. Our model achieves 0.73 and 0.85 Weighted F1 Scores on the 2 prediction tasks, which are comparable to the state of the art. However, the training time, model size and resource requirements of our model are drastically lower compared to the state-of-the-art models, making our model useful for fast inference.
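The Tf-Idf-plus-XGBoost stage can be sketched in a few lines with scikit-learn and the xgboost package; the two-example corpus and the hyperparameters below are placeholders, not the paper's configuration.

```python
# Sketch: Tf-Idf features feeding an XGBoost classifier for aggression /
# misogyny detection. The tiny corpus is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

texts = ["you are wonderful", "go away, nobody wants you here"]  # placeholders
labels = [0, 1]                      # 0 = non-aggressive, 1 = aggressive

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss"))
clf.fit(texts, labels)
print(clf.predict(["nobody wants you"]))
```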

pdf bib
How vulnerable are you? A Novel Computational Psycholinguistic Analysis for Phishing Influence Detection
Anik Chatterjee | Sagnik Basu

This document describes our work and progress on phishing detection via the identification of influential sentences. As the world becomes increasingly connected, most transactions and offers happen online, making people highly vulnerable to security breaches through phishing attacks, or to being persuaded through influential texts on social media sites. We analyzed influential and non-influential sentences and populated our dataset with them. We propose a computational model implementing Cialdini’s principles of persuasion, and we achieve state-of-the-art accuracy with our model. Our approach is language-independent and domain-independent, and it is applicable to any problem where persuasion detection is important. Our dataset and proposed computational psycholinguistic approach will motivate researchers to work further in the area of persuasion detection.

pdf bib
Aspect Based Sentiment Analysis Using Spectral Temporal Graph Neural Network
Abir Chakraborty

The objective of Aspect Based Sentiment Analysis is to capture the sentiment of reviewers associated with different aspects. However, the complexity of review sentences, the presence of double negation and the domain-specific usage of words make it difficult to predict sentiment accurately, and overall it is a challenging natural language understanding task. While recurrent neural networks, attention mechanisms and, more recently, graph attention based models are prevalent, in this paper we propose a graph Fourier transform based network with features created in the spectral domain. While this approach has found considerable success in the forecasting domain, it has not been explored earlier for any natural language processing task. The method relies on creating and learning an underlying graph from the raw data and thereby using the adjacency matrix to shift to the graph Fourier domain. Subsequently, the Fourier transform is used to switch to the frequency (spectral) domain, where new features are created. This series of transformations proved extremely effective in learning the right representation, as our model achieves the best result on both SemEval-2014 datasets, i.e., the “Laptop” and “Restaurants” domains. Our proposed model also achieves competitive results on two other recently proposed datasets from the e-commerce domain.
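The core spectral step can be illustrated with plain NumPy: build a graph Laplacian, take its eigenvectors as the Fourier basis, and transform node features into (and back from) the spectral domain. The toy adjacency matrix is an assumption; the paper learns its graph from the data.

```python
# Sketch: a graph Fourier transform of node features. The Fourier basis U
# comes from the eigendecomposition of the graph Laplacian; spectral
# features are U^T X.
import numpy as np

A = np.array([[0, 1, 1, 0],        # toy adjacency (e.g., a learned word graph)
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                           # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)      # Fourier basis = Laplacian eigenvectors
X = np.random.default_rng(0).normal(size=(4, 16))   # node features

X_hat = U.T @ X                     # graph Fourier transform (spectral domain)
X_rec = U @ X_hat                   # inverse transform, back to vertex domain
print(np.allclose(X, X_rec))        # True: the transform is orthogonal
```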

pdf bib
Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models
Abigail Swenor | Jugal Kalita

Attacks on deep learning models are often difficult to identify and therefore are difficult to protect against. This problem is exacerbated by the use of public datasets that typically are not manually inspected before use. In this paper, we offer a solution to this vulnerability by using, during testing, random perturbations such as spelling correction if necessary, substitution by a random synonym, or simply dropping the word. These perturbations are applied to random words in random sentences to defend NLP models against adversarial attacks. Our Random Perturbations Defense and Increased Randomness Defense methods are successful in returning attacked models to accuracy similar to that of the models before attacks. The original accuracy of the model used in this work is 80% for sentiment classification. After undergoing attacks, the accuracy drops to between 0% and 44%. After applying our defense methods, the accuracy of the model is returned to the original accuracy within statistical significance.
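A minimal sketch of this style of test-time defense: perturb random words (drop or synonym-substitute), classify each perturbed copy, and take a majority vote. The `classify` stub and the tiny synonym table are placeholders for a real model and a synonym resource such as WordNet.

```python
# Sketch: defend a classifier at test time with random word perturbations
# and a majority vote over several perturbed copies of the input.
import random
from collections import Counter

SYNONYMS = {"awful": ["terrible", "dreadful"], "great": ["good", "fine"]}

def perturb(sentence, p=0.2, rng=random):
    out = []
    for w in sentence.split():
        if rng.random() < p:
            if rng.choice(["drop", "synonym"]) == "drop":
                continue                      # drop the word entirely
            out.append(rng.choice(SYNONYMS.get(w, [w])))
        else:
            out.append(w)
    return " ".join(out)

def classify(sentence):                       # placeholder for the real model
    return int("great" in sentence)

def defended_predict(sentence, n_votes=11):
    votes = [classify(perturb(sentence)) for _ in range(n_votes)]
    return Counter(votes).most_common(1)[0][0]

print(defended_predict("the movie was great but the ending was awful"))
```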

pdf bib
Retrofitting of Pre-trained Emotion Words with VAD-dimensions and the Plutchik Emotions
Manasi Kulkarni | Pushpak Bhattacharyya

Word representations are based on the distributional hypothesis, according to which words that occur in similar contexts tend to have similar meanings and appear closer in vector space. Existing pre-trained embedding models, however, poorly capture emotional distinctions: for example, the emotionally dissimilar words “joy” and “sadness” have a high cosine similarity. To create our VAD-Emotion embeddings, we modify pre-trained word embeddings with emotion information. This is a lexicon-based approach that uses the Valence, Arousal and Dominance (VAD) values and the Plutchik emotions to incorporate emotion information into pre-trained word embeddings through post-training processing. This brings emotionally similar words nearer and pushes emotionally dissimilar words away from each other in the proposed vector space. We demonstrate the performance of the proposed embeddings through a downstream NLP task, Emotion Recognition.
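The post-training step can be sketched as a retrofitting update in the style of Faruqui et al. (2015), pulling each vector toward its emotion-lexicon neighbours; the toy vocabulary, neighbour graph and hyperparameters below are illustrative, not the paper's exact procedure.

```python
# Sketch: retrofitting -- each vector is iteratively pulled toward the
# average of its emotion-lexicon neighbours while staying near its
# original position. Toy vectors and neighbour graph are placeholders.
import numpy as np

vecs = {"joy": np.array([1.0, 0.0]),
        "delight": np.array([0.9, 0.2]),
        "sadness": np.array([0.8, 0.1])}       # deliberately close to "joy"
neighbours = {"joy": ["delight"], "delight": ["joy"], "sadness": []}

retro = {w: v.copy() for w, v in vecs.items()}
alpha, beta, n_iters = 1.0, 1.0, 10             # original-vs-neighbour weights
for _ in range(n_iters):
    for w, nbrs in neighbours.items():
        if not nbrs:
            continue                            # no lexicon evidence: unchanged
        nbr_sum = sum(retro[n] for n in nbrs)
        retro[w] = (alpha * vecs[w] + beta * nbr_sum) / (alpha + beta * len(nbrs))

print(retro["joy"], retro["sadness"])           # "joy" moved toward "delight"
```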

pdf bib
Evaluating Pretrained Transformer Models for Entity Linking in Task-Oriented Dialog
Sai Muralidhar Jayanthi | Varsha Embar | Karthik Raghunathan

The wide applicability of pretrained transformer models (PTMs) for natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored. To this end, we evaluate different PTMs through the lens of unsupervised Entity Linking in task-oriented dialog across 5 characteristics – syntactic, semantic, short-forms, numeric and phonetic. Our results demonstrate that several of the PTMs produce sub-par results when compared to traditional techniques, albeit competitive to other neural baselines. We find that some of their shortcomings can be addressed by using PTMs fine-tuned for text-similarity tasks, which show an improved ability to comprehend semantic and syntactic correspondences, as well as some improvement for short-forms, numeric and phonetic variations in entity mentions. We perform qualitative analysis to understand nuances in their predictions and discuss scope for further improvements.

pdf bib
Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages
Hariom Pandya | Bhavik Ardeshna | Brijesh Bhatt

Transformer based architectures have shown notable results on many downstream tasks, including question answering. The scarcity of data, on the other hand, impedes legitimate performance for low-resource languages. In this paper, we investigate the applicability of pre-trained multilingual models to improve the performance of question answering in low-resource languages. We tested four combinations of language and task adapters using multilingual transformer architectures on the seven languages of the MLQA dataset. Additionally, we propose zero-shot transfer learning of low-resource question answering using language and task adapters. We observed that stacking the language and the task adapters improves the multilingual transformer models’ performance significantly for low-resource languages. Our code and trained models are available at: https://github.com/CALEDIPQALL/

pdf bib
eaVQA: An Experimental Analysis on Visual Question Answering Models
Souvik Chowdhury | Badal Soni

Visual Question Answering (VQA) has recently become a popular research area. The VQA problem lies at the boundary of the Computer Vision and Natural Language Processing research domains. In VQA research, datasets are a very important aspect because of their variety in image types, i.e. natural and synthetic, and in question-answer sources, i.e. human-originated or computer-generated question answers. Various details about each dataset are given in this paper, which can help future researchers to a great extent. In this paper, we discuss and compare the experimental performance of the Stacked Attention Network Model (SANM) and bidirectional LSTM and MUTAN based fusion models. As per the experimental results, MUTAN accuracy and loss are 29% and 3.5 respectively. The SANM model gives 55% accuracy and a loss of 2.2, whereas the VQA model gives 59% accuracy and a loss of 1.9.

pdf bib
Deep Embedding of Conversation Segments
Abir Chakraborty | Anirban Majumder

We introduce a novel conversation embedding by extending the Bidirectional Encoder Representations from Transformers (BERT) framework. Specifically, information related to “turn” and “role” that is unique to conversations is added to the word tokens, and the next sentence prediction task predicts a segment of a conversation possibly spanning multiple roles and turns. It is observed that the addition of role and turn substantially increases the next sentence prediction accuracy. Conversation embeddings obtained in this fashion are applied to (a) conversation clustering, (b) conversation classification and (c) as context for automated conversation generation on new datasets (unseen by the pre-training model). We found that clustering accuracy is greatly improved if the embeddings are used as features, as opposed to conventional tf-idf based features that do not take role or turn information into account. On the classification task, a fine-tuned model on the conversation embedding achieves accuracy comparable to an optimized linear SVM model on tf-idf based features. Finally, we present a way of capturing variable-length context in sequence-to-sequence models by utilizing this conversation embedding and show that the BLEU score improves over a vanilla sequence-to-sequence model without context.
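One plausible way to realize the described augmentation, sketched below, is to add learned role and turn embeddings to the token and position embeddings, the same way BERT adds segment embeddings; the sizes and role/turn vocabularies are assumptions, not the paper's configuration.

```python
# Sketch: augmenting token embeddings with "role" and "turn" embeddings,
# summed like BERT's segment embeddings before layer normalization.
import torch
import torch.nn as nn

class ConversationEmbedding(nn.Module):
    def __init__(self, vocab=30_000, dim=256, max_pos=512, n_roles=3, max_turns=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_pos, dim)
        self.role = nn.Embedding(n_roles, dim)    # e.g. agent / customer / system
        self.turn = nn.Embedding(max_turns, dim)  # index of the turn in the dialog
        self.norm = nn.LayerNorm(dim)

    def forward(self, token_ids, role_ids, turn_ids):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        e = (self.tok(token_ids) + self.pos(positions)
             + self.role(role_ids) + self.turn(turn_ids))
        return self.norm(e)

emb = ConversationEmbedding()
tokens = torch.randint(0, 30_000, (2, 10))
roles = torch.zeros(2, 10, dtype=torch.long)      # all tokens from role 0
turns = torch.ones(2, 10, dtype=torch.long)       # all tokens from turn 1
print(emb(tokens, roles, turns).shape)            # (2, 10, 256)
```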

pdf bib
DialogActs based Search and Retrieval for Response Generation in Conversation Systems
Nidhi Arora | Rashmi Prasad | Srinivas Bangalore

Designing robust conversation systems with great customer experience requires a team of design experts to think of all probable ways a customer can interact with the system and then author responses for each use case individually. The responses are authored from scratch for each new client and application, even though similar responses have been created in the past. This happens largely because the responses are encoded using a domain-specific set of intents and entities. In this paper, we present preliminary work to define a dialog act schema to merge and map responses from different domains and applications using a consistent domain-independent representation. These representations are stored and maintained using an Elasticsearch system to facilitate generation of responses through a search and retrieval process. We experimented with generating different surface realizations for a response given a desired information state of the dialog.

pdf bib
An On-device Deep-Learning Approach for Attribute Extraction from Heterogeneous Unstructured Text
Mahesh Gorijala | Aniruddha Bala | Pinaki Bhaskar | Krishnaditya | Vikram Mupparthi

Mobile devices, with their rapidly growing usage, have turned into rich sources of user information, holding critical insights for the betterment of user experience and personalization. Creating, receiving and storing important information in the form of unstructured text has become part and parcel of users’ daily routine. From purchase deliveries in Short Message Service (SMS) or notifications, to event booking details in calendar applications, mobile devices serve as a portal for understanding user interests, behaviours and activities through information extraction. In this paper, we address the challenge of on-device extraction of user information from unstructured natural-language data from heterogeneous sources like messages, notifications, calendars etc. The issue of privacy is effectively eliminated by the on-device nature of the proposed solution. Our proposed solution consists of 3 components – a Naïve Bayes based classifier for domain identification, a dual character and word based Bidirectional Long Short Term Memory (Bi-LSTM) and Conditional Random Field (CRF) model for attribute extraction, and a rule-based Entity Linker. Our solution achieved a 93.29% F1 score on five domains (shopping, travel, event, service and personal). Since on-device deployment has memory and latency constraints, we ensure minimal model size and optimal inference latency. To demonstrate the efficacy of our approach, we experimented on the CoNLL-2003 dataset and achieved performance comparable to existing benchmark results.

pdf bib
Weakly Supervised Extraction of Tasks from Text
Sachin Pawar | Girish Palshikar | Anindita Sinha Banerjee

In this paper, we propose a novel problem of automatic extraction of tasks from text. A task is a well-defined knowledge-based volitional action. We describe various characteristics of tasks as well as compare and contrast them with events. We propose two techniques for task extraction – i) using linguistic patterns and ii) using a BERT-based weakly supervised neural model. We evaluate our techniques with other competent baselines on 4 datasets from different domains. Overall, the BERT-based weakly supervised neural model generalizes better across multiple domains as compared to the purely linguistic patterns based approach.

pdf bib
A German Corpus of Reflective Sentences
Veronika Solopova | Oana-Iuliana Popescu | Margarita Chikobava | Ralf Romeike | Tim Landgraf | Christoph Benzmüller

Reflection about a learning process is beneficial to students in higher education (Bubnys, 2019). The importance of machine understanding of reflective texts grows as applications supporting students become more widespread. Nevertheless, due to the sensitive content, there is no public corpus available yet for the classification of text reflectiveness. We provide the first open-access corpus of reflective student essays in German. We collected essays from three different disciplines (Software Development, Ethics of Artificial Intelligence, and Teacher Training). We annotated the corpus at sentence level with binary reflective/non-reflective labels, using an iterative annotation process with linguistic and didactic specialists, mapping the reflective components found in the data to existing schemes and complementing them. We propose and evaluate linguistic features of reflectiveness and analyse their distribution within the resulting sentences according to their labels. Our contribution constitutes the first open-access corpus to help the community towards a unified approach for reflection detection.

pdf bib
Analysis of Manipuri Tones in ManiTo: A Tonal Contrast Database
Thiyam Susma Devi | Pradip K. Das

Manipuri is a low-resource, tonal language spoken predominantly in Manipur, a northeastern state of India. It has two tones: level and falling. For an acceptable Automatic Speech Recognition (ASR) system, integration of tonal information from a robust Tone Recognition model is essential. Research work on ASR has been done for Asian, African and Indo-European tonal languages such as Mandarin, Thai, Vietnamese and Chinese, but Manipuri is largely unexplored. This paper focuses on a fundamental analysis of the hand-crafted tonal contrast dataset we developed, ManiTo. It is observed that the height and slope of the pitch contour can be used to distinguish the two tones of the Manipuri language.
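As an illustration of the two features, here is a sketch that extracts a pitch contour with librosa's pYIN tracker and computes its mean height and linear slope on a synthetic falling tone; the tracker and the synthetic signal are stand-ins for the paper's data and tooling.

```python
# Sketch: pitch-contour height and slope as tone features, computed with
# librosa's pYIN pitch tracker on a synthetic falling-pitch test signal.
import numpy as np
import librosa

sr = 16_000
t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
y = np.sin(2 * np.pi * 150 * t * (1 - 0.3 * t))   # instantaneous pitch falls

f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]                            # keep voiced frames only

height = f0.mean()                                # overall pitch height (Hz)
slope = np.polyfit(np.arange(len(f0)), f0, 1)[0]  # Hz per frame; < 0 = falling
print(f"height={height:.1f} Hz, slope={slope:.3f} Hz/frame")
```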

pdf bib
Building a Linguistic Resource : A Word Frequency List for Sinhala
Aloka Fernando | Gihan Dias

A word frequency list is a list of the unique words in a language along with their frequency counts, generally sorted by frequency. Such a list is essential for many NLP tasks, including building language models, POS taggers, spelling checkers, word separation guides, etc., in addition to assisting language learners. Such lists are available for many languages, but a large-scale word list is still not available for Sinhala. We have developed a comprehensive list of words, together with their frequency and part-of-speech (POS), from a large textbase. Unlike many other such lists, our list includes a large number of low-frequency words (many of which are erroneous), which enables the analysis of such words, including the frequencies of errors. In addition to the main list, we have also prepared a list of linguistically verified words. The word frequency list and the verified word list are the largest collections of word lists available for the Sinhala language.
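The core of such a list is a frequency count over a tokenized corpus, annotated with POS; a minimal sketch follows, where the two sentences and the stub POS tagger are placeholders for a real Sinhala corpus and tagger.

```python
# Sketch: building a frequency- and POS-annotated word list, sorted by
# descending frequency. Corpus and tagger are placeholders.
from collections import Counter

corpus = ["මම ගෙදර යනවා", "මම පොත කියවනවා"]        # placeholder sentences
counts = Counter(w for line in corpus for w in line.split())

tag = lambda w: "UNK"                               # stand-in POS tagger

for word, freq in counts.most_common():
    print(f"{word}\t{freq}\t{tag(word)}")
```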

pdf bib
Part of Speech Tagging for a Resource Poor Language : Sindhi in Devanagari Script using HMM and CRF
Bharti Nathani | Nisheeth Joshi

Part of speech tagging is a pre-processing step for various NLP applications, and is mainly used in machine translation. This research proposes two POS taggers for Sindhi (in Devanagari script), a resource-poor language: an HMM (Hidden Markov Model) based tagger and a CRF (Conditional Random Field) based tagger. To develop these taggers, a corpus of 30,000 manually annotated sentences was prepared with the help of language experts. Evaluation results demonstrate accuracies of 76.60714% for the HMM and 88.79% for the CRF.
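The CRF side can be sketched with the sklearn-crfsuite package over simple hand-crafted features; the feature template, hyperparameters and two placeholder training sentences below are illustrative, not the paper's setup.

```python
# Sketch: a CRF POS tagger with sklearn-crfsuite over simple per-token
# features. The two tiny Devanagari-script sentences are placeholders.
import sklearn_crfsuite

def features(sent, i):
    w = sent[i]
    return {"word": w, "suffix2": w[-2:], "is_first": i == 0,
            "prev": sent[i - 1] if i > 0 else "<BOS>"}

X_train = [[features(s, i) for i in range(len(s))]
           for s in [["एक", "किताब"], ["अच्छी", "किताब"]]]   # placeholder tokens
y_train = [["NUM", "NOUN"], ["ADJ", "NOUN"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```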

pdf bib
Stress Rules from Surface Forms: Experiments with Program Synthesis
Saujas Vaduguru | Partho Sarthi | Monojit Choudhury | Dipti Sharma

Learning linguistic generalizations from only a few examples is a challenging task. Recent work has shown that program synthesis – a method to learn rules from data in the form of programs in a domain-specific language – can be used to learn phonological rules in highly data-constrained settings. In this paper, we use the problem of phonological stress placement as a case to study how the design of the domain-specific language influences the generalization ability when using the same learning algorithm. We find that encoding the distinction between consonants and vowels results in much better performance, and providing syllable-level information further improves generalization. Program synthesis, thus, provides a way to investigate how access to explicit linguistic information influences what can be learnt from a small number of examples.

pdf bib
Cross-lingual Alignment of Knowledge Graph Triples with Sentences
Swayatta Daw | Shivprasad Sagare | Tushar Abhishek | Vikram Pudi | Vasudeva Varma

The pairing of natural language sentences with knowledge graph triples is essential for many downstream tasks like data-to-text generation, fact extraction from sentences (semantic parsing), knowledge graph completion, etc. Most existing methods solve these downstream tasks using neural-based end-to-end approaches that require a large amount of well-aligned training data, which is difficult and expensive to acquire. Recently, various unsupervised techniques have been proposed to alleviate this alignment step by automatically pairing structured data (knowledge graph triples) with textual data. However, these approaches are not well suited for low resource languages, which pose two major challenges: (1) the unavailability of pairs of triples and native text with the same content distribution, and (2) limited Natural Language Processing (NLP) resources. In this paper, we address the unsupervised pairing of knowledge graph triples with sentences for low resource languages, selecting Hindi as the low resource language. We propose cross-lingual pairing of English triples with Hindi sentences to mitigate the unavailability of content overlap. We propose two novel approaches: NER-based filtering with Semantic Similarity and Key-phrase Extraction with Relevance Ranking. We use our best method to create a collection of 29,224 well-aligned English triple and Hindi sentence pairs. Additionally, we have curated a human-annotated golden test set of 350 pairs for evaluation. We make the code and dataset publicly available.

pdf bib
Introduction to ProverbNet: An Online Multilingual Database of Proverbs and Comprehensive Metadata
Shreyas Pimpalgaonkar | Dhanashree Lele | Malhar Kulkarni | Pushpak Bhattacharyya

Proverbs are unique linguistic expressions used by humans in the process of communication. They are frozen expressions and have the capacity to convey deep semantic aspects of a given language. This paper describes ProverbNet, a novel online multilingual database of proverbs and comprehensive metadata, equipped with a multipurpose search engine to store, explore, understand, classify and analyze proverbs and their metadata. ProverbNet has immense applications, including machine translation, cognitive studies and learning tools. We have 2,320 Sanskrit proverbs and 1,136 Marathi proverbs and their metadata in ProverbNet and are adding more proverbs in different languages to the network.

pdf bib
Bypassing Optimization Complexity through Transfer Learning & Deep Neural Nets for Speech Intelligibility Improvement
Ritujoy Biswas

This extended abstract highlights our research ventures and findings in the domain of speech intelligibility improvement. So far, the effort has been to simulate the Lombard effect, the deliberate human attempt to make speech more intelligible when speaking in the presence of interfering background noise. To that end, an attempt has been made to shift the formants away from the noisy regions of the spectrum, both sub-optimally and optimally. The sub-optimal shifting methods were based upon Kalman filtering and an EM approach. The optimal shifting involved the use of optimization to maximize an objective intelligibility index after shifting the formants. A transfer learning framework was also set up to bring down the computational complexity.

pdf bib
Design and Development of Spoken Dialogue System in Indic Languages
Shrikant Malviya

Based on the modular architecture of a task-oriented Spoken Dialogue System (SDS), the presented work focuses on constructing all the system components as statistical models with parameters learned directly from data, resolving various language-specific and language-independent challenges. In order to understand the research questions underlying the SLU and DST modules from the perspective of Indic languages (Hindi), we collect a dialogue corpus, the Hindi Dialogue Restaurant Search (HDRS) corpus, and compare various state-of-the-art SLU and DST models on it. For the dialogue manager (DM), we investigate deep reinforcement learning (RL) methods, e.g. actor-critic algorithms with experience replay. Next, for dialogue generation, we incorporate models based on the Recurrent Neural Network Language Generation (RNNLG) framework. For the speech synthesiser, the last component in the dialogue pipeline, we not only train several TTS systems but also propose a quality assessment framework to evaluate them.

pdf bib
FinRead: A Transfer Learning Based Tool to Assess Readability of Definitions of Financial Terms
Sohom Ghosh | Shovon Sengupta | Sudip Naskar | Sunny Kumar Singh

Simplified definitions of complex terms help learners to understand any content better. Comprehending readability is critical for the simplification of such content. In most cases, standard formula-based readability measures do not work well for measuring the complexity of definitions of financial terms. Furthermore, some of them work only for longer corpora with at least 30 sentences. In this paper, we present a tool for evaluating the readability of definitions of financial terms. It consists of a LightGBM based classification layer over sentence embeddings (Reimers et al., 2019) of FinBERT (Araci, 2019). It is trained on glossaries of several financial textbooks and definitions of various financial terms available on the web. The extensive evaluation shows that it outperforms the standard benchmarks by achieving an AU-ROC score of 0.993 on the validation set.
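The architecture can be sketched as an embed-then-classify pipeline; here a generic sentence-transformers model stands in for the FinBERT-based encoder, and the two labelled definitions are placeholders for the training glossaries.

```python
# Sketch: a LightGBM classifier over sentence embeddings for readability
# classification of financial definitions. Encoder and data are stand-ins.
from sentence_transformers import SentenceTransformer
from lightgbm import LGBMClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for FinBERT

defs = ["A bond is a loan you give to a company or government.",
        "A credit default swap is a bilateral derivative contract "
        "transferring the credit exposure of fixed income products."]
labels = [1, 0]                                     # 1 = readable, 0 = not

X = encoder.encode(defs)
clf = LGBMClassifier(n_estimators=100, min_child_samples=1).fit(X, labels)
print(clf.predict(encoder.encode(["Equity is ownership of a company."])))
```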

pdf bib
Demo of the Linguistic Field Data Management and Analysis System - LiFE
Siddharth Singh | Ritesh Kumar | Shyam Ratan | Sonal Sinha

In the proposed demo, we will present a new software application - Linguistic Field Data Management and Analysis System (LiFE) - an open-source, web-based linguistic data management and analysis application that allows for systematic storage, management, sharing and usage of linguistic data collected from the field. The application allows users to store lexical items, sentences, paragraphs, and audio-visual content including photographs, video clips, speech recordings, etc., along with rich glossing / annotation; generate interactive and print dictionaries; and also train and use natural language processing tools and models for various purposes using this data. Since it is a web-based application, it also allows for seamless collaboration among multiple persons and sharing of the data, models, etc. with each other. The system uses the Python-based Flask framework and MongoDB (as the database) in the backend, and HTML, CSS and JavaScript on the frontend. The interface allows creation of multiple projects that can be shared with other users. At the backend, the application stores the data in RDF format so as to allow its release as Linked Data over the web using semantic web technologies - as of now it makes use of OntoLex-Lemon for storing the lexical data and Ligt for storing the interlinear glossed text, internally linking them to other linked lexicons and databases such as DBpedia and WordNet. Furthermore, it provides support for training NLP systems using the scikit-learn and HuggingFace Transformers libraries, as well as making use of any model trained with these libraries - while the user interface itself provides limited options for tuning the system, an externally-trained model can be easily incorporated into the application; similarly, the dataset itself can be easily exported into a standard machine-readable format like JSON or CSV that can be consumed by other programs and pipelines. The system is built as an online platform; however, since we are making the source code available, it can also be installed by users on their internal / personal servers.

pdf bib
Text Based Smart Answering System in Agriculture using RNN
Raji Sukumar | Hemalatha N | Sarin S | Rose Mary C A

Agriculture is an important aspect of India’s economy, and the country has one of the highest numbers of farm producers in the world. Farmers need hand-holding supported by technology. A chatbot is a tool or assistant that users can communicate with via instant messages. The goal of this project is to create a chatbot that uses Natural Language Processing with a deep learning model. In this project, we implemented a Multi-Layer Perceptron model and Recurrent Neural Network models on the dataset. The accuracy given by the RNN was 97.83%.

pdf bib
Image2tweet: Datasets in Hindi and English for Generating Tweets from Images
Rishabh Jha | Varshith Kaki | Varuna Kolla | Shubham Bhagat | Parth Patwa | Amitava Das | Santanu Pal

Image Captioning is a task that has seen major updates over time. Recent methods leverage visual-linguistic grounding of the image-text pair, either generating a textual description of the objects and entities present within the image in a constrained manner, or generating a detailed description of these entities as a paragraph. But there is still a long way to go towards generating text that is not only semantically richer, but also contains real-world knowledge. This is the motivation behind exploring image2tweet generation through the lens of existing image-captioning approaches. At the same time, there is little research on image captioning in Indian languages like Hindi. In this paper, we release Hindi and English datasets for the task of tweet generation given an image. The aim is to generate a specialized text like a tweet, that is not a direct result of the visual-linguistic grounding usually leveraged in similar tasks, but conveys a message that factors in not only the visual content of the image, but also additional real-world contextual information associated with the event described within the image, as closely as possible. Further, we provide baseline DL models on our data and invite researchers to build more sophisticated systems for the problem.