Balamurali AR

Also published as: Balamurali A.R, Balamurali A.R.


2015

A Computational Approach to Automatic Prediction of Drunk-Texting
Aditya Joshi | Abhijit Mishra | Balamurali AR | Pushpak Bhattacharyya | Mark J. Carman
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling
Balamurali A.R
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Annotation is an essential step in the development cycle of many Natural Language Processing (NLP) systems. Lately, crowd-sourcing has been employed to facilitate large-scale annotation at reduced cost. Unfortunately, verifying the quality of the submitted annotations is a daunting task. Existing approaches address this problem either through sampling or through redundancy; however, both carry costs of their own. Based on the observation that crowd-sourcing workers return to tasks they have done previously, this paper proposes a novel framework for automatic validation of crowd-sourced tasks. A case study based on sentiment analysis is presented to elucidate the framework and its feasibility. The results suggest that validation of crowd-sourced tasks can be automated to a certain extent.

2013

The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis
Kashyap Popat | Balamurali A.R | Pushpak Bhattacharyya | Gholamreza Haffari
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Cross-Lingual Sentiment Analysis for Indian Languages using Linked WordNets
Balamurali A.R. | Aditya Joshi | Pushpak Bhattacharyya
Proceedings of COLING 2012: Posters

Cost and Benefit of Using WordNet Senses for Sentiment Analysis
Balamurali AR | Aditya Joshi | Pushpak Bhattacharyya
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Typically, accuracy is used to represent the performance of an NLP system. However, accuracy attainment is a function of investment in annotation: generally, the greater the amount and sophistication of annotation, the higher the accuracy. A moot question, however, is: "Is the accuracy improvement commensurate with the cost incurred in annotation?" We present an economic model to assess the marginal benefit accruing from an increase in annotation cost. As a case in point, we have chosen the sentiment analysis (SA) problem. In SA, documents are normally polarity-classified by running them through classifiers trained on document vectors constructed from lexeme features, i.e., words. If, however, word senses (synset ids in wordnets) are used as features instead of words, the accuracy improves dramatically. But is this improvement significant enough to justify the cost of annotation? This question, to the best of our knowledge, has not been investigated with the seriousness it deserves. We perform a cost-benefit study based on a vendor-machine model. By setting up a cost price, selling price, and profit scenario, we show that although extra cost is incurred in sense annotation, the profit margin is high, justifying the cost.

2011

C-Feel-It: A Sentiment Analyzer for Micro-blogs
Aditya Joshi | Balamurali AR | Pushpak Bhattacharyya | Rajat Mohanty
Proceedings of the ACL-HLT 2011 System Demonstrations

Robust Sense-based Sentiment Classification
Balamurali AR | Aditya Joshi | Pushpak Bhattacharyya
Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011)

Harnessing WordNet Senses for Supervised Sentiment Classification
Balamurali AR | Aditya Joshi | Pushpak Bhattacharyya
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing