2024
Why Generate When You Can Discriminate? A Novel Technique for Text Classification using Language Models
Sachin Pawar | Nitin Ramrakhiyani | Anubhav Sinha | Manoj Apte | Girish Palshikar
Findings of the Association for Computational Linguistics: EACL 2024
In this paper, we propose a novel two-step technique for text classification using autoregressive Language Models (LMs). In the first step, a set of perplexity- and log-likelihood-based numeric features is elicited from an LM for the text instance to be classified. In the second step, a classifier based on these features is trained to predict the final label. The classifier is usually a simple machine learning model such as a Support Vector Machine (SVM) or Logistic Regression (LR), and it is trained using a small set of training examples. We believe our technique presents a whole new way of exploiting the available training instances, in addition to existing ways such as fine-tuning LMs or in-context learning. Our approach stands out by eliminating the need for parameter updates in LMs, as required in fine-tuning, and it does not limit the number of usable training examples, a constraint faced while building prompts for in-context learning. We evaluate our technique across 5 different datasets and compare it with multiple competent baselines.
2023
Audit Report Coverage Assessment using Sentence Classification
Sushodhan Vaishampayan | Nitin Ramrakhiyani | Sachin Pawar | Aditi Pawde | Manoj Apte | Girish Palshikar
Proceedings of the Sixth Workshop on Financial Technology and Natural Language Processing
Audit reports are a window into the financial health of a company, and hence gauging the coverage of various audit aspects in them is important. In this paper, we aim to determine an audit report’s coverage through classification of its sentences into multiple domain-specific classes. In a weakly supervised setting, we employ a rule-based approach to automatically create training data for a BERT-based multi-label classifier. We then devise an ensemble to combine the rule-based and classifier approaches. Further, we employ two novel ways to improve the ensemble’s generalization: (i) an active learning based approach and (ii) an LLM-based review. We demonstrate that our proposed approaches outperform several baselines. We also show the utility of the proposed approaches by measuring audit coverage on a large dataset of 2.8K audit reports.
Zero-shot Probing of Pretrained Language Models for Geography Knowledge
Nitin Ramrakhiyani | Vasudeva Varma | Girish Palshikar | Sachin Pawar
Proceedings of the 4th Workshop on Evaluation and Comparison of NLP Systems
Gauging the knowledge of Pretrained Language Models (PLMs) about facts in niche domains is an important step towards making them better in those domains. In this paper, we evaluate multiple PLMs for their knowledge about world Geography. We contribute (i) a sufficiently sized dataset of masked Geography sentences to probe PLMs on masked token prediction and generation tasks, and (ii) a benchmark of the performance of multiple PLMs on this dataset. We also provide a detailed analysis of the performance of the PLMs on different Geography facts.
Evaluation Metrics for Depth and Flow of Knowledge in Non-fiction Narrative Texts
Sachin Pawar | Girish Palshikar | Ankita Jain | Mahesh Singh | Mahesh Rangarajan | Aman Agarwal | Vishal Kumar | Karan Singh
Proceedings of the 5th Workshop on Narrative Understanding
In this paper, we describe the problem of automatically evaluating the quality of knowledge expressed in a non-fiction narrative text. We focus on a specific type of document, in which each document describes a certain technical problem and its solution. The goal is not only to evaluate the quality of knowledge in such a document, but also to automatically suggest possible improvements to the writer so that a more knowledge-rich document is produced. We propose new metrics to evaluate the quality of knowledge content as well as the flow of different types of sentences. Suggestions for improvement are generated based on these metrics. The proposed metrics are completely unsupervised in nature and are derived from a set of simple corpus statistics. In our experiments, we demonstrate the effectiveness of the proposed metrics compared to existing baseline metrics.
Legal Argument Extraction from Court Judgements using Integer Linear Programming
Basit Ali | Sachin Pawar | Girish Palshikar | Anindita Sinha Banerjee | Dhirendra Singh
Proceedings of the 10th Workshop on Argument Mining
Legal arguments are one of the key aspects of legal knowledge and are expressed in various ways in the unstructured text of court judgements. A large database of past legal arguments can be created by extracting arguments from court judgements, categorizing them, and storing them in a structured format. Such a database would be useful for suggesting suitable arguments for any new case. In this paper, we focus on extracting arguments from Indian Supreme Court judgements using minimal supervision. We first identify a set of sentence-level argument markers which are useful for argument extraction, such as whether a sentence contains a claim, whether a sentence is argumentative in nature, whether two sentences are part of the same argument, etc. We then model legal argument extraction as a text segmentation problem in which we combine multiple weak evidences in the form of argument markers using Integer Linear Programming (ILP), finally arriving at a global, document-level solution giving the optimal legal arguments. We demonstrate the effectiveness of our technique by comparing it against several competent baselines.
2022
Constructing A Dataset of Support and Attack Relations in Legal Arguments in Court Judgements using Linguistic Rules
Basit Ali | Sachin Pawar | Girish Palshikar | Rituraj Singh
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Argumentation mining is a growing area of research with several interesting practical applications, such as mining legal arguments. Support and Attack relations are the backbone of any legal argument. However, there is no publicly available dataset of these relations in the context of legal arguments expressed in court judgements. In this paper, we focus on automatically constructing such a dataset of Support and Attack relations between sentences in a court judgement with reasonable accuracy. We propose three sets of rules based on linguistic knowledge and distant supervision to identify such relations from Indian Supreme Court judgements. The first rule set is based on multiple discourse connectors, the second on common semantic structures between argumentative sentences in a close neighbourhood, and the third uses information about the source of the argument. We also explore a BERT-based sentence pair classification model trained on this dataset. We release the dataset of 20,506 sentence pairs: 10,746 Support (precision 77.3%) and 9,760 Attack (precision 65.8%). We believe that this dataset and the ideas explored in designing the linguistic rules will boost argumentation mining research for legal arguments.
2021
Weakly Supervised Extraction of Tasks from Text
Sachin Pawar | Girish Palshikar | Anindita Sinha Banerjee
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
In this paper, we propose the novel problem of automatic extraction of tasks from text. A task is a well-defined knowledge-based volitional action. We describe various characteristics of tasks and compare and contrast them with events. We propose two techniques for task extraction: (i) using linguistic patterns and (ii) using a BERT-based weakly supervised neural model. We evaluate our techniques against other competent baselines on 4 datasets from different domains. Overall, the BERT-based weakly supervised neural model generalizes better across multiple domains than the purely linguistic-pattern-based approach.
2020
Weak Supervision using Linguistic Knowledge for Information Extraction
Sachin Pawar | Girish Palshikar | Ankita Jain | Jyoti Bhat | Simi Johnson
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
In this paper, we propose to use linguistic knowledge to automatically augment a small manually annotated corpus into a large annotated corpus for training Information Extraction models. We propose a powerful pattern specification language for specifying linguistic rules for entity extraction. We define an Enriched Text Format (ETF) to represent rich linguistic information about a text in the form of XML-like tags. Patterns in our specification language are then matched against the ETF text rather than raw text to extract entity mentions. We demonstrate how an entity extraction system can be quickly built for a domain-specific entity type for which no annotated datasets are readily available.
2019
Extraction of Message Sequence Charts from Software Use-Case Descriptions
Girish Palshikar | Nitin Ramrakhiyani | Sangameshwar Patil | Sachin Pawar | Swapnil Hingmire | Vasudeva Varma | Pushpak Bhattacharyya
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)
Software Requirement Specification documents provide natural language descriptions of the core functional requirements as a set of use-cases. Essentially, each use-case contains a set of actors and sequences of steps describing the interactions among them. Goals of use-case reviews and analyses include correctness, completeness, detection of ambiguities, prototyping, verification, test case generation and traceability. Message Sequence Charts (MSC) have been proposed as an expressive, rigorous yet intuitive visual representation of use-cases. In this paper, we describe a linguistic knowledge-based approach to extract MSCs from use-cases. Compared to existing techniques, we extract richer constructs of the MSC notation such as timers, conditions and alt-boxes. We apply this tool to extract MSCs from several real-life software use-case descriptions and show that it performs better than existing techniques. We also discuss the benefits and limitations of the extracted MSCs in meeting the above goals.
Extraction of Message Sequence Charts from Narrative History Text
Girish Palshikar | Sachin Pawar | Sangameshwar Patil | Swapnil Hingmire | Nitin Ramrakhiyani | Harsimran Bedi | Pushpak Bhattacharyya | Vasudeva Varma
Proceedings of the First Workshop on Narrative Understanding
In this paper, we advocate the use of the Message Sequence Chart (MSC) as a knowledge representation to capture and visualize multi-actor interactions and their temporal ordering. We propose algorithms to automatically extract an MSC from a history narrative. For a given narrative, we first identify verbs which indicate interactions, and then use dependency parsing and Semantic Role Labelling based approaches to identify senders (initiating actors) and receivers (other actors involved) for these interaction verbs. As a final step in MSC extraction, we employ a state-of-the-art algorithm to temporally re-order these interactions. Our evaluation on multiple publicly available narratives shows improvements over four baselines.
Towards Disambiguating Contracts for their Successful Execution - A Case from Finance Domain
Preethu Rose Anish | Abhishek Sainani | Nitin Ramrakhiyani | Sachin Pawar | Girish K. Palshikar | Smita Ghaisas
Proceedings of the First Workshop on Financial Technology and Natural Language Processing
2018
Identification of Alias Links among Participants in Narratives
Sangameshwar Patil | Sachin Pawar | Swapnil Hingmire | Girish Palshikar | Vasudeva Varma | Pushpak Bhattacharyya
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Identification of distinct and independent participants (entities of interest) in a narrative is an important task for many NLP applications. This task becomes challenging because these participants are often referred to using multiple aliases. In this paper, we propose an approach based on linguistic knowledge for the identification of aliases mentioned using proper nouns, pronouns or noun phrases with a common noun headword. We use a Markov Logic Network (MLN) to encode the linguistic knowledge for the identification of aliases. We evaluate on four diverse history narratives of varying complexity. Our approach performs better than the state-of-the-art approach as well as a combination of standard named entity recognition and coreference resolution techniques.
Resolving Actor Coreferences in Hindi Narrative Text
Nitin Ramrakhiyani | Swapnil Hingmire | Sachin Pawar | Sangameshwar Patil | Girish K. Palshikar | Pushpak Bhattacharyya | Vasudeva Varma
Proceedings of the 15th International Conference on Natural Language Processing
2017
End-to-end Relation Extraction using Neural Networks and Markov Logic Networks
Sachin Pawar | Pushpak Bhattacharyya | Girish Palshikar
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
End-to-end relation extraction refers to identifying the boundaries of entity mentions, the entity types of these mentions, and the appropriate semantic relation for each pair of mentions. Traditionally, separate predictive models were trained for each of these tasks and used in a “pipeline” fashion, where the output of one model is fed as input to another. However, it has been observed that addressing some of these tasks jointly results in better performance. We propose a single, joint neural network based model to carry out all three tasks of boundary identification, entity type classification and relation type classification. This model is referred to as the “All Word Pairs” model (AWP-NN), as it assigns an appropriate label to each word pair in a given sentence to perform end-to-end relation extraction. We also propose to refine the output of the AWP-NN model using inference in Markov Logic Networks (MLN), so that additional domain knowledge can be effectively incorporated. We demonstrate the effectiveness of our approach by achieving better end-to-end relation extraction performance than all 4 previous joint modelling approaches on the standard ACE 2004 dataset.
Measuring Topic Coherence through Optimal Word Buckets
Nitin Ramrakhiyani | Sachin Pawar | Swapnil Hingmire | Girish Palshikar
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Measuring topic quality is essential for scoring learned topics and for their subsequent use in Information Retrieval and text classification. To measure the quality of Latent Dirichlet Allocation (LDA) based topics learned from text, we propose a novel approach based on grouping topic words into buckets (TBuckets). A single large bucket signifies a single coherent theme, in turn indicating high topic coherence. TBuckets uses word embeddings of topic words and employs singular value decomposition (SVD) and Integer Linear Programming based optimization to create coherent word buckets. TBuckets outperforms state-of-the-art techniques when evaluated on 3 publicly available datasets and on another one proposed in this paper.
2015
Noun Phrase Chunking for Marathi using Distant Supervision
Sachin Pawar | Nitin Ramrakhiyani | Girish K. Palshikar | Pushpak Bhattacharyya | Swapnil Hingmire
Proceedings of the 12th International Conference on Natural Language Processing
2014
LMSim : Computing Domain-specific Semantic Word Similarities Using a Language Modeling Approach
Sachin Pawar | Swapnil Hingmire | Girish K. Palshikar
Proceedings of the 11th International Conference on Natural Language Processing
2013
Named Entity Extraction using Information Distance
Sangameshwar Patil | Sachin Pawar | Girish Palshikar
Proceedings of the Sixth International Joint Conference on Natural Language Processing