Girish Keshav Palshikar
2026
Argumentation and Judgement Factors: LLM-based Discovery and Application in Insurance Disputes
Basit Ali | Anubhav Sinha | Nitin Ramrakhiyani | Sachin Pawar | Girish Keshav Palshikar | Manoj Apte
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
In this work, we focus on the discovery of legal factors for a specific case type under consideration (e.g., vehicle insurance disputes). We refer to these legal factors more explicitly as "Argumentation and Judgement Factors" (AJFs). AJFs encode specific legal knowledge that is important for legal argumentation and judicial decision making. We propose a multi-step approach for discovering a list of AJFs for a given case type, using a set of relevant legal documents (e.g., past judgements, relevant acts) and Symbolic Knowledge Distillation (SKD) from a Large Language Model (LLM). We propose a novel geneRatE-CRitic-reviEW (RECREW) prompting strategy for effective SKD. We construct and evaluate the discovered lists of AJFs on two different case types (auto-insurance and life insurance) and show their utility in a dispute resolution application.
2025
Broken Words, Broken Performance: Effect of Tokenization on Performance of LLMs
Sachin Pawar | Manoj Apte | Kshitij Jadhav | Girish Keshav Palshikar | Nitin Ramrakhiyani
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Tokenization is the first step in training any Large Language Model (LLM): the text is split into a sequence of tokens as per the model’s fixed vocabulary. This differs from traditional tokenization in NLP, where the text is split into a sequence of “natural” words. In an LLM, a natural word may be broken into multiple tokens because of the model’s limited vocabulary size (e.g., Mistral’s tokenizer splits “martial” into “mart” and “ial”). In this paper, we hypothesize that such breaking of natural words negatively impacts LLM performance on various NLP tasks. To quantify this effect, we propose a set of penalty functions that compute a tokenization penalty for a given text and a specific LLM, indicating how “bad” the tokenization is. We establish the statistical significance of our hypothesis on multiple NLP tasks for a set of different LLMs.
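One simple penalty function of the kind the abstract describes could be the fraction of natural words that a tokenizer breaks into more than one token. The toy vocabulary and this particular penalty are assumptions for illustration only; the paper proposes its own set of penalty functions over real LLM tokenizers.

```python
# Illustrative tokenization-penalty sketch. TOY_VOCAB and the specific
# penalty (fraction of words split into >1 token) are assumptions for
# demonstration, not the paper's actual definitions.

TOY_VOCAB = {
    "martial": ["mart", "ial"],  # mimics the Mistral example in the abstract
    "arts": ["arts"],
    "are": ["are"],
    "fun": ["fun"],
}

def tokenize(word: str) -> list[str]:
    """Toy subword tokenizer: use a fixed split if known, else characters."""
    return TOY_VOCAB.get(word, list(word))

def word_break_penalty(text: str) -> float:
    """Fraction of natural words the tokenizer breaks into multiple tokens."""
    words = text.split()
    broken = sum(1 for w in words if len(tokenize(w)) > 1)
    return broken / len(words) if words else 0.0

# Only "martial" is split, so 1 of 4 words is broken.
print(word_break_penalty("martial arts are fun"))  # 0.25
```

A real implementation would call an actual LLM tokenizer (e.g., via a tokenizer library) in place of `tokenize`, and could weight the penalty by how many pieces each word is split into rather than a binary broken/not-broken count.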