Ashvini Jindal


2024

Adapting LLM to Multi-lingual ESG Impact and Length Prediction Using In-context Learning and Fine-Tuning with Rationale
Pawan Kumar Rajpoot | Ashvini Jindal | Ankur Parikh
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing

The prediction of Environmental, Social, and Governance (ESG) impact and duration (length) of impact from company events, as reported in news articles, holds immense significance for investors, policymakers, and various stakeholders. In this paper, we describe solutions from our team “Upaya” to the ESG impact and length prediction tasks on one such dataset, ML-ESG-3. The ML-ESG-3 dataset was released along with a shared task as part of the Fifth Workshop on Knowledge Discovery from Unstructured Data in Financial Services, co-located with LREC-COLING 2024. We employed two different paradigms to adapt Large Language Models (LLMs) to predict both the ESG impact and length of events. In the first approach, we leveraged GPT-4 within the In-context Learning (ICL) framework. A learning-free dense retriever identifies the top-K relevant in-context examples from the training data for a given test example. The second approach involves instruction-tuning the Mistral (7B) LLM to predict impact and duration, supplemented with rationales generated using GPT-4. Our models secured second place in the French tasks and achieved reasonable results (fifth and ninth rank) in the English tasks. These results demonstrate the potential of different LLM-based paradigms for delivering valuable insights within the ESG investing landscape.
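
As a rough illustration of the retrieval-augmented ICL setup described in the abstract, the sketch below selects the top-K training articles most similar to a test article with a frozen sentence encoder and assembles them into a prompt. The encoder name, value of K, label format, and prompt wording are illustrative assumptions, not the team's exact configuration.

```python
# Minimal sketch (assumptions, not the authors' exact pipeline): learning-free
# dense retrieval of top-K in-context examples, followed by ICL prompt assembly.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder; any dense encoder works

def top_k_examples(test_article, train_articles, train_labels, k=4):
    """Return the k training (article, label) pairs most similar to the test article."""
    # Normalized embeddings so that a dot product equals cosine similarity.
    train_emb = encoder.encode(train_articles, convert_to_numpy=True, normalize_embeddings=True)
    test_emb = encoder.encode([test_article], convert_to_numpy=True, normalize_embeddings=True)[0]
    scores = train_emb @ test_emb
    idx = np.argsort(-scores)[:k]
    return [(train_articles[i], train_labels[i]) for i in idx]

def build_icl_prompt(test_article, examples):
    """Format the retrieved examples and the test article into a single ICL prompt."""
    parts = ["Classify the ESG impact and the impact length of the news article."]
    for article, label in examples:
        parts.append(f"Article: {article}\nAnswer: {label}")
    parts.append(f"Article: {test_article}\nAnswer:")
    return "\n\n".join(parts)
```

The resulting prompt string would then be sent to GPT-4 (or another LLM) via whatever client is in use; the retriever itself requires no task-specific training.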

Upaya at ArabicNLU Shared-Task: Arabic Lexical Disambiguation using Large Language Models
Pawan Rajpoot | Ashvini Jindal | Ankur Parikh
Proceedings of The Second Arabic Natural Language Processing Conference

Disambiguating a word’s intended meaning (sense) in a given context is important in Natural Language Understanding (NLU). Word Sense Disambiguation (WSD) aims to determine the correct sense of ambiguous words in context. At the same time, Location Mention Disambiguation (LMD), a WSD variation, focuses on disambiguating location mentions. Both tasks are vital in Natural Language Processing (NLP) and information retrieval, as they help correctly interpret and extract information from text. The Arabic version is further challenging because of the language’s morphological richness, encompassing a complex interplay of roots, stems, and affixes. This paper describes our solutions to both tasks, employing Llama3- and Cohere-based models under Zero-Shot Learning and Re-Ranking, respectively. Both shared tasks were part of the second Arabic Natural Language Processing Conference, co-located with ACL 2024. Overall, we achieved 1st rank in the WSD task (accuracy 78%) and 2nd rank in the LMD task (MRR@1 0.59).
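
To make the zero-shot disambiguation setup concrete, the sketch below builds a prompt that lists candidate senses for a target word and asks an LLM to pick one. `llm_complete` is a hypothetical stand-in for any chat/completions client; the prompt wording and the fallback behavior are illustrative assumptions, not the authors' exact prompts.

```python
# Minimal sketch (assumptions, not the authors' exact prompts): zero-shot word
# sense disambiguation by asking an LLM to choose among candidate senses.
from typing import Callable, List

def wsd_prompt(word: str, sentence: str, senses: List[str]) -> str:
    """Build a zero-shot prompt enumerating candidate senses for the target word."""
    options = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(senses))
    return (
        f"Sentence: {sentence}\n"
        f"Target word: {word}\n"
        f"Candidate senses:\n{options}\n"
        "Answer with the number of the sense that matches the target word in this sentence."
    )

def disambiguate(word: str, sentence: str, senses: List[str],
                 llm_complete: Callable[[str], str]) -> str:
    """Return the sense the model selects, falling back to the first candidate."""
    reply = llm_complete(wsd_prompt(word, sentence, senses)).strip()
    digits = "".join(ch for ch in reply if ch.isdigit())
    choice = int(digits) - 1 if digits else 0
    return senses[choice] if 0 <= choice < len(senses) else senses[0]
```

The LMD re-ranking variant would follow the same pattern, except that the candidates are retrieved location entries scored and reordered by the model rather than dictionary senses.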