Vinayak S Puranik
2026
DIRECT: Directional Relevance in Conversational Trajectories
Anshuman Mourya | Rajdeep Mukherjee | Prerna Jolly | Vinayak S Puranik | Sivaramakrishnan R Kaveri
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
Conversational agents have become ubiquitous across application domains such as shopping assistance, medical diagnosis, and autonomous task planning. Users interacting with these agents often do not know how to start a conversation or what to ask next to obtain the desired information. To enable seamless, hassle-free user-agent interactions, we introduce Next Question Suggestions (NQS): highly relevant follow-up question recommendations that act as conversation starters or discoverability tools, capture non-trivial user intents, and lead to more engaging conversations. Relying on LLMs for both response and NQS generation is costly in latency-constrained commercial settings, with the added risk of surfacing unsafe or unanswerable generated queries. A key component of an efficient, low-latency NQS experience is therefore a retrieval (or embedding) model that fetches the most relevant candidate questions from an offline, pre-curated Question Bank (QB). Off-the-shelf embedding models cannot capture domain-specific nuances and, more importantly, the directionality inherent in follow-up question recommendations. In this work, we propose DIRECT, an end-to-end retrieval system optimized to model directional relevance. Given a user query, it produces a ranked list of highly relevant follow-up question recommendations within one second. Our system also includes an LLM-as-a-judge component, tuned on proprietary user-agent interaction logs, to evaluate end-to-end performance in terms of CTR.
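To make the retrieval step concrete, here is a minimal sketch of ranking a pre-curated question bank against a user query by embedding similarity. The bag-of-words `embed` function, the sample bank, and `suggest_next_questions` are all illustrative assumptions; DIRECT itself uses a fine-tuned embedding model trained for directional relevance, which this toy stand-in does not capture.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding" (hypothetical stand-in for a
    # learned directional-relevance encoder).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_next_questions(query, question_bank, k=2):
    # Rank the offline question bank by similarity to the query
    # and return the top-k candidates as next-question suggestions.
    q = embed(query)
    ranked = sorted(question_bank,
                    key=lambda cand: cosine(q, embed(cand)),
                    reverse=True)
    return ranked[:k]

bank = [
    "What is the return policy for shoes?",
    "Do these shoes come in wide sizes?",
    "How do I track my order?",
]
top = suggest_next_questions("Are these running shoes available in size 11?", bank)
print(top)
```

In a production setting the bank embeddings would be precomputed and served from a vector index, so only the query is embedded at request time, which is what keeps the end-to-end suggestion latency low.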
2024
PEARL: Preference Extraction with Exemplar Augmentation and Retrieval with LLM Agents
Vijit Malik | Akshay Jagatap | Vinayak S Puranik | Anirban Majumder
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Identifying customer preferences during the shopping journey is a pivotal aspect of providing product recommendations. The task becomes increasingly challenging when there is a multi-turn conversation between the user and a shopping assistant chatbot. In this paper, we tackle the novel and complex problem of identifying customer preferences in the form of key-value filters on an e-commerce website in a multi-turn conversational setting. Existing systems specialize in extracting customer preferences from standalone customer queries, which makes them unsuitable for the multi-turn setup. We propose PEARL (Preference Extraction with ICL Augmentation and Retrieval with LLM Agents), which leverages collaborative LLM agents, generates in-context learning exemplars, and dynamically retrieves relevant exemplars at inference time to extract customer preferences as a combination of key-value filters. Our experiments on proprietary and public datasets show that PEARL not only improves exact-match performance by ~10% over competitive LLM-based baselines but also improves inference latency by ~110%.
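The dynamic exemplar-retrieval idea can be sketched as follows: at inference time, pick the labeled exemplars most similar to the incoming query and splice them into the prompt as few-shot demonstrations. The Jaccard similarity, the exemplar store, and the prompt format below are illustrative assumptions, not PEARL's actual components, which rely on LLM agents and learned retrieval.

```python
def similarity(a, b):
    # Jaccard overlap of word sets -- a toy stand-in for a
    # learned exemplar retriever.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical exemplar store: (query, extracted key-value filters).
EXEMPLARS = [
    ("red sneakers under 50 dollars",
     {"color": "red", "price_max": "50"}),
    ("waterproof hiking boots",
     {"feature": "waterproof", "category": "boots"}),
    ("blue cotton shirt size M",
     {"color": "blue", "material": "cotton", "size": "M"}),
]

def build_prompt(query, k=2):
    # Retrieve the k most similar exemplars and format them as
    # in-context demonstrations ahead of the live query.
    top = sorted(EXEMPLARS,
                 key=lambda ex: similarity(query, ex[0]),
                 reverse=True)[:k]
    shots = "\n".join(f"Query: {q}\nFilters: {f}" for q, f in top)
    return f"{shots}\nQuery: {query}\nFilters:"

print(build_prompt("cheap red running sneakers"))
```

Because only the most relevant demonstrations are included, the prompt stays short, which is one plausible route to the latency gains the abstract reports.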