Yara Rizk


2024

Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks
Ibrahim Abdelaziz | Kinjal Basu | Mayank Agarwal | Sadhana Kumaravel | Matthew Stallone | Rameswar Panda | Yara Rizk | G P Shrivatsa Bhargav | Maxwell Crouse | Chulaka Gunasekara | Shajith Ikbal | Sachindra Joshi | Hima Karanam | Vineet Kumar | Asim Munawar | Sumit Neelam | Dinesh Raghu | Udit Sharma | Adriana Meza Soria | Dheeraj Sreedhar | Praveen Venkateswaran | Merve Unuvar | David Daniel Cox | Salim Roukos | Luis A. Lastras | Pavan Kapanipathi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

An emergent research trend explores the use of Large Language Models (LLMs) as the backbone of agentic systems (e.g., SWE-Bench, Agent-Bench). To fulfill LLMs’ potential as autonomous agents, they must be able to identify, call, and interact with a variety of external tools and application programming interfaces (APIs). This capability of LLMs, commonly termed function calling, leads to a myriad of advantages, such as access to current and domain-specific information in databases and the outsourcing of tasks that can be reliably performed by tools. In this work, we introduce Granite-20B-FunctionCalling, a model trained using a multi-task training approach on seven fundamental tasks encompassed in function calling. Our comprehensive evaluation on multiple out-of-domain datasets, which compares Granite-20B-FunctionCalling to more than 15 of the best proprietary and open models, shows that Granite-20B-FunctionCalling generalizes better across multiple tasks on seven different evaluation benchmarks. Moreover, Granite-20B-FunctionCalling shows the best performance among all open models and ranks among the top models on the Berkeley Function Calling Leaderboard (BFCL).
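
To make this concrete, below is a minimal sketch of the function-calling pattern the abstract describes: the application exposes a tool specification, the model emits a structured call, and the application parses and dispatches it. The tool name get_weather, its schema, and the simulated model output are illustrative assumptions, not the actual Granite-20B-FunctionCalling prompt or output format.

import json

# Hypothetical tool specification in a JSON-schema style; the tool and its
# parameters are illustrative, not part of the Granite training setup.
TOOLS = [
    {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def dispatch(call_json: str) -> str:
    # Parse a model-emitted function call and route it to a local handler.
    call = json.loads(call_json)
    if call["name"] == "get_weather":
        # Stub handler; a real system would query an external weather API here.
        return f"Sunny, 22C in {call['arguments']['city']}"
    raise ValueError(f"Unknown function: {call['name']}")

# Simulated model output after being shown TOOLS and the user request
# "What is the weather in Paris?".
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(model_output))  # Sunny, 22C in Paris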

2023

Towards large language model-based personal agents in the enterprise: Current trends and open problems
Vinod Muthusamy | Yara Rizk | Kiran Kate | Praveen Venkateswaran | Vatche Isahagian | Ashu Gulati | Parijat Dube
Findings of the Association for Computational Linguistics: EMNLP 2023

There is an emerging trend to use large language models (LLMs) to reason about complex goals and orchestrate a set of pluggable tools or APIs to accomplish a goal. This functionality could, among other use cases, be used to build personal assistants for knowledge workers. While there are impressive demos of LLMs being used as autonomous agents or for tool composition, these solutions are not ready for mission-critical enterprise settings. For example, they are brittle to input changes and can produce inconsistent results for the same inputs. These use cases raise many open problems in an exciting area of NLP research, such as trust and explainability, consistency and reproducibility, adherence to guardrails and policies, best practices for composable tool design, and the need for new metrics and benchmarks. This vision paper illustrates some examples of LLM-based autonomous agents that reason and compose tools, highlights cases where they fail, surveys some of the recent efforts in this space, and lays out the research challenges to make these solutions viable for enterprises.

TaskDiff: A Similarity Metric for Task-Oriented Conversations
Ankita Bhaumik | Praveen Venkateswaran | Yara Rizk | Vatche Isahagian
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

The popularity of conversational digital assistants has resulted in the availability of large amounts of conversational data, which can be utilized for improved user experience and personalized response generation. Building these assistants using popular large language models like ChatGPT also requires additional emphasis on prompt engineering and evaluation methods. Textual similarity metrics are a key ingredient for such analysis and evaluations. While many similarity metrics have been proposed in the literature, they have not proven effective for task-oriented conversations, as they do not take advantage of unique conversational features. To address this gap, we present TaskDiff, a novel conversational similarity metric that utilizes different dialogue components (utterances, intents, and slots) and their distributions to compute similarity. Extensive experimental evaluation of TaskDiff on a benchmark dataset demonstrates its superior performance and improved robustness over other related approaches.
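
Since the abstract does not spell out how the dialogue components are combined, the following sketch shows one plausible way to compute a component-wise conversation similarity over utterances, intents, and slots. The weights and the per-component measures (bag-of-words cosine, Jaccard) are assumptions for illustration, not TaskDiff's actual formulation.

from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def jaccard(a, b):
    # Jaccard similarity between two label sets.
    return len(a & b) / len(a | b) if a | b else 1.0

def conversation_similarity(conv_a, conv_b, weights=(0.4, 0.3, 0.3)):
    # Weighted combination of utterance, intent, and slot similarities.
    # Each conversation is a dict with "utterances" (list of strings),
    # "intents" (set of labels), and "slots" (set of labels).
    w_utt, w_int, w_slot = weights
    bow_a = Counter(" ".join(conv_a["utterances"]).lower().split())
    bow_b = Counter(" ".join(conv_b["utterances"]).lower().split())
    return (w_utt * cosine(bow_a, bow_b)
            + w_int * jaccard(conv_a["intents"], conv_b["intents"])
            + w_slot * jaccard(conv_a["slots"], conv_b["slots"]))

conv1 = {"utterances": ["book a flight to paris"], "intents": {"book_flight"}, "slots": {"destination"}}
conv2 = {"utterances": ["reserve a flight to rome"], "intents": {"book_flight"}, "slots": {"destination"}}
print(round(conversation_similarity(conv1, conv2), 3))  # 0.84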