Shamane Siriwardhana
2024
Arcee’s MergeKit: A Toolkit for Merging Large Language Models
Charles Goddard | Shamane Siriwardhana | Malikeh Ehghaghi | Luke Meyers | Vladimir Karpukhin | Brian Benedict | Mark McQuade | Jacob Solawetz
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
The rapid growth of open-source language models provides the opportunity to merge model checkpoints, combining their parameters to improve performance and versatility. Advances in transfer learning have led to numerous task-specific models, which model merging can integrate into powerful multitask models without additional training. MergeKit is an open-source library designed to support this process with an efficient and extensible framework suitable for any hardware. It has facilitated the merging of thousands of models, contributing to some of the world’s most powerful open-source model checkpoints. The library is accessible at: https://github.com/arcee-ai/mergekit.
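The core operation behind model merging is combining the parameters of checkpoints that share an architecture; the simplest variant is a weighted linear average of matching weight tensors. The sketch below illustrates that idea in plain PyTorch. It is a minimal illustration of the technique, not MergeKit's actual API (the library itself is driven by YAML merge configurations and supports more sophisticated methods); the function name and file paths are hypothetical.

```python
import torch

def merge_linear(state_dicts, weights):
    """Weighted average of several state dicts from same-architecture models.

    Minimal sketch of linear merging; illustrative only, not MergeKit's
    implementation.
    """
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        # Accumulate each tensor as a weighted sum, then normalize.
        merged[key] = sum(
            w * sd[key].float() for sd, w in zip(state_dicts, weights)
        ) / total
    return merged

# Hypothetical usage: merge two fine-tuned checkpoints of the same base model.
# sd_a = torch.load("model_a.bin")
# sd_b = torch.load("model_b.bin")
# merged = merge_linear([sd_a, sd_b], weights=[0.6, 0.4])
```

Because no gradient updates are involved, a merge like this runs on CPU in minutes, which is what makes the approach practical "suitable for any hardware," as the abstract notes.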
2023
Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering
Shamane Siriwardhana | Rivindu Weerasekera | Elliott Wen | Tharindu Kaluarachchi | Rajib Rana | Suranga Nanayakkara
Transactions of the Association for Computational Linguistics, Volume 11
Retrieval Augmented Generation (RAG) is a recent advancement in Open-Domain Question Answering (ODQA). RAG has only been trained and explored with a Wikipedia-based external knowledge base and is not optimized for use in other specialized domains such as healthcare and news. In this paper, we evaluate the impact of joint training of the retriever and generator components of RAG for the task of domain adaptation in ODQA. We propose RAG-end2end, an extension to RAG that can adapt to a domain-specific knowledge base by updating all components of the external knowledge base during training. In addition, we introduce an auxiliary training signal to inject more domain-specific knowledge. This auxiliary signal forces RAG-end2end to reconstruct a given sentence by accessing the relevant information from the external knowledge base. Our novel contribution is that, unlike RAG, RAG-end2end jointly trains the retriever and generator for both the end QA task and domain adaptation. We evaluate our approach with datasets from three domains: COVID-19, News, and Conversations, and achieve significant performance improvements compared to the original RAG model. Our work has been open-sourced through the HuggingFace Transformers library, attesting to our work's credibility and technical consistency.
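The abstract describes two training signals flowing jointly through retriever and generator: the end-task QA loss and the auxiliary sentence-reconstruction loss. The sketch below shows how one joint training step might look, assuming a model whose forward pass returns a `.loss` (as HuggingFace RAG models do); the batch field names, `training_step`, and the `aux_weight` factor are hypothetical, not the released code.

```python
import torch

def training_step(model, optimizer, qa_batch, recon_batch, aux_weight=1.0):
    """One hypothetical joint step combining QA and reconstruction losses."""
    # End-task loss: answer the question using retrieved passages.
    qa_loss = model(input_ids=qa_batch["question_ids"],
                    labels=qa_batch["answer_ids"]).loss
    # Auxiliary signal: reconstruct a domain sentence from passages
    # retrieved from the external knowledge base.
    recon_loss = model(input_ids=recon_batch["signal_ids"],
                       labels=recon_batch["sentence_ids"]).loss
    loss = qa_loss + aux_weight * recon_loss
    optimizer.zero_grad()
    loss.backward()  # gradients reach retriever and generator jointly
    optimizer.step()
    return loss.item()
```

Because the retriever's encoders also receive gradients, the knowledge base's passage embeddings drift during training and must be periodically re-encoded and re-indexed, which is what "updating all components of the external knowledge base" refers to.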