GPT4All: An Ecosystem of Open Source Compressed Language Models

Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. The accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure; are only accessible via rate-limited, geo-locked, and censored web interfaces; and lack publicly available code and technical reports. In this paper, we tell the story of GPT4All, a popular open source repository that aims to democratize access to LLMs. We outline the technical details of the original GPT4All model family, as well as the evolution of the GPT4All project from a single model into a fully fledged open source ecosystem. It is our hope that this paper acts as both a technical overview of the original GPT4All models as well as a case study on the subsequent growth of the GPT4All open source ecosystem.


Introduction
On March 14, 2023, OpenAI released GPT-4, a large language model capable of achieving human-level performance on a variety of professional and academic benchmarks. Despite the popularity of the release, the GPT-4 technical report (OpenAI, 2023) contained virtually no details regarding the architecture, hardware, training compute, dataset construction, or training method used to create the model. Moreover, users could only access the model through the internet interface at chat.openai.com, which was severely rate limited and unavailable in several locales (e.g., Italy) (BBC News, 2023). Additionally, GPT-4 refused to answer a wide variety of queries, responding only with the now infamous "As an AI Language Model, I cannot..." prefix (Vincent, 2023). These transparency and accessibility concerns spurred several developers to begin creating open source large language model (LLM) alternatives. Several grassroots efforts focused on fine tuning Meta's open code LLaMA model (Touvron et al., 2023; McMillan, 2023), whose weights were leaked on BitTorrent less than a week prior to the release of GPT-4 (Verge, 2023). GPT4All started as one of these variants.
In this paper, we tell the story of GPT4All. We comment on the technical details of the original GPT4All model (Anand et al., 2023), as well as the evolution of GPT4All from a single model to an ecosystem of several models. We remark on the impact that the project has had on the open source community, and discuss future directions. It is our hope that this paper acts as both a technical overview of the original GPT4All models as well as a case study on the subsequent growth of the GPT4All open source ecosystem.

Data Collection and Curation
To train the original GPT4All model, we collected roughly one million prompt-response pairs using the GPT-3.5-Turbo OpenAI API between March 20 and March 26, 2023. In particular, we gathered GPT-3.5-Turbo responses to prompts from three publicly available datasets: the unified chip2 subset of LAION OIG, a random sub-sample of Stackoverflow Questions, and a sub-sample of Bigscience/P3 (Sanh et al., 2021). Following the approach in Stanford Alpaca (Taori et al., 2023), an open source LLaMA variant that came just before GPT4All, we focused substantial effort on dataset curation.
The collected dataset was loaded into Atlas (AI, 2023), a visual interface for exploring and tagging massive unstructured datasets, for data curation. Using Atlas, we identified and removed subsets of the data where GPT-3.5-Turbo refused to respond, had malformed output, or produced a very short response. This resulted in the removal of the entire Bigscience/P3 subset of our data, as many P3 prompts induced responses that were simply one word. After curation, we were left with a set of 437,605 prompt-response pairs, which we visualize in Figure 1a.
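The curation criteria above (refusals, malformed output, very short responses) can be sketched as a simple filter. The refusal prefix and word-count threshold below are illustrative assumptions, not the exact rules applied in Atlas:

```python
# Illustrative sketch of the curation filter: drop pairs where the model
# refused, or where the response is degenerately short (e.g., the one-word
# answers that led to removing the P3 subset). Thresholds are assumptions.

REFUSAL_PREFIX = "As an AI Language Model"  # hypothetical refusal marker
MIN_RESPONSE_WORDS = 3                      # hypothetical length cutoff

def curate(pairs):
    """Keep only prompt-response pairs that pass basic quality checks."""
    kept = []
    for prompt, response in pairs:
        if response.strip().startswith(REFUSAL_PREFIX):
            continue  # refusal to respond
        if len(response.split()) < MIN_RESPONSE_WORDS:
            continue  # very short / degenerate response
        kept.append((prompt, response))
    return kept
```

In practice this kind of filtering was done interactively over embeddings in Atlas rather than with hand-written rules alone, but the effect on the dataset is the same: low-quality regions are identified and dropped wholesale.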

Model Training
The original GPT4All model was a fine tuned variant of LLaMA 7B. In order to train it more efficiently, we froze the base weights of LLaMA and only trained a small set of LoRA (Hu et al., 2021) weights during the fine tuning process. Detailed model hyper-parameters and training code can be found in our associated code repository.
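The LoRA setup above can be sketched minimally for a single linear layer: the frozen base weight W is left untouched, and only a low-rank product B @ A is trained on top of it. The dimensions, rank, and initialization here are illustrative, not the actual GPT4All hyper-parameters:

```python
import numpy as np

# Minimal LoRA sketch (Hu et al., 2021): frozen base weight W plus a
# trainable low-rank adapter B @ A. Zero-initializing B means the adapted
# layer starts out exactly equal to the frozen base layer.

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 2  # illustrative sizes, not GPT4All's

W = rng.normal(size=(d_in, d_out))          # frozen base weight (not trained)
A = rng.normal(size=(rank, d_out)) * 0.01   # trainable low-rank factor
B = np.zeros((d_in, rank))                  # trainable, zero-initialized

def lora_forward(x, alpha=1.0):
    """Forward pass through the frozen weight plus the low-rank adapter."""
    return x @ W + alpha * (x @ B) @ A

x = rng.normal(size=(4, d_in))
```

Because only A and B (of size rank * (d_in + d_out)) receive gradients, the number of trainable parameters is a small fraction of the full 7B weights, which is what made fine tuning affordable on rented GPUs.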

Model Access
We publicly released all data, training code, and model weights for the community to build upon. Further, we provided a 4-bit quantized version of the model, which enabled users to run it on their own commodity hardware without transferring data to a third-party service.
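The 4-bit quantization mentioned above can be illustrated with a simple block-wise scheme: each block of weights shares one float scale, and individual values are stored as 4-bit integers. The actual formats used for local inference (GGML/llama.cpp-style) are more involved, so treat this as a conceptual sketch only:

```python
import numpy as np

# Conceptual sketch of block-wise 4-bit quantization. Each block of 32
# weights shares a single float scale; values are rounded to integers in
# [-8, 7]. Real on-disk formats pack two 4-bit values per byte and use
# more elaborate schemes, which we omit here.

def quantize_4bit(w, block=32):
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # per-block scale
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=128).astype(np.float32)  # stand-in for model weights
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
```

The point of the scheme is the memory arithmetic: 4 bits per weight plus a small per-block scale cuts storage roughly 4x relative to fp16, which is what lets a 7B model fit in commodity RAM.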
Our research and development costs were dominated by ∼$800 in GPU spend (rented from Lambda Labs and Paperspace) and ∼$500 in OpenAI API spend. Our final GPT4All model could be trained in about eight hours on a Lambda Labs DGX A100 8x 80GB for a total cost of ∼$100.

Model Evaluation
We performed a preliminary evaluation of our model using the human evaluation data from the Self-Instruct paper (Wang et al., 2023). We reported the ground truth perplexity of our model against what was, to our knowledge, the best openly available alpaca-lora model at the time, provided by user chainyo on Hugging Face. Both models had very large perplexities on a small number of tasks, so we reported perplexities clipped to a maximum of 100. We found that GPT4All produces stochastically lower ground truth perplexities than alpaca-lora (Anand et al., 2023).
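The clipped perplexity reporting can be sketched as follows. Perplexity is the exponential of the mean negative log-likelihood of the reference tokens; clipping at 100 keeps a handful of degenerate tasks from dominating the comparison. The token log-probabilities here are stand-ins, not real model outputs:

```python
import math

# Sketch of ground-truth perplexity with a reporting cap: exp of the mean
# negative log-likelihood of the reference tokens, clipped at max_ppl.

def clipped_perplexity(token_logprobs, max_ppl=100.0):
    """Perplexity over one task's reference tokens, clipped for reporting."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return min(math.exp(nll), max_ppl)
```

Clipping changes only how outliers are summarized, not the underlying comparison: on tasks where both models are far above the cap, they simply tie at 100.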
From a Model to an Ecosystem

GPT4All-J: Repository Growth and the Implications of the LLaMA License
The GPT4All repository grew rapidly after its release, gaining over 20,000 GitHub stars in just one week, as shown in Figure 2. This growth was supported by an in-person hackathon hosted in New York City three days after the model release, which attracted several hundred participants. As the Nomic Discord, the home of online discussion about GPT4All, ballooned to over 10,000 people, one thing became very clear: there was massive demand for a model that could be used commercially.
The LLaMA model that GPT4All was based on was licensed for research only, which severely limited the set of domains that GPT4All could be applied in. In response, the Nomic team repeated the model training procedure of the original GPT4All model, but based it on the already open source and commercially licensed GPT-J model (Wang and Komatsuzaki, 2021). GPT4All-J also had an augmented training set, which contained multi-turn QA examples and creative writing such as poetry, rap, and short stories. The creative writing prompts were generated by filling in schemas such as "Write a [CREATIVE STORY TYPE] about [NOUN] in the style of [PERSON]." We again employed Atlas to curate the prompt-response pairs in this dataset.
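The schema filling described above amounts to a template product over word lists. The lists below are hypothetical examples, not the actual lists used to build the GPT4All-J training set:

```python
import itertools

# Illustrative sketch of creative-writing prompt generation by schema
# filling. The schema matches the one quoted in the text; the word lists
# are hypothetical placeholders.

SCHEMA = "Write a {story_type} about {noun} in the style of {person}."

def fill_schema(story_types, nouns, people):
    """Expand the schema over the Cartesian product of the word lists."""
    return [SCHEMA.format(story_type=t, noun=n, person=p)
            for t, n, p in itertools.product(story_types, nouns, people)]
```

Each filled-in schema was then sent to GPT-3.5-Turbo to produce the corresponding creative-writing response.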
Our evaluation methodology also evolved as the project grew. In particular, we began evaluating GPT4All models using a suite of seven reasoning tasks that were used for evaluation of the Databricks Dolly (Conover et al., 2023b) model, which was released on April 12, 2023. Unfortunately, GPT4All-J did not outperform other prominent open source models on this evaluation. As a result, we endeavoured to create a model that did.

GPT4All-Snoozy: the Emergence of the GPT4All Ecosystem
GPT4All-Snoozy was developed using roughly the same procedure as the previous GPT4All models, but with a few key modifications. First, GPT4All-Snoozy used the LLaMA-13B base model due to its superior base metrics when compared to GPT-J. Next, GPT4All-Snoozy incorporated the Dolly training data into its train mix. After data curation and deduplication with Atlas, this yielded a training set of 739,259 total prompt-response pairs. We dubbed the model that resulted from training on this improved dataset GPT4All-Snoozy. As shown in Table 1, GPT4All-Snoozy had the best average score on our evaluation benchmark of any model in the ecosystem at the time of its release.
Concurrently with the development of GPT4All, several organizations such as LMSys, Stability AI, BAIR, and Databricks built and deployed open source language models. We heard increasingly from the community that they wanted quantized versions of these models for local use. As we realized that organizations with ever more resources were developing open source language models, we decided to pivot our effort away from training increasingly capable models and towards providing easy access to the plethora of models being produced by the open source community. Practically, this meant spending our time compressing open source models for use on commodity hardware, providing stable and simple high-level model APIs, and supporting a GUI for no-code model experimentation.

The Current State of GPT4All
Today, GPT4All is focused on improving the accessibility of open source language models. The GPT4All repository currently provides native support and benchmark data for over 35 models (see Table 1), and includes several models co-developed with industry partners such as Replit and Hugging Face. GPT4All also provides high-level model APIs in languages including Python, TypeScript, Go, C#, and Java, among others. Furthermore, the GPT4All no-code GUI currently supports the workflows of over 50,000 monthly active users, with over 25% of users coming back to the tool every day of the week. (Note that all GPT4All user data is collected on an opt-in basis.) GPT4All has become the top language model integration in the popular open source AI orchestration library LangChain (Chase, 2022), and powers many popular open source projects such as PrivateGPT (imartinez, 2023), Quiver (StanGirard, 2023), and MindsDB (MindsDB, 2023), among others. GPT4All is the third fastest growing GitHub repository of all time (Leo, 2023), and is the 185th most popular repository on the platform by star count.

The Future of GPT4All
In the future, we will continue to grow GPT4All, supporting it as the de facto solution for LLM accessibility. Concretely, this means continuing to compress and distribute important open-source language models developed by the community, as well as compressing and distributing increasingly multimodal AI models. Furthermore, we will expand the set of hardware devices that GPT4All models run on, so that GPT4All models "just work" on any machine, whether it comes equipped with Apple Metal silicon, NVIDIA, AMD, or other edge-accelerated hardware. Overall, we envision a world where anyone, anywhere, with any machine, can access and contribute to the cutting edge of AI.

Limitations
By enabling access to large language models, the GPT4All project also inherits many of the ethical concerns associated with generative models. Principal among these is the concern that unfiltered language models like GPT4All enable malicious users to generate content that could be harmful and dangerous (e.g., instructions on building bioweapons). While we recognize this risk, we also acknowledge the risk of concentrating this technology in the hands of a limited number of increasingly secretive research groups. We believe that the benefits of broad access to language model technology significantly outweigh the risks of misuse, and hence we prefer to make the technology as widely available as possible.
Finally, we recognize the challenge of assigning credit for large-scale open source initiatives. We make a first attempt at fair credit assignment by explicitly including the GPT4All open source developers as authors on this work, but recognize that this is insufficient to fully characterize everyone involved in the GPT4All effort. Furthermore, we acknowledge the difficulty in citing open source works that do not necessarily have standardized citations, and do our best in this paper to provide URLs to projects whenever possible. We encourage further research in the area of open source credit assignment, and hope to be able to support some of this research ourselves in the future.

Figure 1: TSNE visualizations showing the progression of the GPT4All train set. Panel (a) shows the original uncurated data. The red arrow denotes a region of highly homogeneous prompt-response pairs. The coloring denotes which open dataset contributed the prompt. Panel (b) shows the original GPT4All data after curation. This panel, as well as panels (c) and (d), are colored by topic, which Atlas automatically extracts. Notice that the large homogeneous prompt-response blobs no longer appear. Panel (c) shows the GPT4All-J dataset. The "starburst" clusters introduced on the right side of the panel correspond to the newly added creative data. Panel (d) shows the final GPT4All-Snoozy dataset. All datasets have been released to the public, and can be interactively explored online. In the web version of this article, you can click on a panel to be taken to its interactive visualization.

Figure 2: Comparison of the GitHub star growth of GPT4All, Meta's LLaMA, and Stanford's Alpaca. We conjecture that GPT4All achieved and maintains faster ecosystem growth due to its focus on access, which allows more users to meaningfully participate.

Table 1: Evaluations of all language models in the GPT4All ecosystem as of August 1, 2023. Code models are not included. OpenAI's text-davinci-003 is included as a point of comparison. The best overall performing model in the GPT4All ecosystem, Nous-Hermes2, achieves over 92% of the average performance of text-davinci-003. Models marked with an asterisk were available in the ecosystem as of the release of GPT4All-Snoozy. Note that at release, GPT4All-Snoozy had the best average performance of any model in the ecosystem. Bolded numbers indicate the best performing model as of August 1, 2023.