The open-source community continues to break new ground, and Together is no exception. The company has released OpenChatKit, the first open-source alternative to ChatGPT. Although OpenChatKit isn't yet as stable or comprehensive as ChatGPT, it may not be long before it catches up.
Based on EleutherAI's 20-billion-parameter language model GPT-NeoX, OpenChatKit has been fine-tuned on 43 million instructions specifically for chat. On the industry-standard HELM benchmark, the chat model outperforms the base model.
OpenChatKit is available for free on GitHub under the Apache 2.0 license. It comes with a toolkit that includes customization recipes for fine-tuning the model to high accuracy on specific tasks. It also includes an extensible retrieval system that lets developers augment bot responses with information from a document repository, API, or other live-updating source. A moderation model, fine-tuned from GPT-JT-6B, filters which questions the bot responds to. The kit also includes tools for users to provide feedback on the chatbot's responses and to contribute new datasets.
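For a concrete sense of how the released chat model can be used, here is a minimal sketch that queries it through the Hugging Face transformers library. The model id and the `<human>`/`<bot>` prompt format follow the project's published model card, while the generation settings are illustrative assumptions rather than the project's canonical configuration; the retrieval system and moderation model ship as separate components in the GitHub repository and are not shown here.

```python
# Minimal sketch: querying OpenChatKit's fine-tuned chat model with the
# Hugging Face transformers library. The model id and <human>/<bot> prompt
# format follow the published model card; the generation settings are
# illustrative assumptions, not the project's canonical configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "togethercomputer/GPT-NeoXT-Chat-Base-20B"  # ~20B params, needs a large GPU

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to reduce memory
    device_map="auto",          # requires the accelerate package
)

# The chat model is tuned on dialogue turns tagged <human> and <bot>.
prompt = "<human>: Summarize the main idea of open-source chatbots.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,  # mirrors the 256-token reply limit noted below
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Strip the prompt tokens and print only the bot's reply.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```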
Although OpenChatKit's strengths lie in tasks such as summarizing, answering questions with context, extracting information, and classifying text, it is less convincing at tasks that require no context, such as coding and creative writing. Those are exactly the tasks that helped make ChatGPT so popular, although OpenAI's chatbot also hallucinates regularly. OpenChatKit likewise struggles with changing the subject mid-conversation and sometimes repeats answers. It performs much better, however, after being fine-tuned for specific use cases, and Together is working on its own chatbots for learning, financial advice, and support requests.
In a short test, OpenChatKit was not as eloquent as ChatGPT, partly because its responses are limited to 256 tokens instead of around 500. On the other hand, OpenChatKit generates replies much faster than ChatGPT, switching between languages doesn't seem to be a problem, and it can format output as a list or table.

Whatever the outcome, the training process is probably the future of large-scale open-source projects. The developers of OpenChatKit took a decentralized approach, distributing the necessary computing power across many machines rather than a single central data center. As with GPT-JT, this decentralization lets developers work more efficiently and draws on a broader pool of computing power for training AI models.
Together is relying on user feedback to improve OpenChatKit further. As more people use the chatbot and provide feedback, its performance is expected to keep improving. It will also be interesting to see how it performs when trained on larger datasets.
Although OpenChatKit is the first open-source product to emulate ChatGPT, it won't be the only one for long. With Meta's LLaMA models leaked earlier this month, the largest of which has more than three times as many parameters as GPT-NeoX-20B, it's only a matter of time before we see a chatbot built on them.
OpenChatKit represents a significant milestone in the evolution of chatbot technology. While it's not yet as stable or comprehensive as ChatGPT, it has the potential to catch up quickly, and with the open-source community continuing to innovate, more open-source alternatives to ChatGPT are sure to follow. The project can be tested on the OpenChatKit project page on Hugging Face.