At the risk of stating the obvious, AI-powered chatbots are all the rage right now.
The tools, which can write essays, emails and more given a few text-based instructions, have caught the attention of tech hobbyists and businesses alike. OpenAI’s ChatGPT, arguably the progenitor of the trend, is estimated to have more than 100 million users. Through an API, brands including Instacart, Quizlet and Snap have begun building it into their respective platforms, boosting the usage numbers further.
But, to the chagrin of some in the developer community, the organizations that build these chatbots remain part of a well-funded, well-resourced, and exclusive club. Anthropic, DeepMind, and OpenAI — all of which have deep pockets — are among the few that have managed to create their own modern chatbot technologies. In contrast, the open source community has been hampered in its efforts to create one.
That’s largely because training the AI models that underpin chatbots requires enormous computing power, not to mention a large training dataset that has to be carefully curated. But a new, loosely affiliated group of researchers calling itself Together aims to overcome those challenges and be the first to offer a ChatGPT-like system as open source.
Together has already made progress. Last week, it released trained models that any developer can use to create an AI-powered chatbot.
“Together is building an accessible platform for open foundation models,” said Vipul Ved Prakash, Together co-founder, in an email interview with TechCrunch. “We see what we’re building as part of AI’s ‘Linux moment.’ We want to enable researchers, developers and companies to use and improve open source AI models with a platform that brings together data, models and computation.”
Prakash previously co-founded Cloudmark, a cybersecurity startup that Proofpoint bought for $110 million in 2017. After Apple acquired Prakash’s next company, the social media search and analytics platform Topsy, in 2013, he stayed on as a senior director at Apple for five years before leaving to launch Together.
Over the weekend, Together launched its first major project, OpenChatKit, a framework for creating both specialized and general-purpose AI-powered chatbots. The kit, available on GitHub, includes the trained models mentioned above and an “extensible” retrieval system that lets the models pull in information (such as recent sports scores) from various sources and websites.
The base models come from EleutherAI, a nonprofit group of researchers investigating text-generating systems. But they were fine-tuned using Together’s compute infrastructure, the Together Decentralized Cloud, which pools hardware resources, including GPUs, from volunteers around the internet.
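For a concrete sense of what “any developer can use” looks like, here’s a minimal sketch of loading one of the released chat models through Hugging Face’s transformers library. The model ID and the <human>:/<bot>: prompt format below are assumptions drawn from OpenChatKit’s public materials, not code from the article, and a 20-billion-parameter model demands serious GPU memory (the accelerate library can shard it across devices).

```python
# Minimal sketch: loading and querying an OpenChatKit chat model via
# Hugging Face transformers. The model ID and prompt format are assumptions
# based on OpenChatKit's public docs, not code published in this article.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "togethercomputer/GPT-NeoXT-Chat-Base-20B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# OpenChatKit-style dialogue formatting: alternating <human>/<bot> turns.
prompt = "<human>: Write a short cover letter for a data analyst role.\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Strip the prompt tokens so only the bot's reply is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```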
“Together developed the source repositories that allow anyone to replicate the model results, fine-tune their own model or integrate a retrieval system,” said Prakash, adding that Together also developed documentation and community processes.
Beyond the training infrastructure, Together collaborated with other research organizations, including LAION (which helped develop Stable Diffusion) and technologist Huu Nguyen’s Ontocord, to create a training dataset for the models. Dubbed the Open Instruction Generalist dataset, it contains more than 40 million examples of questions and answers, follow-up questions and more, designed to “teach” a model how to respond to different instructions (e.g., questions about the Civil War).
To gather feedback, Together has released a demo that anyone can use to interact with the OpenChatKit models.
“The main motivation was to allow anyone to use OpenChatKit to improve the model and create more task-specific chat models,” Prakash added. “While large language models have shown an impressive ability to answer general questions, they typically achieve much higher accuracy when fine-tuned for specific applications.”
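To illustrate the kind of task-specific fine-tuning Prakash is describing, here’s a minimal sketch using Hugging Face’s transformers Trainer. The base model, the toy summarization examples and the hyperparameters are all placeholders chosen for illustration; OpenChatKit’s actual training recipes live in its GitHub repository.

```python
# A minimal sketch of task-specific fine-tuning, in the spirit of what
# Prakash describes. Base model, toy dataset and hyperparameters are
# placeholders, not OpenChatKit's own recipe.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "EleutherAI/pythia-410m"  # small stand-in; OpenChatKit's chat model is far larger

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# A handful of hypothetical task-specific examples in dialogue format.
examples = [
    "<human>: Summarize: The meeting moved from Monday to Tuesday.\n<bot>: Meeting rescheduled to Tuesday.",
    "<human>: Summarize: Q3 sales rose 4% on strong demand.\n<bot>: Q3 sales up 4%.",
]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=128, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM objective: predict the next token
    return enc

train_dataset = Dataset.from_dict({"text": examples}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="chat-finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=train_dataset,
)
trainer.train()  # with real data, evaluate on held-out prompts afterward
```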
Prakash says the models can perform a range of tasks, including solving basic high school-level math problems, generating Python code, writing stories and summarizing documents. How well do they hold up in testing? Well enough, in my experience, at least for basic things like writing plausible-sounding cover letters.

Among other things, OpenChatKit can write cover letters. Photo credit: OpenChatKit
But the models have very clear limits. Chat long enough with them and the OpenChatKit models run into the same problems as ChatGPT and other recent chatbots, like parroting false information. I got the OpenChatKit models to give a contradictory answer about whether the Earth was flat, for example, and a flatly wrong answer to the question of who won the 2020 U.S. presidential election.

OpenChatKit answering a question (incorrectly) about the 2020 U.S. presidential election. Photo credit: OpenChatKit
The OpenChatKit models are weak in other, less alarming areas like context switching. Changing the subject in the middle of a conversation will often confuse them. They are also not particularly adept at creative writing and programming, and sometimes repeat their answers endlessly.
Prakash blames the training dataset, which he notes is a work in progress. “This is an area we will continue to improve, and we have designed a process where the open community can actively participate,” he said, pointing to the demo.
The quality of OpenChatKit’s responses leaves something to be desired. (To be fair, ChatGPT’s aren’t dramatically better, depending on the prompt.) But Together is being proactive, or at least attempting to be, on the moderation front.
While some chatbots in the mold of ChatGPT can be prompted to write biased or hateful text thanks to their training data, some of which comes from toxic sources, the OpenChatKit models are harder to coerce. I managed to get them to write a phishing email, but they wouldn’t be lured into more controversial territory, like endorsing the Holocaust or justifying why men make better CEOs than women.

OpenChatKit applies some moderation, as seen here. Photo credit: OpenChatKit
However, moderation is an optional feature of OpenChatKit; developers aren’t required to use it. While one of the two models was designed “specifically as a guard rail” for the other, larger model (the model powering the demo), the larger model is unfiltered by default, according to Prakash.
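In practice, the guard-rail arrangement is a two-model pipeline: a small classifier screens text before and after the large model generates. Here’s an illustrative sketch of that pattern; the classifier, its label scheme and the threshold are generic stand-ins, not OpenChatKit’s actual moderation model.

```python
# Illustrative guard-rail pattern: a small moderation model screens both the
# user's message and the chat model's reply. The classifier and threshold
# below are generic stand-ins, not OpenChatKit's actual moderation model.
from transformers import pipeline

moderator = pipeline("text-classification", model="unitary/toxic-bert")  # assumed stand-in

def is_flagged(text: str, threshold: float = 0.5) -> bool:
    result = moderator(text)[0]
    return result["label"] == "toxic" and result["score"] >= threshold

def guarded_reply(user_message: str, generate_reply) -> str:
    # Screen the incoming message before it ever reaches the chat model.
    if is_flagged(user_message):
        return "Sorry, I can't help with that."
    reply = generate_reply(user_message)
    # Screen the output too, since the larger model is unfiltered by default.
    if is_flagged(reply):
        return "Sorry, I can't help with that."
    return reply
```

Because the filter sits outside the generator, developers can swap in a stricter classifier or drop it entirely, which is exactly the flexibility, and the risk, at issue here.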
That’s unlike the top-down approach favored by OpenAI, Anthropic and others, which involves a combination of human and automated moderation plus filtering at the API level. Prakash argues this behind-closed-doors opacity could be more damaging in the long run than OpenChatKit’s lack of a mandatory filter.
“Like many dual-use technologies, AI can certainly be used in malicious contexts. This is true of open AI and of closed systems available commercially via APIs,” said Prakash. “Our thesis is that the more the open research community can test, inspect and improve generative AI technologies, the better able we as a society will be to find solutions to these risks. We believe there is greater risk in a world where the power of large generative AI models rests solely with a handful of large tech companies, unable to be scrutinized, inspected or understood.”
To underscore Prakash’s stance on open development, OpenChatKit includes a second training dataset, called OIG-moderation, that aims to address a range of chatbot moderation challenges, including bots adopting overly aggressive or depressive tones. (See: Bing Chat.) It was used to train the smaller of the two models in OpenChatKit, and Prakash says OIG-moderation can be applied to create other models that detect and filter out problematic text, should developers choose to do so.
“We care deeply about AI safety, but we believe security through obscurity is a poor approach in the long run. An open, transparent posture is widely accepted as the default in the worlds of computer security and cryptography, and we think transparency will be critical if we are to build safe AI,” said Prakash. “Wikipedia is a great demonstration of how an open community can be an excellent solution for demanding moderation tasks at scale.”
I’m not so sure. For starters, Wikipedia isn’t exactly the gold standard; the site’s moderation process is famously opaque and territorial. Then there’s the fact that open source systems are often (and quickly) abused. Take the image-generating AI system Stable Diffusion: within days of its release, communities like 4chan were using the model, which also includes optional moderation tools, to create nonconsensual pornographic deepfakes of famous actors.
OpenChatKit’s license expressly prohibits uses such as generating misinformation, promoting hate speech, spamming, and engaging in cyberbullying or harassment. But nothing prevents malicious actors from ignoring both these terms and the moderation tools.
Anticipating the worst, some researchers have started sounding the alarm about open-access chatbots.
NewsGuard, a company that tracks online misinformation, found in a recent study that newer chatbots, specifically ChatGPT, could be prompted to write content advancing harmful claims about vaccines, mimicking propaganda and disinformation from China and Russia, and echoing the tone of partisan news outlets. According to the study, ChatGPT complied with requests to write responses based on false and misleading ideas around 80% of the time.
In response to NewsGuard’s findings, OpenAI improved ChatGPT’s backend content filters. That, of course, wouldn’t be possible with a system like OpenChatKit, which puts the onus on developers to keep the models up to date themselves.
Prakash stands by his argument.
“Many applications need to be customized and specialized, and we believe an open-source approach will better support a healthy variety of approaches and applications,” he said. “The open models are getting better and better, and we expect their adoption to increase sharply.”