Chatbot UI and Free Open Source Alternatives to ChatGPT Plus: A Developer's Guide to Making the Switch


Like all great projects, this journey began in an effort to solve a simple problem. I was spending too much on a ChatGPT Plus subscription.

When I first subscribed, ChatGPT was awesome. GPT-3.5 was handling most tasks I was throwing at it, and when GPT-4 was released, I was wowed. Keeping my subscription was a no-brainer.

But then the quality started to wane. It got sluggish. Lazy. Dare I say, dumb. Earlier this year, Sam Altman tweeted that they'd tweaked GPT to make it less lazy. And for some tasks, that seemed true. But the ChatGPT of today is nowhere near the quality I experienced when GPT-4 first launched.

As such, I find myself using ChatGPT less and less. Patiently awaiting GPT-4.5. But in the meantime, it's often more work to wrangle GPT for a correct answer than it is to complete the task myself. That defeats the purpose of AI, and significantly degrades the value that I'm able to get out of my current subscription.

So I considered my options, and after some experimenting, landed on a mixture that I'm so far pretty happy with.

OpenAI is an industry leader. Despite the quality of their flagship product sliding a little bit, I think it's still quite clear they're leading the pack (for now), especially when it comes to consumer options. With that in mind, although I know I want to cancel my monthly subscription to ChatGPT Plus, I don't want to exclude myself from their offering entirely. Thankfully, OpenAI provides a great set of APIs for interacting with their models.

I've tried consuming the API a few different ways. I'm a long-time fan of (and contributor to) Raycast, which has a third-party bring-your-own-API-key ChatGPT extension. And that works really well for most simple tasks. But I primarily use GPT to help write tests and reason about complex code. The extension is great, and super convenient to access directly in Raycast, but if you're working with anything longer than a few lines, it starts to feel like the wrong solution.

Finally, I discovered Chatbot UI. The tool is still fairly new, but it's open source, and had some major upgrades implemented in its latest (at the time of this writing) release, so I decided to try it out.

Chatbot UI

The setup was a breeze, thanks to McKay Wrigley's easy-to-follow Chatbot UI tutorial on YouTube (10 min). If you're curious about setting up Chatbot UI, I highly recommend watching it, as it'll get you up and running in just a few minutes. And if you're new to LLMs in general, it'll help you get set up with some of the most popular tools like Ollama too.

Before going any further, it's worth noting that you can set up Ollama on its own, without Chatbot UI, or any of the other tools listed below. The caveat is that you'd be limited to using the command line for your conversations. If you prefer a GUI, continue reading.

Chatbot UI is just a UI. So you'll need to install a few other programs to actually run the LLMs and persist data for conversation history and custom prompts.

Although this article only mentions OpenAI, you can use a variety of different paid services by using your API key for those services. Check the GitHub repo for a full list.

Let's take a look at what each of those tools is for.


Docker

Docker is a program for developing and running applications within virtual containers. This allows developers and users to create consistent container environments regardless of their operating system or machine, ensuring everyone has the same predictable experience with the software. In this case, the database (Supabase) will run in Docker, and the rest of the software we set up will run on your bare metal.

Although you can configure other parts of your Chatbot UI set up to run on Docker, in the basic tutorial, only Supabase requires Docker.

Setting up Docker

Docker provides a few different ways to interact with its products. In this case, all we need is Docker Desktop, which you can download from the Docker website.
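Once Docker Desktop is installed, you can confirm everything is working from a terminal. These are standard Docker CLI commands; the guards just keep the snippet safe to run on a machine that doesn't have Docker yet.

```shell
# Check that the Docker CLI is installed and whether the daemon is reachable.
if command -v docker >/dev/null 2>&1; then
  docker_status="installed: $(docker --version)"
  # `docker info` only succeeds when Docker Desktop (the daemon) is running.
  docker info >/dev/null 2>&1 && daemon_status="running" || daemon_status="not running"
else
  docker_status="not installed"
  daemon_status="n/a"
fi
echo "Docker: $docker_status (daemon: $daemon_status)"
```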


Supabase

Supabase is an open source Firebase alternative. They offer a limited (but flexible) free plan, as well as paid plans, if you use their hosting. However, since we're using Docker to run Supabase locally on our own machines, we won't need to worry about Supabase hosting or any sort of paid plans.

Setting up Supabase

If you're on a Mac then you can install Supabase with Homebrew:

brew install supabase/tap/supabase

If you're on a different platform, have a look at the docs in the Chatbot UI repo, or the Supabase CLI docs.
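After installing, you can sanity-check the CLI from a terminal. `supabase start` is the command the Chatbot UI tutorial uses to spin up the local database containers; it's left as a comment below since it requires Docker to be running.

```shell
# Confirm the Supabase CLI is on your PATH.
if command -v supabase >/dev/null 2>&1; then
  supabase_status="installed: $(supabase --version)"
  # With Docker running, this boots the local Supabase containers:
  # supabase start
else
  supabase_status="not installed"
fi
echo "Supabase CLI: $supabase_status"
```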


Ollama

Ollama is a tool that allows you to run open source LLMs (like Llama2, Mistral, Gemma, and more) locally on your machine.

The main benefits of Chatbot UI are the user-friendly interface, prompt and conversation history, and the ease of switching between models. But if that's more than you'll need, feel free to interact with your models from your favourite terminal app instead.

Setting up Ollama

You can install Ollama by downloading the appropriate .zip or .exe from the Ollama GitHub repo. Once Ollama is installed, you can install new LLMs as needed. You'll find the available LLMs on Ollama's list of models. Simply click on a model, and you'll be shown the command you can use to install and run it.

For example, to install and run Llama2, you can run the following command:

ollama run llama2

After installing an LLM with Ollama, it'll show up in your Chatbot UI. So if you decide you've had enough fun with Llama2, and want to give Mistral a try, then you'd just run something like this:

ollama run mistral
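Each model you pull takes up several gigabytes of disk space, so it's worth knowing Ollama's standard housekeeping commands for listing and removing models. The snippet below is guarded so it's safe to run even if Ollama isn't installed.

```shell
# List installed models, and remove any you no longer need.
if command -v ollama >/dev/null 2>&1; then
  ollama list        # shows each installed model with its size
  # ollama rm llama2 # deletes a model to reclaim disk space
  ollama_found=yes
else
  ollama_found=no
  echo "ollama not found on PATH"
fi
```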

Although you can manage Ollama from the command line, if you ever need to dig into things with your file explorer, you'll find Ollama's content stored in your home directory. Since the Ollama files live in a .ollama directory, whose name begins with a dot, you might not see it right away, as dot files are hidden by default.

On a Mac, you can show or hide dot files in Finder by pressing cmd + shift + .

Bear in mind, dot files are intentionally hidden by default, because they store a lot of important app and system information. Be sure not to delete or modify any dot files you don't understand.
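If you'd rather peek at that directory from the terminal instead, `ls -a` includes hidden dot files in its listing:

```shell
# `ls -a` lists hidden entries; the `|| echo` fallback keeps this safe to run
# even if the directory doesn't exist yet.
listed=$(ls -la "$HOME/.ollama" 2>/dev/null || echo "No ~/.ollama directory found")
echo "$listed"
```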

How to install and run different size models of a particular LLM

Suppose you've been working with Llama2 7b, and now have a more complex task and want to run Llama2 13b, or even 70b. You can install each version of the LLM individually by appending a colon and the particular version you're after.

ollama run llama2:7b
ollama run llama2:70b

Bear in mind, by default these models run locally on your computer, which means their performance is limited by your machine's hardware. The Llama2 README from Meta recommends the following specs.

  • 7b models generally require at least 8GB of RAM
  • 13b models generally require at least 16GB of RAM
  • 70b models generally require at least 64GB of RAM
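Before pulling a larger model, you can check your machine's total RAM against the list above. This is a cross-platform sketch: it tries the macOS `sysctl` key first, then falls back to `/proc/meminfo` on Linux.

```shell
# Report total RAM in GB so you can pick an appropriately sized model.
if sysctl -n hw.memsize >/dev/null 2>&1; then
  bytes=$(sysctl -n hw.memsize)                    # macOS
else
  kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)  # Linux (value is in kB)
  bytes=$(( kb * 1024 ))
fi
gb=$(( bytes / 1024 / 1024 / 1024 ))
echo "Total RAM: ${gb}GB"
```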

Although Llama2 doesn't provide any middle ground between 13b and 70b, if you're coding, codellama does have a 34b model.

Bring your own API Keys (Optional)

Okay, open source is great, and as you might have seen, some of these open source models are coming close to giving ChatGPT a run for its money. But as I mentioned earlier, I still have faith in OpenAI, and am excited for the next major release of GPT. So while I do use a local model for most of my tasks, I still maintain my OpenAI API account for when I need a little bit of extra oomph.
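For reference, Chatbot UI reads provider keys from a .env.local file in the project root. The variable name below follows the repo's .env.local.example; the key value is a placeholder.

```shell
# .env.local — created by copying .env.local.example during setup
OPENAI_API_KEY=sk-your-key-here
```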

Automating Chatbot UI Startup

One of the benefits of ChatGPT Plus is that it's dead simple. Visit the website, log in, interact with GPT. But if you followed along with the Chatbot UI setup tutorial, you probably noticed that there are quite a few steps to get Chatbot UI up and running each time you want to use it.

  1. Start Docker (Desktop)
  2. Start Supabase in the Docker container
  3. Start Ollama
  4. Start Chatbot UI

You'll need at least two terminal windows (or tabs), since both Ollama and Chatbot UI need to run continuously while using them.
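Those four steps map to roughly the following commands. The Chatbot UI path is a placeholder, and each step is guarded so the snippet degrades gracefully on a machine that's missing one of the tools.

```shell
# 1. Start Docker Desktop (macOS `open` launches an app by name)
command -v open >/dev/null 2>&1 && open -a Docker
# 2. Start the local Supabase containers
command -v supabase >/dev/null 2>&1 && supabase start
# 3. Start the Ollama server (runs continuously, so it's backgrounded here)
command -v ollama >/dev/null 2>&1 && ollama serve &
# 4. Start the Chatbot UI dev server in your clone of the repo
[ -d /path/to/chatbot-ui ] && cd /path/to/chatbot-ui && npm run chat
steps_issued="yes"
```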

The privacy of running an LLM locally is nice, but that loses some of its shine if we're sacrificing productivity each time we need to use it. But if you're using macOS, you can use the built-in Automator app to do the heavy lifting for you.

Using Automator, you can create an application with the option for it to Run AppleScript. Since this portion isn't covered in the Chatbot UI setup tutorial, or any of the documentation for the other tools, let's run through the basics of setting up that automation.

  1. Open the Automator app. The easiest way to do this is with Spotlight, by pressing cmd + space, then typing Automator
  2. When Automator launches, you'll be prompted to choose a type for your new document. Select Application, then click Choose to proceed.
  3. In the Library of available actions (in the left panel), find Run AppleScript and click and drag it into the panel on the right.
  4. Write your AppleScript from scratch, or feel free to use or modify the AppleScript example below.
  5. Test it out and save your application. You can save with the traditional File > Save flow; however, when saving, be sure to set the File Format to Application.
  6. Navigate to where you saved your application (you can drag it into the dock for convenience if you wish), and double click it. The app will start, and begin running Chatbot UI locally.

AppleScript Example

I use iTerm2 as my default terminal. If you don't have iTerm2 installed, then you'll need to modify the script below to accommodate your specific toolset.

on run {input, parameters}
	tell application "iTerm"

		-- Give iTerm a few seconds to open
		delay 3
		-- Create a new window or use the current one
		tell current window
			create tab with default profile
		end tell
		-- Start Docker in the first iTerm tab
		tell current session of current tab of current window
			write text "open -a Docker"
		end tell

		-- Give Docker 10 seconds to start up
		delay 10
		-- Start Ollama in the Docker tab
		tell current session of current tab of current window
			write text "ollama serve"
		end tell

		-- Give Ollama 10 seconds to start up
		delay 10

		-- Open a new iTerm tab for the frontend dev server
		tell current window
			create tab with default profile
		end tell
		tell current session of current tab of current window
			write text "cd /path/to/chatbot-ui && npm run chat"
		end tell
	end tell

	return input
end run

In conclusion, what began as a practical endeavour to reduce my reliance and expenses on ChatGPT Plus soon became a much-welcomed deep dive into LLMs, AppleScript, and Automator. Transitioning from a pre-packaged consumer AI to an open-source setup was (after gathering a bit of context) fairly straightforward and a lot of fun, and it's given me plenty more to tinker with over the next few weeks too.