Open-WebUI is a sleek, intuitive web-based user interface for interacting with large language models. When paired with Ollama, it provides an easy way to manage and run AI models locally through a clean, accessible dashboard.
A Guide to Open-WebUI: Using It with Ollama
In this guide, we’ll walk you through setting up Open-WebUI and integrating it with Ollama to enhance your local AI workflow.
![](https://i0.wp.com/jonathansblog.co.uk/wp-content/uploads/2025/01/demo.gif?resize=1200%2C675&ssl=1)
What is Open-WebUI?
Open-WebUI is an open-source alternative to platforms like ChatGPT, allowing you to run large language models locally while maintaining full control over your data and privacy. It supports various backends, including Ollama, making it a powerful tool for self-hosted AI applications.
Installing Open-WebUI
Before you begin, ensure that Ollama is installed and running on your machine. If you haven’t installed it yet, check out our Ollama guide for installation instructions.
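If you want to confirm Ollama is reachable before continuing, a quick request to its default port is enough (this assumes a standard install listening on 11434):

```bash
# The Ollama server answers plain HTTP on port 11434 by default;
# a running instance replies with "Ollama is running"
curl http://localhost:11434
```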
Step 1: Clone the Open-WebUI Repository
Open a terminal and run:
```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui
```
Step 2: Set Up the Environment
Create a `.env` file and configure it to use Ollama:
echo "OLLAMA_BASE_URL=http://localhost:11434" > .env
Step 3: Start Open-WebUI
Run the following command to start the web interface:
```bash
docker compose up -d
```
After this, Open-WebUI should be accessible at `http://localhost:3000`.
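If the page doesn’t load, the compose status and logs are the first things to check (take the exact service name from the `ps` output, since it can vary between compose files):

```bash
# Confirm the container is up and note its service name
docker compose ps
# Follow the logs for startup errors (swap in your service name if it differs)
docker compose logs -f open-webui
```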
Connecting Open-WebUI to Ollama
Since we’ve already set `OLLAMA_BASE_URL`, Open-WebUI will automatically detect and connect to your local Ollama instance (a quick way to verify this from the terminal follows the list below). You can now:
- View installed models
- Run queries against different models
- Manage and fine-tune responses through a web-based chat interface
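To run that verification, ask Ollama for the same model list Open-WebUI reads. Its documented `/api/tags` endpoint returns the locally installed models as JSON:

```bash
# List installed models straight from the Ollama API
curl http://localhost:11434/api/tags
```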
Running Models Through Open-WebUI
Once Open-WebUI is up and running, follow these steps:
- Open a browser and go to `http://localhost:3000`
- Select a model from the dropdown (e.g., `mistral` or `deepseek-r1`)
- Type a query and press Enter to interact with the model
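If a model misbehaves in the browser, you can reproduce the same round trip against Ollama’s API directly, which narrows down whether the problem sits in Open-WebUI or in the model itself (this assumes `mistral` has already been pulled):

```bash
# Send a one-off, non-streaming prompt straight to Ollama
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```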
Adding More Models to Open-WebUI
To install additional models via Ollama, use:
```bash
ollama pull <model-name>
```
For example, to add `deepseek-r1`:
```bash
ollama pull deepseek-r1
```
Once added, refresh Open-WebUI to see the new model in the dropdown.
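If the model doesn’t show up after a refresh, confirm from the terminal that the pull actually completed:

```bash
# deepseek-r1 should appear among the locally installed models
ollama list
```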
Stopping and Restarting Open-WebUI
To stop the web UI:
```bash
docker compose down
```
To restart it:
```bash
docker compose up -d
```
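To update Open-WebUI itself later on, the standard compose upgrade pattern applies: pull newer images, then recreate the containers. Assuming the stock compose file, which stores application data in a named volume, your chats and settings survive the recreate:

```bash
# Fetch newer images referenced by the compose file, then recreate
docker compose pull
docker compose up -d
```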