
Integrating Ollama AI Models and Open WebUI on Docker

Introduction

Ollama, an open-source framework for running large language models (LLMs) locally, provides a powerful way to work with models such as Meta's Llama efficiently. Open WebUI is a user-friendly interface that simplifies interaction with Ollama-hosted AI models. You can host both Ollama and Open WebUI on your local machine. By using Docker, we can containerize these components, ensuring a seamless and reproducible setup across different environments. This guide walks you through integrating Ollama and Open WebUI with Docker.

Prerequisites

  • Windows 11 with WSL 2 enabled
  • Docker installed and running within WSL 2
  • Basic knowledge of Docker commands and YAML configuration
  • Basic understanding of how LLMs work, the Ollama framework, and Ollama model hosting

Setting Up Open WebUI

Open WebUI allows users to interact with AI models easily. We will configure Open WebUI within a Docker container to make deployment straightforward. You can install Open WebUI on WSL2 directly or inside a Docker container. For the second option, it is best to pull the ready-made image from the project's container registry and run it in your WSL2 Docker environment.

Steps to Install Open WebUI:

  1. Pull the Open WebUI Docker image:
    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run the Open WebUI container (the web interface listens on port 8080 inside the container, and the named volume keeps its data across container recreations):
    docker run -d --name open-webui -p 3000:8080 -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main
    
  3. Access the interface at http://localhost:3000 in your browser. Keep in mind that the frontend will not list any Ollama models yet. You can download models through the Open WebUI frontend, but models stored only inside the container are lost when you remove the container or image (depending on how you set things up). The alternative is to run Ollama separately (on WSL2 or in its own container), keep its downloaded models in a persistent location such as a mounted volume (-v option), and connect Open WebUI to that Ollama instance, as covered in the next section. A quick way to verify that the container itself is running is shown below.
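
As a quick sanity check (a minimal sketch using standard Docker CLI commands; the container name open-webui matches the run command above), confirm the container is up and watch its startup logs:

    # List the running container and its port mapping
    docker ps --filter name=open-webui

    # Follow the startup logs until the web server reports it is listening
    docker logs -f open-webui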

Configuring Docker for Ollama and Open WebUI

There are two ways to connect Ollama with Open WebUI: install Ollama directly on WSL2, or run it in its own Docker container alongside Open WebUI.

Method 1: Configuring Ollama on WSL2

Ollama can be installed directly on WSL2, allowing seamless integration with Open WebUI running in a Docker container. This method provides better performance compared to running everything inside Docker.

Steps to Install Ollama on WSL2:

  1. Update your system and install required dependencies:
    sudo apt update && sudo apt install -y curl
    
  2. Download and install Ollama:
    curl -fsSL https://ollama.ai/install.sh | sh
    
  3. Verify the installation:
    ollama --version
    
  4. Start Ollama in the background, binding to all interfaces so that Docker containers can reach it:
    OLLAMA_HOST=0.0.0.0 ollama serve &
    
  5. Download an Ollama model; when the server runs as your user, models are stored persistently under ~/.ollama/models:
    ollama pull openthinker
    
  6. Run the Open WebUI container and point it at the Ollama server running on WSL2 (remove the container created earlier with docker rm -f open-webui if it is still running):
    docker run -d --name open-webui -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
      -v open-webui:/app/backend/data \
      ghcr.io/open-webui/open-webui:main

  7. Open WebUI should now detect the Ollama server running on WSL2 and list the downloaded openthinker model.
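
If the model does not show up, a quick check (a minimal sketch, assuming curl is available both on WSL2 and inside the Open WebUI image) is to query Ollama's /api/tags endpoint, which lists locally downloaded models, first from WSL2 and then from inside the container:

    # From WSL2: the Ollama API should list the pulled model
    curl http://localhost:11434/api/tags

    # From inside the Open WebUI container, via the host gateway mapping added above
    # (assumes the image ships with curl)
    docker exec open-webui curl -s http://host.docker.internal:11434/api/tags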

Method 2: Configuring Ollama with Docker

To streamline deployment, we will set up a docker-compose.yml file that runs Ollama alongside Open WebUI. Docker Compose is useful when we run multiple containers at the same time and want them to talk to each other to achieve a common application goal.

Create a docker-compose.yml File:

version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./models:/root/.ollama/models
    restart: unless-stopped
    # The image's default command starts the Ollama API server (ollama serve).

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    volumes:
      - open-webui-data:/app/backend/data
    restart: unless-stopped

volumes:
  open-webui-data:
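
Within the Compose project, the open-webui service reaches Ollama through the service name: Compose places both containers on a shared network with built-in DNS, so OLLAMA_BASE_URL=http://ollama:11434 resolves to the ollama container directly. Publishing port 11434 is only needed if you also want to call the Ollama API from the WSL2 host.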

Running the Containers:

  1. Navigate to the directory where docker-compose.yml is saved.
  2. Run the following command to start both containers:
    docker-compose up -d
    
  3. Open WebUI will be available at http://localhost:3000, and Ollama will listen on port 11434. Pull the openthinker model into the running Ollama container as shown below so that it appears in Open WebUI.
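
Because the Compose file starts the Ollama server with its default command, the model still has to be pulled once. A minimal sketch, reusing the openthinker model and the service name ollama from the Compose file above:

    # Download the model inside the running ollama service container
    docker-compose exec ollama ollama pull openthinker

    # Optional: confirm the model is now listed by the Ollama API
    curl http://localhost:11434/api/tags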

Diagram Representation

Conclusion

By using Docker, we can efficiently integrate Ollama AI models with Open WebUI. This setup ensures easy deployment, scalability, and consistent performance. Future enhancements can include setting up persistent storage for models and optimizing resource allocation.
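
As one sketch of the persistent-storage enhancement, the bind mount in the Compose file could be swapped for a named Docker volume that Docker manages independently of the project directory (the volume name ollama-models is an illustrative choice, not something defined earlier):

    services:
      ollama:
        volumes:
          # Named volume instead of a ./models bind mount
          - ollama-models:/root/.ollama/models

    volumes:
      ollama-models: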
