Integrating Ollama AI Models and Open WebUI with Docker: A Step-by-Step Guide



Introduction

Ollama is an open-source framework for running large language models (LLMs) locally and efficiently, while Open WebUI is a user-friendly interface that simplifies interaction with Ollama-hosted models. Both can be hosted on your local machine, and by using Docker we can containerize these components for a seamless, reproducible setup across different environments. This guide will walk you through integrating Ollama and Open WebUI with Docker.

Prerequisites

  • Windows 11 with WSL 2 enabled
  • Docker installed and running within WSL 2
  • Basic knowledge of Docker commands and YAML configuration
  • Basic understanding of how LLMs work, the Ollama framework, and Ollama model hosting

Setting Up Open WebUI

Open WebUI allows users to interact with AI models easily. You can install it directly on WSL 2 or inside a Docker container; for the container option, the simplest route is to pull the ready-made image from the GitHub Container Registry and run it in your WSL environment. We will configure Open WebUI within a Docker container to keep deployment straightforward.

Steps to Install Open WebUI:

  1. Pull the Open WebUI Docker image:
    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run the Open WebUI container:
    docker run -d --name open-webui -p 3000:8080 ghcr.io/open-webui/open-webui:main
    
  3. Access the interface at http://localhost:3000 in your browser (the container listens on port 8080 internally, which the command above maps to 3000). Keep in mind that the frontend will not show any Ollama models yet. You can download models through the WebUI frontend, but models fetched that way are deleted along with the container or image (depending on how you set things up). The alternative is to download a model separately, find the location where it is stored, and attach that location as a volume (the -v option) in the docker run command above.
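Before attaching a host directory as a volume, it helps to confirm which models are already downloaded there. The sketch below walks an Ollama model store and lists the models it finds; the directory layout (a `manifests/<registry>/<namespace>/<model>/<tag>` tree under `~/.ollama/models`) is an assumption based on current Ollama behavior and may change between versions.

```python
import os

def list_local_models(base_dir: str) -> list[str]:
    """Walk an Ollama model store and return model:tag names found in the
    manifests tree (layout assumed: manifests/<registry>/<ns>/<model>/<tag>)."""
    manifests = os.path.join(base_dir, "manifests")
    models = []
    if not os.path.isdir(manifests):
        return models  # nothing downloaded yet, or wrong path
    for root, _dirs, files in os.walk(manifests):
        for tag in files:
            # each tag file sits inside a directory named after the model
            model = os.path.basename(root)
            models.append(f"{model}:{tag}")
    return sorted(models)

# Point this at the directory you plan to mount with -v:
print(list_local_models(os.path.expanduser("~/.ollama/models")))
```

If this prints an empty list, there is nothing to mount yet and you still need to pull a model first.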

Configuring Docker for Ollama and Open WebUI

There are two ways to set this up.

Method 1: Configuring Ollama on WSL2

Ollama can be installed directly on WSL 2, allowing seamless integration with Open WebUI running in a Docker container. This method generally performs better than running everything inside Docker.

Steps to Install Ollama on WSL2:

  1. Update your system and install the required dependencies:
    sudo apt update && sudo apt install -y curl
    
  2. Download and install Ollama:
    curl -fsSL https://ollama.ai/install.sh | sh
    
  3. Verify the installation:
    ollama --version
    
  4. Start Ollama in the background:
    ollama serve &
    
  5. Download a model (Ollama stores it under ~/.ollama/models, which persists on WSL 2):
    ollama pull openthinker
    
  6. Run the Open WebUI container and point it at the Ollama server running on the host:
    docker run -d --name open-webui -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 ghcr.io/open-webui/open-webui:main
    
  7. Now Open WebUI should detect the models served by Ollama on WSL 2. Note that mounting the model directory alone is not enough: the plain Open WebUI image does not include an Ollama server, so the UI discovers models by talking to Ollama's API, not by reading model files from disk.
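Method 1 leaves Ollama serving on port 11434, and you can confirm which models it exposes with `GET /api/tags`, the endpoint Ollama uses to list local models. The sketch below keeps the live call commented out (it needs a running server) and uses a small pure helper to parse the response; the sample payload is illustrative of the documented response shape.

```python
import json
from urllib.request import urlopen

def model_names(tags_response: dict) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in tags_response.get("models", [])]

# Live call (requires `ollama serve` running on the default port):
#   with urlopen("http://localhost:11434/api/tags") as r:
#       print(model_names(json.load(r)))

# Illustrative sample of the response shape:
sample = {"models": [{"name": "openthinker:latest", "size": 4100000000}]}
print(model_names(sample))  # ['openthinker:latest']
```

Whatever this returns is exactly the model list Open WebUI will offer in its dropdown once it is pointed at the same server.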

Method 2: Configuring Ollama with Docker

To streamline deployment, we will set up a docker-compose.yml file that runs Ollama alongside Open WebUI. Docker Compose is the right tool when multiple containers must run simultaneously and talk to each other to serve a common application.

Create a docker-compose.yml File:

version: "3.8"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./models:/root/.ollama
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

Running the Containers:

  1. Navigate to the directory where docker-compose.yml is saved.
  2. Run the following command to start both containers:
    docker-compose up -d
    
  3. Pull the model into the running Ollama container:
    docker exec ollama ollama pull openthinker
    
  4. Open WebUI will be available at http://localhost:3000, with Ollama serving on port 11434 and the openthinker model selectable in Open WebUI. The Open WebUI container reaches Ollama over the Compose network via the OLLAMA_BASE_URL setting.
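Once both containers are up, you can sanity-check the stack from the command line before opening the browser, using Ollama's `POST /api/generate` endpoint. The sketch below builds a non-streaming request body and pulls the answer out of the JSON reply; the live call is commented out, and the sample reply is only illustrative of the response shape.

```python
import json
from urllib.request import Request, urlopen

def build_request(model: str, prompt: str) -> bytes:
    """Body for POST /api/generate; stream=False yields a single JSON object."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def extract_answer(reply: dict) -> str:
    """The generated text lives in the 'response' field of the reply."""
    return reply.get("response", "")

# Live call (requires the ollama container listening on port 11434):
#   req = Request("http://localhost:11434/api/generate",
#                 data=build_request("openthinker", "Say hello"),
#                 headers={"Content-Type": "application/json"})
#   with urlopen(req) as r:
#       print(extract_answer(json.load(r)))

# Illustrative sample of the reply shape:
sample_reply = {"model": "openthinker", "response": "Hello!", "done": True}
print(extract_answer(sample_reply))  # Hello!
```

If the live call returns text, the Ollama side of the stack works and any remaining issue is in the WebUI configuration.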


Conclusion

By using Docker, we efficiently integrate Ollama AI models with Open WebUI. This setup ensures easy deployment, scalability, and consistent performance. Future enhancements can include setting up persistent storage for models and optimizing resource allocation.

Dr. Hari Thapliyal