Install Open WebUI with One Command

Open WebUI is a feature-rich, self-hosted AI chat interface with over 128K GitHub stars, making it the most popular open-source alternative to ChatGPT. It provides a modern, ChatGPT-like web interface that connects to Ollama local models, the OpenAI API, and other OpenAI-compatible LLM services. You can run large language models entirely on your own hardware, preserving data privacy while avoiding per-call API charges.

Open WebUI supports multi-user management, conversation history, file upload and analysis, RAG knowledge bases, model parameter tuning, Markdown rendering, code highlighting, and more -- delivering an experience that rivals or exceeds commercial products. Its deep integration with Ollama means you can easily run popular open-source models like Llama 3, Mistral, Gemma, Qwen, and DeepSeek. For individuals and teams who want private AI chat capabilities on their own infrastructure, Open WebUI is the go-to solution.

Why Installing Open WebUI Is Hard

Although Open WebUI itself is a web application, the full installation pipeline involves multiple independent components that must work together. This is where most users get stuck:

  • Ollama installation and integration -- Open WebUI's core value is connecting to local models, which requires Ollama to be installed first. Ollama installs differently on each OS (macOS: .app, Linux: curl script, Windows: installer package). After installation, you must verify the Ollama service is running and that Open WebUI can reach it at the correct API address.
  • Model pulling takes time -- After installing Ollama, you need to pull at least one model before you can start chatting. Even smaller models like Llama 3 8B are several GB, and larger ones like Qwen 72B exceed 40 GB. Users often face interrupted downloads, slow speeds, and uncertainty about which models their hardware can actually run.
  • GPU passthrough configuration -- To let Ollama use GPU acceleration, Linux users must install the NVIDIA Container Toolkit and configure Docker GPU passthrough. Windows users need to verify WSL2 GPU support. Misconfiguration means models run on CPU only, with inference speeds 10x slower or worse.
  • Docker networking and service boot order -- Open WebUI runs inside a Docker container and needs to reach the Ollama service running on the host. Docker network isolation and different Ollama listen addresses (127.0.0.1 vs 0.0.0.0) frequently produce "connection refused" errors. Incorrect service startup order can also cause Open WebUI to fail on launch because Ollama is not yet ready.
  • Reverse proxy and HTTPS -- Accessing Open WebUI from other devices on your network or the public internet requires nginx reverse proxy configuration and SSL certificates. WebSocket proxy misconfiguration is a common cause of broken streaming output.
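When debugging these failures by hand, the first thing to check is whether anything is listening on Ollama's default port (11434). A minimal pure-bash probe, assuming the default host and port:

```shell
# Return success if a TCP connection to host:port succeeds.
# Defaults match Ollama's standard listen address (127.0.0.1:11434).
ollama_listening() {
  local host=${1:-127.0.0.1} port=${2:-11434}
  (echo -n > "/dev/tcp/$host/$port") 2>/dev/null
}

if ollama_listening; then
  echo "something is listening on 11434"
else
  echo "nothing on 11434 -- start Ollama (e.g. 'ollama serve') and retry"
fi
```

A TCP probe only confirms the port is open; `curl http://127.0.0.1:11434/api/tags` additionally verifies the Ollama API itself is responding.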

How AIMA Solves This

AIMA collapses the multi-step process of "install Ollama, pull a model, install Docker, configure networking, start Open WebUI" into a single command that handles everything end to end.

  • Automatic Ollama install and startup -- Detects whether Ollama is installed, installs it using the method appropriate for your OS if missing, and ensures the service is running and listening.
  • Smart model recommendations -- Based on your hardware (GPU VRAM, system RAM), recommends an appropriate model size and auto-pulls a base model so you have something ready to chat with immediately.
  • Network configuration handled -- Automatically configures Docker-to-Ollama networking, sets the correct API address and port mappings, and resolves container isolation issues.
  • End-to-end verification -- After installation, verifies the Open WebUI interface is accessible, Ollama connectivity is healthy, and at least one model is available, ensuring the install is truly complete.
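For comparison, the manual Docker launch that AIMA automates looks roughly like the command below. It is printed rather than executed so it can be reviewed first; the image name follows the Open WebUI project's published instructions, while the port and volume name are common defaults, not necessarily what AIMA chooses.

```shell
# Build the docker run invocation Open WebUI's docs describe for reaching
# an Ollama instance on the host. host.docker.internal needs the
# --add-host flag on Linux; it resolves automatically on macOS/Windows.
OLLAMA_BASE_URL="http://host.docker.internal:11434"
run_cmd="docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=$OLLAMA_BASE_URL \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main"
echo "$run_cmd"
```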

Step-by-Step: Install Open WebUI with AIMA

Step 1: Open a Terminal

On macOS, open Terminal.app or iTerm2. On Linux, open your preferred terminal emulator. On Windows, open PowerShell as Administrator.

Step 2: Run the Install Command

For macOS and Linux:

curl -sL https://aimaserver.com/install/open-webui | bash

For Windows (PowerShell):

iex (irm https://aimaserver.com/install/open-webui)

Step 3: Enter Your Invite Code

After the command runs, the AIMA client starts and prompts you for an invite code. This code links your device to the AIMA platform and activates the AI Agent installation capabilities.

Step 4: AIMA Handles the Rest

AIMA automatically installs Ollama (if not present), pulls a base model suited to your hardware, installs Docker (if needed), pulls the Open WebUI container image, configures the Ollama connection, and starts all services.

Step 5: Open Your Browser and Start Chatting

Once installation is complete, AIMA reports the access URL -- typically http://localhost:3000 or http://localhost:8080. Open it in your browser, create an admin account, and start chatting with your local AI models.
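If you want to double-check the result yourself, a quick probe of the web interface looks like this (port 3000 is an assumption; use whatever URL AIMA reported):

```shell
# Check Open WebUI's health endpoint (default port 3000 assumed;
# Open WebUI serves a simple /health route).
webui_up() {
  curl -fsS --max-time 3 "${1:-http://localhost:3000/health}" >/dev/null 2>&1
}

if webui_up; then
  echo "Open WebUI is responding"
else
  echo "no response -- check 'docker ps' and the port AIMA printed"
fi
```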

System Requirements

  • Operating System -- macOS 12+, Ubuntu 22.04/24.04, Windows 10/11
  • Memory (RAM) -- 8 GB minimum (for 7B models), 16 GB+ recommended (for 13B+ models)
  • GPU (recommended) -- NVIDIA GPU with 6 GB+ VRAM; Apple Silicon (unified memory); or CPU-only mode
  • Disk Space -- At least 15 GB (Docker images + one 7B model)
  • Network -- Internet connection required to pull container images and models
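AIMA's hardware-based recommendation reduces to a mapping along these lines. The thresholds mirror the requirements table; the function name is illustrative, and AIMA's actual heuristic also weighs GPU VRAM:

```shell
# Suggest a model size class from available system RAM in GB.
# Thresholds follow the table above; real logic would also check VRAM.
suggest_model_size() {
  local ram_gb=$1
  if   [ "$ram_gb" -ge 16 ]; then echo "13B+ models"
  elif [ "$ram_gb" -ge 8 ];  then echo "7B-8B models"
  else                            echo "small models (3B or less)"
  fi
}

suggest_model_size 16   # 13B+ models
```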

Common Issues AIMA Handles Automatically

  • Ollama service not running -- Detects the service status and auto-starts Ollama, waiting until it is ready before proceeding.
  • Docker container cannot reach Ollama -- Configures the OLLAMA_BASE_URL environment variable and network settings so the container can access the host's Ollama API.
  • Ports 3000/8080 occupied -- Detects the conflict and switches to an available port automatically.
  • GPU passthrough not configured -- On Linux, auto-installs the NVIDIA Container Toolkit. On Windows, validates WSL2 GPU support.
  • No models available -- Automatically pulls a base model sized for your hardware so you can start chatting right away.
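The port-conflict handling can be sketched as a small search for the first free port (pure bash; the starting port is an example):

```shell
# Find the first TCP port at or above $1 with nothing listening on it.
find_free_port() {
  local port=$1
  while (echo -n > "/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

find_free_port 3000
```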

Frequently Asked Questions

Can Open WebUI connect to the OpenAI API?

Yes. Open WebUI supports both Ollama local models and OpenAI-compatible APIs. You can add your OpenAI API key in the Open WebUI settings page, or configure any third-party API that follows the OpenAI format (Anthropic, DeepSeek, Qwen, etc.). AIMA configures the Ollama connection by default; you can add additional model providers at any time through the web interface.

Can multiple people share one Open WebUI instance?

Yes. Open WebUI includes a built-in multi-user management system. The first user to register becomes the administrator and can manage other users' registration and permissions. Each user's conversation history, settings, and uploaded files are fully isolated from one another.

Where is my conversation data stored?

Everything stays on your local machine. Open WebUI uses a SQLite database for conversation history and user settings, stored in a Docker volume by default. Ollama model files are saved to local disk. Your conversations are never uploaded to any external server.

Which models does Open WebUI support?

Through Ollama, Open WebUI supports hundreds of open-source models including Llama 3, Mistral, Gemma, Qwen, DeepSeek, CodeLlama, Phi, and many more. You can browse all available models in the Ollama model library and pull new ones directly from the Open WebUI interface.

How do I make Open WebUI accessible to other devices on my network?

If the Docker port mapping binds only to 127.0.0.1, Open WebUI is reachable solely from the host machine. To allow LAN access, change the port mapping to bind on 0.0.0.0, confirm your firewall permits inbound traffic on that port, then access it from other devices by entering the host machine's local IP address and port in the browser.
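If the port does turn out to be bound to 127.0.0.1 only, the fix is to recreate the container with the port published on all interfaces. The command below is printed rather than executed so you can adapt it first; the container name, volume name, and ports are assumptions matching the defaults used earlier in this guide:

```shell
# Recreate the container with the web port published on 0.0.0.0.
# Container/volume names and ports are assumptions; match them to
# what 'docker ps' shows on your machine.
rebind_cmd="docker rm -f open-webui && \
docker run -d -p 0.0.0.0:3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main"
echo "$rebind_cmd"
```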

Ready to install Open WebUI?

One command. AIMA handles Ollama, model downloads, and all the configuration in between.