Install ComfyUI with One Command
ComfyUI is a powerful node-based AI image generation workflow tool with over 106K GitHub stars. It lets users visually wire together every stage of a Stable Diffusion pipeline -- text encoding, model loading, sampler selection, ControlNet conditioning, and post-processing -- through a drag-and-drop graph editor. Unlike traditional form-based WebUIs, ComfyUI's node paradigm gives advanced users the freedom to build arbitrarily complex workflows, including multi-model blending, LoRA stacking, inpainting, outpainting, and even video generation.
The project's core strength is its extensibility. The community has contributed thousands of custom nodes covering everything from face restoration and background replacement to animation and 3D rendering. For AI artists, designers, and researchers, ComfyUI represents the most professional local Stable Diffusion deployment available today, offering finer control and better memory efficiency than alternative interfaces.
Why Installing ComfyUI Is Hard
ComfyUI's installation process is one of the most error-prone in the AI tool ecosystem, primarily because it depends heavily on hardware configuration and the Python packaging ecosystem. Here are the most common pain points:
- Python environment chaos -- ComfyUI requires Python 3.10 or 3.11, but many systems ship with 3.8 or 3.12, and version mismatches cause PyTorch installation failures. Users must choose between venv, conda, and pyenv, and ensure pip-installed packages land in the correct virtual environment -- a significant barrier for non-Python developers.
- CUDA / GPU driver configuration -- Using an NVIDIA GPU for acceleration requires a matching combination of CUDA Toolkit, cuDNN, and PyTorch CUDA build. A mismatch produces the dreaded "CUDA not available" error. macOS users need the MPS backend, AMD GPU users need ROCm, and each path has entirely different setup steps.
- Large model downloads -- Stable Diffusion checkpoint files range from 2 to 7 GB. ControlNet models, LoRA weights, and VAE files add hundreds of megabytes each. Download sources can be slow or unreliable, and files must be placed in the correct directory structure -- wrong paths mean models simply do not appear in the UI.
- Custom node dependency conflicts -- Third-party custom nodes often require conflicting versions of the same Python package, causing pip resolver errors. Some nodes also require C/C++ extension compilation, which on Windows demands Visual Studio Build Tools.
- Low VRAM handling -- GPUs with 4-6 GB VRAM need specific optimization flags (such as --lowvram or --force-fp16), but not every model and node supports these options, leading to out-of-memory crashes during generation.
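Done by hand, the environment setup described in the first bullet looks roughly like this (a sketch, assuming the `python3` on your PATH is 3.10 or 3.11; the commented PyTorch line uses the official CUDA 12.1 wheel index as one example):

```shell
# Manual sketch of the setup AIMA automates: an isolated venv plus a
# PyTorch build matched to the hardware.
python3 --version                 # should report 3.10.x or 3.11.x
python3 -m venv comfyui-env       # isolated environment; system Python untouched
. comfyui-env/bin/activate
python -m pip --version           # pip now resolves inside the venv
# Pick the wheel index that matches your hardware (cu118, cu121, or cpu):
# pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```

Getting the last (commented) line right for your specific driver is exactly where most manual installs go wrong.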
How AIMA Solves This
AIMA's AI Agent detects your hardware and software environment, then dynamically selects the optimal installation strategy rather than running a one-size-fits-all script.
- Intelligent Python environment management -- Detects existing Python versions, creates an isolated virtual environment, and installs the matching PyTorch build with all dependencies -- without disturbing system Python.
- Automatic GPU adaptation -- Detects GPU model and driver version, selects the correct PyTorch build (CUDA 11.8/12.1, MPS, or CPU-only), and configures appropriate launch parameters.
- Model directory setup -- Automatically creates the standard models directory structure (checkpoints, loras, controlnet, vae, etc.) and configures ComfyUI's model search paths.
- Launch verification -- After starting ComfyUI, verifies the Web UI is accessible and the GPU is correctly recognized, ensuring the installation actually works.
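The model directory layout mentioned above can be sketched as follows (folder names follow ComfyUI's default `models/` subdirectories; the `./ComfyUI` path is a placeholder for wherever the repository is cloned):

```shell
# Standard ComfyUI model layout; ComfyUI scans these folders on startup.
COMFY=./ComfyUI
mkdir -p "$COMFY/models/checkpoints" \
         "$COMFY/models/loras" \
         "$COMFY/models/controlnet" \
         "$COMFY/models/vae" \
         "$COMFY/models/embeddings"
ls "$COMFY/models"
```

A checkpoint dropped into the wrong folder simply never appears in the UI, which is why getting this structure right up front matters.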
Step-by-Step: Install ComfyUI with AIMA
Step 1: Open a Terminal
On macOS, open Terminal.app or iTerm2. On Linux, open your preferred terminal emulator. On Windows, open PowerShell as Administrator.
Step 2: Run the Install Command
For macOS and Linux:
curl -sL https://aimaserver.com/install/comfyui | bash
For Windows (PowerShell):
iex (irm https://aimaserver.com/install/comfyui)
Step 3: Enter Your Invite Code
After the command runs, the AIMA client starts and prompts you for an invite code. This code links your device to the AIMA platform and activates the AI Agent installation capabilities.
Step 4: AIMA Handles the Rest
AIMA detects your system and GPU configuration, creates a Python virtual environment, installs the correct version of PyTorch, clones the ComfyUI repository, installs all dependencies, and configures the model directory structure. No manual intervention is required.
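The hardware-detection step can be pictured with a rough sketch like this (nvidia-smi as the CUDA signal, uname for Apple Silicon, CPU as the fallback; AIMA's actual logic is more detailed):

```shell
# Simplified GPU-detection order used to choose a PyTorch build.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA GPU found: use a CUDA build of PyTorch"
elif [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
  echo "Apple Silicon found: use the MPS-enabled PyTorch build"
else
  echo "no supported GPU found: use the CPU-only PyTorch build"
fi
```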
Step 5: Open ComfyUI in Your Browser
Once installation is complete, AIMA starts ComfyUI and reports the access URL -- typically http://localhost:8188. Open it in your browser and start creating your first AI image generation workflow.
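To confirm the UI is actually serving, a quick check against the reported URL might look like this (assuming the default port 8188; AIMA may report a different port if 8188 was taken):

```shell
# One-line reachability check for the ComfyUI web UI.
curl -fsS --max-time 3 http://127.0.0.1:8188/ >/dev/null 2>&1 \
  && echo "ComfyUI web UI is reachable" \
  || echo "ComfyUI web UI is not reachable yet"
```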
System Requirements
| Component | Requirement |
|---|---|
| Operating System | macOS 12+, Ubuntu 22.04/24.04, Windows 10/11 |
| Memory (RAM) | 8 GB minimum, 16 GB+ recommended |
| GPU (recommended) | NVIDIA GPU with 4 GB+ VRAM (CUDA); Apple Silicon (MPS); or CPU-only mode |
| Disk Space | 20 GB minimum (Python environment plus at least one SD checkpoint) |
| Python | Managed automatically by AIMA -- no pre-install needed |
Common Issues AIMA Handles Automatically
- PyTorch CUDA version mismatch -- Detects the NVIDIA driver version and selects a compatible PyTorch + CUDA combination.
- Incompatible Python version -- Installs the right Python version and runs ComfyUI in an isolated virtual environment without affecting the system Python.
- Custom node installation failures -- Handles compilation dependencies (e.g., Build Tools on Windows) and installs required Python packages in the correct order.
- Port 8188 occupied -- Detects the conflict and switches to an available port automatically.
- Model path misconfiguration -- Creates the standard directory structure and configures extra_model_paths.yaml.
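As a sketch, an extra_model_paths.yaml entry that points ComfyUI at an existing AUTOMATIC1111 install might look like the following (the section layout mirrors the extra_model_paths.yaml.example file ComfyUI ships; the base_path is a placeholder to replace with your own install location):

```yaml
# Hypothetical paths: adjust base_path to your own WebUI install.
a111:
  base_path: /path/to/stable-diffusion-webui
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: models/Lora
```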
Frequently Asked Questions
Can I use ComfyUI without a GPU?
Yes. ComfyUI supports CPU-only mode, and AIMA will automatically detect the absence of a dedicated GPU and install the CPU version of PyTorch. However, generation times increase significantly in CPU mode (several minutes to over ten minutes per image), so it is best suited for testing and learning purposes.
Does AIMA download Stable Diffusion models for me?
AIMA sets up the model directory structure and can assist with downloading a base model as needed. Since SD model files are large (2-7 GB each), actual download time depends on your network speed. AIMA places models in the correct directories so ComfyUI detects them immediately.
How well does ComfyUI run on macOS?
Apple Silicon Macs (M1/M2/M3/M4) deliver solid performance through the MPS backend. AIMA automatically detects Apple Silicon and installs the MPS-optimized PyTorch build. Intel Macs are limited to CPU-only mode, which is considerably slower.
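On a Mac you can sanity-check the MPS backend yourself with a small probe like this (a sketch; `torch.backends.mps.is_available()` is PyTorch's own API, and the fallback message covers environments where PyTorch is not installed):

```shell
# Check whether the MPS backend is visible to PyTorch.
python3 - <<'PY' 2>/dev/null || echo "PyTorch not installed in this environment"
import torch
print("MPS available:", torch.backends.mps.is_available())
PY
```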
How do I install custom nodes for ComfyUI?
After AIMA completes the base installation, you can use ComfyUI Manager (a community-built management tool) to browse and install custom nodes, or manually clone node repositories into the custom_nodes directory. If a node install fails due to dependency issues, you can use AIMA again for assistance.
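A manual install of a custom node follows this pattern (a sketch; ComfyUI-Manager's public GitHub path is shown as the example, and the clone and pip steps are left commented so you can review them before running):

```shell
# Manual custom-node install: clone into custom_nodes, then install
# the node's own Python dependencies if it ships a requirements file.
NODE_DIR="ComfyUI/custom_nodes"
mkdir -p "$NODE_DIR"
# git clone https://github.com/ltdrdata/ComfyUI-Manager.git "$NODE_DIR/ComfyUI-Manager"
# pip install -r "$NODE_DIR/ComfyUI-Manager/requirements.txt"
echo "custom nodes directory: $NODE_DIR"
```

ComfyUI picks up new nodes on restart, so relaunch the server after cloning.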
What is the difference between ComfyUI and Stable Diffusion WebUI?
Stable Diffusion WebUI (AUTOMATIC1111) offers a traditional form-based interface that is easier to learn but less flexible. ComfyUI uses a node-based workflow editor with a steeper learning curve but far more powerful capabilities, especially for complex workflows. ComfyUI is also more memory-efficient, typically handling larger images on the same hardware.
Ready to install ComfyUI?
One command. AIMA handles Python, GPU drivers, and every dependency.