Commit cd214093 by Muyang Li: merge pull request #530 from mit-han-lab/dev.

Installation
============
We provide step-by-step tutorial videos to help you install and use **Nunchaku on Windows**,
available in both `English <nunchaku_windows_tutorial_en_>`_ and `Chinese <nunchaku_windows_tutorial_zh_>`_.
You can also follow the corresponding text guide at :doc:`Windows Setup Guide <setup_windows>`.
If you encounter any issues, these resources are a good place to start.
(Recommended) Option 1: Installing Prebuilt Wheels
--------------------------------------------------
Prerequisites
^^^^^^^^^^^^^
Ensure that you have `PyTorch ≥ 2.5 <pytorch_home_>`_ installed. For example, to install **PyTorch 2.7 with CUDA 12.8**, use:

.. code-block:: shell

   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128

Installing Nunchaku
^^^^^^^^^^^^^^^^^^^
Once PyTorch is installed, you can install ``nunchaku`` from one of the following sources:
- `GitHub Releases <nunchaku_github_releases_>`_
- `Hugging Face <nunchaku_huggingface_>`_
- `ModelScope <nunchaku_modelscope_>`_

Choose the wheel that matches your Python version, PyTorch version, and platform. For example, for Python 3.11 and PyTorch 2.7 on Linux:

.. code-block:: shell

   pip install https://github.com/mit-han-lab/nunchaku/releases/download/v0.3.1/nunchaku-0.3.1+torch2.7-cp311-cp311-linux_x86_64.whl

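To see which tags a matching wheel name must contain, you can ask the interpreter itself. This is a small sketch: it prints the CPython ABI tag (e.g. ``cp311``) and a platform tag (e.g. ``linux_x86_64`` or ``win_amd64``) derived from the standard library; compare them against the wheel filenames on the release page.

```python
import sys
import sysconfig

# ABI tag of the running interpreter, e.g. "cp311" for CPython 3.11.
abi_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"

# Platform tag in wheel-name form, e.g. "linux_x86_64" or "win_amd64".
platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")

print(abi_tag, platform_tag)
```

Both fragments should appear verbatim in the wheel filename you install.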
For ComfyUI Users
^^^^^^^^^^^^^^^^^
If you're using the **ComfyUI portable package**,
ensure that ``nunchaku`` is installed into the Python environment bundled with ComfyUI. You can either:
- Use our **NunchakuWheelInstaller Node** in `ComfyUI-nunchaku <comfyui_nunchaku_>`_, or
- Manually install the wheel using the correct Python path.
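Either way, you can confirm which interpreter you are about to install into by asking Python itself:

```python
import sys

# Path of the interpreter running this snippet. For the ComfyUI portable
# package this should point inside the ComfyUI folder (a path like
# ...\ComfyUI\python\python.exe), not a system-wide Python.
print(sys.executable)
```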
Option 1: Using NunchakuWheelInstaller
""""""""""""""""""""""""""""""""""""""
With `ComfyUI-nunchaku <comfyui_nunchaku_>`_ v0.3.2+, you can install Nunchaku using the provided `workflow <comfyui_nunchaku_wheel_installation_workflow_>`_ directly in ComfyUI.
.. image:: https://huggingface.co/mit-han-lab/nunchaku-artifacts/resolve/main/ComfyUI-nunchaku/assets/install_wheel.png
Option 2: Manual Installation
"""""""""""""""""""""""""""""
To find the correct Python path:

1. Launch ComfyUI.
2. Check the console log for a line like:

   .. code-block:: text

      ** Python executable: G:\ComfyUI\python\python.exe

3. Use that executable to install the wheel manually:

   .. code-block:: bat

      "G:\ComfyUI\python\python.exe" -m pip install <your-wheel-file>.whl

**Example:** Installing for Python 3.11 and PyTorch 2.7:

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -m pip install https://github.com/mit-han-lab/nunchaku/releases/download/v0.3.1/nunchaku-0.3.1+torch2.7-cp311-cp311-win_amd64.whl

For Blackwell GPUs (50-series)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you're using a **Blackwell (RTX 50-series)** GPU:
- Use **PyTorch ≥ 2.7** with **CUDA ≥ 12.8**.
- Use **FP4 models** instead of **INT4 models** for best compatibility and performance.
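One way to check whether you are on a Blackwell part is to query the compute capability via ``nvidia-smi``. This is a sketch: the ``compute_cap`` query field requires a reasonably recent NVIDIA driver, and the helper simply returns ``None`` when no NVIDIA GPU or driver is present.

```python
import subprocess


def compute_capability():
    """Return the GPU compute capability as a string (e.g. "12.0" for
    Blackwell / sm_120), or None if nvidia-smi is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        lines = out.stdout.strip().splitlines()
        return lines[0].strip() if lines else None
    except (OSError, subprocess.CalledProcessError):
        return None


cap = compute_capability()
if cap is not None and float(cap) >= 10.0:
    print("Blackwell or newer detected: prefer FP4 models and CUDA >= 12.8")
```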
Option 2: Build from Source
---------------------------
Requirements
^^^^^^^^^^^^
- **CUDA version**:

  - Linux: ≥ 12.2
  - Windows: ≥ 12.6
  - Blackwell GPUs: CUDA ≥ 12.8 required

- **Compiler**:

  - Linux: ``gcc/g++ >= 11``
  - Windows: Latest **MSVC** via `Visual Studio <visual_studio_>`_

.. note::

   Currently supported GPU architectures:

   - ``sm_75`` (Turing: RTX 2080)
   - ``sm_80`` (Ampere: A100)
   - ``sm_86`` (Ampere: RTX 3090, A6000)
   - ``sm_89`` (Ada: RTX 4090)
   - ``sm_120`` (Blackwell: RTX 5090)

Step 1: Set Up Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: shell

   conda create -n nunchaku python=3.11
   conda activate nunchaku

   # Install PyTorch
   pip install torch torchvision torchaudio

   # Install dependencies
   pip install ninja wheel diffusers transformers accelerate sentencepiece protobuf huggingface_hub

   # Optional: For gradio demos
   pip install peft opencv-python gradio spaces

For Blackwell users (50-series), install PyTorch ≥ 2.7 with CUDA ≥ 12.8:

.. code-block:: shell

   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128

Step 2: Build and Install Nunchaku
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**For Linux (if** ``gcc/g++`` **is not recent enough):**

.. code-block:: shell

   conda install -c conda-forge gxx=11 gcc=11

For Windows users, download and install the latest `Visual Studio <visual_studio_>`_ and use its development environment. See the :doc:`Windows Setup Guide <setup_windows>` for more details.
**Clone and build:**

.. code-block:: shell

   git clone https://github.com/mit-han-lab/nunchaku.git
   cd nunchaku
   git submodule init
   git submodule update
   python setup.py develop

**To build a wheel for distribution:**

.. code-block:: shell

   NUNCHAKU_INSTALL_MODE=ALL NUNCHAKU_BUILD_WHEELS=1 python -m build --wheel --no-isolation

.. important::

   Set ``NUNCHAKU_INSTALL_MODE=ALL`` to ensure the wheel works on all supported GPU architectures. Otherwise, it may only run on the GPU type used for building.

Windows Setup Guide
===================
Environment Setup
-----------------
1. Install CUDA
^^^^^^^^^^^^^^^^
Download and install the latest CUDA Toolkit from the official `NVIDIA CUDA Downloads <nvidia_cuda_downloads_>`_. After installation, verify the installation:

.. code-block:: bat

   nvcc --version

2. Install Visual Studio C++ Build Tools
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Download from the official `Visual Studio Build Tools page <visual_studio_>`_. During installation, select the following workloads:
- **Desktop development with C++**
- **C++ tools for Linux development**
3. Install Git
^^^^^^^^^^^^^^
Download Git from `https://git-scm.com/downloads/win <git_downloads_win_>`_ and follow the installation steps.
4. (Optional) Install Conda
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Conda helps manage Python environments. You can install either Anaconda or Miniconda from the `official site <anaconda_download_>`_.
5. (Optional) Install ComfyUI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are several ways to install ComfyUI. For example, once Python is installed, you can install it via the ComfyUI CLI:

.. code-block:: bat

   pip install comfy-cli
   comfy install

To launch ComfyUI:

.. code-block:: bat

   comfy launch

Installing Nunchaku on Windows
-------------------------------
Step 1: Identify Your Python Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To ensure correct installation, you need to find the Python interpreter used by ComfyUI.
Launch ComfyUI and look for this line in the log:

.. code-block:: text

   ** Python executable: G:\ComfyUI\python\python.exe

Then verify the Python version and installed PyTorch version:

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" --version
   "G:\ComfyUI\python\python.exe" -m pip show torch

Step 2: Install PyTorch (≥2.5) if you haven't
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install PyTorch appropriate for your setup:
- **For most users**:

  .. code-block:: bat

     "G:\ComfyUI\python\python.exe" -m pip install torch==2.6 torchvision==0.21 torchaudio==2.6

- **For RTX 50-series GPUs** (requires PyTorch ≥2.7 with CUDA 12.8):

  .. code-block:: bat

     "G:\ComfyUI\python\python.exe" -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

Step 3: Install Nunchaku
^^^^^^^^^^^^^^^^^^^^^^^^^
Option 1: Use NunchakuWheelInstaller Node in ComfyUI
""""""""""""""""""""""""""""""""""""""""""""""""""""
With `ComfyUI-nunchaku <comfyui_nunchaku_>`_ v0.3.2+, you can install Nunchaku using the provided `workflow <comfyui_nunchaku_wheel_installation_workflow_>`_ directly in ComfyUI.
.. image:: https://huggingface.co/mit-han-lab/nunchaku-artifacts/resolve/main/ComfyUI-nunchaku/assets/install_wheel.png
Option 2: Manually Install Prebuilt Wheels
"""""""""""""""""""""""""""""""""""""""""""
You can install Nunchaku wheels from one of the following:
- `Hugging Face <nunchaku_huggingface_>`_
- `ModelScope <nunchaku_modelscope_>`_
- `GitHub Releases <nunchaku_github_releases_>`_
Example (for Python 3.11 + PyTorch 2.7):

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -m pip install https://github.com/mit-han-lab/nunchaku/releases/download/v0.3.1/nunchaku-0.3.1+torch2.7-cp311-cp311-win_amd64.whl

To verify the installation:

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -c "import nunchaku"

You can also run a test (requires a Hugging Face token for downloading the models):

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -m huggingface_hub.commands.huggingface_cli login
   "G:\ComfyUI\python\python.exe" -m nunchaku.test

Option 3: Build Nunchaku from Source
""""""""""""""""""""""""""""""""""""
Please use CMD instead of PowerShell for building.
Step 1: Install Build Tools

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -m pip install ninja setuptools wheel build

Step 2: Clone the Repository

.. code-block:: bat

   git clone https://github.com/mit-han-lab/nunchaku.git
   cd nunchaku
   git submodule init
   git submodule update

Step 3: Set Up Visual Studio Environment
Locate the ``VsDevCmd.bat`` script on your system. Example path:

.. code-block:: text

   C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat

Then run:

.. code-block:: bat

   "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat" -startdir=none -arch=x64 -host_arch=x64
   set DISTUTILS_USE_SDK=1

Step 4: Build Nunchaku

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" setup.py develop

Verify with:

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -c "import nunchaku"

You can also run a test (requires a Hugging Face token):

.. code-block:: bat

   "G:\ComfyUI\python\python.exe" -m huggingface_hub.commands.huggingface_cli login
   "G:\ComfyUI\python\python.exe" -m nunchaku.test

(Optional) Step 5: Build a Wheel for Portable Python

If building directly with the portable Python fails, build a wheel instead:

.. code-block:: bat

   set NUNCHAKU_INSTALL_MODE=ALL
   "G:\ComfyUI\python\python.exe" -m build --wheel --no-isolation

Use Nunchaku in ComfyUI
-----------------------
1. Install the Plugin
^^^^^^^^^^^^^^^^^^^^^
Clone the `ComfyUI-nunchaku <comfyui_nunchaku_>`_ plugin into the ``custom_nodes`` folder:

.. code-block:: bat

   cd ComfyUI/custom_nodes
   git clone https://github.com/mit-han-lab/ComfyUI-nunchaku.git

Alternatively, install it using `ComfyUI-Manager <comfyui_manager_>`_ or ``comfy-cli``.
2. Download Models
^^^^^^^^^^^^^^^^^^
**Standard FLUX.1-dev Models**
Start by downloading the standard `FLUX.1-dev text encoders <https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main>`__ and `VAE <https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors>`__. You can also optionally download the original `BF16 FLUX.1-dev <https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/flux1-dev.safetensors>`__ model. An example command:

.. code-block:: bat

   huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir models/text_encoders
   huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir models/text_encoders
   huggingface-cli download black-forest-labs/FLUX.1-schnell ae.safetensors --local-dir models/vae
   huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir models/diffusion_models

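After downloading, you can sanity-check that the files landed where ComfyUI expects them. This is a hypothetical helper, not part of Nunchaku; paths are relative to your ComfyUI root, so adjust ``root`` if you run it elsewhere.

```python
from pathlib import Path

# Report which of the expected model files exist under the ComfyUI root.
root = Path(".")  # adjust to your ComfyUI root
expected = [
    "models/text_encoders/clip_l.safetensors",
    "models/text_encoders/t5xxl_fp16.safetensors",
    "models/vae/ae.safetensors",
]
for rel in expected:
    status = "OK" if (root / rel).is_file() else "missing"
    print(f"{rel}: {status}")
```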
**Nunchaku 4-bit FLUX.1-dev Models**
Next, download the Nunchaku 4-bit models to ``models/diffusion_models``:
- For **50-series GPUs**, use the `FP4 model <nunchaku_flux1_dev_fp4_>`_.
- For **other GPUs**, use the `INT4 model <nunchaku_flux1_dev_int4_>`_.
**(Optional) Download Sample LoRAs**
You can test with some sample LoRAs like `FLUX.1-Turbo <turbo_lora_>`_ and `Ghibsky <ghibsky_lora_>`_. Place these files in the ``models/loras`` directory:

.. code-block:: bat

   huggingface-cli download alimama-creative/FLUX.1-Turbo-Alpha diffusion_pytorch_model.safetensors --local-dir models/loras
   huggingface-cli download aleksa-codes/flux-ghibsky-illustration lora.safetensors --local-dir models/loras

3. Set Up Workflows
^^^^^^^^^^^^^^^^^^^
To use the official workflows, download them from `ComfyUI-nunchaku <comfyui_nunchaku_>`_ and place them in your ``ComfyUI/user/default/workflows`` directory. For example:

.. code-block:: bat

   :: From the root of your ComfyUI folder
   xcopy /E /I custom_nodes\ComfyUI-nunchaku\example_workflows user\default\workflows\nunchaku_examples

You can now launch ComfyUI and try running the example workflows.
.. _svdquant_paper: http://arxiv.org/abs/2411.05007
.. _deepcompressor_repo: https://github.com/mit-han-lab/deepcompressor
.. _pytorch_home: https://pytorch.org/
.. _flux_repo: https://github.com/black-forest-labs/flux
.. _diffusers_repo: https://github.com/huggingface/diffusers
.. _nunchaku_github_releases: https://github.com/mit-han-lab/nunchaku/releases
.. _nunchaku_huggingface: https://huggingface.co/mit-han-lab/nunchaku/tree/main
.. _nunchaku_modelscope: https://modelscope.cn/models/Lmxyy1999/nunchaku
.. _comfyui_nunchaku: https://github.com/mit-han-lab/ComfyUI-nunchaku
.. _comfyui_nunchaku_wheel_installation_workflow: https://github.com/mit-han-lab/ComfyUI-nunchaku/blob/main/example_workflows/install_wheel.json
.. _comfyui_manager: https://github.com/Comfy-Org/ComfyUI-Manager
.. _nvidia_cuda_downloads: https://developer.nvidia.com/cuda-downloads
.. _visual_studio: https://visualstudio.microsoft.com/visual-cpp-build-tools/
.. _git_downloads_win: https://git-scm.com/downloads/win
.. _anaconda_download: https://www.anaconda.com/download/success
.. _nunchaku_windows_tutorial_en: https://youtu.be/YHAVe-oM7U8?si=cM9zaby_aEHiFXk0
.. _nunchaku_windows_tutorial_zh: https://www.bilibili.com/video/BV1BTocYjEk5/?share_source=copy_web&vd_source=8926212fef622f25cc95380515ac74ee
.. _nunchaku_repo: https://github.com/mit-han-lab/nunchaku
.. _ghibsky_lora: https://huggingface.co/aleksa-codes/flux-ghibsky-illustration
.. _turbo_lora: https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha
.. _nunchaku_flux1_dev_fp4: https://huggingface.co/mit-han-lab/nunchaku-flux.1-dev/blob/main/svdq-fp4_r32-flux.1-dev.safetensors
.. _nunchaku_flux1_dev_int4: https://huggingface.co/mit-han-lab/nunchaku-flux.1-dev/blob/main/svdq-int4_r32-flux.1-dev.safetensors
.. _to_diffusers_lora: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/lora.py#L100
.. _to_nunchaku_lora: https://github.com/mit-han-lab/nunchaku/blob/main/nunchaku/lora/flux/nunchaku_converter.py#L442
.. _flux1_tools: https://bfl.ai/announcements/24-11-21-tools
.. _controlnet_union_pro: https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro
.. _controlnet_union_pro2: https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0
.. _fbcache: https://github.com/chengzeyi/ParaAttention?tab=readme-ov-file#first-block-cache-our-dynamic-caching
.. _pulid_paper: https://arxiv.org/abs/2404.16022
.. _flux1_kontext_dev: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev
nunchaku.caching.diffusers\_adapters.flux
=========================================

.. automodule:: nunchaku.caching.diffusers_adapters.flux
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.caching.diffusers\_adapters
====================================

.. automodule:: nunchaku.caching.diffusers_adapters
   :members:
   :undoc-members:
   :show-inheritance:

.. toctree::
   :maxdepth: 4

   nunchaku.caching.diffusers_adapters.flux
   nunchaku.caching.diffusers_adapters.sana

nunchaku.caching.diffusers\_adapters.sana
=========================================

.. automodule:: nunchaku.caching.diffusers_adapters.sana
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.caching
================

.. toctree::
   :maxdepth: 4

   nunchaku.caching.diffusers_adapters
   nunchaku.caching.utils

nunchaku.caching.utils
======================

.. automodule:: nunchaku.caching.utils
   :members:
   :show-inheritance:

nunchaku.lora.flux.compose
==========================

.. automodule:: nunchaku.lora.flux.compose
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.lora.flux.convert
==========================

.. automodule:: nunchaku.lora.flux.convert
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.lora.flux.diffusers\_converter
=======================================

.. automodule:: nunchaku.lora.flux.diffusers_converter
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.lora.flux.nunchaku\_converter
======================================

.. automodule:: nunchaku.lora.flux.nunchaku_converter
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.lora.flux.packer
=========================

.. automodule:: nunchaku.lora.flux.packer
   :members:
   :show-inheritance:

nunchaku.lora.flux
==================

.. toctree::
   :maxdepth: 4

   nunchaku.lora.flux.diffusers_converter
   nunchaku.lora.flux.nunchaku_converter
   nunchaku.lora.flux.compose
   nunchaku.lora.flux.convert
   nunchaku.lora.flux.packer
   nunchaku.lora.flux.utils

nunchaku.lora.flux.utils
========================

.. automodule:: nunchaku.lora.flux.utils
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.lora
=============

.. toctree::
   :maxdepth: 4

   nunchaku.lora.flux

nunchaku.merge\_safetensors
===========================

.. automodule:: nunchaku.merge_safetensors
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.models.pulid.encoders\_transformer
===========================================

.. automodule:: nunchaku.models.pulid.encoders_transformer
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.models.pulid.pulid\_forward
====================================

.. automodule:: nunchaku.models.pulid.pulid_forward
   :members:
   :undoc-members:
   :show-inheritance:

nunchaku.models.pulid
=====================

.. toctree::
   :maxdepth: 4

   nunchaku.models.pulid.pulid_forward
   nunchaku.models.pulid.encoders_transformer
   nunchaku.models.pulid.utils