# SVDQuant ComfyUI Node

**Note**: This node is **deprecated**! Please check **[ComfyUI-nunchaku/](https://github.com/mit-han-lab/ComfyUI-nunchaku/)** for the latest version.
![comfyui](../assets/comfyui.jpg)

## Installation

Please first install `nunchaku` following the instructions in [README.md](https://github.com/mit-han-lab/nunchaku?tab=readme-ov-file#installation). Then install `image_gen_aux` with:

```shell
pip install git+https://github.com/asomoza/image_gen_aux.git
```
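
If you want a quick sanity check that both packages are importable (the module names below are assumed to match the package names), you can run:

```shell
# Optional check: both imports should succeed without errors
python -c "import nunchaku, image_gen_aux; print('nunchaku and image_gen_aux are available')"
```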

### ComfyUI-CLI

```shell
pip install comfy-cli  # install the comfyui-cli
comfy node registry-install svdquant
```
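
If you have not set up ComfyUI itself yet, comfy-cli can also install and launch it. The two commands below are standard comfy-cli commands and are shown only as an optional convenience:

```shell
comfy install  # install ComfyUI (skip if you already have it)
comfy launch   # start ComfyUI
```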

### ComfyUI-Manager (Experimental)

1. Install [ComfyUI-Manager](https://github.com/ltdrdata/ComfyUI-Manager) with the following commands, then restart ComfyUI:

   ```shell
   cd ComfyUI/custom_nodes
   git clone https://github.com/ltdrdata/ComfyUI-Manager comfyui-manager
   ```

2. Open the Manager, search for `svdquant` in the Custom Nodes Manager, and then install it.


### Manual Installation
1. Install dependencies needed to run custom ComfyUI nodes:

   ```shell
   pip install git+https://github.com/asomoza/image_gen_aux.git
   ```
2. Set up the dependencies for [ComfyUI](https://github.com/comfyanonymous/ComfyUI/tree/master) with the following commands:

   ```shell
   git clone https://github.com/comfyanonymous/ComfyUI.git
   cd ComfyUI
   pip install -r requirements.txt
   ```

3. Navigate to the root directory of ComfyUI and link (or copy) the [`nunchaku/comfyui`](./) folder to `custom_nodes/svdquant`. For example:

   ```shell
   # Clone repositories (skip if already cloned)
   git clone https://github.com/comfyanonymous/ComfyUI.git
   git clone https://github.com/mit-han-lab/nunchaku.git
   cd ComfyUI
   
   # Add SVDQuant nodes
   cd custom_nodes
   ln -s ../../nunchaku/comfyui svdquant
   ```
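
   If symbolic links are inconvenient in your environment (for example, on some Windows setups), copying the folder works just as well. A minimal alternative to the symlink above:

   ```shell
   # Alternative: copy the node folder instead of symlinking it
   cp -r ../../nunchaku/comfyui svdquant
   ```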

## Usage

1. **Set Up ComfyUI and SVDQuant**:

     * SVDQuant workflows can be found at [`workflows`](./workflows). You can place them in `user/default/workflows` under the ComfyUI root directory to load them. For example:

       ```shell
       cd ComfyUI
       
       # Copy workflow configurations
       mkdir -p user/default/workflows
       cp ../nunchaku/comfyui/workflows/* user/default/workflows/
       ```

     * Install missing nodes (e.g., comfyui-inpainteasy) following [this tutorial](https://github.com/ltdrdata/ComfyUI-Manager?tab=readme-ov-file#support-of-missing-nodes-installation).

2. **Download Required Models**: Follow [this tutorial](https://comfyanonymous.github.io/ComfyUI_examples/flux/) and download the required models into the appropriate directories using the commands below:

   ```shell
   huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir models/text_encoders
   huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir models/text_encoders
   huggingface-cli download black-forest-labs/FLUX.1-schnell ae.safetensors --local-dir models/vae
   ```

3. **Run ComfyUI**: From ComfyUI’s root directory, execute the following command to start the application:

   ```shell
   python main.py
   ```
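
   ComfyUI also accepts standard launch flags; for example, to change the listening address and port (both are regular ComfyUI options, shown here only as a hint):

   ```shell
   # Optional: expose the UI on a specific address and port
   python main.py --listen 0.0.0.0 --port 8188
   ```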

4. **Select the SVDQuant Workflow**: Choose one of the SVDQuant workflows (workflows whose names start with `svdq-`) to get started. For the FLUX.1-Fill workflow, you can use the built-in MaskEditor tool to add a mask on top of an image.

## SVDQuant Nodes

* **SVDQuant Flux DiT Loader**: A node for loading the FLUX diffusion model. 

  * `model_path`: Specifies the model location. If set to a folder name starting with `mit-han-lab`, the model will be automatically downloaded from our Hugging Face repository. Alternatively, you can manually download the model directory with a command like the following:

    ```shell
    huggingface-cli download mit-han-lab/svdq-int4-flux.1-dev --local-dir models/diffusion_models/svdq-int4-flux.1-dev
    ```

     After downloading, specify the corresponding folder name as the `model_path`.
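
    Other quantized FLUX variants from the same collection (for example, the Fill model used by the FLUX.1-Fill workflow, or the Depth model) can be downloaded the same way. The repository names below are examples and assume the same `mit-han-lab/svdq-int4-*` naming pattern:

    ```shell
    # Examples: additional SVDQuant FLUX models (repository names assumed to follow the same pattern)
    huggingface-cli download mit-han-lab/svdq-int4-flux.1-fill-dev --local-dir models/diffusion_models/svdq-int4-flux.1-fill-dev
    huggingface-cli download mit-han-lab/svdq-int4-flux.1-depth-dev --local-dir models/diffusion_models/svdq-int4-flux.1-depth-dev
    ```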

  * `cpu_offload`: Enables CPU offloading for the transformer model. While this may reduce GPU memory usage, it can slow down inference. Memory usage will be further optimized in v0.1.6 of this node.

  * `device_id`: Indicates the GPU ID for running the model.

* **SVDQuant FLUX LoRA Loader**: A node for loading LoRA modules for SVDQuant FLUX models.

  * Place your LoRA checkpoints in the `models/loras` directory; they will appear as selectable options under `lora_name` (an example download command is shown at the end of this node's description). The [example Ghibsky LoRA](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration) is also included and will automatically download from our Hugging Face repository when used.
  * `lora_format` specifies the LoRA format. Supported formats include:
    * `auto`: Automatically detects the appropriate LoRA format.
    * `diffusers` (e.g., [aleksa-codes/flux-ghibsky-illustration](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration))
    * `comfyui` (e.g., [Shakker-Labs/FLUX.1-dev-LoRA-Children-Simple-Sketch](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Children-Simple-Sketch))
    * `xlab` (e.g., [XLabs-AI/flux-RealismLora](https://huggingface.co/XLabs-AI/flux-RealismLora))
    * `svdquant` (e.g., [mit-han-lab/svdquant-lora-collection](https://huggingface.co/mit-han-lab/svdquant-lora-collection)).

  * `base_model_name` specifies the path to the quantized base model. If `lora_format` is set to `svdquant`, this option is ignored. You can set it to the same value as `model_path` in the **SVDQuant Flux DiT Loader** above.
  * **Note**: Currently, **only one LoRA** can be loaded at a time.
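
  As referenced above, a LoRA checkpoint can be fetched into `models/loras` with `huggingface-cli`. The repository and file name below are only an illustration; substitute your own LoRA:

  ```shell
  # Example only: download a LoRA file into models/loras (file name assumed)
  huggingface-cli download aleksa-codes/flux-ghibsky-illustration lora.safetensors --local-dir models/loras
  ```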

* **SVDQuant Text Encoder Loader**: A node for loading the text encoders.

  * For FLUX, use the following files:

    - `text_encoder1`: `t5xxl_fp16.safetensors`
    - `text_encoder2`: `clip_l.safetensors`

  * `t5_min_length`: Sets the minimum sequence length for T5 text embeddings. The default in `DualCLIPLoader` is hardcoded to 256, but for better image quality in SVDQuant, use 512 here.

  * `t5_precision`: Specifies the precision of the T5 text encoder. Choose `INT4` to use the INT4 text encoder, which reduces GPU memory usage by approximately 15GB. Please install [`deepcompressor`](https://github.com/mit-han-lab/deepcompressor) when using it:

    ```shell
    git clone https://github.com/mit-han-lab/deepcompressor
    cd deepcompressor
    pip install poetry
    poetry install
    ```
  
  
    * `int4_model`: Specifies the INT4 model location. This option is only used when `t5_precision` is set to `INT4`. By default, the path is `mit-han-lab/svdq-flux.1-t5`, and the model will automatically download from our Hugging Face repository. Alternatively, you can manually download the model directory by running the following command:
  
      ```shell
      huggingface-cli download mit-han-lab/svdq-flux.1-t5 --local-dir models/text_encoders/svdq-flux.1-t5
      ```
  
       After downloading, specify the corresponding folder name as the `int4_model`.


* **FLUX.1 Depth Preprocessor**: A node that loads the depth estimation model and outputs a depth map. `model_path` specifies the model location. If set to [`LiheYoung/depth-anything-large-hf`](https://huggingface.co/LiheYoung/depth-anything-large-hf), the model will be automatically downloaded from the Hugging Face repository. Alternatively, you can manually download the repository to `models/checkpoints` with the following command:

  ```shell
  huggingface-cli download LiheYoung/depth-anything-large-hf --local-dir models/checkpoints/depth-anything-large-hf
  ```