- 30 Jul, 2024 1 commit
  comfyanonymous authored
- 15 Jun, 2024 1 commit
  comfyanonymous authored
- 22 May, 2024 1 commit
  comfyanonymous authored
- 14 Mar, 2024 2 commits
  comfyanonymous authored
  comfyanonymous authored
- 16 Feb, 2024 1 commit
  comfyanonymous authored
- 19 Jan, 2024 1 commit
  comfyanonymous authored
- 22 Dec, 2023 1 commit
  comfyanonymous authored
  Let me know if this breaks anything.
- 12 Dec, 2023 1 commit
  comfyanonymous authored
  comfy.ops -> comfy.ops.disable_weight_init. This should make it clearer what these ops actually do. Some unused code has also been removed.
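The new name points at what the module does: its ops skip PyTorch's default weight initialization. A minimal sketch of the idea, assuming a container-class-with-override layout (treat the exact structure as an assumption, not a copy of comfy/ops.py):

```python
import torch

class disable_weight_init:
    # Grouping the patched ops in a class makes call sites read as
    # disable_weight_init.Linear(...), which says what they do.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            # Skip the default random initialization entirely; the real
            # weights are loaded from the checkpoint afterwards.
            return None

layer = disable_weight_init.Linear(768, 768)  # constructed without init cost
```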
- 11 Dec, 2023 2 commits
  comfyanonymous authored
  comfyanonymous authored
  Use fp16 text encoder weights for CPU inference to lower memory usage.
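A rough sketch of the trick, using a hypothetical Fp16StorageLinear name: weights stay in fp16 (half the memory of fp32) and are upcast only for the actual matmul, since fp16 compute support on CPU is limited.

```python
import torch
import torch.nn.functional as F

class Fp16StorageLinear(torch.nn.Linear):
    # Hypothetical illustration: store weights in fp16, compute in fp32.
    def forward(self, x):
        bias = self.bias.float() if self.bias is not None else None
        return F.linear(x, self.weight.float(), bias)

layer = Fp16StorageLinear(768, 3072).half()  # fp16 weights: half the memory
out = layer(torch.randn(1, 77, 768))         # fp32 in, fp32 out
```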
- 04 Dec, 2023 1 commit
  comfyanonymous authored
- 11 Nov, 2023 1 commit
  comfyanonymous authored
- 24 Aug, 2023 1 commit
  comfyanonymous authored
- 18 Aug, 2023 1 commit
  comfyanonymous authored
  Control LoRAs are controlnets where some of the weights are stored in "lora" format: an up and a down low-rank matrix that, when multiplied together and added to the UNet weight, give the controlnet weight. This allows a much smaller memory footprint, depending on the rank of the matrices. These controlnets are used just like regular ones.
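In code form, the reconstruction described above is one matmul and one add per patched weight (the shapes below are made up for illustration):

```python
import torch

out_features, in_features, rank = 320, 768, 64

unet_weight = torch.randn(out_features, in_features)  # base model weight
lora_up = torch.randn(out_features, rank)             # "up" low-rank factor
lora_down = torch.randn(rank, in_features)            # "down" low-rank factor

# controlnet weight = unet weight + up @ down
controlnet_weight = unet_weight + lora_up @ lora_down

# The diff costs out*rank + rank*in = 69,632 values here instead of the
# full out*in = 245,760, and shrinks further as the rank drops.
```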
- 15 Jun, 2023 1 commit
  comfyanonymous authored
- 14 Jun, 2023 2 commits
  comfyanonymous authored
  comfyanonymous authored
  The default PyTorch Linear initializes its weights, which is useless and slow since they are immediately overwritten by the loaded checkpoint.
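For comparison, stock PyTorch exposes the same idea as torch.nn.utils.skip_init; this is a different mechanism from overriding reset_parameters, shown here only to illustrate the cost being avoided:

```python
import torch
from torch.nn.utils import skip_init

# Construct a Linear without running its (discarded) initialization;
# the parameters come back uninitialized and are expected to be
# overwritten, e.g. by load_state_dict.
layer = skip_init(torch.nn.Linear, 4096, 4096)
```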