- 22 Dec, 2023 (1 commit)
  - comfyanonymous authored
- 19 Dec, 2023 (1 commit)
  - comfyanonymous authored
- 18 Dec, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: To use it, load the checkpoint with the ImageOnlyCheckpointLoader node and connect it to the new Stable_Zero123 node.
- 17 Dec, 2023 (1 commit)
  - comfyanonymous authored
- 16 Dec, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored: Make it use the same dtype as the text encoder.
- 15 Dec, 2023 (3 commits)
  - comfyanonymous authored
  - Hari authored
  - comfyanonymous authored
- 14 Dec, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 13 Dec, 2023 (3 commits)
  - comfyanonymous authored: Moved all the SAG-related code to comfy_extras/nodes_sag.py.
  - Rafie Walker authored:
    * First SAG test
    * Need to put the extra options on the model instead of the patcher
    * No errors, and results seem not-broken
    * Use @ashen-uncensored formula, which works better
    * Fix a crash when using weird resolutions; remove an unnecessary UNet call
    * Improve comments, optimize memory in the blur routine
    * SAG works with sampler_cfg_function
  - comfyanonymous authored
- 12 Dec, 2023 (4 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: comfy.ops -> comfy.ops.disable_weight_init. This should make it clearer what these ops actually do. Some unused code has also been removed.
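The rename above points at a common pattern: layer classes whose weight initialization is replaced with a no-op, so that building a large model is cheap and the real weights are loaded afterwards from a checkpoint. This is an illustrative sketch only, not ComfyUI's actual code; `Linear` and `LinearNoInit` are hypothetical stand-ins for framework layers.

```python
# Hypothetical sketch of the "disable weight init" pattern. Real frameworks
# (and ComfyUI's comfy.ops) wrap torch layers; this stand-in uses plain Python
# to show the idea: override the initialization hook with a no-op.

class Linear:
    """Stand-in for a framework layer that fills its weights on construction."""
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features
        self.weight = None
        self.reset_parameters()

    def reset_parameters(self):
        # In a real framework this is a comparatively expensive random init.
        self.weight = [[0.0] * self.in_features for _ in range(self.out_features)]


class LinearNoInit(Linear):
    """Same layer, but initialization is disabled; weights come from a checkpoint."""
    def reset_parameters(self):
        pass  # deliberately leave self.weight empty until loading


layer = LinearNoInit(4, 2)
print(layer.weight is None)  # True: nothing was allocated at construction time
```

Grouping the no-init variants under a module named for what they do (disable weight init) makes the intent visible at the call site, which matches the stated motivation of the commit.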
- 11 Dec, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: Use fp16 text encoder weights for CPU inference to lower memory usage.
- 10 Dec, 2023 (1 commit)
  - comfyanonymous authored
- 09 Dec, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 08 Dec, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 07 Dec, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored: Use a simple CLIP model implementation instead of the one from transformers. This will allow some interesting things that would be too hackish to implement using the transformers implementation.
- 06 Dec, 2023 (1 commit)
  - comfyanonymous authored
- 05 Dec, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 04 Dec, 2023 (5 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: --fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats supported by PyTorch.
  - comfyanonymous authored
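The two fp8 formats named in those flags trade bits differently: e4m3fn has 4 exponent and 3 mantissa bits (the "fn" means finite-only: no infinities, and the all-ones code is NaN), while e5m2 has 5 exponent and 2 mantissa bits with IEEE-style inf/NaN handling. A small sketch (the helper `fp8_max` is ours, not a PyTorch API) computes the largest finite value each format can represent:

```python
# Compute the maximum finite value of an fp8 format from its bit layout.
# fp8_max is an illustrative helper, not part of PyTorch.

def fp8_max(exp_bits, man_bits, finite_only):
    bias = 2 ** (exp_bits - 1) - 1
    if finite_only:
        # e4m3fn style: the all-ones exponent still encodes normal numbers;
        # only the all-ones mantissa in that row is NaN, so the largest
        # mantissa code is one below all-ones.
        e = (2 ** exp_bits - 1) - bias
        frac = (2 ** man_bits - 2) / 2 ** man_bits
    else:
        # IEEE style (e5m2): the all-ones exponent is reserved for inf/NaN.
        e = (2 ** exp_bits - 2) - bias
        frac = (2 ** man_bits - 1) / 2 ** man_bits
    return (1 + frac) * 2 ** e

print(fp8_max(4, 3, True))   # e4m3fn -> 448.0
print(fp8_max(5, 2, False))  # e5m2   -> 57344.0
```

So e4m3fn offers more precision in a smaller range (max 448), while e5m2 covers a much wider range (max 57344) with coarser steps, which is why both are exposed as separate flags.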