- 13 Jul, 2023 (1 commit)
  - comfyanonymous authored: This is to make it match the official checkpoint.
- 12 Jul, 2023 (1 commit)
  - comfyanonymous authored
- 10 Jul, 2023 (1 commit)
  - comfyanonymous authored
- 09 Jul, 2023 (5 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 07 Jul, 2023 (1 commit)
  - comfyanonymous authored
- 06 Jul, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 05 Jul, 2023 (5 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: but faster.
  - comfyanonymous authored
  - comfyanonymous authored
- 04 Jul, 2023 (2 commits)
  - mara authored: The wrong dimensions were being checked; [1] and [2] are the image size, not [2] and [3]. This resulted in an out-of-bounds error if one of them actually matched.
  - comfyanonymous authored
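The dimension fix above is an indexing bug: the code compared the wrong shape indices against the image size. A minimal sketch of the corrected check, in plain Python — the (batch, height, width, channels) layout and the function name are illustrative assumptions; the commit only states that indices [1] and [2] hold the image size:

```python
# Sketch of the corrected dimension check. Layout assumed for illustration:
# shape = (batch, height, width, channels).

def image_matches(shape, height, width):
    # Fixed version: compare shape[1] and shape[2] (the image size).
    # The buggy version compared shape[2] and shape[3] (width and channels),
    # which could wrongly "match" and later trigger out-of-bounds access.
    return shape[1] == height and shape[2] == width

print(image_matches((1, 512, 768, 3), 512, 768))  # True
print(image_matches((1, 512, 768, 3), 768, 3))    # False: those are the wrong indices
```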
- 03 Jul, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 02 Jul, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 01 Jul, 2023 (5 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: load_model_gpu() is now used with the text encoder models instead of just the unet.
  - comfyanonymous authored: This is faster on big text encoder models than running it on the CPU.
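The change above routes text-encoder inference through the same GPU model-loading path as the unet, rather than always running encoders on the CPU. A minimal sketch of that idea in plain Python — the classes and function here are hypothetical stand-ins, not ComfyUI's actual model-management API:

```python
# Hypothetical sketch: prefer the GPU for large models such as text encoders,
# falling back to CPU. Not ComfyUI's real load_model_gpu() implementation.

class FakeModel:
    def __init__(self, name):
        self.name = name
        self.device = "cpu"

    def to(self, device):
        # Mimics the torch-style .to(device) move.
        self.device = device
        return self

def load_model_on(model, gpu_available):
    target = "cuda" if gpu_available else "cpu"
    return model.to(target)

clip = load_model_on(FakeModel("clip"), gpu_available=True)
print(clip.device)  # cuda
```

The gain described in the commit comes from keeping big encoders on the device where the matrix multiplies are fast, instead of paying CPU inference cost on every prompt.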
- 30 Jun, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 29 Jun, 2023 (1 commit)
  - comfyanonymous authored
- 28 Jun, 2023 (3 commits)
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 26 Jun, 2023 (4 commits)
  - comfyanonymous authored: Add a new argument --use-quad-cross-attention
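Registering a boolean CLI flag like --use-quad-cross-attention can be sketched with argparse — the parser setup and help text below are illustrative, not ComfyUI's actual CLI code:

```python
import argparse

# Hypothetical sketch of adding the flag; only the flag name comes from the
# commit above, the surrounding parser is illustrative.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--use-quad-cross-attention",
    action="store_true",  # defaults to False when the flag is absent
    help="Use the quad cross attention implementation.",
)

args = parser.parse_args(["--use-quad-cross-attention"])
print(args.use_quad_cross_attention)  # True
```

argparse converts the dashes in the flag name to underscores, so the value is read back as `args.use_quad_cross_attention`.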
  - comfyanonymous authored: The created checkpoints contain workflow metadata that can be loaded by dragging them onto the UI or loading them with the "Load" button. Checkpoints are saved in fp16 or fp32 depending on the format ComfyUI is using for inference on your hardware; to force fp32, use --force-fp32. Anything that patches the model weights, like merging or loras, will be saved. The output directory is currently output/checkpoints, but that might change in the future.
  - comfyanonymous authored
  - comfyanonymous authored
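The precision rule in the checkpoint-save commit above — save in whatever precision inference uses, unless --force-fp32 overrides it — can be sketched as a tiny helper. The function and argument names are hypothetical illustrations, not ComfyUI's actual code:

```python
# Hypothetical sketch of the save-precision rule described above.

def checkpoint_dtype(inference_dtype, force_fp32=False):
    # --force-fp32 wins; otherwise the checkpoint matches the precision
    # ComfyUI is already using for inference on this hardware.
    if force_fp32:
        return "fp32"
    return inference_dtype

print(checkpoint_dtype("fp16"))                   # fp16
print(checkpoint_dtype("fp16", force_fp32=True))  # fp32
print(checkpoint_dtype("fp32"))                   # fp32
```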