- 18 May, 2024 1 commit
  - comfyanonymous authored
- 17 May, 2024 1 commit
  - comfyanonymous authored
- 16 May, 2024 1 commit
  - comfyanonymous authored
- 15 May, 2024 1 commit
  - comfyanonymous authored
- 14 May, 2024 3 commits
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 12 May, 2024 4 commits
  - Simon Lui authored
    - Change the calculation of memory total to be more accurate; allocated is actually smaller than reserved.
    - Update README.md install documentation for Intel GPUs.
  - comfyanonymous authored
    The previous fix didn't cover the case where the model was loaded in lowvram mode right before.
  - comfyanonymous authored
  - comfyanonymous authored
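The memory-total fix in the first 12 May commit rests on the distinction between memory a caching allocator has *reserved* from the driver and memory actually *allocated* to live tensors. Since allocated is smaller than reserved, the reserved-but-unallocated slack is reclaimable and should count toward free memory. A minimal sketch of that accounting (the function name and units are illustrative, not the project's actual code):

```python
def free_memory_estimate(device_total, reserved, allocated):
    """Estimate usable free memory for a caching allocator.

    `allocated` (bytes held by live tensors) is always <= `reserved`
    (bytes the allocator holds from the driver); the slack between the
    two can be reclaimed, so it counts as free.
    """
    assert allocated <= reserved <= device_total
    reclaimable = reserved - allocated
    return (device_total - reserved) + reclaimable

# e.g. an 8 GiB device with 3 GiB reserved but only 2 GiB allocated:
# free = (8 - 3) + (3 - 2) = 6 GiB
```

Counting only `device_total - reserved` would under-report free memory by exactly that reclaimable slack, which is the inaccuracy the commit message describes.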
- 09 May, 2024 1 commit
  - comfyanonymous authored
- 08 May, 2024 1 commit
  - comfyanonymous authored
- 07 May, 2024 1 commit
  - comfyanonymous authored
    I was going to completely remove this function because it is unmaintainable, but I think this is the best compromise. The clip skip and v_prediction parts of the configs should still work, but not the fp16 vs fp32 part.
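The compromise described above amounts to honoring only the config fields that are still safe and ignoring the dtype choice. A hypothetical sketch of that filtering (the key names here are illustrative, not the actual config schema):

```python
def apply_legacy_config(config):
    """Keep only the still-supported parts of an old-style model config.

    clip skip and v_prediction are honored; fp16 vs fp32 from the
    config is deliberately ignored and left to runtime dtype detection.
    Key names are illustrative.
    """
    applied = {}
    if config.get("parameterization") == "v":
        applied["model_sampling"] = "v_prediction"
    if "clip_skip" in config:
        applied["clip_skip"] = config["clip_skip"]
    # config.get("use_fp16") is intentionally not consulted here.
    return applied
```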
- 02 May, 2024 1 commit
  - Simon Lui authored
    Change torch.xpu to ipex.optimize, update xpu device initialization, and remove a workaround for a text node issue from older IPEX. (#3388)
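XPU initialization of this kind is typically wrapped in an import guard so machines without IPEX fall back cleanly. A hedged sketch (the function name and fallback policy are illustrative, not the commit's actual code):

```python
def init_xpu_device():
    """Pick a device string, preferring Intel XPU when
    intel_extension_for_pytorch imports and an XPU is present.

    Sketch only: on any import failure (or a torch build without
    the xpu attribute) this falls back to CPU.
    """
    try:
        import torch
        import intel_extension_for_pytorch as ipex  # noqa: F401
        if torch.xpu.is_available():
            return "xpu"
    except (ImportError, AttributeError):
        pass
    return "cpu"
```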
- 01 May, 2024 3 commits
  - comfyanonymous authored
  - comfyanonymous authored
  - Garrett Sutula authored
    - Add TLS support
    - Add to readme
    - Add guidance for Windows users on generating certificates
    - Fix typo
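Server-side TLS support of this shape usually means two new CLI options and an `ssl.SSLContext` handed to the HTTP server. A sketch assuming hypothetical `--tls-keyfile`/`--tls-certfile` flags (the actual flag names and wiring may differ):

```python
import argparse
import ssl

parser = argparse.ArgumentParser()
# Hypothetical flag names for illustration.
parser.add_argument("--tls-keyfile", default=None)
parser.add_argument("--tls-certfile", default=None)

def build_ssl_context(args):
    """Return a server-side SSL context when both certificate and key
    are supplied; otherwise None, so the server serves plain HTTP."""
    if args.tls_keyfile and args.tls_certfile:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile=args.tls_certfile,
                            keyfile=args.tls_keyfile)
        return ctx
    return None
```

The resulting context would then be passed to the web server's `ssl_context` parameter; the guidance the commit adds for Windows users presumably covers generating a self-signed certificate and key pair for testing.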
- 26 Apr, 2024 1 commit
  - Jedrzej Kosinski authored
- 24 Apr, 2024 1 commit
  - comfyanonymous authored
- 19 Apr, 2024 1 commit
  - comfyanonymous authored
- 15 Apr, 2024 1 commit
  - comfyanonymous authored
- 14 Apr, 2024 1 commit
  - comfyanonymous authored
- 13 Apr, 2024 1 commit
  - comfyanonymous authored
- 10 Apr, 2024 1 commit
  - comfyanonymous authored
- 08 Apr, 2024 1 commit
  - comfyanonymous authored
- 06 Apr, 2024 1 commit
  - comfyanonymous authored
- 05 Apr, 2024 6 commits
  - kk-89 authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 04 Apr, 2024 4 commits
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 02 Apr, 2024 1 commit
  - comfyanonymous authored
- 01 Apr, 2024 1 commit
  - comfyanonymous authored
    calc_cond_batch can take an arbitrary number of cond inputs. Added a calc_cond_uncond_batch wrapper with a warning so custom nodes won't break.
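The backward-compatibility pattern the message describes, a thin shim that forwards the old two-cond call to the new list-based API while warning, can be sketched as follows (the function bodies and exact signatures here are stand-ins, not the real implementation):

```python
import warnings

def calc_cond_batch(model, conds, x_in, timestep, model_options):
    """New-style API: `conds` is a list with any number of cond groups.
    Stand-in body; the real function batches model evaluation."""
    return [f"out[{i}]" for i, _ in enumerate(conds)]

def calc_cond_uncond_batch(model, cond, uncond, x_in, timestep, model_options):
    """Compatibility shim for custom nodes still using the old
    two-argument (cond, uncond) API."""
    warnings.warn(
        "calc_cond_uncond_batch is deprecated, use calc_cond_batch instead.",
        DeprecationWarning,
    )
    out = calc_cond_batch(model, [cond, uncond], x_in, timestep, model_options)
    return out[0], out[1]
```

Keeping the old name as a warning wrapper lets downstream code keep working for a release or two while signaling the migration path.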
- 31 Mar, 2024 1 commit
  - comfyanonymous authored
    This is the code to load the model and run inference with only a text prompt. This commit does not contain the nodes to properly use it with an image input. It supports both the original SD1 InstructPix2Pix model and the diffusers SDXL one.
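InstructPix2Pix-style models condition on the input image by concatenating its latent to the noisy latent along the channel axis, doubling the UNet's input channels (8 instead of 4 for SD-style latents), which is why both the SD1 and diffusers SDXL variants can be handled the same way. A toy illustration (plain lists stand in for latent tensors; the helper names are hypothetical):

```python
def is_instructpix2pix_unet(in_channels, latent_channels=4):
    """An InstructPix2Pix UNet takes the image latent concatenated to
    the noisy latent, so its input channel count is doubled."""
    return in_channels == 2 * latent_channels

def concat_image_cond(noisy_latent, image_latent):
    """Channel-wise concatenation; each list element stands in for
    one channel plane of a latent tensor."""
    return noisy_latent + image_latent
```

With only a text prompt, the image-latent half would be filled with zeros, which matches the commit's note that image-input nodes are not included yet.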