- 10 Jun, 2024 (1 commit)
  comfyanonymous authored
- 09 Jun, 2024 (1 commit)
  comfyanonymous authored
- 08 Jun, 2024 (1 commit)
  comfyanonymous authored
- 07 Jun, 2024 (1 commit)
  comfyanonymous authored
- 06 Jun, 2024 (1 commit)
  comfyanonymous authored
- 04 Jun, 2024 (1 commit)
  comfyanonymous authored
- 02 Jun, 2024 (1 commit)
  comfyanonymous authored
- 01 Jun, 2024 (1 commit)
  comfyanonymous authored
- 30 May, 2024 (1 commit)
  comfyanonymous authored
- 27 May, 2024 (2 commits)
  JettHu authored
  comfyanonymous authored
- 25 May, 2024 (2 commits)
  comfyanonymous authored
    This is a different version of #3298 with more correct behavior.
  comfyanonymous authored
- 22 May, 2024 (2 commits)
  comfyanonymous authored
  Chenlei Hu authored
    * Add type annotation UnetWrapperFunction
    * nit
    * Add types.py
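The Chenlei Hu commit above adds a `UnetWrapperFunction` type annotation in a new `types.py`. Purely as a hypothetical illustration (the signature below is assumed, not taken from the repository), such an annotation might be declared as a callable alias:

```python
# Hypothetical sketch only: the real UnetWrapperFunction in ComfyUI's types.py may differ.
from typing import Any, Callable, Dict

import torch

# Assumed shape: a wrapper receives the original model-apply callable plus a dict of
# options/conditioning and returns the model output tensor.
UnetWrapperFunction = Callable[[Callable[..., torch.Tensor], Dict[str, Any]], torch.Tensor]
```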
- 21 May, 2024 (2 commits)
  comfyanonymous authored
  comfyanonymous authored
- 20 May, 2024 (2 commits)
  comfyanonymous authored
  comfyanonymous authored
- 19 May, 2024 (1 commit)
  comfyanonymous authored
- 18 May, 2024 (2 commits)
  comfyanonymous authored
  comfyanonymous authored
- 17 May, 2024 (1 commit)
  comfyanonymous authored
- 16 May, 2024 (1 commit)
  comfyanonymous authored
- 15 May, 2024 (1 commit)
  comfyanonymous authored
- 14 May, 2024 (3 commits)
  comfyanonymous authored
  comfyanonymous authored
  comfyanonymous authored
- 12 May, 2024 (4 commits)
  Simon Lui authored
    * Change calculation of memory total to be more accurate, allocated is actually smaller than reserved.
    * Update README.md install documentation for Intel GPUs.
  comfyanonymous authored
    The previous fix didn't cover the case where the model was loaded in lowvram mode right before.
  comfyanonymous authored
  comfyanonymous authored
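The Simon Lui commit above notes that allocated memory is smaller than reserved memory, so a memory total based on allocation alone overestimates what is free. A minimal sketch of that distinction, using the CUDA memory API purely for illustration (the commit itself concerns Intel GPU support and is not reproduced here), could look like this:

```python
import torch

def device_memory_stats(device: torch.device) -> dict:
    """Report total, reserved, and allocated bytes for a CUDA device (illustrative only)."""
    total = torch.cuda.get_device_properties(device).total_memory
    reserved = torch.cuda.memory_reserved(device)    # bytes held by the caching allocator
    allocated = torch.cuda.memory_allocated(device)  # bytes backing live tensors
    # allocated <= reserved, so budgeting against `allocated` alone overstates free memory.
    return {
        "total": total,
        "reserved": reserved,
        "allocated": allocated,
        "free_estimate": total - reserved,
    }
```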
- 09 May, 2024 (1 commit)
  comfyanonymous authored
- 08 May, 2024 (1 commit)
  comfyanonymous authored
- 07 May, 2024 (1 commit)
  comfyanonymous authored
    I was going to completely remove this function because it is unmaintainable, but I think this is the best compromise. The clip skip and v_prediction parts of the configs should still work, but not the fp16 vs fp32.
- 02 May, 2024 (1 commit)
  Simon Lui authored
    Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388)
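The commit above switches Intel GPU setup to `ipex.optimize` with explicit XPU device initialization. The snippet below is a minimal sketch of that documented IPEX pattern, not ComfyUI's actual model-management code:

```python
import torch
import intel_extension_for_pytorch as ipex  # provides the XPU backend and ipex.optimize

def prepare_model_for_xpu(model: torch.nn.Module) -> torch.nn.Module:
    """Move a model to the XPU device and apply IPEX inference optimizations (sketch)."""
    if torch.xpu.is_available():
        model = model.to("xpu")
        model = ipex.optimize(model, dtype=torch.bfloat16)
    return model
```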
- 01 May, 2024 (3 commits)
  comfyanonymous authored
  comfyanonymous authored
  Garrett Sutula authored
    * Add TLS Support
    * Add to readme
    * Add guidance for windows users on generating certificates
    * Add guidance for windows users on generating certificates
    * Fix typo
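The Garrett Sutula commit above adds TLS support along with certificate-generation guidance. As a generic sketch using only the standard library (the file names are placeholders and this is not the server code from the commit), an SSL context for serving over TLS can be built like this:

```python
import ssl

# Load a certificate/key pair generated beforehand (e.g. a self-signed pair for testing).
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ssl_context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
# The resulting context is then handed to whatever HTTP server hosts the web UI.
```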
- 26 Apr, 2024 (1 commit)
  Jedrzej Kosinski authored