- 09 Feb, 2023 11 commits
  - comfyanonymous authored
  - comfyanonymous authored
  - BazettFraga authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - BazettFraga authored
  - BazettFraga authored
- 08 Feb, 2023 11 commits
  - comfyanonymous authored: Try to keep batch sizes more consistent, which seems to improve things on AMD GPUs.
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
- 07 Feb, 2023 2 commits
  - comfyanonymous authored
  - comfyanonymous authored
- 05 Feb, 2023 5 commits
  - comfyanonymous authored
  - comfyanonymous authored: Put them in models/clip
  - comfyanonymous authored: This should make things a bit cleaner.
  - comfyanonymous authored
  - comfyanonymous authored
- 04 Feb, 2023 4 commits
  - comfyanonymous authored
  - comfyanonymous authored: It's pretty much the same as the LatentUpscale node for now, but for images in pixel space.
  - comfyanonymous authored
  - comfyanonymous authored
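The pixel-space upscale mentioned above can be illustrated with a minimal nearest-neighbor sketch in plain Python. This is not ComfyUI's actual implementation (which resamples image tensors); the function name and list-of-lists representation are purely illustrative.

```python
def upscale_nearest(pixels, scale):
    """Nearest-neighbor upscale of a 2D grid of pixel values by an
    integer factor. Illustrative only; a real upscale node would
    operate on image tensors with a user-selected interpolation mode."""
    return [
        [row[x // scale] for x in range(len(row) * scale)]
        for row in pixels
        for _ in range(scale)  # repeat each row `scale` times
    ]

# A 2x2 "image" upscaled 2x becomes 4x4, each pixel duplicated
# horizontally and vertically.
small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 2)
```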
- 03 Feb, 2023 5 commits
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored
  - comfyanonymous authored: The models are modified in place before being used and unpatched afterwards. I think this is better than monkeypatching, since it might make it easier to use faster non-PyTorch UNet inference in the future.
  - comfyanonymous authored
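The patch-in-place approach described in the commit above can be sketched as a context manager that backs up the affected weights, applies the patches, and restores the originals on exit. The class name and the dict-based weight store are hypothetical; this is not ComfyUI's actual code.

```python
class WeightPatches:
    """Minimal sketch of patch-in-place / unpatch-after-use, assuming
    model weights live in a plain dict. Hypothetical structure, not
    ComfyUI's implementation."""

    def __init__(self, weights, patches):
        self.weights = weights   # name -> value, modified in place
        self.patches = patches   # name -> delta to add
        self.backup = {}

    def __enter__(self):
        for name, delta in self.patches.items():
            self.backup[name] = self.weights[name]  # save the original
            self.weights[name] += delta             # patch in place
        return self.weights

    def __exit__(self, *exc):
        for name, original in self.backup.items():
            self.weights[name] = original           # unpatch after use
        self.backup.clear()
        return False

weights = {"unet.in.0": 1.0}
with WeightPatches(weights, {"unet.in.0": 0.5}):
    pass  # run inference here with the patched weight (1.5)
# on exit, weights["unet.in.0"] is back to 1.0
```

Compared with monkeypatching the forward methods, mutating the weights directly leaves the model's call path untouched, so the same patched model could in principle be handed to a non-PyTorch inference backend.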
- 02 Feb, 2023 2 commits
  - comfyanonymous authored
  - comfyanonymous authored