- 28 May, 2023 (1 commit)
  - comfyanonymous authored
- 12 May, 2023 (2 commits)
  - BlenderNeko authored
  - BlenderNeko authored
- 04 May, 2023 (1 commit)
  - comfyanonymous authored
- 03 May, 2023 (4 commits)
  - comfyanonymous authored
  - pythongosssss authored
  - pythongosssss authored
  - pythongosssss authored
- 02 May, 2023 (1 commit)
  - pythongosssss authored
- 01 May, 2023 (1 commit)
  - comfyanonymous authored
- 24 Apr, 2023 (1 commit)
  - pythongosssss authored
- 23 Apr, 2023 (1 commit)
  - comfyanonymous authored: Add a HypernetworkLoader node to use hypernetworks.
- 19 Apr, 2023 (1 commit)
  - comfyanonymous authored
- 17 Apr, 2023 (1 commit)
  - comfyanonymous authored
- 15 Apr, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 14 Apr, 2023 (1 commit)
  - BlenderNeko authored
- 13 Apr, 2023 (1 commit)
  - BlenderNeko authored
- 02 Apr, 2023 (1 commit)
  - comfyanonymous authored: See _for_testing/unclip in the UI for the new nodes. unCLIPCheckpointLoader is used to load the checkpoints; unCLIPConditioning is used to add the image conditioning and takes as input a CLIPVisionEncode output (CLIPVisionEncode has been moved to the conditioning section).
- 31 Mar, 2023 (1 commit)
  - comfyanonymous authored: Tome increases sampling speed at the expense of quality.
- 29 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 23 Mar, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 22 Mar, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 21 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 19 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 17 Mar, 2023 (1 commit)
  - comfyanonymous authored: Workflows will be auto-updated.
- 14 Mar, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 13 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 11 Mar, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 10 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 08 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 07 Mar, 2023 (1 commit)
  - comfyanonymous authored
- 06 Mar, 2023 (2 commits)
  - comfyanonymous authored
  - comfyanonymous authored
- 05 Mar, 2023 (1 commit)
  - comfyanonymous authored: It needs the CLIP vision model, so CLIPVisionLoader and CLIPVisionEncode were added. Put the CLIP vision model in models/clip_vision and the t2i style model in models/style_models. Use StyleModelLoader to load the style model, StyleModelApply to apply it, and ConditioningAppend to append the conditioning it outputs to a positive conditioning.
- 04 Mar, 2023 (1 commit)
  - comfyanonymous authored