- 15 Apr, 2023 1 commit
- 14 Apr, 2023 3 commits
comfyanonymous authored
comfyanonymous authored
comfyanonymous authored
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
- 13 Apr, 2023 1 commit
missionfloyd authored
- 06 Apr, 2023 1 commit
mligaintart authored
Allow better compositing.
- 05 Apr, 2023 1 commit
comfyanonymous authored
- 04 Apr, 2023 2 commits
comfyanonymous authored
EllangoK authored
- 03 Apr, 2023 1 commit
EllangoK authored
- 02 Apr, 2023 2 commits
EllangoK authored
comfyanonymous authored
See _for_testing/unclip in the UI for the new nodes. unCLIPCheckpointLoader loads the unCLIP checkpoints; unCLIPConditioning adds the image conditioning and takes as input the output of CLIPVisionEncode, which has been moved to the conditioning section.
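The wiring this commit describes can be sketched as an API-format ComfyUI graph. The node class names (unCLIPCheckpointLoader, CLIPVisionEncode, unCLIPConditioning) come from the commit message itself; the input field names, output slot indices, and filenames below are illustrative assumptions, not taken from the commit.

```python
# A minimal sketch, assuming the API-format prompt schema ({"class_type", "inputs"},
# with [node_id, output_slot] references). Filenames and slot indices are guesses.
import json

prompt = {
    "1": {"class_type": "unCLIPCheckpointLoader",
          "inputs": {"ckpt_name": "sd21-unclip-h.ckpt"}},      # assumed filename
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},               # assumed filename
    "3": {"class_type": "CLIPVisionEncode",                    # now in the conditioning section
          "inputs": {"clip_vision": ["1", 3],                  # assumed output slot
                     "image": ["2", 0]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a photo"}},
    "5": {"class_type": "unCLIPConditioning",                  # adds the image cond
          "inputs": {"conditioning": ["4", 0],
                     "clip_vision_output": ["3", 0]}},
}

# The graph is plain JSON, so it can be inspected before being sent to a server.
print(json.dumps(prompt, indent=2))
```

Note how unCLIPConditioning consumes both the text conditioning and the CLIPVisionEncode output, matching the description above.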
- 17 Mar, 2023 2 commits
comfyanonymous authored
Workflows will be auto-updated.
comfyanonymous authored
- 15 Mar, 2023 1 commit
comfyanonymous authored
- 11 Mar, 2023 3 commits
comfyanonymous authored
comfyanonymous authored
comfyanonymous authored
- 06 Mar, 2023 1 commit
comfyanonymous authored
- 05 Mar, 2023 1 commit
comfyanonymous authored
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode. Put the CLIP vision model in models/clip_vision and the t2i style model in models/style_models. StyleModelLoader loads the style model, StyleModelApply applies it, and ConditioningAppend appends the conditioning it outputs to a positive one.
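The node chain described in this commit can likewise be sketched as an API-format graph. The node class names (CLIPVisionLoader, CLIPVisionEncode, StyleModelLoader, StyleModelApply) and the models/clip_vision and models/style_models directories come from the commit message; the filenames, input field names, and slot indices are illustrative assumptions.

```python
# A minimal sketch of the style-model chain, assuming the API-format prompt
# schema. All filenames below are placeholders, not real model names.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned.ckpt"}},              # assumed filename
    "2": {"class_type": "CLIPVisionLoader",                          # reads models/clip_vision
          "inputs": {"clip_name": "clip_vision.safetensors"}},       # assumed filename
    "3": {"class_type": "LoadImage",
          "inputs": {"image": "style_reference.png"}},               # assumed filename
    "4": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["2", 0], "image": ["3", 0]}},
    "5": {"class_type": "StyleModelLoader",                          # reads models/style_models
          "inputs": {"style_model_name": "t2iadapter_style.pth"}},   # assumed filename
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a cabin in the woods"}},
    "7": {"class_type": "StyleModelApply",                           # merges style into the positive cond
          "inputs": {"conditioning": ["6", 0],
                     "style_model": ["5", 0],
                     "clip_vision_output": ["4", 0]}},
}
```

The chain mirrors the message: encode a reference image with the CLIP vision model, load the t2i style model, and fold its output into the positive conditioning.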