- 01 May, 2024 1 commit
comfyanonymous authored

- 11 Mar, 2024 1 commit
comfyanonymous authored

- 17 Feb, 2024 3 commits
comfyanonymous authored
comfyanonymous authored
comfyanonymous authored

- 26 Jan, 2024 1 commit
comfyanonymous authored

- 09 Jan, 2024 1 commit
comfyanonymous authored

- 07 Jan, 2024 2 commits
comfyanonymous authored
Support masked attention.
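
A minimal sketch of what masked attention support typically looks like (an assumption about the pattern, not this commit's exact code): the attention function grows an optional mask argument that is added to the scores before softmax, so masked positions carry -inf and drop out of the result.

```python
import torch

def attention(q, k, v, heads, mask=None):
    # q, k, v: (batch, seq, heads * dim_head); mask is additive and broadcastable
    # to the (batch, heads, q_len, k_len) score tensor, with -inf at masked spots.
    b = q.shape[0]
    q, k, v = (t.view(b, -1, heads, t.shape[-1] // heads).transpose(1, 2)
               for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    if mask is not None:
        scores = scores + mask  # masked positions vanish after softmax
    out = scores.softmax(dim=-1) @ v
    return out.transpose(1, 2).reshape(b, -1, heads * out.shape[-1])
```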
comfyanonymous authored

- 06 Jan, 2024 2 commits
comfyanonymous authored
comfyanonymous authored

- 15 Dec, 2023 1 commit
comfyanonymous authored

- 12 Dec, 2023 1 commit
comfyanonymous authored
comfy.ops -> comfy.ops.disable_weight_init. This should make it clearer what these ops actually do. Some unused code has also been removed.
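
As a hedged sketch of what the renamed namespace is for (assuming the common pattern of overriding reset_parameters; the repo's actual classes may differ): the ops subclass standard torch.nn layers and make initialization a no-op, since the weights are about to be overwritten by checkpoint values anyway.

```python
import torch

class disable_weight_init:
    # Drop-in replacements for torch.nn layers that skip default weight
    # initialization; the init work is pointless when a checkpoint load
    # immediately overwrites the weights.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            return None  # no-op: leave the torch.empty() storage as-is

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None
```

A model definition can then take an ops namespace as a parameter and call ops.Linear(...) instead of torch.nn.Linear(...), which is what makes the new name self-describing.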
- 07 Dec, 2023 1 commit
comfyanonymous authored
Use a simple CLIP model implementation instead of the one from transformers. This will allow some interesting things that would be too hackish to implement using the transformers implementation.
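
For orientation, a hypothetical minimal CLIP text encoder of the kind this implies: token plus position embeddings feeding a causally masked pre-norm transformer stack. Every name and hyperparameter below is illustrative, not the repo's actual code.

```python
import torch
import torch.nn as nn

class MinimalCLIPText(nn.Module):
    # Hypothetical stand-in for a hand-written CLIP text encoder.
    def __init__(self, vocab=49408, max_len=77, dim=768, heads=12, depth=12):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Parameter(torch.zeros(max_len, dim))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, 4 * dim, activation="gelu",
                                       batch_first=True, norm_first=True)
            for _ in range(depth))
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        n = tokens.shape[1]
        x = self.tok(tokens) + self.pos[:n]
        # Causal mask: -inf above the diagonal, 0 elsewhere.
        causal = torch.full((n, n), float("-inf"), device=tokens.device).triu(1)
        for block in self.blocks:
            x = block(x, src_mask=causal)
        return self.norm(x)  # owning the module makes intermediate states easy to tap
```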
- 05 Dec, 2023 1 commit
comfyanonymous authored

- 04 Dec, 2023 1 commit
comfyanonymous authored

- 26 Nov, 2023 2 commits
comfyanonymous authored
comfyanonymous authored
Now everything in transformer_options gets put in extra_options.
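
A sketch of the pattern being described (the helper name and dict keys below are hypothetical): instead of copying a few hand-picked keys, the block dumps the whole transformer_options dict into the extra_options handed to patches, then layers block-specific entries on top.

```python
# Hypothetical helper illustrating the pass-through; "patches"/"attn2_patch"
# mirror the kind of keys such a dict might hold.
def apply_attn_patches(q, k, v, transformer_options):
    extra_options = {}
    extra_options.update(transformer_options)  # everything gets put in extra_options
    extra_options["n_heads"] = 8               # block-specific info layered on top
    for patch in transformer_options.get("patches", {}).get("attn2_patch", []):
        q, k, v = patch(q, k, v, extra_options)
    return q, k, v
```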
- 24 Nov, 2023 2 commits
comfyanonymous authored
comfyanonymous authored

- 30 Oct, 2023 2 commits
comfyanonymous authored
comfyanonymous authored

- 26 Oct, 2023 1 commit
comfyanonymous authored

- 22 Oct, 2023 2 commits
comfyanonymous authored
comfyanonymous authored

- 21 Oct, 2023 1 commit
comfyanonymous authored

- 16 Oct, 2023 1 commit
comfyanonymous authored

- 12 Oct, 2023 2 commits
comfyanonymous authored
comfyanonymous authored
There's no reason for the whole CrossAttention object to be repeated when only the operation in the middle changes.
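
The refactor this describes can be sketched as one CrossAttention module that takes the core attention operation as a pluggable function (names below are illustrative, not the repo's exact API): the q/k/v projections live in one place, and only the middle computation is swapped between implementations (PyTorch SDP, split attention, xformers, and so on).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads):
    # One interchangeable "middle" operation; alternative backends would
    # share this exact signature.
    b, n, dim = q.shape
    q, k, v = (t.view(b, -1, heads, t.shape[-1] // heads).transpose(1, 2)
               for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, dim)

class CrossAttention(nn.Module):
    # Single module: the projections are defined once, only attn_fn varies.
    def __init__(self, query_dim, context_dim, heads=8, dim_head=64,
                 attn_fn=attention_pytorch):
        super().__init__()
        inner = heads * dim_head
        self.heads, self.attn_fn = heads, attn_fn
        self.to_q = nn.Linear(query_dim, inner, bias=False)
        self.to_k = nn.Linear(context_dim, inner, bias=False)
        self.to_v = nn.Linear(context_dim, inner, bias=False)
        self.to_out = nn.Linear(inner, query_dim)

    def forward(self, x, context=None):
        context = x if context is None else context
        out = self.attn_fn(self.to_q(x), self.to_k(context),
                           self.to_v(context), self.heads)
        return self.to_out(out)
```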
- 27 Sep, 2023 1 commit
comfyanonymous authored

- 04 Sep, 2023 1 commit
comfyanonymous authored

- 03 Sep, 2023 1 commit
Simon Lui authored

- 01 Sep, 2023 1 commit
comfyanonymous authored

- 18 Aug, 2023 2 commits
comfyanonymous authored
comfyanonymous authored
Control loras are controlnets where some of the weights are stored in "lora" format: an up and a down low-rank matrix that, when multiplied together and added to the unet weight, give the controlnet weight. This allows a much smaller memory footprint, depending on the rank of the matrices. These controlnets are used just like regular ones.
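
The arithmetic is simple enough to show directly; a minimal sketch under the definition above (function and variable names are illustrative):

```python
import torch

def control_lora_weight(unet_weight, up, down):
    # unet_weight: (out_features, in_features)
    # up: (out_features, rank), down: (rank, in_features), rank << min(out, in)
    return unet_weight + up @ down

out_f, in_f, rank = 320, 320, 64
w = torch.randn(out_f, in_f)
up, down = torch.randn(out_f, rank), torch.randn(rank, in_f)
controlnet_w = control_lora_weight(w, up, down)
# Storage for this layer: rank * (out_f + in_f) values instead of
# out_f * in_f for a full standalone copy of the controlnet weight.
```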
- 29 Jul, 2023 1 commit
comfyanonymous authored

- 19 Jul, 2023 1 commit
comfyanonymous authored

- 06 Jul, 2023 1 commit
comfyanonymous authored

- 02 Jul, 2023 1 commit
comfyanonymous authored

- 24 Jun, 2023 1 commit
comfyanonymous authored