@@ -11,7 +11,7 @@ This ui will let you design and execute advanced stable diffusion pipelines usin
## Features
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
-- Fully supports SD1.x, SD2.x and SDXL
+- Fully supports SD1.x, SD2.x, [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/) and [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
- Asynchronous Queue system
- Many optimizations: Only re-executes the parts of the workflow that change between executions (see the sketch below the feature list for the general idea).
- Command line option: ```--lowvram``` to make it work on GPUs with less than 3GB VRAM (enabled automatically on GPUs with low VRAM)
...
...
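The partial re-execution above depends on detecting which nodes' inputs changed since the last run. The following is a minimal, hypothetical sketch of that caching idea (not ComfyUI's actual executor; `run_node`, `_input_hash` and the cache layout are made up for illustration):

```python
import hashlib
import json

# Hypothetical sketch: cache each node's output keyed by a hash of its inputs,
# and only re-execute a node when that hash differs from the previous run.
_output_cache = {}  # node_id -> (input_hash, output)

def _input_hash(node_id, inputs):
    # Stable serialization of the node id plus its inputs.
    payload = json.dumps({"id": node_id, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def run_node(node_id, inputs, execute):
    """Call execute(inputs) only if this node's inputs changed since last time."""
    h = _input_hash(node_id, inputs)
    cached = _output_cache.get(node_id)
    if cached is not None and cached[0] == h:
        return cached[1]  # inputs unchanged, reuse the previous output
    output = execute(inputs)
    _output_cache[node_id] = (h, output)
    return output
```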
@@ -30,6 +30,8 @@ This ui will let you design and execute advanced stable diffusion pipelines usin
@@ -190,7 +194,7 @@ To use a textual inversion concepts/embeddings in a text prompt put them in the
Make sure you use the regular loaders/Load Checkpoint node to load checkpoints. It will automatically pick the right settings depending on your GPU.
-You can set this command line setting to disable the upcasting to fp32 in some cross attention operations, which will increase your speed. Note that this will very likely give you black images on SD2.x models. If you use xformers this option does not do anything.
+You can set this command line setting to disable the upcasting to fp32 in some cross attention operations, which will increase your speed. Note that this will very likely give you black images on SD2.x models. If you use xformers or pytorch attention this option does not do anything.
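For context, the "upcasting to fp32" being toggled here refers to computing the attention similarity matrix and softmax in float32 even when the model runs in fp16, trading some speed for numerical stability (the README notes SD2.x tends to produce black images without it). Below is a minimal, hypothetical sketch of that pattern, not ComfyUI's exact attention code:

```python
import torch

def attention(q, k, v, upcast=True):
    # Hypothetical sketch of fp32 upcasting in cross attention:
    # compute q @ k^T and the softmax in float32 even if q/k/v are fp16,
    # then cast back to the value dtype before weighting v.
    scale = q.shape[-1] ** -0.5
    if upcast:
        sim = torch.einsum("b i d, b j d -> b i j", q.float(), k.float()) * scale
    else:
        sim = torch.einsum("b i d, b j d -> b i j", q, k) * scale
    attn = sim.softmax(dim=-1).to(v.dtype)
    return torch.einsum("b i j, b j d -> b i d", attn, v)
```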
context_dim=context_dim if self.disable_self_attn else None, dtype=dtype, device=device, operations=operations)  # is a self-attention if not self.disable_self_attn
if 'decoder.up_blocks.0.resnets.0.norm1.weight' in sd.keys():  # diffusers format
    sd = diffusers_convert.convert_vae_state_dict(sd)
self.memory_used_encode = lambda shape, dtype: (1767 * shape[2] * shape[3]) * model_management.dtype_size(dtype)  # These are for AutoencoderKL and need tweaking (should be lower)
memory_used = (2078 * pixel_samples.shape[2] * pixel_samples.shape[3]) * 1.7  # NOTE: this constant along with the one in the decode above are estimated from the mem usage for the VAE and could change.
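As a worked example of the heuristic above (using the 2078 constant and 1.7 fudge factor from the line above, which upstream may change): encoding a 512x512 image reserves about 2078 * 512 * 512 * 1.7 ≈ 9.3e8 bytes, roughly 0.9 GB, before the VAE runs. A small hypothetical helper to reproduce the arithmetic:

```python
def estimated_vae_encode_memory(height, width, constant=2078, fudge=1.7):
    # Hypothetical helper mirroring the estimate in the excerpt above:
    # bytes to free before running the VAE encode on a height x width image.
    return constant * height * width * fudge

print(estimated_vae_encode_memory(512, 512))  # ~9.3e8 bytes (~0.9 GB)
```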