renzhc / diffusers_dcu

Unverified commit ba7e4845, authored Aug 07, 2024 by Steven Liu, committed by GitHub on Aug 08, 2024.
Parent: 2dad462d

[docs] Organize model toctree (#9118)

* toctree

* fix
Showing 2 changed files with 72 additions and 62 deletions (+72, -62):

  docs/source/en/_toctree.yml            +70  -62
  docs/source/en/optimization/fp16.md     +2   -0
docs/source/en/_toctree.yml
...
@@ -223,68 +223,76 @@
     sections:
     - local: api/models/overview
       title: Overview
-    - local: api/models/unet
-      title: UNet1DModel
-    - local: api/models/unet2d
-      title: UNet2DModel
-    - local: api/models/unet2d-cond
-      title: UNet2DConditionModel
-    - local: api/models/unet3d-cond
-      title: UNet3DConditionModel
-    - local: api/models/unet-motion
-      title: UNetMotionModel
-    - local: api/models/uvit2d
-      title: UViT2DModel
-    - local: api/models/vq
-      title: VQModel
-    - local: api/models/autoencoderkl
-      title: AutoencoderKL
-    - local: api/models/autoencoderkl_cogvideox
-      title: AutoencoderKLCogVideoX
-    - local: api/models/asymmetricautoencoderkl
-      title: AsymmetricAutoencoderKL
-    - local: api/models/stable_cascade_unet
-      title: StableCascadeUNet
-    - local: api/models/autoencoder_tiny
-      title: Tiny AutoEncoder
-    - local: api/models/autoencoder_oobleck
-      title: Oobleck AutoEncoder
-    - local: api/models/consistency_decoder_vae
-      title: ConsistencyDecoderVAE
-    - local: api/models/transformer2d
-      title: Transformer2DModel
-    - local: api/models/pixart_transformer2d
-      title: PixArtTransformer2DModel
-    - local: api/models/dit_transformer2d
-      title: DiTTransformer2DModel
-    - local: api/models/hunyuan_transformer2d
-      title: HunyuanDiT2DModel
-    - local: api/models/aura_flow_transformer2d
-      title: AuraFlowTransformer2DModel
-    - local: api/models/flux_transformer
-      title: FluxTransformer2DModel
-    - local: api/models/latte_transformer3d
-      title: LatteTransformer3DModel
-    - local: api/models/cogvideox_transformer3d
-      title: CogVideoXTransformer3DModel
-    - local: api/models/lumina_nextdit2d
-      title: LuminaNextDiT2DModel
-    - local: api/models/transformer_temporal
-      title: TransformerTemporalModel
-    - local: api/models/sd3_transformer2d
-      title: SD3Transformer2DModel
-    - local: api/models/stable_audio_transformer
-      title: StableAudioDiTModel
-    - local: api/models/prior_transformer
-      title: PriorTransformer
-    - local: api/models/controlnet
-      title: ControlNetModel
-    - local: api/models/controlnet_hunyuandit
-      title: HunyuanDiT2DControlNetModel
-    - local: api/models/controlnet_sd3
-      title: SD3ControlNetModel
-    - local: api/models/controlnet_sparsectrl
-      title: SparseControlNetModel
+    - sections:
+      - local: api/models/controlnet
+        title: ControlNetModel
+      - local: api/models/controlnet_hunyuandit
+        title: HunyuanDiT2DControlNetModel
+      - local: api/models/controlnet_sd3
+        title: SD3ControlNetModel
+      - local: api/models/controlnet_sparsectrl
+        title: SparseControlNetModel
+      title: ControlNets
+    - sections:
+      - local: api/models/aura_flow_transformer2d
+        title: AuraFlowTransformer2DModel
+      - local: api/models/cogvideox_transformer3d
+        title: CogVideoXTransformer3DModel
+      - local: api/models/dit_transformer2d
+        title: DiTTransformer2DModel
+      - local: api/models/flux_transformer
+        title: FluxTransformer2DModel
+      - local: api/models/hunyuan_transformer2d
+        title: HunyuanDiT2DModel
+      - local: api/models/latte_transformer3d
+        title: LatteTransformer3DModel
+      - local: api/models/lumina_nextdit2d
+        title: LuminaNextDiT2DModel
+      - local: api/models/pixart_transformer2d
+        title: PixArtTransformer2DModel
+      - local: api/models/prior_transformer
+        title: PriorTransformer
+      - local: api/models/sd3_transformer2d
+        title: SD3Transformer2DModel
+      - local: api/models/stable_audio_transformer
+        title: StableAudioDiTModel
+      - local: api/models/transformer2d
+        title: Transformer2DModel
+      - local: api/models/transformer_temporal
+        title: TransformerTemporalModel
+      title: Transformers
+    - sections:
+      - local: api/models/stable_cascade_unet
+        title: StableCascadeUNet
+      - local: api/models/unet
+        title: UNet1DModel
+      - local: api/models/unet2d
+        title: UNet2DModel
+      - local: api/models/unet2d-cond
+        title: UNet2DConditionModel
+      - local: api/models/unet3d-cond
+        title: UNet3DConditionModel
+      - local: api/models/unet-motion
+        title: UNetMotionModel
+      - local: api/models/uvit2d
+        title: UViT2DModel
+      title: UNets
+    - sections:
+      - local: api/models/autoencoderkl
+        title: AutoencoderKL
+      - local: api/models/autoencoderkl_cogvideox
+        title: AutoencoderKLCogVideoX
+      - local: api/models/asymmetricautoencoderkl
+        title: AsymmetricAutoencoderKL
+      - local: api/models/consistency_decoder_vae
+        title: ConsistencyDecoderVAE
+      - local: api/models/autoencoder_oobleck
+        title: Oobleck AutoEncoder
+      - local: api/models/autoencoder_tiny
+        title: Tiny AutoEncoder
+      - local: api/models/vq
+        title: VQModel
+      title: VAEs
     title: Models
   - isExpanded: false
     sections:
...
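For reference, the reorganized file keeps the usual doc-builder toctree shape: a group entry carries a `sections` list plus a `title`, while a leaf entry carries a `local` doc path plus a `title`. The following sketch is not part of this commit; the PyYAML dependency and the relative file path are assumptions. It simply walks that structure and prints the nested navigation the change produces.

```python
# Minimal sketch (not part of this commit): walk the nested sections/local/title
# entries in _toctree.yml and print the navigation tree after the reorganization.
# Assumes PyYAML is installed and the script runs from a diffusers checkout.
import yaml

def print_tree(entries, depth=0):
    for entry in entries:
        # Group entries have a nested "sections" list; leaf entries have a "local" doc path.
        label = entry.get("title", entry.get("local", "?"))
        print("  " * depth + "- " + label)
        if "sections" in entry:
            print_tree(entry["sections"], depth + 1)

with open("docs/source/en/_toctree.yml") as f:
    toctree = yaml.safe_load(f)

print_tree(toctree)
```

With this commit applied, the "Models" group should print with four subgroups (ControlNets, Transformers, UNets, VAEs) instead of one flat list.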
docs/source/en/optimization/fp16.md
...
@@ -125,3 +125,5 @@ image
 <figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion + Tiny AutoEncoder</figcaption>
 </div>
 </div>
+
+More tiny autoencoder models for other Stable Diffusion models, like Stable Diffusion 3, are available from [madebyollin](https://huggingface.co/madebyollin).
\ No newline at end of file
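The added sentence closes out the Tiny AutoEncoder example on that page, which pairs a distilled Stable Diffusion checkpoint with TAESD. A rough sketch of that pairing (not part of this diff; the exact checkpoint names are assumptions here) looks like:

```python
# Rough sketch: swap a pipeline's VAE for a tiny autoencoder from madebyollin.
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

# Distilled Stable Diffusion checkpoint (assumed example).
pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True
)
# TAESD decodes latents much faster than the full VAE at a small quality cost.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16, use_safetensors=True
)
pipe = pipe.to("cuda")

image = pipe("a golden retriever wearing sunglasses, studio photo").images[0]
```

Swapping `pipe.vae` this way trades a little decode quality for a much cheaper latent-to-image step, which is the point of the Tiny AutoEncoder section the new sentence extends.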