OpenDAS / diffusers · Commits

Commit 9f10c545 (unverified), authored Nov 25, 2022 by Patrick von Platen, committed by GitHub on Nov 25, 2022
Parent: 5c10e68a

Fix sample size conversion script (#1408)

up
Changes: 2 changed files, with 2 additions and 71 deletions (+2, -71)

- scripts/convert_original_stable_diffusion_to_diffusers.py: +2, -1
- v1-inference.yaml: +0, -70
scripts/convert_original_stable_diffusion_to_diffusers.py (view file @ 9f10c545)

@@ -211,6 +211,7 @@ def create_unet_diffusers_config(original_config):
     """
     Creates a config for the diffusers based on the config of the LDM model.
     """
+    model_params = original_config.model.params
     unet_params = original_config.model.params.unet_config.params
 
     block_out_channels = [unet_params.model_channels * mult for mult in unet_params.channel_mult]
@@ -230,7 +231,7 @@ def create_unet_diffusers_config(original_config):
         resolution //= 2
 
     config = dict(
-        sample_size=unet_params.image_size,
+        sample_size=model_params.image_size,
         in_channels=unet_params.in_channels,
         out_channels=unet_params.out_channels,
         down_block_types=tuple(down_block_types),
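The point of the fix: in the LDM v1 config, the UNet-level `image_size` (32, marked `# unused` in the YAML below) differs from the model-level `image_size` (64, the latent resolution actually used), so reading the UNet value produced a wrong `sample_size`. A minimal sketch of the before/after behavior, using a plain nested dict as a hypothetical stand-in for the OmegaConf object the real script loads (helper names here are illustrative, not from the script):

```python
# Stand-in for the parsed v1-inference.yaml; field names match the config below.
original_config = {
    "model": {
        "params": {
            "image_size": 64,  # latent resolution actually used by the model
            "unet_config": {
                "params": {"image_size": 32},  # marked "# unused" in the YAML
            },
        }
    }
}

def sample_size_before(cfg):
    # Old behavior: read the unused UNet-level value.
    return cfg["model"]["params"]["unet_config"]["params"]["image_size"]

def sample_size_after(cfg):
    # Fixed behavior: read the model-level value.
    return cfg["model"]["params"]["image_size"]

print(sample_size_before(original_config))  # 32 (wrong sample_size)
print(sample_size_after(original_config))   # 64 (correct sample_size)
```

With the fix, the converted diffusers UNet gets `sample_size=64`, matching the 64x64 latents of the 512x512 v1 checkpoints.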
v1-inference.yaml (deleted, 100644 → 0; view file @ 5c10e68a)

model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
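Why the model-level `image_size` is 64 here: the VAE in `first_stage_config` halves the spatial resolution once per `ch_mult` entry after the first, so a 512-pixel image becomes a 512 / 8 = 64 latent. A short derivation sketch under that assumption (not part of the converter script):

```python
# Values taken from ddconfig above.
ch_mult = [1, 2, 4, 4]

# The autoencoder downsamples by 2 at each of the len(ch_mult) - 1 transitions,
# giving the overall VAE scale factor (8 for SD v1).
vae_scale = 2 ** (len(ch_mult) - 1)

# SD v1 checkpoints are trained at 512x512 pixels.
pixel_resolution = 512
latent_image_size = pixel_resolution // vae_scale

print(vae_scale, latent_image_size)  # 8 64
```

This is the 64 that the fixed conversion script now reads into `sample_size`, rather than the unused UNet-level 32.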