1. 07 Feb, 2023 1 commit
    • Stable Diffusion Latent Upscaler (#2059) · 1051ca81
      YiYi Xu authored
      
      
      * Modify UNet2DConditionModel
      
      - allow skipping mid_block
      
      - added a `norm_group_size` argument so that we can set the `num_groups` for group norm as `num_channels // norm_group_size` (see the sketch after this list)
      
      - allow the user to set the dimension of the timestep embedding (`time_embed_dim`)
      
      - the kernel_size for `conv_in` and `conv_out` is now configurable
      
      - add a random Fourier feature layer (`GaussianFourierProjection`) for `time_proj`
      
      - allow the user to add the time and class embeddings together before passing them through the projection layer: `time_embedding(t_emb + class_label)` (also sketched after this list)
      
      - added 2 arguments `attn1_types` and `attn2_types`
      
        * currently we have the argument `only_cross_attention`: when it is set to `True`, we get a `BasicTransformerBlock` with two cross-attention layers, otherwise we
      get a self-attention followed by a cross-attention; in the k-upscaler we need blocks that contain either a single cross-attention, or a self-attention followed by a cross-attention.
      So I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention type for each of the two positions in a block; note that I still kept
      the `only_cross_attention` argument on the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
      
      - the positions of the downsample and upsample layers are now configurable
      
      - in the k-upscaler unet there is only one skip connection per up/down block (instead of one per layer, as in the stable diffusion unet); added `skip_freq = "block"` to support
      this use case
      
      - if the user passes an `attention_mask` to the unet, it prepares the mask and passes a flag to the cross attention processor to skip the `prepare_attention_mask` step
      inside the cross attention block
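
      A minimal sketch of two of the ideas above, `norm_group_size` and the summed time/class embedding (hypothetical dimensions, not the diffusers implementation):

      ```python
      import torch
      import torch.nn as nn

      # `norm_group_size`: derive `num_groups` from the channel count so that each
      # group always normalizes `norm_group_size` channels.
      num_channels, norm_group_size = 512, 32
      norm = nn.GroupNorm(num_groups=num_channels // norm_group_size, num_channels=num_channels)
      print(norm(torch.randn(2, num_channels, 16, 16)).shape)  # torch.Size([2, 512, 16, 16])

      # Summing the time and class embeddings before the shared projection,
      # i.e. `time_embedding(t_emb + class_label)`.
      time_embed_dim = 1024
      time_embedding = nn.Sequential(
          nn.Linear(256, time_embed_dim), nn.SiLU(), nn.Linear(time_embed_dim, time_embed_dim)
      )
      t_emb = torch.randn(2, 256)        # raw timestep embedding
      class_label = torch.randn(2, 256)  # class embedding of the same shape
      emb = time_embedding(t_emb + class_label)  # combined before projection
      ```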
      
      add up/down blocks for k-upscaler
      
      modify CrossAttention class
      
      - make the `dropout` layer in `to_out` optional
      
      - `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible; note that when it is used for cross
      attention, `to_k` and `to_v` have to stay linear because `encoder_hidden_states` is not 2d
      
      - `cross_attention_norm` - add an optional LayerNorm on `encoder_hidden_states`
      
      - `attention_dropout`: add an optional dropout on the attention scores (these options are sketched after this list)
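
      A rough, self-contained sketch of these cross-attention options; shapes and names here are assumptions for illustration, not the actual diffusers `CrossAttention` class:

      ```python
      import torch
      import torch.nn as nn

      class SketchCrossAttention(nn.Module):
          def __init__(self, dim, cross_dim, heads=8,
                       cross_attention_norm=False, attention_dropout=0.0, out_dropout=None):
              super().__init__()
              self.heads = heads
              self.scale = (dim // heads) ** -0.5
              self.to_q = nn.Linear(dim, dim)
              # for cross attention, to_k/to_v stay linear: encoder_hidden_states
              # is (batch, tokens, cross_dim), not a 2d feature map
              self.to_k = nn.Linear(cross_dim, dim)
              self.to_v = nn.Linear(cross_dim, dim)
              # `cross_attention_norm`: optional LayerNorm on encoder_hidden_states
              self.norm_cross = nn.LayerNorm(cross_dim) if cross_attention_norm else None
              # `attention_dropout`: optional dropout on the attention scores
              self.attn_drop = nn.Dropout(attention_dropout)
              # the dropout layer in `to_out` is optional
              out = [nn.Linear(dim, dim)]
              if out_dropout is not None:
                  out.append(nn.Dropout(out_dropout))
              self.to_out = nn.Sequential(*out)

          def forward(self, x, encoder_hidden_states):
              if self.norm_cross is not None:
                  encoder_hidden_states = self.norm_cross(encoder_hidden_states)
              b, n, _ = x.shape
              def split(t):  # (b, tokens, dim) -> (b, heads, tokens, head_dim)
                  return t.reshape(b, -1, self.heads, t.shape[-1] // self.heads).transpose(1, 2)
              q = split(self.to_q(x))
              k = split(self.to_k(encoder_hidden_states))
              v = split(self.to_v(encoder_hidden_states))
              attn = self.attn_drop((q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1))
              return self.to_out((attn @ v).transpose(1, 2).reshape(b, n, -1))

      layer = SketchCrossAttention(dim=64, cross_dim=768, cross_attention_norm=True,
                                   attention_dropout=0.1, out_dropout=0.1)
      print(layer(torch.randn(2, 256, 64), torch.randn(2, 77, 768)).shape)  # torch.Size([2, 256, 64])
      ```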
      
      adapt BasicTransformerBlock
      
      - add an ada group norm layer to condition the attention input on the timestep embedding
      
      - allow skipping the FeedForward layer in between the attentions
      
      - replaced the `only_cross_attention` argument with `attn1_type` and `attn2_type` for more flexible configuration
      
      update timestep embedding: add a new act_fn `gelu` and an optional `act_2`
      
      modified ResnetBlock2D
      
      - refactored with the AdaGroupNorm class (the timestep scale-shift normalization; see the sketch after this list)
      
      - add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
      
      - add option to use AdaGroupNorm on the input instead of group norm
      
      - add an option to add a dropout layer after each conv
      
      - allow user to set the bias in conv_shortcut (needed for k-upscaler)
      
      - add gelu
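
      The AdaGroupNorm idea referenced above, as a rough sketch (the real class in diffusers may differ in details):

      ```python
      import torch
      import torch.nn as nn

      # Timestep scale-shift normalization: project the timestep embedding to a
      # per-channel scale and shift, applied after a parameter-free GroupNorm.
      class SketchAdaGroupNorm(nn.Module):
          def __init__(self, embedding_dim, num_channels, num_groups):
              super().__init__()
              self.linear = nn.Linear(embedding_dim, num_channels * 2)
              self.norm = nn.GroupNorm(num_groups, num_channels, affine=False)

          def forward(self, x, emb):
              scale, shift = self.linear(emb).chunk(2, dim=1)
              x = self.norm(x)
              return x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

      norm = SketchAdaGroupNorm(embedding_dim=1024, num_channels=512, num_groups=16)
      out = norm(torch.randn(2, 512, 16, 16), torch.randn(2, 1024))
      print(out.shape)  # torch.Size([2, 512, 16, 16])
      ```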
      
      adding conversion script for k-upscaler unet
      
      add pipeline
      
      * fix attention mask
      
      * fix a typo
      
      * fix a bug
      
      * make sure model can be used with GPU
      
      * make pipeline work with fp16
      
      * fix an error in BasicTransformerBlock
      
      * make style
      
      * fix typo
      
      * some more fixes
      
      * up
      
      * up
      
      * correct more
      
      * some clean-up
      
      * clean time proj
      
      * up
      
      * up
      
      * more changes
      
      * remove the upcast_attention=True from unet config
      
      * remove attn1_types, attn2_types etc
      
      * fix
      
      * revert incorrect changes up/down samplers
      
      * make style
      
      * remove outdated files
      
      * Apply suggestions from code review
      
      * attention refactor
      
      * refactor cross attention
      
      * Apply suggestions from code review
      
      * update
      
      * up
      
      * update
      
      * Apply suggestions from code review
      
      * finish
      
      * Update src/diffusers/models/cross_attention.py
      
      * more fixes
      
      * up
      
      * up
      
      * up
      
      * finish
      
      * more corrections of conversion state
      
      * act_2 -> act_2_fn
      
      * remove dropout_after_conv from ResnetBlock2D
      
      * make style
      
      * simplify KAttentionBlock
      
      * add fast test for latent upscaler pipeline
      
      * add slow test
      
      * slow test fp16
      
      * make style
      
      * add doc string for pipeline_stable_diffusion_latent_upscale
      
      * add api doc page for latent upscaler pipeline
      
      * deprecate attention mask
      
      * clean up embeddings
      
      * simplify resnet
      
      * up
      
      * clean up resnet
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * improve a bit more
      
      * correct more
      
      * more clean-ups
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstrings for new unet config
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * # Copied from
      
      * encode the image if not latent
      
      * remove force casting vae to fp32
      
      * fix
      
      * add comments about the preconditioning parameters from the k-diffusion paper (summarized below)
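
      For reference, these are the preconditioning coefficients from Karras et al. 2022 ("Elucidating the Design Space of Diffusion-Based Generative Models", Table 1), which those comments refer to; a small sketch with an assumed `sigma_data`:

      ```python
      import math

      sigma_data = 0.5  # assumed value; depends on how the model was trained

      def precondition_coeffs(sigma):
          c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
          c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
          c_in = 1 / math.sqrt(sigma**2 + sigma_data**2)
          c_noise = 0.25 * math.log(sigma)
          return c_skip, c_out, c_in, c_noise

      # the denoiser is then D(x; sigma) = c_skip * x + c_out * model(c_in * x, c_noise)
      print(precondition_coeffs(1.0))
      ```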
      
      * attn1_type, attn2_type -> add_self_attention
      
      * clean up get_down_block and get_up_block
      
      * fix
      
      * fixed a typo(?) in ada group norm
      
      * update slice attention processor for cross attention
      
      * update slice
      
      * fix fast test
      
      * update the checkpoint
      
      * finish tests
      
      * fix-copies
      
      * fix-copy for modeling_text_unet.py
      
      * make style
      
      * make style
      
      * fix f-string
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix import
      
      * correct changes
      
      * fix resnet
      
      * make fix-copies
      
      * correct euler scheduler
      
      * add missing #copied from for preprocess
      
      * revert
      
      * fix
      
      * fix copies
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/models/cross_attention.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * clean up conversion script
      
      * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
      
      * more
      
      * Update src/diffusers/models/unet_2d_condition.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * remove prepare_extra_step_kwargs
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix a typo in timestep embedding
      
      * remove num_image_per_prompt
      
      * fix fast test
      
      * make style + fix-copies
      
      * fix
      
      * fix xformer test
      
      * fix style
      
      * doc string
      
      * make style
      
      * fix-copies
      
      * docstring for time_embedding_norm
      
      * make style
      
      * final finishes
      
      * make fix-copies
      
      * fix tests
      
      ---------
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  2. 18 Dec, 2022 1 commit
    • kakaobrain unCLIP (#1428) · 2dcf64b7
      Will Berman authored
      
      
      * [wip] attention block updates
      
      * [wip] unCLIP unet decoder and super res
      
      * [wip] unCLIP prior transformer
      
      * [wip] scheduler changes
      
      * [wip] text proj utility class
      
      * [wip] UnCLIPPipeline
      
      * [wip] kakaobrain unCLIP convert script
      
      * [unCLIP pipeline] fixes re: @patrickvonplaten
      
      remove callbacks
      
      move denoising loops into call function
      
      * UNCLIPScheduler re: @patrickvonplaten
      
      Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified
      DDPM scheduler with changes to support karlo
      
      * mask -> attention_mask re: @patrickvonplaten
      
      * [DDPMScheduler] remove leftover change
      
      * [docs] PriorTransformer
      
      * [docs] UNet2DConditionModel and UNet2DModel
      
      * [nit] UNCLIPScheduler -> UnCLIPScheduler
      
      matches existing unclip naming better
      
      * [docs] SchedulingUnCLIP
      
      * [docs] UnCLIPTextProjModel
      
      * refactor
      
      * finish licenses
      
      * rename all to attention_mask and prep in models
      
      * more renaming
      
      * don't expose unused configs
      
      * final renaming fixes
      
      * remove x attn mask when not necessary
      
      * configure kakao script to use new class embedding config
      
      * fix copies
      
      * [tests] UnCLIPScheduler
      
      * finish x attn
      
      * finish
      
      * remove more
      
      * rename condition blocks
      
      * clean more
      
      * Apply suggestions from code review
      
      * up
      
      * fix
      
      * [tests] UnCLIPPipelineFastTests
      
      * remove unused imports
      
      * [tests] UnCLIPPipelineIntegrationTests
      
      * correct
      
      * make style
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  3. 14 Nov, 2022 1 commit
    • Add UNet 1d for RL model for planning + colab (#105) · 7c5fef81
      Nathan Lambert authored
      
      
      * re-add RL model code
      
      * match model forward api
      
      * add register_to_config, pass training tests
      
      * fix tests, update forward outputs
      
      * remove unused code, some comments
      
      * add to docs
      
      * remove extra embedding code
      
      * unify time embedding
      
      * remove conv1d output sequential
      
      * remove sequential from conv1dblock
      
      * style and deleting duplicated code
      
      * clean files
      
      * remove unused variables
      
      * clean variables
      
      * add 1d resnet block structure for downsample
      
      * rename as unet1d
      
      * fix renaming
      
      * rename files
      
      * add get_block(...) api
      
      * unify args for model1d like model2d
      
      * minor cleaning
      
      * fix docs
      
      * improve 1d resnet blocks
      
      * fix tests, remove permutes
      
      * fix style
      
      * add output activation
      
      * rename flax blocks file
      
      * Add Value Function and corresponding example script to Diffuser implementation (#884)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix a couple of shape bugs
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * update post merge of scripts
      
      * add midblock / outblock architecture
      
      * Pipeline cleanup (#947)
      
      * Apply suggestions from code review
      
      * clean up comments
      
      * convert older script to using pipeline and add readme
      
      * rename scripts
      
      * style, update tests
      
      * delete unet rl model file
      
      * remove imports in src
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * Update src/diffusers/models/unet_1d_blocks.py
      
      * Update tests/test_models_unet.py
      
      * RL Cleanup v2 (#965)
      
      * add specific vf block and update tests
      
      * style
      
      * Update tests/test_models_unet.py
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * fix quality in tests
      
      * fix quality style, split test file
      
      * fix checks / tests
      
      * make timesteps closer to main
      
      * unify block API
      
      * unify forward api
      
      * delete lines in examples
      
      * style
      
      * examples style
      
      * all tests pass
      
      * make style
      
      * make dance_diff test pass
      
      * Refactoring RL PR (#1200)
      
      * init file changes
      
      * add import utils
      
      * finish cleaning files, imports
      
      * remove import flags
      
      * clean examples
      
      * fix imports, tests for merge
      
      * update readmes
      
      * hotfix for tests
      
      * quality
      
      * fix some tests
      
      * change defaults
      
      * more mps test fixes
      
      * unet1d defaults
      
      * do not default import experimental
      
      * defaults for tests
      
      * fix tests
      
      * fix-copies
      
      * fix
      
      * changes per Patrick's comments (#1285)
      
      * changes per Patrick's comments
      
      * update conversion script
      
      * fix renaming
      
      * skip more mps tests
      
      * last test fix
      
      * Update examples/rl/README.md
      Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
  4. 28 Oct, 2022 1 commit
  5. 25 Oct, 2022 1 commit
  6. 11 Oct, 2022 1 commit
  7. 10 Oct, 2022 1 commit
  8. 04 Oct, 2022 2 commits
  9. 30 Sep, 2022 2 commits
    • Allow resolutions that are not multiples of 64 (#505) · a784be2e
      Josh Achiam authored
      
      
      * Allow resolutions that are not multiples of 64
      
      * ran black
      
      * fix bug
      
      * add test
      
      * more explanation
      
      * more comments
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Optimize Stable Diffusion (#371) · 9ebaea54
      Nouamane Tazi authored
      * initial commit
      
      * make UNet stream capturable
      
      * try to fix noise_pred value
      
      * remove cuda graph and keep NB
      
      * non blocking unet with PNDMScheduler
      
      * make timesteps np arrays for pndm scheduler
      because lists don't get formatted to tensors in `self.set_format`
      
      * make max async in pndm
      
      * use channel last format in unet
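
      Roughly what the channels-last change means (illustrative only, not the actual unet code):

      ```python
      import torch
      import torch.nn as nn

      # channels_last stores NCHW tensors with the channel dimension innermost,
      # which can speed up convolutions on CUDA with cuDNN.
      conv = nn.Conv2d(4, 8, kernel_size=3, padding=1).to(memory_format=torch.channels_last)
      x = torch.randn(1, 4, 64, 64).to(memory_format=torch.channels_last)
      print(conv(x).is_contiguous(memory_format=torch.channels_last))  # True
      ```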
      
      * avoid moving timesteps to the device in each unet call
      
      * avoid memcpy op in `get_timestep_embedding`
      
      * add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
      
      * update TODO
      
      * replace `channels_last` kwarg with `memory_format` for more generality
      
      * revert the channels_last changes to leave it for another PR
      
      * remove non_blocking when moving input ids to device
      
      * remove blocking from all .to() operations at beginning of pipeline
      
      * fix merging
      
      * fix merging
      
      * model can run in other precisions without autocast
      
      * attn refactoring
      
      * Revert "attn refactoring"
      
      This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
      
      * remove restriction to run conv_norm in fp32
      
      * use `baddbmm` instead of `matmul` in attention for better perf
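
      A sketch of the `baddbmm` swap (shapes assumed for illustration):

      ```python
      import torch

      q = torch.randn(16, 4096, 64)  # (batch * heads, q_tokens, head_dim)
      k = torch.randn(16, 77, 64)    # (batch * heads, k_tokens, head_dim)
      scale = 64 ** -0.5

      # before: two ops (batched matmul, then scaling)
      scores_matmul = torch.matmul(q, k.transpose(-1, -2)) * scale

      # after: one fused op; beta=0 means the (uninitialized) input tensor is
      # ignored, and alpha folds the 1/sqrt(head_dim) scaling into the matmul
      scores_baddbmm = torch.baddbmm(
          torch.empty(q.shape[0], q.shape[1], k.shape[1], dtype=q.dtype, device=q.device),
          q, k.transpose(-1, -2), beta=0, alpha=scale,
      )
      print(torch.allclose(scores_matmul, scores_baddbmm))  # True
      ```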
      
      * removing all reshapes to test perf
      
      * Revert "removing all reshapes to test perf"
      
      This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
      
      * add shapes comments
      
      * hardcode what's needed for jitting
      
      * Revert "hardcore whats needed for jitting"
      
      This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
      
      * Revert "remove restriction to run conv_norm in fp32"
      
      This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
      
      * revert using baddbmm in attention's forward
      
      * cleanup comment
      
      * remove restriction to run conv_norm in fp32. no quality loss was noticed
      
      This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
      
      * add more optimizations techniques to docs
      
      * Revert "add shapes comments"
      
      This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
      
      * apply suggestions
      
      * make quality
      
      * apply suggestions
      
      * styling
      
      * `scheduler.timesteps` are now arrays so we don't need .to()
      
      * remove useless .type()
      
      * use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
      
      * move scheduler timesteps to correct device if tensors
      
      * add device to `set_timesteps` in LMSD scheduler
      
      * `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
      
      * quick fix
      
      * styling
      
      * remove kwargs from schedulers `set_timesteps`
      
      * revert to using max in K-LMS inpaint pipeline test
      
      * Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
      
      This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
      
      * move timesteps to correct device before loop in SD pipeline
      
      * apply previous fix to other SD pipelines
      
      * UNet now accepts tensor timesteps even on the wrong device, to avoid errors
      - it shouldn't affect performance if timesteps are already on the correct device
      - it does slow down performance if they're on the wrong device
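
      A sketch of that behavior, with a hypothetical helper name:

      ```python
      import torch

      def canonicalize_timesteps(timesteps, sample):
          """Accept a python number or a tensor on any device and return a 1-d
          tensor on the sample's device (one cheap transfer per call)."""
          if not torch.is_tensor(timesteps):
              timesteps = torch.tensor([timesteps], dtype=torch.long)
          elif timesteps.ndim == 0:
              timesteps = timesteps[None]
          return timesteps.to(sample.device)

      sample = torch.randn(1, 4, 64, 64)
      print(canonicalize_timesteps(10, sample))                # tensor([10])
      print(canonicalize_timesteps(torch.tensor(10), sample))  # tensor([10])
      ```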
      
      * fix pipeline when timesteps are arrays with strides
  10. 29 Sep, 2022 1 commit
  11. 19 Sep, 2022 1 commit
  12. 16 Sep, 2022 2 commits
  13. 14 Sep, 2022 1 commit
  14. 08 Sep, 2022 1 commit
    • Inference support for `mps` device (#355) · 5dda1735
      Pedro Cuenca authored
      * Initial support for mps in Stable Diffusion pipeline.
      
      * Initial "warmup" implementation when using mps.
      
      * Make some deterministic tests pass with mps.
      
      * Disable training tests when using mps.
      
      * SD: generate latents in CPU then move to device.
      
      This is especially important when using the mps device, because
      generators are not supported there. See for example
      https://github.com/pytorch/pytorch/issues/84288.
      
      In addition, the other pipelines seem to use the same approach: generate
      the random samples then move to the appropriate device.
      
      After this change, generating an image in MPS produces the same result
      as when using the CPU, if the same seed is used.
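
      The pattern described above, as a small sketch:

      ```python
      import torch

      # Generators aren't supported on mps, so draw the initial latents on CPU
      # with a seeded generator, then move them to the target device. The values
      # (and hence the generated image) match for the same seed on any device.
      generator = torch.Generator(device="cpu").manual_seed(0)
      latents = torch.randn((1, 4, 64, 64), generator=generator)  # created on CPU
      device = "mps" if torch.backends.mps.is_available() else "cpu"
      latents = latents.to(device)
      ```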
      
      * Remove prints.
      
      * Pass AutoencoderKL test_output_pretrained with mps.
      
      Sampling from `posterior` must be done in CPU.
      
      * Style
      
      * Do not use torch.long for log op in mps device.
      
      * Perform incompatible padding ops in CPU.
      
      UNet tests now pass.
      See https://github.com/pytorch/pytorch/issues/84535
      
      
      
      * Style: fix import order.
      
      * Remove unused symbols.
      
      * Remove MPSWarmupMixin, do not apply automatically.
      
      We do apply warmup in the tests, but not during normal use.
      This adopts some PR suggestions by @patrickvonplaten.
      
      * Add comment for mps fallback to CPU step.
      
      * Add README_mps.md for mps installation and use.
      
      * Apply `black` to modified files.
      
      * Restrict README_mps to SD, show measures in table.
      
      * Make PNDM indexing compatible with mps.
      
      Addresses #239.
      
      * Do not use float64 when using LDMScheduler.
      
      Fixes #358.
      
      * Fix typo identified by @patil-suraj
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Adapt example to new output style.
      
      * Restore 1:1 results reproducibility with CompVis.
      
      However, mps latents need to be generated in CPU because generators
      don't work in the mps device.
      
      * Move PyTorch nightly to requirements.
      
      * Adapt `test_scheduler_outputs_equivalence` to MPS.
      
      * mps: skip training tests instead of ignoring silently.
      
      * Make VQModel tests pass on mps.
      
      * mps ddim tests: warmup, increase tolerance.
      
      * ScoreSdeVeScheduler indexing made mps compatible.
      
      * Make ldm pipeline tests pass using warmup.
      
      * Style
      
      * Simplify casting as suggested in PR.
      
      * Add Known Issues to readme.
      
      * `isort` import order.
      
      * Remove _mps_warmup helpers from ModelMixin.
      
      And just make changes to the tests.
      
      * Skip tests using unittest decorator for consistency.
      
      * Remove temporary var.
      
      * Remove spurious blank space.
      
      * Remove unused symbol.
      
      * Remove README_mps.
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> 
  15. 07 Sep, 2022 1 commit
  16. 01 Sep, 2022 1 commit
  17. 25 Aug, 2022 1 commit
  18. 23 Aug, 2022 1 commit
  19. 16 Aug, 2022 1 commit
  20. 28 Jul, 2022 1 commit
  21. 20 Jul, 2022 1 commit
    • Big Model Renaming (#109) · 9c3820d0
      Patrick von Platen authored
      * up
      
      * change model name
      
      * renaming
      
      * more changes
      
      * up
      
      * up
      
      * up
      
      * save checkpoint
      
      * finish api / naming
      
      * finish config renaming
      
      * rename all weights
      
      * finish really
  22. 18 Jul, 2022 1 commit
  23. 15 Jul, 2022 1 commit
  24. 14 Jul, 2022 2 commits
  25. 12 Jul, 2022 1 commit
  26. 04 Jul, 2022 1 commit
    • Simplify FirUp/down, unet sde (#71) · 53a42d0a
      Suraj Patil authored
      * refactor fir up/down sample
      
      * remove variance scaling
      
      * remove variance scaling from unet sde
      
      * refactor Linear
      
      * style
      
      * actually remove variance scaling
      
      * add back upsample_2d, downsample_2d
      
      * style
      
      * fix FirUpsample2D
  27. 03 Jul, 2022 4 commits
  28. 01 Jul, 2022 6 commits