1. 17 Apr, 2023 2 commits
  2. 13 Apr, 2023 1 commit
  3. 12 Apr, 2023 1 commit
  4. 11 Apr, 2023 2 commits
• add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
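The defaulting rule described above, as a minimal sketch (the function and argument handling below are hypothetical, not the actual diffusers API):

```python
# Hypothetical sketch: a single boolean `only_cross_attention` is broadcast
# per down block and also becomes the default for the mid block flag.
def resolve_attention_flags(only_cross_attention, mid_block_only_cross_attention=None, num_down_blocks=4):
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        # the mid block inherits the scalar flag when none is given explicitly
        mid_block_only_cross_attention = only_cross_attention
    if isinstance(only_cross_attention, bool):
        only_cross_attention = (only_cross_attention,) * num_down_blocks
    return only_cross_attention, bool(mid_block_only_cross_attention)

print(resolve_attention_flags(True))  # ((True, True, True, True), True)
print(resolve_attention_flags((True, False), num_down_blocks=2))  # mid block falls back to False
```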
• Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * uP
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
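A toy illustration of the access pattern this commit enforces: hyperparameters such as `in_channels` are read from `.config`, not as direct module attributes (`ToyUNet` is hypothetical; the real `ConfigMixin` machinery is more involved):

```python
from types import SimpleNamespace

class ToyUNet:
    """Hypothetical stand-in for a diffusers model with a config object."""
    def __init__(self, in_channels: int = 4):
        self.config = SimpleNamespace(in_channels=in_channels)

unet = ToyUNet()
print(unet.config.in_channels)  # 4 -- the supported access path
# unet.in_channels              # AttributeError: direct attribute access is gone
```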
  5. 04 Apr, 2023 1 commit
  6. 31 Mar, 2023 1 commit
  7. 27 Mar, 2023 1 commit
  8. 23 Mar, 2023 2 commits
• Add AudioLDM (#2232) · b94880e5
      Sanchit Gandhi authored
      
      
      * Add AudioLDM
      
      * up
      
      * add vocoder
      
      * start unet
      
      * unconditional unet
      
      * clap, vocoder and vae
      
      * clean-up: conversion scripts
      
      * fix: conversion script token_type_ids
      
      * clean-up: pipeline docstring
      
      * tests: from SD
      
      * clean-up: cpu offload vocoder instead of safety checker
      
      * feat: adapt tests to audioldm
      
      * feat: add docs
      
      * clean-up: amend pipeline docstrings
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * fix: add doc path to toctree
      
      * clean-up: args for conversion script
      
      * clean-up: paths to checkpoints
      
      * fix: use conditional unet
      
      * clean-up: make style
      
      * fix: type hints for UNet
      
      * clean-up: docstring for UNet
      
      * clean-up: make style
      
      * clean-up: remove duplicate in docstring
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * clean-up: move imports to start in code snippet
      
      * fix: pass cross_attention_dim as a list/tuple to unet
      
      * clean-up: make fix-copies
      
      * fix: update checkpoint path
      
      * fix: unet cross_attention_dim in tests
      
      * film embeddings -> class embeddings
      
      * Apply suggestions from code review
Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      * fix: unet film embed to use existing args
      
      * fix: unet tests to use existing args
      
      * fix: make style
      
      * fix: transformers import and version in init
      
      * clean-up: make style
      
      * Revert "clean-up: make style"
      
      This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66.
      
      * clean-up: make style
      
      * clean-up: use pipeline tester mixin tests where poss
      
      * clean-up: skip attn slicing test
      
      * fix: add torch dtype to docs
      
      * fix: remove conversion script out of src
      
      * fix: remove .detach from 1d waveform
      
      * fix: reduce default num inf steps
      
      * fix: swap height/width -> audio_length_in_s
      
      * clean-up: make style
      
      * fix: remove nightly tests
      
      * fix: imports in conversion script
      
      * clean-up: slim-down to two slow tests
      
      * clean-up: slim-down fast tests
      
      * fix: batch consistent tests
      
      * clean-up: make style
      
      * clean-up: remove vae slicing fast test
      
      * clean-up: propagate changes to doc
      
      * fix: increase test tol to 1e-2
      
      * clean-up: finish docs
      
      * clean-up: make style
      
      * feat: vocoder / VAE compatibility check
      
      * feat: possibly expand / cut audio waveform
      
      * fix: pipeline call signature test
      
      * fix: slow tests output len
      
      * clean-up: make style
      
      * make style
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
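A hedged usage sketch for the pipeline added above; the checkpoint name is an assumption about the released weights:

```python
from diffusers import AudioLDMPipeline

# checkpoint name is assumed, not confirmed by the commit
pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm")
audio = pipe(
    "techno music with a strong, upbeat tempo",
    num_inference_steps=10,   # the commit reduced the default step count
    audio_length_in_s=5.0,    # replaces height/width for audio generation
).audios[0]                   # a plain 1-D numpy waveform
```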
• Skip `mps` in text-to-video tests (#2792) · aa0531fa
      Pedro Cuenca authored
      * Skip mps in text-to-video tests.
      
      * style
      
      * Skip UNet3D mps tests.
  9. 22 Mar, 2023 2 commits
• `mps`: remove warmup passes (#2771) · 92e1164e
      Pedro Cuenca authored
      * Remove warmup passes in mps tests.
      
      * Update mps docs: no warmup pass in PyTorch 2
      
      * Update imports.
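For reference, the removed pattern looked roughly like the sketch below (names are illustrative); per the docs update in this commit, PyTorch 2 no longer needs a throwaway pass on `mps`:

```python
import torch

def maybe_mps_warmup(pipe, inputs: dict):
    # Illustrative only: under PyTorch 1.x, one throwaway pass was needed on
    # "mps" before outputs became deterministic; this commit removes the need.
    if torch.backends.mps.is_available():
        _ = pipe(**inputs, num_inference_steps=1)
```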
• [MS Text To Video] Add first text to video (#2738) · ca1a2229
      Patrick von Platen authored
      
      
* [MS Text To Video] Add first text to video
      
      * upload
      
      * make first model example
      
      * match unet3d params
      
* make sure weights are correctly converted
      
      * improve
      
      * forward pass works, but diff result
      
      * make forward work
      
      * fix more
      
      * finish
      
      * refactor video output class.
      
      * feat: add support for a video export utility.
      
      * fix: opencv availability check.
      
      * run make fix-copies.
      
      * add: docs for the model components.
      
      * add: standalone pipeline doc.
      
      * edit docstring of the pipeline.
      
      * add: right path to TransformerTempModel
      
      * add: first set of tests.
      
      * complete fast tests for text to video.
      
      * fix bug
      
      * up
      
      * three fast tests failing.
      
      * add: note on slow tests
      
      * make work with all schedulers
      
      * apply styling.
      
      * add slow tests
      
      * change file name
      
      * update
      
      * more correction
      
      * more fixes
      
      * finish
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      
      * make copies
      
      * fix pipeline tests
      
      * fix more tests
      
      * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * apply suggestions
      
      * up
      
      * revert
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
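A usage sketch combining the new pipeline with the video export utility mentioned in the commit; the checkpoint name is an assumption, and export requires opencv-python:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # assumed checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

frames = pipe("an astronaut riding a horse", num_inference_steps=25).frames
video_path = export_to_video(frames)  # writes a video file and returns its path
```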
  10. 21 Mar, 2023 1 commit
  11. 18 Mar, 2023 1 commit
  12. 17 Mar, 2023 1 commit
  13. 16 Mar, 2023 1 commit
  14. 15 Mar, 2023 2 commits
  15. 04 Mar, 2023 1 commit
  16. 03 Mar, 2023 1 commit
  17. 01 Mar, 2023 1 commit
  18. 24 Feb, 2023 1 commit
• `mps` test fixes (#2470) · 54bc882d
      Pedro Cuenca authored
      * Skip variant tests (UNet1d, UNetRL) on mps.
      
      mish op not yet supported.
      
      * Exclude a couple of panorama tests on mps
      
      They are too slow for fast CI.
      
      * Exclude mps panorama from more tests.
      
      * mps: exclude all fast panorama tests as they keep failing.
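The exclusions follow the usual skip pattern, sketched here with assumed names (the suite's actual decorators and helpers may differ):

```python
import unittest
import torch

IS_MPS = torch.backends.mps.is_available()

class PanoramaFastTests(unittest.TestCase):
    @unittest.skipIf(IS_MPS, "too slow for fast CI / mish op unsupported on mps")
    def test_panorama_default(self):
        ...  # placeholder for the real test body
```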
  19. 13 Feb, 2023 1 commit
  20. 07 Feb, 2023 2 commits
• Replace flake8 with ruff and update black (#2279) · a7ca03aa
      Patrick von Platen authored
      * before running make style
      
      * remove left overs from flake8
      
      * finish
      
      * make fix-copies
      
      * final fix
      
      * more fixes
• Stable Diffusion Latent Upscaler (#2059) · 1051ca81
      YiYi Xu authored
      
      
      * Modify UNet2DConditionModel
      
      - allow skipping mid_block
      
      - adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
      
      - allow user to set dimension for the timestep embedding (`time_embed_dim`)
      
      - the kernel_size for `conv_in` and `conv_out` is now configurable
      
      - add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
      
- allow user to add the time and class embeddings together before passing through the projection layer - `time_embedding(t_emb + class_label)`
      
      - added 2 arguments `attn1_types` and `attn2_types`
      
* currently we have the argument `only_cross_attention`: when it's set to `True`, the `BasicTransformerBlock` is built with 2 cross-attentions; otherwise we get a self-attention followed by a cross-attention. In the k-upscaler we need blocks with just one cross-attention, or self-attention -> cross-attention, so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention type for each of the 2 positions in a block; note that I still kept the `only_cross_attention` argument on the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
      
      - the position of downsample layer and upsample layer is now configurable
      
- in the k-upscaler unet there is only one skip connection per up/down block (instead of one per layer as in the stable diffusion unet); added `skip_freq = "block"` to support this use case
      
- if the user passes an attention_mask to the unet, it will prepare the mask and pass a flag to the cross attention processor to skip the `prepare_attention_mask` step inside the cross attention block
      
      add up/down blocks for k-upscaler
      
      modify CrossAttention class
      
      - make the `dropout` layer in `to_out` optional
      
- `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. note that when it's used for cross attention, `to_k` and `to_v` have to be linear because the `encoder_hidden_states` is not 2d
      
      - `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
      
      - `attention_dropout`: add an optional dropout on attention score
      
      adapt BasicTransformerBlock
      
- add an ada groupnorm layer to condition the attention input on the timestep embedding
      
      - allow skipping the FeedForward layer in between the attentions
      
      - replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
      
      update timestep embedding: add new act_fn  gelu and an optional act_2
      
      modified ResnetBlock2D
      
      - refactored with AdaGroupNorm class (the timestep scale shift normalization)
      
      - add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
      
- add option to use AdaGroupNorm on the input instead of groupnorm
      
      - add options to add a dropout layer after each conv
      
      - allow user to set the bias in conv_shortcut (needed for k-upscaler)
      
      - add gelu
      
      adding conversion script for k-upscaler unet
      
      add pipeline
      
      * fix attention mask
      
      * fix a typo
      
      * fix a bug
      
      * make sure model can be used with GPU
      
      * make pipeline work with fp16
      
* fix an error in BasicTransformerBlock
      
      * make style
      
      * fix typo
      
      * some more fixes
      
      * uP
      
      * up
      
      * correct more
      
      * some clean-up
      
      * clean time proj
      
      * up
      
      * uP
      
      * more changes
      
      * remove the upcast_attention=True from unet config
      
      * remove attn1_types, attn2_types etc
      
      * fix
      
      * revert incorrect changes up/down samplers
      
      * make style
      
      * remove outdated files
      
      * Apply suggestions from code review
      
      * attention refactor
      
      * refactor cross attention
      
      * Apply suggestions from code review
      
      * update
      
      * up
      
      * update
      
      * Apply suggestions from code review
      
      * finish
      
      * Update src/diffusers/models/cross_attention.py
      
      * more fixes
      
      * up
      
      * up
      
      * up
      
      * finish
      
      * more corrections of conversion state
      
      * act_2 -> act_2_fn
      
      * remove dropout_after_conv from ResnetBlock2D
      
      * make style
      
      * simplify KAttentionBlock
      
      * add fast test for latent upscaler pipeline
      
      * add slow test
      
      * slow test fp16
      
      * make style
      
      * add doc string for pipeline_stable_diffusion_latent_upscale
      
      * add api doc page for latent upscaler pipeline
      
      * deprecate attention mask
      
      * clean up embeddings
      
      * simplify resnet
      
      * up
      
      * clean up resnet
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * improve a bit more
      
      * correct more
      
      * more clean-ups
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstrings for new unet config
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * # Copied from
      
      * encode the image if not latent
      
      * remove force casting vae to fp32
      
      * fix
      
      * add comments about preconditioning parameters from k-diffusion paper
      
      * attn1_type, attn2_type -> add_self_attention
      
      * clean up get_down_block and get_up_block
      
      * fix
      
      * fixed a typo(?) in ada group norm
      
* update sliced attention processor for cross attention
      
      * update slice
      
      * fix fast test
      
      * update the checkpoint
      
      * finish tests
      
      * fix-copies
      
      * fix-copy for modeling_text_unet.py
      
      * make style
      
      * make style
      
      * fix f-string
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix import
      
      * correct changes
      
      * fix resnet
      
      * make fix-copies
      
      * correct euler scheduler
      
      * add missing #copied from for preprocess
      
      * revert
      
      * fix
      
      * fix copies
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/models/cross_attention.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * clean up conversion script
      
      * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
      
      * more
      
      * Update src/diffusers/models/unet_2d_condition.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * remove prepare_extra_step_kwargs
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix a typo in timestep embedding
      
      * remove num_image_per_prompt
      
      * fix fasttest
      
      * make style + fix-copies
      
      * fix
      
      * fix xformer test
      
      * fix style
      
      * doc string
      
      * make style
      
      * fix-copies
      
      * docstring for time_embedding_norm
      
      * make style
      
      * final finishes
      
      * make fix-copies
      
      * fix tests
      
      ---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
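A sketch of chaining the upscaler after a Stable Diffusion pipeline, staying in latent space per the "encode the image if not latent" logic above; the checkpoint names are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
low_res_latents = pipe(prompt, output_type="latent").images  # skip the VAE decode
image = upscaler(prompt=prompt, image=low_res_latents, num_inference_steps=20).images[0]
```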
  21. 26 Jan, 2023 1 commit
  22. 25 Jan, 2023 1 commit
• Reproducibility 3/3 (#1924) · 6ba2231d
      Patrick von Platen authored
      
      
      * make tests deterministic
      
      * run slow tests
      
      * prepare for testing
      
      * finish
      
      * refactor
      
      * add print statements
      
      * finish more
      
      * correct some test failures
      
      * more fixes
      
      * set up to correct tests
      
      * more corrections
      
      * up
      
      * fix more
      
      * more prints
      
      * add
      
      * up
      
      * up
      
      * up
      
      * uP
      
      * uP
      
      * more fixes
      
      * uP
      
      * up
      
      * up
      
      * up
      
      * up
      
      * fix more
      
      * up
      
      * up
      
      * clean tests
      
      * up
      
      * up
      
      * up
      
      * more fixes
      
      * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make
      
      * correct
      
      * finish
      
      * finish
Co-authored-by: Suraj Patil <surajp815@gmail.com>
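The determinism these tests rely on comes down to seeding a `torch.Generator` and passing it to the pipeline; a minimal sketch (the small checkpoint is an arbitrary, assumed choice):

```python
import torch
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")  # small, assumed checkpoint

generator = torch.Generator(device="cpu").manual_seed(0)
first = pipe(generator=generator, num_inference_steps=2, output_type="np").images

generator = torch.Generator(device="cpu").manual_seed(0)  # re-seed identically
second = pipe(generator=generator, num_inference_steps=2, output_type="np").images

assert (first == second).all()  # same seed, same output
```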
  23. 18 Jan, 2023 1 commit
  24. 30 Dec, 2022 1 commit
  25. 20 Dec, 2022 1 commit
  26. 09 Dec, 2022 1 commit
  27. 05 Dec, 2022 2 commits
  28. 30 Nov, 2022 1 commit
  29. 29 Nov, 2022 1 commit
• Flax support for Stable Diffusion 2 (#1423) · 4d1e4e24
      Pedro Cuenca authored
      
      
      * Flax: start adapting to Stable Diffusion 2
      
      * More changes.
      
      * attention_head_dim can be a tuple.
      
      * Fix typos
      
      * Add simple SD 2 integration test.
      
      Slice values taken from my Ampere GPU.
      
      * Add simple UNet integration tests for Flax.
      
      Note that the expected values are taken from the PyTorch results. This
      ensures the Flax and PyTorch versions are not too far off.
      
      * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Typos and style
      
      * Tests: verify jax is available.
      
      * Style
      
      * Make flake happy
      
      * Remove typo.
      
      * Simple Flax SD 2 pipeline tests.
      
      * Import order
      
      * Remove unused import.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: @camenduru 
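A hedged usage sketch for the Flax support added above; the checkpoint name and `bf16` revision are assumptions:

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# Flax pipelines return the pipeline and its parameters separately.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # assumed checkpoint
    revision="bf16",                   # assumed half-precision branch
    dtype=jnp.bfloat16,
)
```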
  30. 23 Nov, 2022 1 commit
• update unet2d (#1376) · f07a16e0
      Suraj Patil authored
      * boom boom
      
      * remove duplicate arg
      
      * add use_linear_proj arg
      
      * fix copies
      
      * style
      
      * add fast tests
      
      * use_linear_proj -> use_linear_projection
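The renamed flag as it would appear in a model constructor; a sketch, with the other arguments left at their defaults:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    cross_attention_dim=1024,     # illustrative value (SD2-style text embeddings)
    use_linear_projection=True,   # nn.Linear instead of 1x1 convs for the
                                  # transformer in/out projections
)
```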
  31. 15 Nov, 2022 1 commit
  32. 14 Nov, 2022 1 commit
• Add UNet 1d for RL model for planning + colab (#105) · 7c5fef81
      Nathan Lambert authored
      
      
      * re-add RL model code
      
      * match model forward api
      
      * add register_to_config, pass training tests
      
      * fix tests, update forward outputs
      
      * remove unused code, some comments
      
      * add to docs
      
      * remove extra embedding code
      
      * unify time embedding
      
      * remove conv1d output sequential
      
      * remove sequential from conv1dblock
      
      * style and deleting duplicated code
      
      * clean files
      
      * remove unused variables
      
      * clean variables
      
      * add 1d resnet block structure for downsample
      
      * rename as unet1d
      
      * fix renaming
      
      * rename files
      
      * add get_block(...) api
      
      * unify args for model1d like model2d
      
      * minor cleaning
      
      * fix docs
      
      * improve 1d resnet blocks
      
      * fix tests, remove permuts
      
      * fix style
      
      * add output activation
      
      * rename flax blocks file
      
      * Add Value Function and corresponding example script to Diffuser implementation (#884)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * update post merge of scripts
      
      * add mdiblock / outblock architecture
      
      * Pipeline cleanup (#947)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
      
      * clean up comments
      
      * convert older script to using pipeline and add readme
      
      * rename scripts
      
      * style, update tests
      
      * delete unet rl model file
      
      * remove imports in src
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * Update src/diffusers/models/unet_1d_blocks.py
      
      * Update tests/test_models_unet.py
      
      * RL Cleanup v2 (#965)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
      
      * clean up comments
      
      * convert older script to using pipeline and add readme
      
      * rename scripts
      
      * style, update tests
      
      * delete unet rl model file
      
      * remove imports in src
      
      * add specific vf block and update tests
      
      * style
      
      * Update tests/test_models_unet.py
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * fix quality in tests
      
      * fix quality style, split test file
      
      * fix checks / tests
      
      * make timesteps closer to main
      
      * unify block API
      
      * unify forward api
      
      * delete lines in examples
      
      * style
      
      * examples style
      
      * all tests pass
      
      * make style
      
      * make dance_diff test pass
      
      * Refactoring RL PR (#1200)
      
      * init file changes
      
      * add import utils
      
      * finish cleaning files, imports
      
      * remove import flags
      
      * clean examples
      
      * fix imports, tests for merge
      
      * update readmes
      
      * hotfix for tests
      
      * quality
      
      * fix some tests
      
      * change defaults
      
      * more mps test fixes
      
      * unet1d defaults
      
      * do not default import experimental
      
      * defaults for tests
      
      * fix tests
      
      * fix-copies
      
      * fix
      
* changes per Patrick's comments (#1285)
      
* changes per Patrick's comments
      
      * update conversion script
      
      * fix renaming
      
      * skip more mps tests
      
      * last test fix
      
      * Update examples/rl/README.md
Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
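Conceptually, the "hack unet1d into a value function" part of this PR guides each denoising step with the gradient of a learned value; a self-contained sketch (names and the guidance scale are illustrative, not the experimental pipeline's actual API):

```python
import torch

def value_guided_step(unet, value_function, x, t, scale=0.1):
    # gradient of the predicted value w.r.t. the noisy trajectory
    x = x.detach().requires_grad_(True)
    value = value_function(x, t).sample.sum()
    grad = torch.autograd.grad(value, x)[0]
    # nudge the trajectory toward higher value, then denoise as usual
    x = (x + scale * grad).detach()
    return unet(x, t).sample
```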
  33. 08 Nov, 2022 1 commit
• MPS schedulers: don't use float64 (#1169) · 813744e5
      Pedro Cuenca authored
      * Schedulers: don't use float64 on mps
      
      * Test set_timesteps() on device (float schedulers).
      
      * SD pipeline: use device in set_timesteps.
      
      * SD in-painting pipeline: use device in set_timesteps.
      
      * Tests: fix mps crashes.
      
      * Skip test_load_pipeline_from_git on mps.
      
      Not compatible with float16.
      
      * Use device.type instead of str in Euler schedulers.
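The device-aware `set_timesteps` pattern these fixes converge on, as a short sketch:

```python
import torch
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler()  # default config, for illustration
device = "mps" if torch.backends.mps.is_available() else "cpu"
scheduler.set_timesteps(50, device=device)

print(scheduler.timesteps.device)  # timesteps live on the requested device
if device == "mps":
    # mps has no float64, so timesteps must stay float32 there
    assert scheduler.timesteps.dtype == torch.float32
```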