1. 09 Aug, 2023 1 commit
    • [docs] Clean scheduler api (#4204) · 16ad13b6
      Steven Liu authored
      * clean scheduler mixin
      
      * up to dpmsolvermultistep
      
      * finish cleaning
      
      * first draft
      
      * fix overview table
      
      * apply feedback
      
      * update reference code
  2. 06 Jul, 2023 1 commit
    • Add Shap-E (#3742) · 45f6d52b
      YiYi Xu authored
      
      
      * refactor prior_transformer
      
      adding conversion script
      
      add pipeline
      
      add step_index from pipeline, + remove permute
      
      add zero pad token
      
      remove copy from statement for betas_for_alpha_bar function
      
      * add
      
      * add
      
      * update conversion script for renderer model
      
      * refactor camera a little bit
      
      * clean up
      
      * style
      
      * fix copies
      
      * Update src/diffusers/schedulers/scheduling_heun_discrete.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * alpha_transform_type
      
      * remove step_index argument
      
      * remove get_sigmas_karras
      
      * remove _yiyi_sigma_to_t
      
      * move the rescale prompt_embeds from prior_transformer to pipeline
      
      * replace baddbmm with einsum to match original repo
      
      * Revert "replace baddbmm with einsum to match origial repo"
      
      This reverts commit 3f6b435d65dad3e5514cad2f5dd9e4419ca78e0b.
      
      * add step_index to scale_model_input
      
      * Revert "move the rescale prompt_embeds from prior_transformer to pipeline"
      
      This reverts commit 5b5a8e6be918fefd114a2945ed89d8e8fa8be21b.
      
      * move rescale from prior_transformer to pipeline
      
      * correct step_index in scale_model_input
      
      * remove print lines
      
      * refactor prior - reduce arguments
      
      * make style
      
      * add prior_image
      
      * arg embedding_proj_norm -> norm_embedding_proj
      
      * add pre-norm for proj_embedding
      
      * move rescale prompt from pipeline to _encode_prompt
      
      * add img2img pipeline
      
      * style
      
      * copies
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add arg: encoder_hid_proj
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add new config: norm_in_type
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add new config: added_emb_type
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      rename out_dim -> clip_embed_dim
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      rename config: out_dim -> clip_embed_dim
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * finish refactor prior_transformer
      
      * make style
      
      * refactor renderer
      
      * fix
      
      * make style
      
      * refactor img2img
      
      * remove params_proj
      
      * add test
      
      * add upcast_softmax to prior_transformer
      
      * enable num_images_per_prompt, add save_gif utility
      
      * add
      
      * add fast test
      
      * make style
      
      * add slow test
      
      * style
      
      * add test for img2img
      
      * refactor
      
      * enable batching
      
      * style
      
      * refactor scheduler
      
      * update test
      
      * style
      
      * attempt to solve batch-related test timeouts
      
      * add doc
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * hardcode rendering related config
      
      * update betas_for_alpha_bar on ddpm_scheduler
      
      * fix copies
      
      * fix
      
      * export_to_gif
      
      * style
      
      * second attempt to speed up batching tests
      
      * add doc page to index
      
      * Remove intermediate clipping
      
      * 3rd attempt to speed up batching tests
      
      * Remove time index
      
      * simplify scheduler
      
      * Fix more
      
      * Fix more
      
      * fix more
      
      * make style
      
      * fix schedulers
      
      * fix some more tests
      
      * finish
      
      * add one more test
      
      * Apply suggestions from code review
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * style
      
      * apply feedback
      
      * style
      
      * fix copies
      
      * add one example
      
      * style
      
      * add example for img2img
      
      * fix doc
      
      * fix more doc strings
      
      * size -> frame_size
      
      * style
      
      * update doc
      
      * style
      
      * fix on doc
      
      * update repo name
      
      * improve the usage example in shap-e img2img
      
      * add usage examples in the shap-e docs.
      
      * consolidate examples.
      
      * minor fix.
      
      * update doc
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * remove upcast
      
      * Make sure background is white
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      
      * Apply suggestions from code review
      
      * Finish
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      
      * Make style
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
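      The pipelines introduced here are exposed as ShapEPipeline (text-to-3D) and ShapEImg2ImgPipeline, with export_to_gif as the companion utility and frame_size as the renamed rendering-resolution argument. A minimal text-to-3D sketch; the checkpoint id, prompt, and parameter values are illustrative rather than taken from the commit:

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

# Text-to-3D prior + renderer; repo id assumed to be "openai/shap-e".
pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

# frame_size controls the resolution of the rendered views (renamed from `size` in this PR).
images = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=64,
).images

# export_to_gif (added alongside the pipeline) turns the rendered frames into a GIF.
export_to_gif(images[0], "shark_3d.gif")
```

      ShapEImg2ImgPipeline follows the same pattern but conditions on an input image instead of a text prompt.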
  3. 05 Jul, 2023 1 commit
    • Add `timestep_spacing` and `steps_offset` to schedulers (#3947) · 07c9a08e
      Pedro Cuenca authored
      
      
      * Add timestep_spacing to DDPM, LMSDiscrete, PNDM.
      
      * Remove spurious line.
      
      * More easy schedulers.
      
      * Add `linspace` to DDIM
      
      * Noise sigma for `trailing`.
      
      * Add timestep_spacing to DEISMultistepScheduler.
      
      Not sure the range is the way it was intended.
      
      * Fix: remove line used to debug.
      
      * Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC
      
      * Fix: convert to numpy.
      
      * Use sched. defaults when instantiating from_config
      
      For params not present in the original configuration.
      
      This makes it possible to switch pipeline schedulers even if they use
      different timestep_spacing (or any other param).
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Missing args in DPMSolverMultistep
      
      * Test: default args not in config
      
      * Style
      
      * Fix scheduler name in test
      
      * Remove duplicated entries
      
      * Add test for solver_type
      
      This test currently fails in main. When switching from DEIS to UniPC,
      solver_type is "logrho" (the default value from DEIS), which gets
      translated to "bh1" by UniPC. This is different to the default value for
      UniPC: "bh2". This is where the translation happens: https://github.com/huggingface/diffusers/blob/36d22d0709dc19776e3016fb3392d0f5578b0ab2/src/diffusers/schedulers/scheduling_unipc_multistep.py#L171
      
      
      
      * UniPC: use same default for solver_type
      
      Fixes a bug when switching to UniPC from another scheduler (e.g.,
      DEIS) that uses a different solver type. The solver is now the same as
      if we had instantiated the scheduler directly.
      
      * do not save use default values
      
      * fix more
      
      * fix all
      
      * fix schedulers
      
      * fix more
      
      * finish for real
      
      * finish for real
      
      * flaky tests
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
      
      * Default steps_offset to 0.
      
      * Add missing docstrings
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
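      The from_config change described above (parameters missing from a saved scheduler config fall back to the scheduler's own defaults) is what makes it safe to swap in schedulers that add new options such as timestep_spacing. A hedged sketch of the intended usage; the checkpoint id and the chosen spacing are illustrative:

```python
from diffusers import DiffusionPipeline, DDIMScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the pipeline's stored scheduler config; timestep_spacing and steps_offset fall back
# to DDIM's defaults when absent from the saved config, and can be overridden explicitly.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
```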
  4. 20 Jun, 2023 1 commit
  5. 12 May, 2023 1 commit
  6. 17 Apr, 2023 1 commit
  7. 14 Apr, 2023 1 commit
    • ddpm custom timesteps (#3007) · b811964a
      Will Berman authored
      add custom timesteps test
      
      add custom timesteps descending order check
      
      docs
      
      timesteps -> custom_timesteps
      
      can only pass one of num_inference_steps and timesteps
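      A short sketch of the custom timesteps API described above: an explicit, strictly descending list of timesteps replaces num_inference_steps, and passing both raises an error. The values below are illustrative:

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()

# Custom schedule: must be strictly descending and cannot be combined with num_inference_steps.
scheduler.set_timesteps(timesteps=[999, 750, 500, 250, 0])
print(scheduler.timesteps)
```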
  8. 11 Apr, 2023 1 commit
    • Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in_channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * uP
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
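      Part of this fix is that hyperparameters are read only through the .config object instead of as direct module attributes (e.g. unet.config.in_channels rather than unet.in_channels). A minimal sketch of the enforced access pattern; the checkpoint id is illustrative:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Configuration values are read via the config object, not as attributes on the nn.Module.
print(unet.config.in_channels, unet.config.sample_size)
```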
  9. 10 Apr, 2023 5 commits
  10. 07 Mar, 2023 1 commit
  11. 01 Mar, 2023 1 commit
  12. 14 Feb, 2023 1 commit
  13. 03 Feb, 2023 1 commit
  14. 31 Jan, 2023 1 commit
  15. 25 Jan, 2023 1 commit
  16. 17 Jan, 2023 1 commit
    • DiT Pipeline (#1806) · 37d113cc
      Kashif Rasul authored
      
      
      * added dit model
      
      * import
      
      * initial pipeline
      
      * initial convert script
      
      * initial pipeline
      
      * make style
      
      * raise valueerror
      
      * single function
      
      * rename classes
      
      * use DDIMScheduler
      
      * timesteps embedder
      
      * samples to cpu
      
      * fix var names
      
      * fix numpy type
      
      * use timesteps class for proj
      
      * fix typo
      
      * fix arg name
      
      * flip_sin_to_cos and better var names
      
      * fix C shape cal
      
      * make style
      
      * remove unused imports
      
      * cleanup
      
      * add back patch_size
      
      * initial dit doc
      
      * typo
      
      * Update docs/source/api/pipelines/dit.mdx
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * added copyright license headers
      
      * added example usage and toc
      
      * fix variable names asserts
      
      * remove comment
      
      * added docs
      
      * fix typo
      
      * upstream changes
      
      * set proper device for drop_ids
      
      * added initial dit pipeline test
      
      * update docs
      
      * fix imports
      
      * make fix-copies
      
      * isort
      
      * fix imports
      
      * get rid of more magic numbers
      
      * fix code when guidance is off
      
      * remove block_kwargs
      
      * cleanup script
      
      * removed to_2tuple
      
      * use FeedForward class instead of another MLP
      
      * style
      
      * work on merging DiTBlock with BasicTransformerBlock
      
      * added missing final_dropout and args to BasicTransformerBlock
      
      * use norm from block
      
      * fix arg
      
      * remove unused arg
      
      * fix call to class_embedder
      
      * use timesteps
      
      * make style
      
      * attn_output gets multiplied
      
      * removed commented code
      
      * use Transformer2D
      
      * use self.is_input_patches
      
      * fix flags
      
      * fixed conversion to use Transformer2DModel
      
      * fixes for pipeline
      
      * remove dit.py
      
      * fix timesteps device
      
      * use randn_tensor and fix fp16 inf.
      
      * timesteps_emb already the right dtype
      
      * fix dit test class
      
      * fix test and style
      
      * fix norm2 usage in vq-diffusion
      
      * added author names to pipeline and ImageNet labels link
      
      * fix tests
      
      * use norm_type as string
      
      * rename dit to transformer
      
      * fix name
      
      * fix test
      
      * set norm_type = "layer" by default
      
      * fix tests
      
      * do not skip common tests
      
      * Update src/diffusers/models/attention.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * revert AdaLayerNorm API
      
      * fix norm_type name
      
      * make sure all components are in eval mode
      
      * revert norm2 API
      
      * compact
      
      * finish deprecation
      
      * add slow tests
      
      * remove @
      
      * refactor some stuff
      
      * upload
      
      * Update src/diffusers/pipelines/dit/pipeline_dit.py
      
      * finish more
      
      * finish docs
      
      * improve docs
      
      * finish docs
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
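      The result is exposed as DiTPipeline, which samples class-conditional ImageNet images with a DDIM scheduler and maps human-readable labels to class ids via get_label_ids. A hedged usage sketch; the checkpoint id, labels, and step count are illustrative:

```python
import torch
from diffusers import DiTPipeline

# Class-conditional ImageNet checkpoint; repo id assumed to be "facebook/DiT-XL-2-256".
pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")

class_ids = pipe.get_label_ids(["white shark", "golden retriever"])
images = pipe(class_labels=class_ids, num_inference_steps=25).images
images[0].save("dit_sample.png")
```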
  17. 04 Jan, 2023 2 commits
    • Improve reproducibility 2/3 (#1906) · 9b638548
      Patrick von Platen authored
      * [Repro] Correct reproducibility
      
      * up
      
      * up
      
      * uP
      
      * up
      
      * need better image
      
      * allow conversion from no state dict checkpoints
      
      * up
      
      * up
      
      * up
      
      * up
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * next try
      
      * up
      
      * up
      
      * better name
      
      * up
      
      * up
      
      * Apply suggestions from code review
      
      * correct more
      
      * up
      
      * replace all torch randn
      
      * fix
      
      * correct
      
      * correct
      
      * finish
      
      * fix more
      
      * up
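      Most of this PR routes noise creation through a generator-aware helper instead of raw torch.randn; on the user side, reproducibility comes down to passing a seeded torch.Generator into the pipeline call. A minimal sketch, assuming an unconditional DDPM checkpoint (the repo id is illustrative):

```python
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

# A seeded CPU generator makes the initial noise, and therefore the output, repeatable across runs.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(generator=generator).images[0]
```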
    • fix: DDPMScheduler.set_timesteps() (#1912) · 675ef1ff
      Joqsan authored
  18. 19 Dec, 2022 1 commit
  19. 13 Dec, 2022 1 commit
  20. 02 Dec, 2022 1 commit
  21. 01 Dec, 2022 1 commit
  22. 30 Nov, 2022 2 commits
  23. 28 Nov, 2022 2 commits
    • Add 2nd order heun scheduler (#1336) · 4c54519e
      Patrick von Platen authored
      * Add heun
      
      * Finish first version of heun
      
      * remove bogus
      
      * finish
      
      * finish
      
      * improve
      
      * up
      
      * up
      
      * fix more
      
      * change progress bar
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
      
      * finish
      
      * up
      
      * up
      
      * up
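      The scheduler lands as HeunDiscreteScheduler and can be swapped into an existing pipeline via from_config. A hedged sketch; the checkpoint id, prompt, and step count are illustrative (Heun's 2nd order update costs two model evaluations per step):

```python
from diffusers import StableDiffusionPipeline, HeunDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the existing scheduler config when switching to the 2nd order Heun scheduler.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```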
    • v-prediction training support (#1455) · 6c56f050
      Suraj Patil authored
      * add get_velocity
      
      * add v prediction for training
      
      * fix saving
      
      * add revision arg
      
      * fix saving
      
      * save checkpoints dreambooth
      
      * fix saving embeds
      
      * add instruction in readme
      
      * quality
      
      * noise_pred -> model_pred
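      The get_velocity helper added here computes the v-prediction target during training, and the loss target is picked from the scheduler's prediction_type. A hedged sketch of that branch, with a random tensor standing in for the UNet output (shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(prediction_type="v_prediction")

latents = torch.randn(4, 4, 64, 64)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (4,))
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# model_pred would normally be the UNet output on noisy_latents; a stand-in keeps the sketch self-contained.
model_pred = torch.randn_like(latents)

# Choose the regression target based on the configured prediction type.
if noise_scheduler.config.prediction_type == "epsilon":
    target = noise
elif noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)

loss = F.mse_loss(model_pred.float(), target.float())
```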
  24. 25 Nov, 2022 2 commits
    • Allow to set config params directly in init (#1419) · 8faa822d
      Patrick von Platen authored
      * fix
      
      * fix deprecated kwargs logic
      
      * add tests
      
      * finish
    • Deprecate `predict_epsilon` (#1393) · d52388f4
      Pedro Cuenca authored
      
      
      * Adapt ddpm, ddpmsolver to prediction_type.
      
      * Deprecate predict_epsilon in __init__.
      
      * Bring FlaxDDIMScheduler up to date with DDIMScheduler.
      
      * Set prediction_type as an ivar for consistency.
      
      * Convert pipeline_ddpm
      
      * Adapt tests.
      
      * Adapt unconditional training script.
      
      * Adapt BitDiffusion example.
      
      * Add missing kwargs in dpmsolver_multistep
      
      * Ugly workaround to accept deprecated predict_epsilon when loading
      schedulers using from_pretrained.
      
      * make style
      
      * Remove import no longer in use.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Use config.prediction_type everywhere
      
      * Add a couple of Flax prediction type tests.
      
      * make style
      
      * fix register deprecated arg
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
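      After this change, schedulers take a prediction_type string ("epsilon", "sample", or "v_prediction") instead of the boolean predict_epsilon, which is still accepted but deprecated. A minimal before/after sketch:

```python
from diffusers import DDPMScheduler

# Previously: DDPMScheduler(predict_epsilon=True)
scheduler = DDPMScheduler(prediction_type="epsilon")
print(scheduler.config.prediction_type)
```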
  25. 15 Nov, 2022 1 commit
  26. 14 Nov, 2022 1 commit
    • Add UNet 1d for RL model for planning + colab (#105) · 7c5fef81
      Nathan Lambert authored
      
      
      * re-add RL model code
      
      * match model forward api
      
      * add register_to_config, pass training tests
      
      * fix tests, update forward outputs
      
      * remove unused code, some comments
      
      * add to docs
      
      * remove extra embedding code
      
      * unify time embedding
      
      * remove conv1d output sequential
      
      * remove sequential from conv1dblock
      
      * style and deleting duplicated code
      
      * clean files
      
      * remove unused variables
      
      * clean variables
      
      * add 1d resnet block structure for downsample
      
      * rename as unet1d
      
      * fix renaming
      
      * rename files
      
      * add get_block(...) api
      
      * unify args for model1d like model2d
      
      * minor cleaning
      
      * fix docs
      
      * improve 1d resnet blocks
      
      * fix tests, remove permuts
      
      * fix style
      
      * add output activation
      
      * rename flax blocks file
      
      * Add Value Function and corresponding example script to Diffuser implementation (#884)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * update post merge of scripts
      
      * add midblock / outblock architecture
      
      * Pipeline cleanup (#947)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
      
      * clean up comments
      
      * convert older script to using pipeline and add readme
      
      * rename scripts
      
      * style, update tests
      
      * delete unet rl model file
      
      * remove imports in src
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * Update src/diffusers/models/unet_1d_blocks.py
      
      * Update tests/test_models_unet.py
      
      * RL Cleanup v2 (#965)
      
      * valuefunction code
      
      * start example scripts
      
      * missing imports
      
      * bug fixes and placeholder example script
      
      * add value function scheduler
      
      * load value function from hub and get best actions in example
      
      * very close to working example
      
      * larger batch size for planning
      
      * more tests
      
      * merge unet1d changes
      
      * wandb for debugging, use newer models
      
      * success!
      
      * turns out we just need more diffusion steps
      
      * run on modal
      
      * merge and code cleanup
      
      * use same api for rl model
      
      * fix variance type
      
      * wrong normalization function
      
      * add tests
      
      * style
      
      * style and quality
      
      * edits based on comments
      
      * style and quality
      
      * remove unused var
      
      * hack unet1d into a value function
      
      * add pipeline
      
      * fix arg order
      
      * add pipeline to core library
      
      * community pipeline
      
      * fix couple shape bugs
      
      * style
      
      * Apply suggestions from code review
      
      * clean up comments
      
      * convert older script to using pipeline and add readme
      
      * rename scripts
      
      * style, update tests
      
      * delete unet rl model file
      
      * remove imports in src
      
      * add specific vf block and update tests
      
      * style
      
      * Update tests/test_models_unet.py
      Co-authored-by: Nathan Lambert <nathan@huggingface.co>
      
      * fix quality in tests
      
      * fix quality style, split test file
      
      * fix checks / tests
      
      * make timesteps closer to main
      
      * unify block API
      
      * unify forward api
      
      * delete lines in examples
      
      * style
      
      * examples style
      
      * all tests pass
      
      * make style
      
      * make dance_diff test pass
      
      * Refactoring RL PR (#1200)
      
      * init file changes
      
      * add import utils
      
      * finish cleaning files, imports
      
      * remove import flags
      
      * clean examples
      
      * fix imports, tests for merge
      
      * update readmes
      
      * hotfix for tests
      
      * quality
      
      * fix some tests
      
      * change defaults
      
      * more mps test fixes
      
      * unet1d defaults
      
      * do not default import experimental
      
      * defaults for tests
      
      * fix tests
      
      * fix-copies
      
      * fix
      
      * changes per Patrick's comments (#1285)
      
      * changes per Patrick's comments
      
      * update conversion script
      
      * fix renaming
      
      * skip more mps tests
      
      * last test fix
      
      * Update examples/rl/README.md
      Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
  27. 09 Nov, 2022 1 commit
  28. 08 Nov, 2022 1 commit
  29. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
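      The scheduler lands as DPMSolverMultistepScheduler and can be dropped into an existing Stable Diffusion pipeline, typically with around 20 inference steps. A hedged sketch; the checkpoint id and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config when switching to multistep DPM-Solver.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]
```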
  30. 31 Oct, 2022 1 commit
  31. 14 Oct, 2022 1 commit
  32. 10 Oct, 2022 1 commit