"vscode:/vscode.git/clone" did not exist on "91f5f862804a2cb5ab4cb65d9634ab9017168a67"
- 01 Dec, 2022 1 commit
Suraj Patil authored
  * support v prediction in other schedulers
  * v heun
  * add tests for v pred
  * fix tests
  * fix test euler a
  * v ddpm
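The commit above extends v-prediction support to additional schedulers. As a rough illustration of what prediction_type="v_prediction" means (a minimal sketch with an invented helper name, not the schedulers' actual code), the model predicts a velocity v from which both x0 and the noise can be recovered:

    import torch

    def split_v_prediction(sample: torch.Tensor, v_pred: torch.Tensor, alpha_prod_t: torch.Tensor):
        """Recover x0 and epsilon from a v-prediction, assuming the usual DDPM
        parameterization:
          sample = sqrt(alpha_prod_t) * x0  + sqrt(1 - alpha_prod_t) * eps
          v      = sqrt(alpha_prod_t) * eps - sqrt(1 - alpha_prod_t) * x0
        """
        sqrt_alpha = alpha_prod_t ** 0.5
        sqrt_one_minus_alpha = (1 - alpha_prod_t) ** 0.5
        pred_x0 = sqrt_alpha * sample - sqrt_one_minus_alpha * v_pred
        pred_eps = sqrt_alpha * v_pred + sqrt_one_minus_alpha * sample
        return pred_x0, pred_eps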
- 30 Nov, 2022 4 commits
Anton Lozhkov authored
Pedro Cuenca authored
  Remove reminder comment.
Patrick von Platen authored
  * Add test
  * up
  * no bfloat16 for mps
  * fix
  * rename test
Patrick von Platen authored
- 29 Nov, 2022 1 commit
Rohan Taori authored
  * cast to float for quantile method
  * add fp16 test for DPMSolverMultistepScheduler fix
  * formatting update
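For context on the cast above: torch.quantile does not accept float16 input, so the dynamic-thresholding path of DPMSolverMultistepScheduler has to upcast before taking the quantile and cast back afterwards. A minimal sketch of that pattern (function name and details are illustrative, not the scheduler's exact code):

    import torch

    def dynamic_threshold(sample: torch.Tensor, ratio: float = 0.995) -> torch.Tensor:
        # torch.quantile requires float32/float64 input, so upcast fp16 samples first.
        abs_flat = sample.flatten(1).abs().float()
        s = torch.quantile(abs_flat, ratio, dim=1).clamp(min=1.0)
        s = s.view(-1, *([1] * (sample.ndim - 1)))
        # Clamp to [-s, s], rescale, and restore the original dtype.
        return (sample.float().clamp(-s, s) / s).to(sample.dtype)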
- 28 Nov, 2022 1 commit
Patrick von Platen authored
  * Add heun
  * Finish first version of heun
  * remove bogus
  * finish
  * finish
  * improve
  * up
  * up
  * fix more
  * change progress bar
  * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
  * finish
  * up
  * up
  * up
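Heun, added above, is a second-order sampler: it takes an Euler step, re-evaluates the derivative at the proposed point, and averages the two slopes. A generic sketch of a single Heun step for dx/dt = f(x, t) (purely illustrative, not the scheduler implementation):

    def heun_step(f, x, t, t_next):
        """One Heun (improved Euler) step from t to t_next."""
        dt = t_next - t
        d1 = f(x, t)              # slope at the current point
        x_euler = x + d1 * dt     # plain Euler proposal
        d2 = f(x_euler, t_next)   # slope at the proposed point
        return x + 0.5 * (d1 + d2) * dt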
- 25 Nov, 2022 2 commits
Patrick von Platen authored
  * fix
  * fix deprecated kwargs logic
  * add tests
  * finish
Pedro Cuenca authored
  * Adapt ddpm, ddpmsolver to prediction_type.
  * Deprecate predict_epsilon in __init__.
  * Bring FlaxDDIMScheduler up to date with DDIMScheduler.
  * Set prediction_type as an ivar for consistency.
  * Convert pipeline_ddpm
  * Adapt tests.
  * Adapt unconditional training script.
  * Adapt BitDiffusion example.
  * Add missing kwargs in dpmsolver_multistep
  * Ugly workaround to accept deprecated predict_epsilon when loading schedulers using from_pretrained.
  * make style
  * Remove import no longer in use.
  * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  * Use config.prediction_type everywhere
  * Add a couple of Flax prediction type tests.
  * make style
  * fix register deprecated arg
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
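The prediction_type migration above keeps the old boolean predict_epsilon kwarg working while steering users to the new string config. A simplified sketch of that kind of backward-compatibility shim (class name invented, not the actual scheduler code):

    import warnings

    class ExampleScheduler:
        def __init__(self, prediction_type: str = "epsilon", **kwargs):
            if "predict_epsilon" in kwargs:
                warnings.warn(
                    "`predict_epsilon` is deprecated, please use `prediction_type` instead.",
                    FutureWarning,
                )
                prediction_type = "epsilon" if kwargs.pop("predict_epsilon") else "sample"
            self.prediction_type = prediction_type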
- 15 Nov, 2022 1 commit
Patrick von Platen authored
  * add conversion script for vae
  * uP
  * uP
  * more changes
  * push
  * up
  * finish again
  * up
  * up
  * up
  * up
  * finish
  * up
  * uP
  * up
  * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Anton Lozhkov <anton@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com>
  * up
  * up
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  Co-authored-by: Anton Lozhkov <anton@huggingface.co>
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
- 09 Nov, 2022 2 commits
Anton Lozhkov authored
  * [Tests] Fix mps+generator fast tests
  * mps for Euler
  * retry
  * warmup issue again?
  * fix reproducible initial noise
  * Revert "fix reproducible initial noise" This reverts commit f300d05cb9782ed320064a0c58577a32d4139e6d.
  * fix reproducible initial noise
  * fix device
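One common way to make initial noise reproducible across backends, which is what the back-and-forth on "fix reproducible initial noise" above is about, is to sample on the CPU with a seeded generator and only then move the tensor to the target device. A sketch of the idea (not necessarily the exact test fix):

    import torch

    def reproducible_latents(shape, seed: int, device: str, dtype=torch.float32):
        # Sampling on CPU with a seeded generator yields identical noise whether
        # the pipeline later runs on cuda, mps, or cpu.
        generator = torch.Generator(device="cpu").manual_seed(seed)
        return torch.randn(shape, generator=generator, dtype=dtype).to(device)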
Patrick von Platen authored
  * fix tests
  * Fix more
  * more
- 08 Nov, 2022 2 commits
Patrick von Platen authored
  * [Scheduler] Move predict epsilon to init
  * up
  * uP
  * uP
  * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  * up
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Pedro Cuenca authored
  * Schedulers: don't use float64 on mps
  * Test set_timesteps() on device (float schedulers).
  * SD pipeline: use device in set_timesteps.
  * SD in-painting pipeline: use device in set_timesteps.
  * Tests: fix mps crashes.
  * Skip test_load_pipeline_from_git on mps. Not compatible with float16.
  * Use device.type instead of str in Euler schedulers.
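Background for the mps commits above: Apple's mps backend does not support float64, so timestep and sigma math must stay in float32 there, and set_timesteps() needs to know which device it is preparing tensors for. A minimal sketch of the dtype guard (illustrative, not the schedulers' exact code):

    import numpy as np
    import torch

    def sigmas_to_device(sigmas_np: np.ndarray, device) -> torch.Tensor:
        if torch.device(device).type == "mps":
            # mps has no float64, so downcast before moving to the device.
            return torch.from_numpy(sigmas_np).to(device, dtype=torch.float32)
        return torch.from_numpy(sigmas_np).to(device)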
- 06 Nov, 2022 1 commit
Cheng Lu authored
  * add dpmsolver discrete pytorch scheduler
  * fix some typos in dpm-solver pytorch
  * add dpm-solver pytorch in stable-diffusion pipeline
  * add jax/flax version dpm-solver
  * change code style
  * change code style
  * add docs
  * add `add_noise` method for dpmsolver
  * add pytorch unit test for dpmsolver
  * add dummy object for pytorch dpmsolver
  * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py Co-authored-by: Suraj Patil <surajp815@gmail.com>
  * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com>
  * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com>
  * resolve the code comments
  * rename the file
  * change class name
  * fix code style
  * add auto docs for dpmsolver multistep
  * add more explanations for the stabilizing trick (for steps < 15)
  * delete the dummy file
  * change the API name of predict_epsilon, algorithm_type and solver_type
  * add compatible lists
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
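The DPM-Solver multistep scheduler added above is a fast high-order solver and is commonly run with only around 20 inference steps. A usage sketch (model id and prompt are illustrative):

    from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]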
- 03 Nov, 2022 1 commit
Will Berman authored
  * Changes for VQ-diffusion VQVAE
    Allow specifying the dimension of embeddings in VQModel: `VQModel` will by default set the dimension of embeddings to the number of latent channels, but the VQ-diffusion VQVAE has a smaller embedding dimension (128) than its number of latent channels (256).
    Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers; VQ-diffusion's VQVAE uses those two block types.
  * Changes for VQ-diffusion transformer
    Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer.
    SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
      - `in_channels` was made optional in the constructor, so two locations where it was passed as a positional arg were moved to kwargs.
      - Modified forward pass to take optional timestep embeddings.
    ImagePositionalEmbeddings:
      - Added to provide positional embeddings to discrete inputs for latent pixels.
    BasicTransformerBlock:
      - Norm layers were made configurable so that VQ-diffusion could use AdaLayerNorm with timestep embeddings.
      - Modified forward pass to take optional timestep embeddings.
    CrossAttention:
      - May now optionally take a bias parameter for its query, key, and value linear layers.
    FeedForward:
      - Internal layers are now configurable.
    ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer.
    AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings.
  * Add VQ-diffusion scheduler
  * Add VQ-diffusion pipeline
  * Add VQ-diffusion convert script to diffusers
  * Add VQ-diffusion dummy objects
  * Add VQ-diffusion markdown docs
  * Add VQ-diffusion tests
  * some renaming
  * some fixes
  * more renaming
  * correct
  * fix typo
  * correct weights
  * finalize
  * fix tests
  * Apply suggestions from code review Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  * finish
  * finish
  * up
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
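A short usage sketch for the pipeline introduced above (the checkpoint id is an assumption, and the call signature may have changed since this commit):

    from diffusers import VQDiffusionPipeline

    # Checkpoint id assumed for illustration.
    pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
    image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]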
- 31 Oct, 2022 1 commit
hlky authored
  * k-diffusion-euler
  * make style, make quality
  * make fix-copies
  * fix tests for euler a
  * Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  * Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  * Update src/diffusers/schedulers/scheduling_euler_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  * Update src/diffusers/schedulers/scheduling_euler_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  * remove unused arg and method
  * update doc
  * quality
  * make flake happy
  * use logger instead of warn
  * raise error instead of deprecation
  * don't require scipy
  * pass generator in step
  * fix tests
  * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  * Update tests/test_scheduler.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  * remove unused generator
  * pass generator as extra_step_kwargs
  * update tests
  * pass generator as kwarg
  * pass generator as kwarg
  * quality
  * fix test for lms
  * fix tests
  Co-authored-by: patil-suraj <surajp815@gmail.com>
  Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
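The Euler ancestral scheduler above injects fresh noise at every step, which is why this PR also threads a torch.Generator through step() (via extra_step_kwargs) so results stay reproducible. A usage sketch (model id illustrative):

    import torch
    from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
    generator = torch.Generator("cpu").manual_seed(0)  # fixes the ancestral noise
    image = pipe("a watercolor fox", num_inference_steps=30, generator=generator).images[0]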
- 27 Oct, 2022 1 commit
Pedro Cuenca authored
  * Add failing test for #940.
  * Do not use torch.float64 in mps.
  * style
  * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed.
  * Fix additional float64 error in mps.
  * Improve add_noise test
  * Slight edit – I think it's clearer this way.
- 26 Oct, 2022 1 commit
Pedro Cuenca authored
  * Add failing test for #940.
  * Do not use torch.float64 in mps.
  * style
  * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed.
- 25 Oct, 2022 1 commit
Patrick von Platen authored
  * start
  * add more logic
  * Update src/diffusers/models/unet_2d_condition_flax.py
  * match weights
  * up
  * make model work
  * making class more general, fixing missed file rename
  * small fix
  * make new conversion work
  * up
  * finalize conversion
  * up
  * first batch of variable renamings
  * remove c and c_prev var names
  * add mid and out block structure
  * add pipeline
  * up
  * finish conversion
  * finish
  * upload
  * more fixes
  * Apply suggestions from code review
  * add attr
  * up
  * uP
  * up
  * finish tests
  * finish
  * uP
  * finish
  * fix test
  * up
  * naming consistency in tests
  * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Nathan Lambert <nathan@huggingface.co> Co-authored-by: Anton Lozhkov <anton@huggingface.co>
  * remove hardcoded 16
  * Remove bogus
  * fix some stuff
  * finish
  * improve logging
  * docs
  * upload
  Co-authored-by: Nathan Lambert <nol@berkeley.edu>
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  Co-authored-by: Nathan Lambert <nathan@huggingface.co>
  Co-authored-by: Anton Lozhkov <anton@huggingface.co>
- 05 Oct, 2022 2 commits
Anton Lozhkov authored
  * init
  * improve add_noise
  * [debug start] run slow test
  * [debug end]
  * quick revert
  * Add docstrings and warnings + API tests
  * Make the warning less spammy
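add_noise, improved above, is the training-side entry point of the schedulers: it forms x_t from clean samples, noise, and integer timesteps. A minimal sketch of how it is typically called in a training loop, using DDPMScheduler as an example:

    import torch
    from diffusers import DDPMScheduler

    scheduler = DDPMScheduler(num_train_timesteps=1000)
    clean_images = torch.randn(4, 3, 64, 64)  # stand-in for a real data batch
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,))
    noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
    # The model is then trained to predict `noise` (or x0 / v) from `noisy_images`.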
Kashif Rasul authored
  * pytorch timesteps
  * style
  * get rid of if-else
  * fix test
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 28 Sep, 2022 1 commit
Anton Lozhkov authored
  * Fix the LMS pytorch regression
  * Copy over the changes from #637
  * Copy over the changes from #637
  * Fix betas test
- 27 Sep, 2022 1 commit
Kashif Rasul authored
  * pytorch only schedulers
  * fix style
  * remove match_shape
  * pytorch only ddpm
  * remove SchedulerMixin
  * remove numpy from karras_ve
  * fix types
  * remove numpy from lms_discrete
  * remove numpy from pndm
  * fix typo
  * remove mixin and numpy from sde_vp and ve
  * remove remaining tensor_format
  * fix style
  * sigmas has to be torch tensor
  * removed set_format in readme
  * remove set format from docs
  * remove set_format from pipelines
  * update tests
  * fix typo
  * continue to use mixin
  * fix imports
  * removed unused imports
  * match shape instead of assuming image shapes
  * remove import typo
  * update call to add_noise
  * use math instead of numpy
  * fix t_index
  * removed commented out numpy tests
  * timesteps needs to be discrete
  * cast timesteps to int in flax scheduler too
  * fix device mismatch issue
  * small fix
  * Update src/diffusers/schedulers/scheduling_pndm.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 17 Sep, 2022 1 commit
Jonatan Kłosko authored
  * Unify offset configuration in DDIM and PNDM schedulers
  * Format
  * Add missing variables
  * Fix pipeline test
  * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  * Default set_alpha_to_one to false
  * Format
  * Add tests
  * Format
  * add deprecation warning
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
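The offset being unified above is what the library exposes today as the steps_offset config of the DDIM/PNDM schedulers; the sketch below uses the current kwarg names, which is an assumption about how this commit's option ultimately surfaced:

    from diffusers import DDIMScheduler

    # Stable Diffusion checkpoints conventionally ship with steps_offset=1 and
    # set_alpha_to_one=False in their scheduler config.
    scheduler = DDIMScheduler(
        beta_start=0.00085,
        beta_end=0.012,
        beta_schedule="scaled_linear",
        steps_offset=1,
        set_alpha_to_one=False,
    )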
- 16 Sep, 2022 1 commit
Sid Sahai authored
  * [WIP] add LMSDiscreteSchedulerTest
  * fixes for comments
  * add torch numpy test
  * rebase
  * Update tests/test_scheduler.py
  * Update tests/test_scheduler.py
  * style
  * return residuals
  Co-authored-by: Anton Lozhkov <anton@huggingface.co>
- 15 Sep, 2022 1 commit
Kashif Rasul authored
  * beta never changes, removed from state
  * fix typos in docs
  * removed unused var
  * initial ddim flax scheduler
  * import
  * added dummy objects
  * fix style
  * fix typo
  * docs
  * fix typo in comment
  * set return type
  * added flax ddpm
  * fix style
  * remake
  * pass PRNG key as argument and split before use
  * fix doc string
  * use config
  * added flax Karras VE scheduler
  * make style
  * fix dummy
  * fix ndarray type annotation
  * replace returns a new state
  * added lms_discrete scheduler
  * use self.config
  * add_noise needs state
  * use config
  * use config
  * docstring
  * added flax score sde ve
  * fix imports
  * fix typos
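The Flax schedulers above are written functionally: items like "replace returns a new state" and "add_noise needs state" reflect that every call takes a scheduler state and returns a fresh one instead of mutating attributes. A generic sketch of that pattern (invented names, not the library's actual classes):

    from dataclasses import dataclass, replace
    from typing import Optional

    import jax.numpy as jnp

    @dataclass(frozen=True)
    class SchedulerState:
        timesteps: jnp.ndarray
        num_inference_steps: Optional[int] = None

    def set_timesteps(state: SchedulerState, num_inference_steps: int,
                      num_train_timesteps: int = 1000) -> SchedulerState:
        # Return a new state rather than mutating the old one, keeping the
        # function pure and friendly to jax.jit.
        step = num_train_timesteps // num_inference_steps
        timesteps = jnp.arange(0, num_train_timesteps, step)[::-1]
        return replace(state, timesteps=timesteps, num_inference_steps=num_inference_steps)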
- 13 Sep, 2022 1 commit
Nathan Lambert authored
  * initial attempt at solving
  * fix pndm power of 3 inference_step
  * add power of 3 test
  * fix index in pndm test, remove ddim test
  * add comments, change to round()
- 12 Sep, 2022 1 commit
Kashif Rasul authored
  * update expected results of slow tests
  * relax sum and mean tests
  * Print shapes when reporting exception
  * formatting
  * fix sentence
  * relax test_stable_diffusion_fast_ddim for gpu fp16
  * relax flaky tests on GPU
  * added comment on large tolerances
  * black
  * format
  * set scheduler seed
  * added generator
  * use np.isclose
  * set num_inference_steps to 50
  * fix dep. warning
  * update expected_slice
  * preprocess if image
  * updated expected results
  * updated expected from CI
  * pass generator to VAE
  * undo change back to orig
  * use original
  * revert back the expected on cpu
  * revert back values for CPU
  * more undo
  * update result after using gen
  * update mean
  * set generator for mps
  * update expected on CI server
  * undo
  * use new seed every time
  * cpu manual seed
  * reduce num_inference_steps
  * style
  * use generator for randn
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 05 Sep, 2022 1 commit
Patrick von Platen authored
  * add outputs for models
  * add for pipelines
  * finish schedulers
  * better naming
  * adapt tests as well
  * replace dict access with . access
  * make schedulers work
  * finish
  * correct readme
  * make bcp compatible
  * up
  * small fix
  * finish
  * more fixes
  * more fixes
  * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  * Update src/diffusers/models/vae.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  * Adapt model outputs
  * Apply more suggestions
  * finish examples
  * correct
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
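The outputs added above let callers write pipeline_output.images or scheduler_output.prev_sample instead of indexing into tuples or dicts, while keeping dict-style access alive during the transition. A simplified sketch of the idea (not the actual BaseOutput implementation):

    from dataclasses import dataclass

    import torch

    @dataclass
    class SchedulerOutput:
        """Tiny stand-in for an output object usable as out.prev_sample or out["prev_sample"]."""
        prev_sample: torch.Tensor

        def __getitem__(self, key):
            # Keep dict-style access working for code written against the old API.
            return getattr(self, key)

    out = SchedulerOutput(prev_sample=torch.zeros(1, 4, 64, 64))
    assert out.prev_sample is out["prev_sample"]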
- 31 Aug, 2022 1 commit
Nouamane Tazi authored
  * format timesteps attrs to np arrays in pndm scheduler because lists don't get formatted to tensors in `self.set_format`
  * convert to long type to use timesteps as indices for tensors
  * add scheduler set_format test
  * fix `_timesteps` type
  * make style with black 22.3.0 and isort 5.10.1
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 24 Aug, 2022 1 commit
Kashif Rasul authored
  * added test workflow and fixed failing test
  * 4 decimal places
- 22 Aug, 2022 1 commit
Nathan Lambert authored
- 21 Jul, 2022 2 commits
Patrick von Platen authored
Nathan Lambert authored
  * work in progress, fixing tests for numpy and make deterministic
  * make tests pass via pytorch
  * make pytorch == numpy test cleaner
  * change default tensor format pndm --> pt
- 20 Jul, 2022 1 commit
Nathan Lambert authored
  * organize PNDM tests, begin API change
  * clean timestep API PNDM
  * update pipeline PNDM
  * fix typo
  * API clean round 2
  * small nit
- 19 Jul, 2022 1 commit
Nathan Lambert authored
  * clean ddpm api to match ddim
  * correct ve sde class
  * update pipeline API for ve sde
  * make style
  * Apply suggestions from code review
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 18 Jul, 2022 2 commits
Patrick von Platen authored
Nathan Lambert authored
  * improve comments for sde_ve scheduler, init tests
  * more comments, tweaking pipelines
  * timesteps --> num_training_timesteps, some comments
  * merge cpu test, add m1 data
  * fix scheduler tests with num_train_timesteps
  * make np compatible, add tests for sde ve
  * minor default variable fixes
  * make style and fix-copies
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 15 Jul, 2022 1 commit
Patrick von Platen authored
  * finish
  * up