- 19 Dec, 2022 1 commit
Anton Lozhkov authored
- 18 Dec, 2022 1 commit
Will Berman authored
* [wip] attention block updates * [wip] unCLIP unet decoder and super res * [wip] unCLIP prior transformer * [wip] scheduler changes * [wip] text proj utility class * [wip] UnCLIPPipeline * [wip] kakaobrain unCLIP convert script * [unCLIP pipeline] fixes re: @patrickvonplaten remove callbacks move denoising loops into call function * UNCLIPScheduler re: @patrickvonplaten Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified DDPM scheduler with changes to support karlo * mask -> attention_mask re: @patrickvonplaten * [DDPMScheduler] remove leftover change * [docs] PriorTransformer * [docs] UNet2DConditionModel and UNet2DModel * [nit] UNCLIPScheduler -> UnCLIPScheduler matches existing unclip naming better * [docs] SchedulingUnCLIP * [docs] UnCLIPTextProjModel * refactor * finish licenses * rename all to attention_mask and prep in models * more renaming * don't expose unused configs * final renaming fixes * remove x attn mask when not necessary * configure kakao script to use new class embedding config * fix copies * [tests] UnCLIPScheduler * finish x attn * finish * remove more * rename condition blocks * clean more * Apply suggestions from code review * up * fix * [tests] UnCLIPPipelineFastTests * remove unused imports * [tests] UnCLIPPipelineIntegrationTests * correct * make style Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
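A rough usage sketch for the new pipeline, assuming the checkpoint produced by the kakaobrain conversion script is published as kakaobrain/karlo-v1-alpha (model id, dtype and prompt are illustrative, not part of the commit):

```python
# Sketch: text-to-image with UnCLIPPipeline (prior -> decoder -> super resolution).
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
).to("cuda")

# The prior transformer maps CLIP text embeddings to image embeddings; the
# decoder and the super-resolution UNets then turn those into the final image.
image = pipe("a high-quality photo of a corgi wearing sunglasses").images[0]
image.save("corgi.png")
```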
- 15 Dec, 2022 2 commits
YiYi Xu authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Anton Lozhkov authored
* add fast tests * better tests and fp16 * batch fixes * Reuse preprocessing * quickfix
- 13 Dec, 2022 1 commit
Patrick von Platen authored
upgrade version
- 08 Dec, 2022 1 commit
Anton Lozhkov authored
* Fix PyCharm/VSCode static type checking for dummy objects * Re-add dummies * Fix AudioDiffusion imports * fix import * fix import * Update utils/check_dummies.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/utils/import_utils.py * Update src/diffusers/__init__.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/__init__.py * fix double import Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 07 Dec, 2022 2 commits
Randolph-zeng authored
* Update scheduling_repaint.py * update the expected image Co-authored-by: anton- <anton@huggingface.co>
Cheng Lu authored
* add singlestep dpmsolver * fix a style typo * fix a style typo * add docs * finish Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
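A minimal sketch of dropping the single-step solver into an existing pipeline (the scheduler class name matches what this PR adds; the model id and step count are assumptions):

```python
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the existing scheduler config so betas, timesteps etc. stay consistent.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```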
- 02 Dec, 2022 4 commits
Patrick von Platen authored
Patrick von Platen authored
bachr authored
- Add the missing `scale_model_input` method to `FlaxLMSDiscreteScheduler` - Use `jnp.append` for appending to `state.derivatives` - Use `jnp.delete` to pop from `state.derivatives`
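A hedged sketch of the two patterns behind these bullets: `scale_model_input` follows the k-diffusion convention of dividing the sample by sqrt(sigma^2 + 1), and because Flax scheduler state is immutable the derivatives history is rebuilt with `jnp.append`/`jnp.delete` rather than mutated in place (names and shapes below are illustrative, not the scheduler's exact fields):

```python
import jax.numpy as jnp

def scale_model_input(sample: jnp.ndarray, sigma: float) -> jnp.ndarray:
    # The UNet sees the sample scaled down by the current noise level.
    return sample / jnp.sqrt(sigma**2 + 1)

def push_derivative(derivatives: jnp.ndarray, new: float, order: int) -> jnp.ndarray:
    # Functional stand-in for list.append(...) / list.pop(0) on immutable state.
    derivatives = jnp.append(derivatives, new)
    if derivatives.shape[0] > order:        # keep only the last `order` entries
        derivatives = jnp.delete(derivatives, 0)
    return derivatives
```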
Patrick von Platen authored
* up * up * finish * finish * up * up * finish
- 01 Dec, 2022 2 commits
Suraj Patil authored
Suraj Patil authored
* support v prediction in other schedulers * v heun * add tests for v pred * fix tests * fix test euler a * v ddpm
- 30 Nov, 2022 2 commits
Anton Lozhkov authored
Patrick von Platen authored
- 29 Nov, 2022 1 commit
Rohan Taori authored
* cast to float for quantile method * add fp16 test for DPMSolverMultistepScheduler fix * formatting update
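For context, torch.quantile only accepts float32/float64, so the dynamic-thresholding step has to upcast fp16 samples and cast the result back. A sketch of the pattern (not the scheduler's exact code):

```python
import torch

def threshold_sample(sample: torch.Tensor, ratio: float = 0.995) -> torch.Tensor:
    orig_dtype = sample.dtype
    flat = sample.float().reshape(sample.shape[0], -1)      # fp16 -> fp32 for quantile
    s = torch.quantile(flat.abs(), ratio, dim=1, keepdim=True)
    s = torch.clamp(s, min=1.0)                             # never shrink below [-1, 1]
    flat = torch.max(torch.min(flat, s), -s) / s            # clip to [-s, s], then rescale
    return flat.reshape(sample.shape).to(orig_dtype)        # restore the original dtype
```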
- 28 Nov, 2022 2 commits
Patrick von Platen authored
* Add heun * Finish first version of heun * remove bogus * finish * finish * improve * up * up * fix more * change progress bar * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * finish * up * up * up
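For context, the update a Heun-type sampler performs is a second-order correction on top of the Euler step: evaluate the slope at the current sigma, take a trial Euler step, re-evaluate the slope there, and average the two. A schematic version, where `denoise` stands in for the model's denoised prediction:

```python
def heun_step(x, sigma, sigma_next, denoise):
    d = (x - denoise(x, sigma)) / sigma                    # slope at the current noise level
    x_euler = x + d * (sigma_next - sigma)                 # first-order (Euler) prediction
    if sigma_next == 0:                                    # last step: plain Euler
        return x_euler
    d_next = (x_euler - denoise(x_euler, sigma_next)) / sigma_next
    return x + 0.5 * (d + d_next) * (sigma_next - sigma)   # averaged (Heun) update
```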
Suraj Patil authored
* add get_velocity * add v prediction for training * fix saving * add revision arg * fix saving * save checkpoints dreambooth * fix saving embeds * add instruction in readme * quality * noise_pred -> model_pred
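Schematically, the velocity target that `get_velocity` provides (and that the training scripts regress against when prediction_type is "v_prediction") is v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * x_0, as in Salimans & Ho's progressive distillation paper. A sketch, with broadcasting simplified:

```python
def get_velocity(sample, noise, timesteps, alphas_cumprod):
    # `sample` is the clean latent x_0, `alphas_cumprod` the cumulative alpha schedule.
    a = alphas_cumprod[timesteps] ** 0.5
    s = (1.0 - alphas_cumprod[timesteps]) ** 0.5
    while a.dim() < sample.dim():           # broadcast over the image dimensions
        a, s = a.unsqueeze(-1), s.unsqueeze(-1)
    return a * noise - s * sample
```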
- 25 Nov, 2022 3 commits
Kashif Rasul authored
* added initial v-pred support to DPM-solver * fix sign * added v_prediction to flax * fixed typo
Patrick von Platen authored
* fix * fix deprecated kwargs logic * add tests * finish
Pedro Cuenca authored
* Adapt ddpm, ddpmsolver to prediction_type. * Deprecate predict_epsilon in __init__. * Bring FlaxDDIMScheduler up to date with DDIMScheduler. * Set prediction_type as an ivar for consistency. * Convert pipeline_ddpm * Adapt tests. * Adapt unconditional training script. * Adapt BitDiffusion example. * Add missing kwargs in dpmsolver_multistep * Ugly workaround to accept deprecated predict_epsilon when loading schedulers using from_pretrained. * make style * Remove import no longer in use. * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Use config.prediction_type everywhere * Add a couple of Flax prediction type tests. * make style * fix register deprecated arg Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
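The user-facing effect of the migration, sketched (the old flag is shown only for comparison; it is deprecated but still accepted with a warning):

```python
from diffusers import DDPMScheduler

# before: DDPMScheduler(predict_epsilon=True)              # deprecated boolean flag
scheduler_eps = DDPMScheduler(prediction_type="epsilon")   # model predicts the noise
scheduler_x0 = DDPMScheduler(prediction_type="sample")     # model predicts x_0 directly
```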
- 24 Nov, 2022 1 commit
Suraj Patil authored
* add v prediction * adapt euler for v pred * velocity -> v_prediction * simplify * fix naming * Update src/diffusers/schedulers/scheduling_euler_discrete.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * style Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 22 Nov, 2022 1 commit
regisss authored
- 18 Nov, 2022 2 commits
NotNANtoN authored
Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower. Co-authored-by: Anton Lozhkov <anton@huggingface.co>
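A sketch of what the fix amounts to: keep the scheduler's sigma table in full precision (so the later `integrate.quad` call never sees half-precision values) and cast only the per-step slice that touches the fp16 samples. Names are illustrative, not the scheduler's exact code:

```python
def add_noise(original_samples, noise, sigmas, step_indices):
    # `sigmas` stays float32; only the values applied to the samples are cast.
    sigma = sigmas[step_indices].to(original_samples.dtype)
    while sigma.dim() < original_samples.dim():
        sigma = sigma.unsqueeze(-1)
    return original_samples + noise * sigma
```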
Simon Kirsten authored
[FLAX] Fix loading scheduler from subfolder
- 15 Nov, 2022 1 commit
Patrick von Platen authored
* add conversion script for vae * uP * uP * more changes * push * up * finish again * up * up * up * up * finish * up * uP * up * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Anton Lozhkov <anton@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com> * up * up Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Anton Lozhkov <anton@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com>
- 14 Nov, 2022 1 commit
Nathan Lambert authored
* re-add RL model code * match model forward api * add register_to_config, pass training tests * fix tests, update forward outputs * remove unused code, some comments * add to docs * remove extra embedding code * unify time embedding * remove conv1d output sequential * remove sequential from conv1dblock * style and deleting duplicated code * clean files * remove unused variables * clean variables * add 1d resnet block structure for downsample * rename as unet1d * fix renaming * rename files * add get_block(...) api * unify args for model1d like model2d * minor cleaning * fix docs * improve 1d resnet blocks * fix tests, remove permuts * fix style * add output activation * rename flax blocks file * Add Value Function and corresponding example script to Diffuser implementation (#884) * valuefunction code * start example scripts * missing imports * bug fixes and placeholder example script * add value function scheduler * load value function from hub and get best actions in example * very close to working example * larger batch size for planning * more tests * merge unet1d changes * wandb for debugging, use newer models * success! * turns out we just need more diffusion steps * run on modal * merge and code cleanup * use same api for rl model * fix variance type * wrong normalization function * add tests * style * style and quality * edits based on comments * style and quality * remove unused var * hack unet1d into a value function * add pipeline * fix arg order * add pipeline to core library * community pipeline * fix couple shape bugs * style * Apply suggestions from code review Co-authored-by:
Nathan Lambert <nathan@huggingface.co> * update post merge of scripts * add mdiblock / outblock architecture * Pipeline cleanup (#947) * valuefunction code * start example scripts * missing imports * bug fixes and placeholder example script * add value function scheduler * load value function from hub and get best actions in example * very close to working example * larger batch size for planning * more tests * merge unet1d changes * wandb for debugging, use newer models * success! * turns out we just need more diffusion steps * run on modal * merge and code cleanup * use same api for rl model * fix variance type * wrong normalization function * add tests * style * style and quality * edits based on comments * style and quality * remove unused var * hack unet1d into a value function * add pipeline * fix arg order * add pipeline to core library * community pipeline * fix couple shape bugs * style * Apply suggestions from code review * clean up comments * convert older script to using pipeline and add readme * rename scripts * style, update tests * delete unet rl model file * remove imports in src Co-authored-by:
Nathan Lambert <nathan@huggingface.co> * Update src/diffusers/models/unet_1d_blocks.py * Update tests/test_models_unet.py * RL Cleanup v2 (#965) * valuefunction code * start example scripts * missing imports * bug fixes and placeholder example script * add value function scheduler * load value function from hub and get best actions in example * very close to working example * larger batch size for planning * more tests * merge unet1d changes * wandb for debugging, use newer models * success! * turns out we just need more diffusion steps * run on modal * merge and code cleanup * use same api for rl model * fix variance type * wrong normalization function * add tests * style * style and quality * edits based on comments * style and quality * remove unused var * hack unet1d into a value function * add pipeline * fix arg order * add pipeline to core library * community pipeline * fix couple shape bugs * style * Apply suggestions from code review * clean up comments * convert older script to using pipeline and add readme * rename scripts * style, update tests * delete unet rl model file * remove imports in src * add specific vf block and update tests * style * Update tests/test_models_unet.py Co-authored-by:
Nathan Lambert <nathan@huggingface.co> * fix quality in tests * fix quality style, split test file * fix checks / tests * make timesteps closer to main * unify block API * unify forward api * delete lines in examples * style * examples style * all tests pass * make style * make dance_diff test pass * Refactoring RL PR (#1200) * init file changes * add import utils * finish cleaning files, imports * remove import flags * clean examples * fix imports, tests for merge * update readmes * hotfix for tests * quality * fix some tests * change defaults * more mps test fixes * unet1d defaults * do not default import experimental * defaults for tests * fix tests * fix-copies * fix * changes per Patrik's comments (#1285) * changes per Patrik's comments * update conversion script * fix renaming * skip more mps tests * last test fix * Update examples/rl/README.md Co-authored-by:
Ben Glickenhaus <benglickenhaus@gmail.com>
- 09 Nov, 2022 2 commits
Anton Lozhkov authored
* Match the generator device to the pipeline for DDPM and DDIM * style * fix * update values * fix fast tests * trigger slow tests * deprecate * last value fixes * mps fixes
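A short sketch of the behaviour this enables: the generator lives on the same device as the pipeline, so the noise is sampled there directly (checkpoint and seed are illustrative):

```python
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)   # matches the pipeline device
image = pipe(generator=generator).images[0]
```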
Patrick von Platen authored
* fix tests * Fix more * more
- 08 Nov, 2022 3 commits
Patrick von Platen authored
* [Scheduler] Move predict epsilon to init * up * uP * uP * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * up Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Pedro Cuenca authored
* Schedulers: don't use float64 on mps * Test set_timesteps() on device (float schedulers). * SD pipeline: use device in set_timesteps. * SD in-painting pipeline: use device in set_timesteps. * Tests: fix mps crashes. * Skip test_load_pipeline_from_git on mps. Not compatible with float16. * Use device.type instead of str in Euler schedulers.
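The gist of the dtype change, as a sketch: mps has no float64 kernels, so the interpolated timesteps/sigmas stay in float32 on that device (helper name is illustrative):

```python
import numpy as np
import torch

def timesteps_to_device(timesteps: np.ndarray, device: torch.device) -> torch.Tensor:
    dtype = torch.float32 if device.type == "mps" else torch.float64
    return torch.from_numpy(timesteps).to(device=device, dtype=dtype)
```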
Suraj Patil authored
* fix noise device in ddim sched * fix typo * self.device -> device * remove duplicated if * use str device * don't use str for device
- 06 Nov, 2022 1 commit
Cheng Lu authored
* add dpmsolver discrete pytorch scheduler * fix some typos in dpm-solver pytorch * add dpm-solver pytorch in stable-diffusion pipeline * add jax/flax version dpm-solver * change code style * change code style * add docs * add `add_noise` method for dpmsolver * add pytorch unit test for dpmsolver * add dummy object for pytorch dpmsolver * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * resolve the code comments * rename the file * change class name * fix code style * add auto docs for dpmsolver multistep * add more explanations for the stabilizing trick (for steps < 15) * delete the dummy file * change the API name of predict_epsilon, algorithm_type and solver_type * add compatible lists Co-authored-by: Suraj Patil <surajp815@gmail.com>
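For reference, a sketch of the configuration surface this ends up exposing (argument names as mentioned in the commit; the values are common choices, not requirements):

```python
from diffusers import DPMSolverMultistepScheduler

scheduler = DPMSolverMultistepScheduler(
    num_train_timesteps=1000,
    beta_schedule="scaled_linear",
    algorithm_type="dpmsolver++",   # or "dpmsolver"
    solver_type="midpoint",         # or "heun"
    solver_order=2,
)
scheduler.set_timesteps(20)  # the stabilizing trick mentioned above targets step counts below ~15
```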
- 04 Nov, 2022 2 commits
Chen Wu (吴尘) authored
* Add CycleDiffusion pipeline for Stable Diffusion * Add the option of passing noise to DDIMScheduler Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator. * Update README.md * Update README.md * Update pipeline_stable_diffusion_cycle_diffusion.py * Update pipeline_stable_diffusion_cycle_diffusion.py * Update pipeline_stable_diffusion_cycle_diffusion.py * Update pipeline_stable_diffusion_cycle_diffusion.py * Update scheduling_ddim.py * Update import format * Update pipeline_stable_diffusion_cycle_diffusion.py * Update scheduling_ddim.py * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update scheduling_ddim.py * Update scheduling_ddim.py * Update scheduling_ddim.py * add two tests * Update pipeline_stable_diffusion_cycle_diffusion.py * Update pipeline_stable_diffusion_cycle_diffusion.py * Update README.md * Rename pipeline name as suggested in the latest reviewer comment * Update test_pipelines.py * Update test_pipelines.py * Update test_pipelines.py * Update pipeline_stable_diffusion_cycle_diffusion.py * Remove the generator This generator does not control all randomness during sampling, which can be misleading. * Update optimal hyperparameters * Update src/diffusers/pipelines/stable_diffusion/README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Apply suggestions from code review * uP * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * up * up * Replace assert with ValueError * finish docs Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Suraj Patil <surajp815@gmail.com>
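A rough sketch of how the pipeline is meant to be driven: a DDIM scheduler plus a source prompt describing the input image and a target prompt describing the desired output (model id, file name and parameter values are assumptions):

```python
from PIL import Image
from diffusers import CycleDiffusionPipeline, DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler)

init_image = Image.open("brown_horse.png").convert("RGB").resize((512, 512))
image = pipe(
    prompt="A black colored horse",          # what the output should show
    source_prompt="A brown colored horse",   # what the input image shows
    image=init_image,
    strength=0.8,
    guidance_scale=2.0,
    source_guidance_scale=1.0,
    num_inference_steps=100,
    eta=0.1,
).images[0]
```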
Anton Lozhkov authored
* Bump to 0.8.0.dev0 * deprecate int timesteps * style
- 03 Nov, 2022 3 commits
Suraj Patil authored
* handle device for randn in euler step * convert device to str
Will Berman authored
* Changes for VQ-diffusion VQVAE Add specify dimension of embeddings to VQModel: `VQModel` will by default set the dimension of embeddings to the number of latent channels. The VQ-diffusion VQVAE has a smaller embedding dimension, 128, than number of latent channels, 256. Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers. VQ-diffusion's VQVAE uses those two block types. * Changes for VQ-diffusion transformer Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer. SpatialTransformer: - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous. - `in_channels` was made optional in the constructor so two locations where it was passed as a positional arg were moved to kwargs - modified forward pass to take optional timestep embeddings ImagePositionalEmbeddings: - added to provide positional embeddings to discrete inputs for latent pixels BasicTransformerBlock: - norm layers were made configurable so that the VQ-diffusion could use AdaLayerNorm with timestep embeddings - modified forward pass to take optional timestep embeddings CrossAttention: - now may optionally take a bias parameter for its query, key, and value linear layers FeedForward: - Internal layers are now configurable ApproximateGELU: - Activation function in VQ-diffusion's feedforward layer AdaLayerNorm: - Norm layer modified to incorporate timestep embeddings * Add VQ-diffusion scheduler * Add VQ-diffusion pipeline * Add VQ-diffusion convert script to diffusers * Add VQ-diffusion dummy objects * Add VQ-diffusion markdown docs * Add VQ-diffusion tests * some renaming * some fixes * more renaming * correct * fix typo * correct weights * finalize * fix tests * Apply suggestions from code review Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * finish * finish * up Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
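A minimal usage sketch for a converted checkpoint (the model id, prompt and step count are assumptions):

```python
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
# The transformer denoises discrete VQVAE token indices rather than continuous latents.
image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
```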
Revist authored
* feat: add repaint * fix: fix quality check with `make fix-copies` * fix: remove old unnecessary arg * chore: change default to DDPM (looks better in experiments) * ".to(device)" changed to "device=" Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * make generator device-specific Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * make generator device-specific and change shape Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * fix: add preprocessing for image and mask Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * fix: update test Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Update src/diffusers/pipelines/repaint/pipeline_repaint.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Add docs and examples * Fix toctree Co-authored-by: fja <fja@zurich.ibm.com> Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Anton Lozhkov <anton@huggingface.co>
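A rough sketch of the intended use, in the spirit of the docs/examples added above (checkpoint, file names, loading details and resampling parameters are assumptions):

```python
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

original = Image.open("face.png").convert("RGB").resize((256, 256))
mask = Image.open("mask.png").convert("RGB").resize((256, 256))   # binary keep/inpaint mask

image = pipe(
    image=original,
    mask_image=mask,
    num_inference_steps=250,
    jump_length=10,      # RePaint resampling: how far to jump back in time ...
    jump_n_sample=10,    # ... and how many times to resample at each jump
    eta=0.0,
).images[0]
```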
- 02 Nov, 2022 1 commit
Grigory Sizov authored
* Fix equality test for ddim and ddpm * add docs for use_clipped_model_output in DDIM * fix inline comment * reorder imports in test_pipelines.py * Ignore use_clipped_model_output if scheduler doesn't take it
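The last bullet describes a guard along these lines (a sketch; the pipelines' actual helper may differ):

```python
import inspect

def step_with_optional_clip(scheduler, model_output, t, sample, use_clipped_model_output=True):
    # Only forward the kwarg to schedulers whose step() signature accepts it
    # (DDIM does; most other schedulers do not).
    accepts = "use_clipped_model_output" in inspect.signature(scheduler.step).parameters
    kwargs = {"use_clipped_model_output": use_clipped_model_output} if accepts else {}
    return scheduler.step(model_output, t, sample, **kwargs)
```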