- 03 Nov, 2022 4 commits
-
-
Will Berman authored
* Changes for VQ-diffusion VQVAE
  Allow specifying the dimension of embeddings in `VQModel`: `VQModel` defaults the embedding dimension to the number of latent channels, but the VQ-diffusion VQVAE uses a smaller embedding dimension (128) than its number of latent channels (256).
  Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the down/up unet block helpers; VQ-diffusion's VQVAE uses those two block types.
* Changes for VQ-diffusion transformer
  Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer.
  SpatialTransformer:
  - can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones
  - `in_channels` was made optional in the constructor, so two call sites that passed it as a positional arg were moved to kwargs
  - forward pass modified to take optional timestep embeddings
  ImagePositionalEmbeddings:
  - added to provide positional embeddings to discrete inputs for latent pixels
  BasicTransformerBlock:
  - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
  - forward pass modified to take optional timestep embeddings
  CrossAttention:
  - may now optionally take a bias parameter for its query, key, and value linear layers
  FeedForward:
  - internal layers are now configurable
  ApproximateGELU:
  - activation function used in VQ-diffusion's feedforward layer
  AdaLayerNorm:
  - norm layer modified to incorporate timestep embeddings
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review
  Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Apply suggestions from code review
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish
* finish
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
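The AdaLayerNorm change is the piece that lets the transformer condition its normalization on the diffusion timestep. A minimal sketch of the idea, with illustrative class and argument names rather than the exact diffusers implementation:

```python
import torch
import torch.nn as nn


class AdaLayerNormSketch(nn.Module):
    """LayerNorm whose scale and shift are predicted from a timestep embedding."""

    def __init__(self, embedding_dim: int, num_timesteps: int):
        super().__init__()
        self.emb = nn.Embedding(num_timesteps, embedding_dim)
        self.silu = nn.SiLU()
        # Predict a per-channel scale and shift from the timestep embedding.
        self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
        # The norm itself carries no affine parameters; they come from the timestep.
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

    def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim), timestep: (batch,) integer timesteps
        emb = self.linear(self.silu(self.emb(timestep)))
        scale, shift = emb.chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```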
-
Revist authored
* feat: add repaint * fix: fix quality check with `make fix-copies` * fix: remove old unnecessary arg * chore: change default to DDPM (looks better in experiments) * ".to(device)" changed to "device="
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific and change shape
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* fix: add preprocessing for image and mask
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* fix: update test
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/pipelines/repaint/pipeline_repaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add docs and examples * Fix toctree
Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
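A rough usage sketch of the RePaint pipeline added here, pairing it with an unconditional DDPM checkpoint as the generative prior; the checkpoint id, file names, and mask convention are examples rather than guarantees:

```python
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

# An unconditional face model serves as the prior; only the masked region is regenerated.
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

original = Image.open("face.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # marks which pixels to keep vs. repaint

# The commit makes the generator device-specific, so create it explicitly.
generator = torch.Generator(device="cpu").manual_seed(0)
result = pipe(image=original, mask_image=mask, generator=generator).images[0]
result.save("repainted.png")
```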
-
Anton Lozhkov authored
* Allow saving `None` pipeline components * support flax as well * style
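In practice this means a pipeline loaded with a component explicitly set to `None` can now round-trip through `save_pretrained`; a sketch (model id is just an example):

```python
from diffusers import StableDiffusionPipeline

# Drop the safety checker by passing None for that component.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None
)

# With this change the None entry is simply recorded and skipped when serializing,
# and the pipeline can be reloaded from the saved directory without that component.
pipe.save_pretrained("./sd-no-safety-checker")
reloaded = StableDiffusionPipeline.from_pretrained("./sd-no-safety-checker")
```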
-
Anton Lozhkov authored
* Remove the hub token * replace repos * style
-
- 02 Nov, 2022 4 commits
-
-
Patrick von Platen authored
* [Loading] Ignore unneeded files * up
-
Grigory Sizov authored
* Fix equality test for ddim and ddpm * add docs for use_clipped_model_output in DDIM * fix inline comment * reorder imports in test_pipelines.py * Ignore use_clipped_model_output if scheduler doesn't take it
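The last bullet is about forwarding `use_clipped_model_output` only to schedulers whose `step` accepts it. A small sketch of that pattern, with an illustrative helper name:

```python
import inspect


def step_with_optional_kwargs(scheduler, model_output, timestep, sample, **maybe_kwargs):
    """Forward only the kwargs that this scheduler's step() actually accepts."""
    accepted = set(inspect.signature(scheduler.step).parameters)
    kwargs = {k: v for k, v in maybe_kwargs.items() if k in accepted}
    return scheduler.step(model_output, timestep, sample, **kwargs)


# DDIMScheduler.step accepts use_clipped_model_output, DDPMScheduler.step does not,
# so for DDPM the flag is silently dropped instead of raising a TypeError:
# out = step_with_optional_kwargs(scheduler, noise_pred, t, latents, use_clipped_model_output=True)
```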
-
Anton Lozhkov authored
* [WIP][CI] Framework and hardware-specific docker images for CI tests * username * fix cpu * try out the image * push latest * update workspace * no root isolation for actions * add a flax image * flax and onnx matrix * fix runners * add reports * onnxruntime image * retry tpu * fix * fix * build onnxruntime * naming * onnxruntime-gpu image * onnxruntime-gpu image, slow tests * latest jax version * trigger flax * run flax tests in one thread * fast flax tests on cpu * fast flax tests on cpu * trigger slow tests * rebuild torch cuda * force cuda provider * fix onnxruntime tests * trigger slow * don't specify gpu for tpu * optimize * memory limit * fix flax tests * disable docker cache
-
Lewington-pitsos authored
* improve test precision: get tests passing with greater precision using lewington images * make old numpy load function a wrapper around a more flexible numpy loading function * adhere to black formatting * add more black formatting * adhere to isort * loosen precision and replace path
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
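A sketch of the loader refactor described above, with hypothetical function names: the original local-file loader becomes a thin wrapper around a more flexible one that also accepts URLs.

```python
import io
from urllib.request import urlopen

import numpy as np


def load_numpy_flexible(path_or_url: str) -> np.ndarray:
    """Load an .npy array from a local path or an http(s) URL."""
    if path_or_url.startswith(("http://", "https://")):
        with urlopen(path_or_url) as response:
            return np.load(io.BytesIO(response.read()))
    return np.load(path_or_url)


def load_numpy(filename: str) -> np.ndarray:
    """Old entry point kept for existing tests; now just delegates."""
    return load_numpy_flexible(filename)
```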
-
- 31 Oct, 2022 4 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers * finish * fix warnings and add test * finish * more replacements * adapt fast tests hf token * correct more * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
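The scheduler docs referenced here center on swapping schedulers on a loaded pipeline while reusing the existing scheduler's config; a typical pattern (model id is an example):

```python
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Build a compatible scheduler from the current one's config so beta/timestep
# settings carry over, then swap it in place.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```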
-
hlky authored
* k-diffusion-euler * make style make quality * make fix-copies * fix tests for euler a * Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/schedulers/scheduling_euler_discrete.py
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/schedulers/scheduling_euler_discrete.py
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* remove unused arg and method * update doc * quality * make flake happy * use logger instead of warn * raise error instead of deprecation * don't require scipy * pass generator in step * fix tests * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/test_scheduler.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* remove unused generator * pass generator as extra_step_kwargs * update tests * pass generator as kwarg * pass generator as kwarg * quality * fix test for lms * fix tests
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
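One behavioural point from this commit: the ancestral sampler draws fresh noise inside `step`, so reproducibility requires passing a `generator` through it. A toy sketch (the zero tensor stands in for a real UNet prediction):

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(10)

generator = torch.Generator("cpu").manual_seed(0)
sample = torch.randn(1, 4, 8, 8, generator=generator)

for t in scheduler.timesteps:
    model_output = torch.zeros_like(sample)  # stand-in for a real UNet prediction
    # Passing the generator makes the noise added by the ancestral step reproducible.
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample
```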
-
Patrick von Platen authored
-
- 28 Oct, 2022 8 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* up * up * up * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * Apply suggestions from code review
-
Patrick von Platen authored
* [Tests] Speed up slow tests * Up * up
-
Patrick von Platen authored
* improve tests * up * finish * upload * add init * up * finish vae * finish * reduce loading time with device_map * remove device_map from CPU * uP
-
- 27 Oct, 2022 2 commits
-
-
Patrick von Platen authored
* [Accelerate model loading] Fix meta device and super low memory usage * better naming
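The user-facing knob for this loading path in current diffusers releases is `low_cpu_mem_usage` (shown below as an assumption about which flag this fix concerns); it relies on accelerate initializing modules on the meta device and then filling in the weights once.

```python
from diffusers import StableDiffusionPipeline

# Weights are loaded straight into the final modules instead of being materialized
# twice in RAM; requires accelerate to be installed.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", low_cpu_mem_usage=True
)
```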
-
Pedro Cuenca authored
* Add failing test for #940. * Do not use torch.float64 in mps. * style * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed. * Fix additional float64 error in mps. * Improve add_noise test * Slight edit – I think it's clearer this way.
-
- 26 Oct, 2022 2 commits
-
-
Pi Esposito authored
* add method to enable cuda with minimal gpu usage to stable diffusion * add test for minimal cuda memory usage * ensure all models but unet are on torch.float32 * move to cpu_offload along with minor internal changes to make it work * make it test against accelerate master branch * coming back, it's official: I don't know how to make it test against the master branch from accelerate * make it install accelerate from master on tests * go back to accelerate>=0.11 * undo prettier formatting on yml files * undo prettier formatting on yml files again
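In current diffusers releases this capability is exposed as `enable_sequential_cpu_offload()` (the method name introduced by this commit may have differed); each sub-model is moved to the GPU only while it runs, via accelerate's `cpu_offload` hooks. A sketch:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Keep the text encoder, unet and vae on CPU and stream each one to the GPU only
# while it is executing, trading throughput for a much smaller peak GPU footprint.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
```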
-
Pedro Cuenca authored
* Add failing test for #940. * Do not use torch.float64 in mps. * style * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed.
-
- 25 Oct, 2022 4 commits
-
-
Patrick von Platen authored
uP
-
Patrick von Platen authored
* add in fp16 * up
-
Patrick von Platen authored
* start * add more logic * Update src/diffusers/models/unet_2d_condition_flax.py * match weights * up * make model work * making class more general, fixing missed file rename * small fix * make new conversion work * up * finalize conversion * up * first batch of variable renamings * remove c and c_prev var names * add mid and out block structure * add pipeline * up * finish conversion * finish * upload * more fixes * Apply suggestions from code review * add attr * up * uP * up * finish tests * finish * uP * finish * fix test * up * naming consistency in tests * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* remove hardcoded 16 * Remove bogus * fix some stuff * finish * improve logging * docs * upload
Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
Kashif Rasul authored
* added broadcast_to_shape_from_left helper * initial tests * fixed pndm tests * shape required for pndm * added require_flax * fix style * fix more imports
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
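The helper left-aligns a coefficient array against a sample's shape by padding trailing singleton dimensions before broadcasting; a sketch in jax.numpy (the exact signature in the repo may differ):

```python
import jax.numpy as jnp


def broadcast_to_shape_from_left(x: jnp.ndarray, shape: tuple) -> jnp.ndarray:
    """Pad x with trailing singleton dims, then broadcast it to `shape`."""
    assert len(shape) >= x.ndim
    return jnp.broadcast_to(x.reshape(x.shape + (1,) * (len(shape) - x.ndim)), shape)


# Typical scheduler use: stretch per-timestep coefficients of shape (batch,) over a
# (batch, channels, height, width) sample so they multiply element-wise.
alphas = jnp.array([0.9, 0.5])
scaled = broadcast_to_shape_from_left(alphas, (2, 3, 8, 8))
```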
-
- 24 Oct, 2022 1 commit
-
-
Anton Lozhkov authored
* Reorganize pipeline tests * fix vq
-
- 22 Oct, 2022 1 commit
-
-
Kashif Rasul authored
fix mps failing tests
-
- 21 Oct, 2022 1 commit
-
-
Patrick von Platen authored
* [Tests] Move stable diffusion into their own files * up
-
- 20 Oct, 2022 4 commits
-
-
Anton Lozhkov authored
* Introduce the copy mechanism * init tests * fix dummy tests * with * update copies tests
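The mechanism works through marker comments that `make fix-copies` verifies and re-synchronizes; the function path below is illustrative of the convention rather than a specific line in the repo.

```python
# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
def prepare_extra_step_kwargs(self, generator, eta):
    # `make fix-copies` compares this body against the referenced source function and
    # rewrites it if the two have drifted, so shared code stays duplicated verbatim in
    # each pipeline file instead of being imported across pipelines.
    ...
```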
-
Suraj Patil authored
-
Patrick von Platen authored
[DiffusionPipeline.from_pretrained] add warning when passing unused kwargs
-
Patrick von Platen authored
* [Stable Diffusion] Add components function * uP
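The `components` property mainly exists so several task pipelines can share one set of loaded weights; e.g. (model id is an example):

```python
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline

text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# `components` is a dict of the loaded sub-models (vae, text_encoder, unet, scheduler, ...),
# so the img2img pipeline reuses them instead of loading a second copy from disk.
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
```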
-
- 19 Oct, 2022 4 commits
-
-
Anton Lozhkov authored
* ONNX supervised inpainting * sync with the torch pipeline * fix concat * update ref values * back to 8 steps * type fix * make fix-copies
-
Patrick von Platen authored
-
Suraj Patil authored
* begin pipe * add new pipeline * add tests * correct fast test * up * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py * Update tests/test_pipelines.py * up * up * make style * add fp16 test * doc, comments * up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
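A rough usage sketch of the new inpainting pipeline; the checkpoint id and file names are examples, and argument names follow current releases:

```python
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white marks the region to repaint

result = pipe(prompt="a red park bench", image=image, mask_image=mask).images[0]
result.save("inpainted.png")
```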
-
Anton Lozhkov authored
* [WIP] Onnx img2img determinism * more numpy + seed * numpy inpainting, tolerance * revert test workflow
-
- 18 Oct, 2022 1 commit
-
-
Žilvinas Ledas authored
* Stable Diffusion img2img using onnx. * Stable Diffusion inpaint using onnx. * Export vae_encoder, upgrade img2img, add test * updated inpainting pipeline + test * style
Co-authored-by: anton-l <anton@huggingface.co>
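For reference, the ONNX pipelines mirror their PyTorch counterparts but pick an ONNX Runtime execution provider at load time; the model id, revision, and argument names below follow current releases and are assumptions with respect to the code as it stood in this commit:

```python
from PIL import Image
from diffusers import OnnxStableDiffusionImg2ImgPipeline

pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",  # or "CUDAExecutionProvider" with onnxruntime-gpu
)

init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))
image = pipe(prompt="a fantasy landscape", image=init_image, strength=0.75).images[0]
```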
-