"src/vscode:/vscode.git/clone" did not exist on "249d9bc0e76e55eb16c08d0e70b2d6057259c4a0"
- 11 Oct, 2022 2 commits
-
-
Akash Pannu authored
* pass norm_num_groups param and add tests * set resnet_groups for FlaxUNetMidBlock2D * fix docstrings * fix typo * use the is_flax_available util and add a require_flax decorator
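A decorator like the one this commit describes usually wraps `unittest.skipUnless`; a minimal sketch, assuming `is_flax_available` is importable from `diffusers.utils`:

```python
import unittest

from diffusers.utils import is_flax_available


def require_flax(test_case):
    """Skip a test unless JAX & Flax are installed."""
    return unittest.skipUnless(is_flax_available(), "test requires JAX & Flax")(test_case)
```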
-
Suraj Patil authored
* support bf16 for stable diffusion * fix typo * address review comments
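For callers, bf16 support means the pipeline can be loaded directly in that precision; a hedged sketch (checkpoint id and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights in bfloat16 so no autocast wrapper is needed at inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
```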
-
- 10 Oct, 2022 2 commits
-
-
Nathan Lambert authored
fix typo docstring
-
Nathan Lambert authored
* clean up resnet.py * make style and quality * minor formatting
-
- 07 Oct, 2022 1 commit
-
-
Suraj Patil authored
* handle dtype in vae and image2image pipeline * fix inpaint in fp16 * dtype should be handled in add_noise * style * address review comments * add simple fast tests to check fp16 * fix test name * put mask in fp16
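The point of handling dtype inside `add_noise` is to cast the scheduler constants to the sample's dtype so an fp16 pipeline never silently upcasts; a simplified sketch of that pattern, not the exact scheduler code:

```python
import torch

def add_noise(original_samples, noise, timesteps, alphas_cumprod):
    # Match the scheduler constants to the sample's dtype/device up front.
    alphas_cumprod = alphas_cumprod.to(device=original_samples.device,
                                       dtype=original_samples.dtype)
    timesteps = timesteps.to(original_samples.device)
    sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
    sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
    # Broadcast the per-timestep scalars over (batch, channels, height, width).
    while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
        sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
        sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
    return sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
```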
-
- 06 Oct, 2022 2 commits
-
-
Anton Lozhkov authored
Temporarily remove Flax modules from the public API
- 05 Oct, 2022 1 commit
-
-
Nicolas Patry authored
* Removing `autocast` for a `25-35% speedup`. * Quality. * Adding a slow test. * Fixing mps noise generation. * Raising error on wrong device, instead of just casting on behalf of user. * Quality. * fix merge Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
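What this changes for callers, sketched under the assumption that the checkpoint is loaded in fp16: the explicit `torch.autocast` context becomes unnecessary because the pipeline now handles dtypes itself.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Before: with torch.autocast("cuda"): image = pipe(prompt).images[0]
# After: no autocast wrapper, which is where the speedup comes from.
image = pipe("a fantasy landscape").images[0]
```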
-
- 04 Oct, 2022 3 commits
-
-
NIKHIL A V authored
* renamed single letter variables * renamed x to meaningful variable in resnet.py Hello @patil-suraj can you verify it Thanks * Reformatted using black * renamed x to meaningful variable in resnet.py Hello @patil-suraj can you verify it Thanks * reformatted the files * fixed UnboundLocalError at line 374 * removed 'referenced before assignment' error * renamed single-letter variables x -> hidden_state, p -> pad_value
Co-authored-by: Nikhil A V <nikhilav@Nikhils-MacBook-Pro.local>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Pedro Cuenca authored
Remove comments that are no longer appropriate: the casting operations they referred to have been removed.
-
Kashif Rasul authored
fix docstring, fixes #709
-
- 30 Sep, 2022 3 commits
-
-
Nouamane Tazi authored
* revert using baddbmm in attention - to fix `test_stable_diffusion_memory_chunking` test * styling
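For context, the two formulations being swapped compute the same attention scores; `baddbmm` fuses the scaling into one kernel, while plain `matmul` plus a multiply is the version this commit reverts to. A sketch with illustrative shapes:

```python
import torch

query = torch.randn(2, 64, 40)  # (batch*heads, seq_len, head_dim), illustrative
key = torch.randn(2, 64, 40)
scale = query.shape[-1] ** -0.5

# matmul formulation (restored by this commit):
scores_matmul = torch.matmul(query, key.transpose(-1, -2)) * scale

# baddbmm formulation (fused scale; beta=0 means the empty input is ignored):
scores_baddbmm = torch.baddbmm(
    torch.empty(query.shape[0], query.shape[1], key.shape[1]),
    query, key.transpose(-1, -2), beta=0, alpha=scale,
)
torch.testing.assert_close(scores_matmul, scores_baddbmm)
```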
-
Josh Achiam authored
* Allow resolutions that are not multiples of 64 * ran black * fix bug * add test * more explanation * more comments Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
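The relaxed constraint follows from the UNet's downsampling factor: latents are 8x smaller than the image, so height and width only need to be divisible by 8, not 64. A sketch of such a check (the function name is illustrative):

```python
def check_resolution(height: int, width: int) -> None:
    # The VAE maps images to latents at 1/8 resolution, so divisibility
    # by 8 is the real constraint rather than 64.
    if height % 8 != 0 or width % 8 != 0:
        raise ValueError(
            f"`height` and `width` must be divisible by 8, got {height} and {width}."
        )
```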
-
Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non-blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler, because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channels-last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave them for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring". This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf". This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
* add shapes comments
* hardcore whats needed for jitting
* Revert "hardcore whats needed for jitting". This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
* Revert "remove restriction to run conv_norm in fp32". This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed. This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
* add more optimization techniques to docs
* Revert "add shapes comments". This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays, so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers' `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it". This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors; this shouldn't affect performance when timesteps are already on the correct device, but it does slow things down when they're on the wrong device
* fix pipeline when timesteps are arrays with strides
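Several of these bullets converge on one pattern: move the scheduler's timesteps to the model's device once, before the denoising loop, instead of paying a transfer on every step. A hedged, self-contained sketch of that pattern with a small unconditional UNet:

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

unet = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(10)

latents = torch.randn(1, 3, 32, 32)

# Move timesteps to the right device once, outside the loop.
timesteps = scheduler.timesteps
if torch.is_tensor(timesteps):
    timesteps = timesteps.to(latents.device)

for t in timesteps:
    noise_pred = unet(latents, t).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```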
-
- 29 Sep, 2022 1 commit
-
-
Partho authored
renamed x to hidden_states
-
- 27 Sep, 2022 1 commit
-
-
Yih-Dar authored
* Fix SpatialTransformer * Fix SpatialTransformer Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 23 Sep, 2022 1 commit
-
-
Younes Belkada authored
* documenting `attention_flax.py` file * documenting `embeddings_flax.py` * documenting `unet_blocks_flax.py` * Add new objs to doc page * document `vae_flax.py` * Apply suggestions from code review * modify `unet_2d_condition_flax.py` * make style * Apply suggestions from code review * make style * Apply suggestions from code review * fix indent * fix typo * fix indent unet * Update src/diffusers/models/vae_flax.py * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 22 Sep, 2022 1 commit
-
-
Suraj Patil authored
* add grad ckpt to downsample blocks * make it work * don't pass gradient_checkpointing to upsample block * add tests for UNet2DConditionModel * add test_gradient_checkpointing * add gradient_checkpointing for up and down blocks * add functions to enable and disable grad ckpt * remove the forward argument * better naming * make supports_gradient_checkpointing private
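The enable/disable helpers give users a memory-for-compute switch during training; a brief usage sketch (the default-config model is illustrative only):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel()  # illustrative default config

# Recompute activations in the backward pass to cut training memory...
unet.enable_gradient_checkpointing()
# ...or switch back to stored activations for speed.
unet.disable_gradient_checkpointing()
```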
-
- 21 Sep, 2022 1 commit
-
-
Younes Belkada authored
replace `dropout_prob` with `dropout` in `vae`
-
- 20 Sep, 2022 4 commits
-
-
Patrick von Platen authored
* [Flax] Fix unet and ddim scheduler * correct * finish
-
Mishig Davaadorj authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline * todo comment * Fix imports * Fix imports * add dummies * Fix empty init * make pipeline work * up * Use Flax schedulers (typing, docstring) * Wrap model imports inside availability checks. * more updates * make sure flax is not broken * make style * more fixes * up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
-
Suraj Patil authored
* rename weights to align with PT * DiagonalGaussianDistribution => FlaxDiagonalGaussianDistribution * fix name
-
Younes Belkada authored
* first commit: add `from_pt` argument in `from_pretrained` function; add `modeling_flax_pytorch_utils.py` file
* small nit: fix a small nit so we don't enter the second if condition
* major changes: modify FlaxUnet modules; first conversion script; more keys to be matched
* keys match: now all keys match; change module names for correct matching; upsample module name changed
* working v1: test passes with atol and rtol = `4e-02`
* replace unused arg
* make quality
* add small docstring
* add more comments: add TODO for embedding layers
* small change: use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array
* add more conditions on conversion: add better test to check for key conversion
* make shapes consistent: output `img_w x img_h x n_channels` from the VAE
* Revert "make shapes consistent". This reverts commit 4cad1aeb4aeb224402dad13c018a5d42e96267f6.
* fix unet shape: channels first!
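From the user's side, the new flag lets Flax models bootstrap from PyTorch checkpoints; a hedged sketch (repo id and subfolder are illustrative, and Flax `from_pretrained` is assumed to return a `(model, params)` pair):

```python
from diffusers import FlaxUNet2DConditionModel

# `from_pt=True` converts the PyTorch state dict to Flax parameters on load.
unet, params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet", from_pt=True
)
```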
-
- 19 Sep, 2022 6 commits
-
-
Yih-Dar authored
* Fix CrossAttention._sliced_attention Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Flax] Add Vae * correct * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> * Finish Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Yih-Dar authored
* Fix _upsample_2d Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
ydshieh authored
-
ydshieh authored
-
- 18 Sep, 2022 1 commit
-
-
Mishig Davaadorj authored
-
- 16 Sep, 2022 2 commits
-
-
Yuta Hayashibe authored
* Fix typos * Add a typo check action * Fix a bug * Changed to a manual typo check for now. Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010 Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Removed a confusing message * Renamed "nin_shortcut" to "in_shortcut" * Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
-
Yih-Dar authored
* Fix PT up/down sample_2d * empty commit * style * style Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 15 Sep, 2022 2 commits
-
-
Pedro Cuenca authored
* First UNet Flax modeling blocks. Mimic the structure of the PyTorch files. The model classes themselves need work, depending on what we do about configuration and initialization.
* Remove FlaxUNet2DConfig class.
* ignore_for_config non-config args.
* Implement `FlaxModelMixin`
* Use new mixins for Flax UNet. For some reason the configuration is not correctly applied; the signature of the `__init__` method does not contain all the parameters by the time it's inspected in `extract_init_dict`.
* Import `FlaxUNet2DConditionModel` if flax is available.
* Rm unused method `framework`
* Update src/diffusers/modeling_flax_utils.py Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Indicate types in flax.struct.dataclass as pointed out by @mishig25 Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
* Fix typo in transformer block.
* make style
* some more changes
* make style
* Add comment
* Update src/diffusers/modeling_flax_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Rm unneeded comment
* Update docstrings
* correct ignore kwargs
* make style
* Update docstring examples
* Make style
* Style: remove empty line.
* Apply style (after upgrading black from pinned version)
* Remove some commented code and unused imports.
* Add init_weights (not yet in use until #513).
* Trickle down deterministic to blocks.
* Rename q, k, v according to the latest PyTorch version. Note that weights were exported with the old names, so we need to be careful.
* Flax UNet docstrings, default props as in PyTorch.
* Fix minor typos in PyTorch docstrings.
* Use FlaxUNet2DConditionOutput as output from UNet.
* make style
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
* pass norm_num_groups to unet blocks and attention * fix UNet2DConditionModel * add norm_num_groups arg in vae * add tests * remove comment * Apply suggestions from code review
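A short usage sketch of the new VAE argument (the value is illustrative; it must evenly divide the block channel counts):

```python
from diffusers import AutoencoderKL

# Default is 32 groups; exposing it lets smaller configs use fewer.
vae = AutoencoderKL(norm_num_groups=16)
```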
-
- 14 Sep, 2022 2 commits
-
-
Suraj Patil authored
* add different method for sliced attention * Update src/diffusers/models/attention.py * Apply suggestions from code review * Update src/diffusers/models/attention.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
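For reference, attention slicing computes the scores in chunks so the full attention matrix for every head never materializes at once; a simplified sketch of the idea, not the exact `_sliced_attention` implementation:

```python
import torch

def sliced_attention(query, key, value, slice_size):
    # query/key/value: (batch*heads, seq_len, head_dim)
    scale = query.shape[-1] ** -0.5
    out = torch.empty(query.shape[0], query.shape[1], value.shape[2],
                      dtype=query.dtype, device=query.device)
    # Process the batch*heads dimension in slices to bound peak memory.
    for i in range(0, query.shape[0], slice_size):
        s = slice(i, i + slice_size)
        scores = torch.matmul(query[s], key[s].transpose(-1, -2)) * scale
        out[s] = torch.matmul(scores.softmax(dim=-1), value[s])
    return out
```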
-
Nicolas Patry authored
-
- 12 Sep, 2022 1 commit
-
-
Kashif Rasul authored
* update expected results of slow tests
* relax sum and mean tests
* print shapes when reporting exception
* formatting
* fix sentence
* relax test_stable_diffusion_fast_ddim for gpu fp16
* relax flaky tests on GPU
* added comment on large tolerances
* black
* format
* set scheduler seed
* added generator
* use np.isclose
* set num_inference_steps to 50
* fix dep. warning
* update expected_slice
* preprocess if image
* updated expected results
* updated expected from CI
* pass generator to VAE
* undo change back to orig
* use original
* revert back the expected on cpu
* revert back values for CPU
* more undo
* update result after using gen
* update mean
* set generator for mps
* update expected on CI server
* undo
* use new seed every time
* cpu manual seed
* reduce num_inference_steps
* style
* use generator for randn
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
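The generator-related bullets boil down to one reproducibility pattern: give `randn` an explicitly seeded `torch.Generator` so slow tests can pin expected slices. A small sketch:

```python
import numpy as np
import torch

def sample_noise(seed: int) -> torch.Tensor:
    # A dedicated generator isolates the test from global RNG state.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(1, 4, 8, 8, generator=generator)

# Same seed, same tensor: expected slices stay stable across runs.
assert np.isclose(sample_noise(0).numpy(), sample_noise(0).numpy()).all()
```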
-
- 09 Sep, 2022 2 commits
-
-
Partho authored
* renamed variable names: q -> query, k -> key, v -> value, b -> batch, c -> channel, h -> height, w -> width * renamed variable names missed in the initial commit * renamed more variable names as per code review suggestions: x -> hidden_states and x_in -> residual * fixed minor typo
-
Suraj Patil authored
* use torch.matmul instead of einsum * fix softmax
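The equivalence being exploited, sketched with illustrative shapes: the einsum contraction is exactly a batched matmul against the transposed key, and `matmul` tends to dispatch to faster fused kernels.

```python
import torch

q = torch.randn(2, 77, 64)  # (batch*heads, seq_len, head_dim), illustrative
k = torch.randn(2, 77, 64)

scores_einsum = torch.einsum("b i d, b j d -> b i j", q, k)
scores_matmul = torch.matmul(q, k.transpose(-1, -2))
torch.testing.assert_close(scores_einsum, scores_matmul)
```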
-
- 08 Sep, 2022 1 commit
-
-
Patrick von Platen authored
* Update black * update table
-