1. 23 May, 2023 6 commits
  2. 22 May, 2023 12 commits
    • do not scale the initial global step by gradient accumulation steps when loading from checkpoint (#3506) · 67cd4601
      Will Berman authored
    • Support for cross-attention bias / mask (#2634) · 64bf5d33
      Birch-san authored
      
      * Cross-attention masks
      
      prefer qualified symbol, fix accidental Optional
      
      prefer qualified symbol in AttentionProcessor
      
      prefer qualified symbol in embeddings.py
      
      qualified symbol in transformer_2d
      
      qualify FloatTensor in unet_2d_blocks
      
      move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
      
      move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
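      
      A minimal sketch of why this ordering matters, using a hypothetical Block module: torch.utils.checkpoint forwards arguments positionally, so appending new optional params (rather than inserting them mid-signature) keeps existing checkpointed call sites valid.
      
      ```python
      import torch
      from torch.utils.checkpoint import checkpoint
      
      class Block(torch.nn.Module):  # hypothetical stand-in for a transformer block
          def __init__(self):
              super().__init__()
              self.proj = torch.nn.Linear(8, 8)
      
          # new optional params are appended at the end of the signature
          def forward(self, hidden_states, attention_mask=None, encoder_attention_mask=None):
              out = self.proj(hidden_states)
              if attention_mask is not None:
                  out = out + attention_mask
              return out
      
      def create_custom_forward(module, return_dict=None):
          # return_dict is injected here, not passed positionally
          def custom_forward(*inputs):
              if return_dict is not None:
                  return module(*inputs, return_dict=return_dict)
              return module(*inputs)
          return custom_forward
      
      hidden_states = torch.randn(2, 8, requires_grad=True)
      mask = torch.zeros(2, 8)
      # checkpoint() passes tensors positionally; a stable ordering keeps this call valid
      out = checkpoint(create_custom_forward(Block()), hidden_states, mask, None)
      ```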
      
      regenerate modeling_text_unet.py
      
      remove unused import
      
      unet_2d_condition encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      transformer_2d encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      unet_2d_blocks.py: add parameter name comments
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      revert description. bool-to-bias treatment happens in unet_2d_condition only.
      
      comment parameter names
      
      fix copies, style
      
      * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
      
      * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
      
      * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
      
      * fix mistake made during merge conflict resolution
      
      * regenerate versatile_diffusion
      
      * pass time embedding into checkpointed attention invocation
      
      * always assume encoder_attention_mask is a mask (i.e. not a bias).
      
      * style, fix-copies
      
      * add tests for cross-attention masks
      
      * add test for padding of attention mask
      
      * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
      
      * support both masks and biases in Transformer2DModel#forward. document behaviour
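      
      A rough sketch of that mask-vs-bias handling (not the library's exact code): boolean masks are converted to an additive bias, and a singleton dim broadcasts over query tokens.
      
      ```python
      import torch
      
      def to_attention_bias(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
          if mask.dtype == torch.bool:
              # True = attend, False = suppress with a large negative bias
              bias = torch.zeros(mask.shape, dtype=dtype)
              bias.masked_fill_(~mask, torch.finfo(dtype).min)
              mask = bias
          # (batch, key_tokens) -> (batch, 1, key_tokens): broadcast over query tokens
          return mask.unsqueeze(1)
      
      bias = to_attention_bias(torch.tensor([[True, True, False]]), torch.float32)
      print(bias.shape)  # torch.Size([1, 1, 3])
      ```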
      
      * fix-copies
      
      * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
      
      * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
      
      * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
      
      * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
      
      * fix-copies
      
      * style
      
      * fix-copies
      
      * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
      
      * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
      
      * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
      
      * fix copies
    • [Community] reference only control (#3435) · c4359d63
      takuoko authored
      * add reference only control
      
      * add reference only control
      
      * add reference only control
      
      * fix lint
      
      * fix lint
      
      * reference adain
      
      * bugfix EulerAncestralDiscreteScheduler
      
      * fix style fidelity rule
      
      * fix default output size
      
      * del unused line
      
      * fix deterministic
    • feat: allow disk offload for diffuser models (#3285) · f3d570c2
      Hari Krishna authored
      
      * allow disk offload for diffuser models
      
      * sort import
      
      * add max_memory argument
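      
      A hypothetical usage sketch of the feature (argument names follow the usual accelerate conventions; the exact signature in this PR may differ):
      
      ```python
      from diffusers import UNet2DConditionModel
      
      # weights that fit neither the GPU nor the CPU budget spill to the offload folder on disk
      unet = UNet2DConditionModel.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          subfolder="unet",
          device_map="auto",
          max_memory={0: "6GiB", "cpu": "12GiB"},
          offload_folder="./offload",
      )
      ```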
      
      * Changed sample[0] to images[0] (#3304)
      
      A pipeline object stores the results in `images` not in `sample`.
      Current code blocks don't work.
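      
      That is, pipeline outputs are read like this:
      
      ```python
      from diffusers import StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      image = pipe("an astronaut riding a horse").images[0]  # `.images`, not `.sample`
      ```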
      
      * Typo in tutorial (#3295)
      
      * Torch compile graph fix (#3286)
      
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
      
      * Postprocessing refactor img2img (#3268)
      
      * refactor img2img VaeImageProcessor.postprocess
      
      * remove copy from for init, run_safety_checker, decode_latents
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      ---------
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Torch 2.0 compile] Fix more torch compile breaks (#3313)
      
      * Fix more torch compile breaks
      
      * add tests
      
      * Fix all
      
      * fix controlnet
      
      * fix more
      
      * Add Horace He as co-author.
      Co-authored-by: Horace He <horacehe2007@yahoo.com>
      
      * Add Horace He as co-author.
      Co-authored-by: Horace He <horacehe2007@yahoo.com>
      
      ---------
      Co-authored-by: Horace He <horacehe2007@yahoo.com>
      
      * fix: scale_lr and sync example readme and docs. (#3299)
      
      * fix: scale_lr and sync example readme and docs.
      
      * fix doc link.
      
      * Update stable_diffusion.mdx (#3310)
      
      fixed import statement
      
      * Fix missing variable assign in DeepFloyd-IF-II (#3315)
      
      Fix missing variable assign
      
      lol
      
      * Correct doc build for patch releases (#3316)
      
      Update build_documentation.yml
      
      * Add Stable Diffusion RePaint to community pipelines (#3320)
      
      * Add Stable Diffusion RePaint to community pipelines
      
      - Adds Stable Diffusion RePaint to community pipelines
      - Add README entry for pipeline
      
      * Fix: Remove wrong import
      
      - Remove wrong import
      - Minor change in comments
      
      * Fix: Code formatting of stable_diffusion_repaint
      
      * Fix: ruff errors in stable_diffusion_repaint
      
      * Fix multistep dpmsolver for cosine schedule (suitable for deepfloyd-if) (#3314)
      
      * fix multistep dpmsolver for cosine schedule (deepfloyd-if)
      
      * fix a typo
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
      
      * add test, fix style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * [docs] Improve LoRA docs (#3311)
      
      * update docs
      
      * add to toctree
      
      * apply feedback
      
      * Added input perturbation (#3292)
      
      * Added input perturbation
      
      * Fixed spelling
      
      * Update write_own_pipeline.mdx (#3323)
      
      * update controlling generation doc with latest goodies. (#3321)
      
      * [Quality] Make style (#3341)
      
      * Fix config dpm (#3343)
      
      * Add the SDE variant of DPM-Solver and DPM-Solver++ (#3344)
      
      * add SDE variant of DPM-Solver and DPM-Solver++
      
      * add test
      
      * fix typo
      
      * fix typo
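      
      Assumed usage sketch: the SDE variant is selected through the scheduler's algorithm_type option.
      
      ```python
      from diffusers import DPMSolverMultistepScheduler
      
      scheduler = DPMSolverMultistepScheduler(algorithm_type="sde-dpmsolver++")
      ```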
      
      * Add upsample_size to AttnUpBlock2D, AttnDownBlock2D (#3275)
      
      The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
      
      * Rename --only_save_embeds to --save_as_full_pipeline (#3206)
      
      * Set --only_save_embeds to False by default
      
      Due to how the option is named, it makes more sense to behave like this.
      
      * Refactor only_save_embeds to save_as_full_pipeline
      
      * [AudioLDM] Generalise conversion script (#3328)
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix TypeError when using prompt_embeds and negative_prompt (#2982)
      
      * test: Added test case
      
      * fix: fixed type checking issue on _encode_prompt
      
      * fix: fixed copies consistency
      
      * fix: one copy was not sufficient
      
      * Fix pipeline class on README (#3345)
      
      Update README.md
      
      * Inpainting: typo in docs (#3331)
      
      Typo in docs
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add `use_Karras_sigmas` to LMSDiscreteScheduler (#3351)
      
      * add karras sigma to lms discrete scheduler
      
      * add test for lms_scheduler karras
      
      * reformat test lms
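      
      Assumed usage (note the flag is spelled use_karras_sigmas in the commits above, despite the PR title's casing):
      
      ```python
      from diffusers import LMSDiscreteScheduler
      
      scheduler = LMSDiscreteScheduler.from_pretrained(
          "runwayml/stable-diffusion-v1-5", subfolder="scheduler", use_karras_sigmas=True
      )
      ```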
      
      * Batched load of textual inversions (#3277)
      
      * Batched load of textual inversions
      
      - Only call resize_token_embeddings once per batch as it is the most expensive operation
      - Allow pretrained_model_name_or_path and token to be an optional list
      - Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
      - Add comment that single files (e.g. .pt/.safetensors) are supported
      - Add comment for token parameter
      - Convert token override log message from warning to info
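      
      A hypothetical usage sketch of the batched API described above (exact argument handling may differ):
      
      ```python
      from diffusers import StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      # one resize_token_embeddings call for the whole batch instead of one per file
      pipe.load_textual_inversion(
          ["sd-concepts-library/cat-toy", "./my_embedding.safetensors"],
          token=["<cat-toy>", "<my-token>"],
      )
      ```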
      
      * Update src/diffusers/loaders.py
      
      Check for duplicate tokens
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update condition for None tokens
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make fix-copies
      
      * [docs] Fix docstring (#3334)
      
      fix docstring
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * if dreambooth lora (#3360)
      
      * update IF stage I pipelines
      
      add fixed variance schedulers and lora loading
      
      * added kv lora attn processor
      
      * allow loading into alternative lora attn processor
      
      * make vae optional
      
      * throw away predicted variance
      
      * allow loading into added kv lora layer
      
      * allow load T5
      
      * allow pre compute text embeddings
      
      * set new variance type in schedulers
      
      * fix copies
      
      * refactor all prompt embedding code
      
      class prompts are now included in pre-encoding code
      max tokenizer length is now configurable
      embedding attention mask is now configurable
      
      * fix for when variance type is not defined on scheduler
      
      * do not pre compute validation prompt if not present
      
      * add example test for if lora dreambooth
      
      * add check for train text encoder and pre compute text embeddings
      
      * Postprocessing refactor all others (#3337)
      
      * add text2img
      
      * fix-copies
      
      * add
      
      * add all other pipelines
      
      * add
      
      * add
      
      * add
      
      * add
      
      * add
      
      * make style
      
      * style + fix copies
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      
      * [docs] Improve safetensors docstring (#3368)
      
      * clarify safetensor docstring
      
      * fix typo
      
      * apply feedback
      
      * add: a warning message when using xformers in a PT 2.0 env. (#3365)
      
      * add: a warning message when using xformers in a PT 2.0 env.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * StableDiffusionInpaintingPipeline - resize image w.r.t height and width (#3322)
      
      * StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed height and width. Default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
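      
      A rough sketch of the idea (plain PIL, not the pipeline's exact code):
      
      ```python
      from PIL import Image
      
      def resize_inputs(image: Image.Image, mask: Image.Image, height: int = 512, width: int = 512):
          # resize both image and mask to the requested dims (multiples of 8)
          # so downstream latent and mask tensors line up
          image = image.resize((width, height), resample=Image.LANCZOS)
          mask = mask.resize((width, height), resample=Image.NEAREST)
          return image, mask
      ```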
      
      * Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
      
      Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added a height/width variable per unit test, as that seemed more appropriate than the hard-coded solution
      
      * Added a resolution test to StableDiffusionInpaintPipelineSlowTests
      
      this unit test simply takes the input and resizes it into something that would fail (e.g. would throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies it produces output with the correct dims w.r.t. the passed height and width
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make style
      
      * [docs] Adapt a model (#3326)
      
      * first draft
      
      * apply feedback
      
      * conv_in.weight thrown away
      
      * [docs] Load safetensors (#3333)
      
      * safetensors
      
      * apply feedback
      
      * apply feedback
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make style
      
      * [Docs] Fix stable_diffusion.mdx typo (#3398)
      
      Fix typo in last code block. Correct "prommpts" to "prompt"
      
      * Support ControlNet v1.1 shuffle properly (#3340)
      
      * add inferring_controlnet_cond_batch
      
      * Revert "add inferring_controlnet_cond_batch"
      
      This reverts commit abe8d6311d4b7f5b9409ca709c7fabf80d06c1a9.
      
      * set guess_mode to True
      whenever global_pool_conditions is True
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * nit
      
      * add integration test
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * [Tests] better determinism (#3374)
      
      * enable deterministic pytorch and cuda operations.
      
      * disable manual seeding.
      
      * make style && make quality for unet_2d tests.
      
      * enable determinism for the unet2dconditional model.
      
      * add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
      
      * relax tolerance (very weird issue, though).
      
      * revert to torch manual_seed() where needed.
      
      * relax more tolerance.
      
      * better placement of the cuda variable and relax more tolerance.
      
      * enable determinism for 3d condition model.
      
      * relax tolerance.
      
      * add: determinism to alt_diffusion.
      
      * relax tolerance for alt diffusion.
      
      * dance diffusion.
      
      * dance diffusion is flaky.
      
      * test_dict_tuple_outputs_equivalent edit.
      
      * fix two more tests.
      
      * fix more ddim tests.
      
      * fix: argument.
      
      * change to diff in place of difference.
      
      * fix: test_save_load call.
      
      * test_save_load_float16 call.
      
      * fix: expected_max_diff
      
      * fix: paint by example.
      
      * relax tolerance.
      
      * add determinism to 1d unet model.
      
      * torch 2.0 regressions seem to be brutal
      
      * determinism to vae.
      
      * add reason to skipping.
      
      * up tolerance.
      
      * determinism to vq.
      
      * determinism to cuda.
      
      * determinism to the generic test pipeline file.
      
      * refactor general pipelines testing a bit.
      
      * determinism to alt diffusion i2i
      
      * up tolerance for alt diff i2i and audio diff
      
      * up tolerance.
      
      * determinism to audioldm
      
      * increase tolerance for audioldm lms.
      
      * increase tolerance for paint by example.
      
      * increase tolerance for repaint.
      
      * determinism to cycle diffusion and sd 1.
      
      * relax tol for cycle diffusion 🚲
      
      * relax tol for sd 1.0
      
      * relax tol for controlnet.
      
      * determinism to img var.
      
      * relax tol for img variation.
      
      * tolerance to i2i sd
      
      * make style
      
      * determinism to inpaint.
      
      * relax tolerance for inpaiting.
      
      * determinism for inpainting legacy
      
      * relax tolerance.
      
      * determinism to instruct pix2pix
      
      * determinism to model editing.
      
      * model editing tolerance.
      
      * panorama determinism
      
      * determinism to pix2pix zero.
      
      * determinism to sag.
      
      * sd 2. determinism
      
      * sd. tolerance
      
      * disallow tf32 matmul.
      
      * relax tolerance is all you need.
      
      * make style and determinism to sd 2 depth
      
      * relax tolerance for depth.
      
      * tolerance to diffedit.
      
      * tolerance to sd 2 inpaint.
      
      * up tolerance.
      
      * determinism in upscaling.
      
      * tolerance in upscaler.
      
      * more tolerance relaxation.
      
      * determinism to v pred.
      
      * up tol for v_pred
      
      * unclip determinism
      
      * determinism to unclip img2img
      
      * determinism to text to video.
      
      * determinism to last set of tests
      
      * up tol.
      
      * vq cumsum doesn't have a deterministic kernel
      
      * relax tol
      
      * relax tol
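      
      The recurring setup behind these commits, roughly (standard PyTorch determinism knobs; not the exact test code):
      
      ```python
      import os
      import torch
      
      os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required for deterministic cuBLAS
      torch.use_deterministic_algorithms(True)
      torch.backends.cudnn.deterministic = True
      torch.backends.cuda.matmul.allow_tf32 = False  # "disallow tf32 matmul"
      ```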
      
      * [docs] Add transformers to install (#3388)
      
      add transformers to install
      
      * [deepspeed] partial ZeRO-3 support (#3076)
      
      * [deepspeed] partial ZeRO-3 support
      
      * cleanup
      
      * improve deepspeed fixes
      
      * Improve
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add omegaconf for tests (#3400)
      
      Add omegaconf
      
      * Fix various bugs with LoRA Dreambooth and Dreambooth script (#3353)
      
      * Improve checkpointing lora
      
      * fix more
      
      * Improve doc string
      
      * Update src/diffusers/loaders.py
      
      * make style
      
      * Apply suggestions from code review
      
      * Update src/diffusers/loaders.py
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * better
      
      * Fix all
      
      * Fix multi-GPU dreambooth
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Fix all
      
      * make style
      
      * make style
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Fix docker file (#3402)
      
      * up
      
      * up
      
      * fix: deepspeed_plugin retrieval from accelerate state (#3410)
      
      * [Docs] Add `sigmoid` beta_scheduler to docstrings of relevant Schedulers (#3399)
      
      * Add `sigmoid` beta scheduler to `DDPMScheduler` docstring
      
      * Add `sigmoid` beta scheduler to `RePaintScheduler` docstring
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Don't install accelerate and transformers from source (#3415)
      
      * Don't install transformers and accelerate from source (#3414)
      
      * Improve fast tests (#3416)
      
      Update pr_tests.yml
      
      * attention refactor: the trilogy  (#3387)
      
      * Replace `AttentionBlock` with `Attention`
      
      * use _from_deprecated_attn_block check re: @patrickvonplaten
      
      * [Docs] update the PT 2.0 optimization doc with latest findings (#3370)
      
      * add: benchmarking stats for A100 and V100.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * address patrick's comments.
      
      * add: rtx 4090 stats
      
      * benchmark reports done
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * 3313 pr link.
      
      * add: plots.
      Co-authored-by: Pedro <pedro@huggingface.co>
      
      * fix formatting
      
      * update number percent.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Fix style rendering (#3433)
      
      * Fix style rendering.
      
      * Fix typo
      
      * unCLIP scheduler do not use note (#3417)
      
      * Replace deprecated command with environment file (#3409)
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix warning message pipeline loading (#3446)
      
      * add stable diffusion tensorrt img2img pipeline (#3419)
      
      * add stable diffusion tensorrt img2img pipeline
      Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
      
      * update docstrings
      Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
      
      ---------
      Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
      
      * Refactor controlnet and add img2img and inpaint (#3386)
      
      * refactor controlnet and add img2img and inpaint
      
      * First draft to get pipelines to work
      
      * make style
      
      * Fix more
      
      * Fix more
      
      * More tests
      
      * Fix more
      
      * Make inpainting work
      
      * make style and more tests
      
      * Apply suggestions from code review
      
      * up
      
      * make style
      
      * Fix imports
      
      * Fix more
      
      * Fix more
      
      * Improve examples
      
      * add test
      
      * Make sure import is correctly deprecated
      
      * Make sure everything works in compile mode
      
      * make sure authorship is correctly attributed
      
      * [Scheduler] DPM-Solver (++) Inverse Scheduler (#3335)
      
      * Add DPM-Solver Multistep Inverse Scheduler
      
      * Add draft tests for DiffEdit
      
      * Add inverse sde-dpmsolver steps to tune image diversity from inverted latents
      
      * Fix tests
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * [Docs] Fix incomplete docstring for resnet.py (#3438)
      
      Fix incomplete docstrings for resnet.py
      
      * fix tiled vae blend extent range (#3384)
      
      fix tiled vae blend extent range
      
      * Small update to "Next steps" section (#3443)
      
      Small update to "Next steps" section:
      
      - PyTorch 2 is recommended.
      - Updated improvement figures.
      
      * Allow arbitrary aspect ratio in IFSuperResolutionPipeline (#3298)
      
      * Update pipeline_if_superresolution.py
      
      Allow arbitrary aspect ratio in IFSuperResolutionPipeline by using the input image shape
      
      * IFSuperResolutionPipeline: allow the user to override the height and width through the arguments
      
      * update IFSuperResolutionPipeline width/height doc string to match StableDiffusionInpaintPipeline conventions
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Adding 'strength' parameter to StableDiffusionInpaintingPipeline  (#3424)
      
      * Added explanation of 'strength' parameter
      
      * Added get_timesteps function which relies on new strength parameter
      
      * Added `strength` parameter which defaults to 1.
      
      * Swapped ordering so `noise_timestep` can be calculated before masking the image
      
      this is required when you aren't applying 100% noise to the masked region, e.g. strength < 1.
      
      * Added strength to check_inputs, throws error if out of range
      
      * Changed `prepare_latents` to initialise latents w.r.t strength
      
      inspired by the stable diffusion img2img pipeline, init latents are initialised by converting the init image into a VAE latent and adding noise (based upon the strength parameter passed in), e.g. random when strength = 1, or the init image at strength = 0.
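      
      A condensed sketch of that initialisation, assuming a DDPM-style scheduler:
      
      ```python
      import torch
      
      def prepare_latents(scheduler, image_latents, noise, num_inference_steps, strength):
          scheduler.set_timesteps(num_inference_steps)
          init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
          t_start = max(num_inference_steps - init_timestep, 0)
          timesteps = scheduler.timesteps[t_start:]
          if strength >= 1.0:
              return noise, timesteps  # pure random noise
          # partially noise the encoded init image up to the start timestep
          return scheduler.add_noise(image_latents, noise, timesteps[:1]), timesteps
      ```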
      
      * WIP: Added a unit test for the new strength parameter in the StableDiffusionInpaintingPipeline
      
      still need to add correct regression values
      
      * Created an is_strength_max flag to initialise from pure random noise
      
      * Updated unit tests w.r.t new strength parameter + fixed new strength unit test
      
      * renamed parameter to avoid confusion with variable of same name
      
      * Updated regression values for new strength test - now passes
      
      * removed 'copied from' comment as this method is now different and divergent from the copy
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Ensure backwards compatibility for prepare_mask_and_masked_image
      
      created a return_image boolean and initialised to false
      
      * Ensure backwards compatibility for prepare_latents
      
      * Fixed copy check typo
      
      * Fixes w.r.t. backward compatibility changes
      
      * make style
      
      * keep function argument ordering same for backwards compatibility in callees with copied from statements
      
      * make fix-copies
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
      
      * [WIP] Bugfix - Pipeline.from_pretrained is broken when the pipeline is partially downloaded (#3448)
      
      Added bugfix using f strings.
      
      * Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) (#3404)
      
      * gradient checkpointing bug fix
      
      * bug fix; changes for reviews
      
      * reformat
      
      * reformat
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Make dreambooth lora more robust to orig unet (#3462)
      
      * Make dreambooth lora more robust to orig unet
      
      * up
      
      * Reduce peak VRAM by releasing large attention tensors (as soon as they're unnecessary) (#3463)
      
      Release large tensors in attention (as soon as they're no longer required). Reduces peak VRAM by nearly 2 GB for 1024x1024 (even after slicing), and the savings scale up with image size.
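      
      The pattern, roughly (a self-contained sketch, not the library's exact attention code):
      
      ```python
      import torch
      
      def attention(query, key, value):
          # peak memory now holds one (batch, q_len, k_len) tensor instead of two
          scores = torch.bmm(query, key.transpose(1, 2)) * query.shape[-1] ** -0.5
          probs = scores.softmax(dim=-1)
          del scores  # released as soon as it has been consumed
          out = torch.bmm(probs, value)
          del probs
          return out
      
      q, k, v = (torch.randn(2, 4096, 64) for _ in range(3))
      out = attention(q, k, v)
      ```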
      
      * Add min snr to text2img lora training script (#3459)
      
      add min snr to text2img lora training script
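      
      Roughly what min-SNR weighting does (a sketch assuming an epsilon-prediction objective and a scheduler exposing alphas_cumprod; gamma = 5.0 is the common choice):
      
      ```python
      import torch
      
      def min_snr_weights(alphas_cumprod: torch.Tensor, timesteps: torch.Tensor, gamma: float = 5.0):
          alpha_prod = alphas_cumprod[timesteps]
          snr = alpha_prod / (1.0 - alpha_prod)
          # clamp the per-timestep weight so easy (high-SNR) timesteps don't dominate the loss
          return torch.minimum(snr, torch.full_like(snr, gamma)) / snr
      
      # usage: loss = (min_snr_weights(scheduler.alphas_cumprod, t) * per_example_mse).mean()
      ```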
      
      * Add inpaint lora scale support (#3460)
      
      * add inpaint lora scale support
      
      * add inpaint lora scale test
      
      ---------
      Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
      
      * [From ckpt] Fix from_ckpt (#3466)
      
      * Correct from_ckpt
      
      * make style
      
      * Update full dreambooth script to work with IF (#3425)
      
      * Add IF dreambooth docs (#3470)
      
      * parameterize pass single args through tuple (#3477)
      
      * attend and excite tests disable determinism on the class level (#3478)
      
      * dreambooth docs torch.compile note (#3471)
      
      * dreambooth docs torch.compile note
      
      * Update examples/dreambooth/README.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/README.md
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * add: if entry in the dreambooth training docs. (#3472)
      
      * [docs] Textual inversion inference (#3473)
      
      * add textual inversion inference to docs
      
      * add to toctree
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] Distributed inference (#3376)
      
      * distributed inference
      
      * move to inference section
      
      * apply feedback
      
      * update with split_between_processes
      
      * apply feedback
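      
      The core of the documented pattern, as a sketch (split_between_processes comes from accelerate):
      
      ```python
      from accelerate import PartialState
      from diffusers import DiffusionPipeline
      
      state = PartialState()
      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      pipe.to(state.device)
      
      # each process gets its own slice of the prompt list
      with state.split_between_processes(["a dog", "a cat"]) as prompt:
          images = pipe(prompt).images
      ```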
      
      * [{Up,Down}sample1d] explicit view kernel size as number of elements in flattened indices (#3479)
      
      explicit view kernel size as number of elements in flattened indices
      
      * mps & onnx tests rework (#3449)
      
      * Remove ONNX tests from PR.
      
      They are already a part of push_tests.yml.
      
      * Remove mps tests from PRs.
      
      They are already performed on push.
      
      * Fix workflow name for fast push tests.
      
      * Extract mps tests to a workflow.
      
      For better control/filtering.
      
      * Remove --extra-index-url from mps tests
      
      * Increase tolerance of mps test
      
      This test passes on my Mac (Ventura 13.3) but fails on the CI hardware
      (Ventura 13.2). I ran the local tests following the same steps that
      exist in the CI workflow.
      
      * Temporarily run mps tests on pr
      
      So we can test.
      
      * Revert "Temporarily run mps tests on pr"
      
      Tests passed, go back to running on push.
      
      ---------
      Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
      Co-authored-by: Ilia Larchenko <41329713+IliaLarchenko@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Horace He <horacehe2007@yahoo.com>
      Co-authored-by: Umar <55330742+mu94-csl@users.noreply.github.com>
      Co-authored-by: Mylo <36931363+gitmylo@users.noreply.github.com>
      Co-authored-by: Markus Pobitzer <markuspobitzer@gmail.com>
      Co-authored-by: Cheng Lu <lucheng.lc15@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Isamu Isozaki <isamu.website@gmail.com>
      Co-authored-by: Cesar Aybar <csaybar@gmail.com>
      Co-authored-by: Will Rice <will@spokestack.io>
      Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: At-sushi <dkahw210@kyoto.zaq.ne.jp>
      Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Isotr0py <41363108+Isotr0py@users.noreply.github.com>
      Co-authored-by: pdoane <pdoane2@gmail.com>
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      Co-authored-by: Rupert Menneer <71332436+rupertmenneer@users.noreply.github.com>
      Co-authored-by: sudowind <wfpkueecs@163.com>
      Co-authored-by: Takuma Mori <takuma104@gmail.com>
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Laureηt <laurentfainsin@protonmail.com>
      Co-authored-by: Jongwoo Han <jongwooo.han@gmail.com>
      Co-authored-by: asfiyab-nvidia <117682710+asfiyab-nvidia@users.noreply.github.com>
      Co-authored-by: clarencechen <clarencechenct@gmail.com>
      Co-authored-by: Laureηt <laurent@fainsin.bzh>
      Co-authored-by: superlabs-dev <133080491+superlabs-dev@users.noreply.github.com>
      Co-authored-by: Dev Aggarwal <devxpy@gmail.com>
      Co-authored-by: Vimarsh Chaturvedi <vimarsh.c@gmail.com>
      Co-authored-by: 7eu7d7 <31194890+7eu7d7@users.noreply.github.com>
      Co-authored-by: cmdr2 <shashank.shekhar.global@gmail.com>
      Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
      Co-authored-by: Glaceon-Hyy <ffheyy0017@gmail.com>
      Co-authored-by: yueyang.hyy <yueyang.hyy@alibaba-inc.com>
    • make style · 2b56e8ca
      Patrick von Platen authored
    • DataLoader respecting EXIF data in Training Images (#3465) · b8b5daae
      Ambrosiussen authored
      * DataLoader will now bake in any transforms or image manipulations contained in the EXIF data
      
      Images may have rotations stored in EXIF metadata. Training on such images previously ignored those transforms and thus produced unexpected results
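      
      The standard fix, sketched with PIL (ImageOps.exif_transpose applies the stored orientation):
      
      ```python
      from PIL import Image, ImageOps
      
      image = Image.open("train/0001.jpg")
      # bake the EXIF orientation into the pixel data before any other transforms
      image = ImageOps.exif_transpose(image)
      ```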
      
      * Fixed the Dataloading EXIF issue in main DreamBooth training as well
      
      * Run make style (black & isort)
    • [Docs] Korean translation (optimization, training) (#3488) · 229fd8cb
      Seongsu Park authored
      
      * feat) optimization kr translation
      
      * fix) typo, italic setting
      
      * feat) dreambooth, text2image kr
      
      * feat) lora kr
      
      * fix) LoRA
      
      * fix) fp16 fix
      
      * fix) doc-builder style
      
      * fix) revised some fp16 wording
      
      * fix) fp16 style fix
      
      * fix) opt, training docs update
      
      * feat) toctree update
      
      * feat) toctree update
      
      ---------
      Co-authored-by: Chanran Kim <seriousran@gmail.com>
    • make style · a2874af2
      Patrick von Platen authored
    • w4ffl35 authored
    • Add `use_Karras_sigmas` to DPMSolverSinglestepScheduler (#3476) · 194b0a42
      Isotr0py authored
      * add use_karras_sigmas
      
      * add karras test
      
      * add doc
    • Fix DPM single (#3413) · 6dd3871a
      Patrick von Platen authored
      
      * Fix DPM single
      
      * add test
      
      * fix one more bug
      
      * Apply suggestions from code review
      Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
      
      ---------
      Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
    • Refactor full determinism (#3485) · 51843fd7
      Patrick von Platen authored
      * up
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix more
      
      * fix more
      
      * Check it
      
      * Remove 16:8
      
      * fix more
      
      * fix more
      
      * fix more
      
      * up
      
      * up
      
      * Test only stable diffusion
      
      * Test only two files
      
      * up
      
      * Try out spinning up processes that can be killed
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * up
  3. 21 May, 2023 2 commits
  4. 20 May, 2023 1 commit
    • mps & onnx tests rework (#3449) · f7b4f51c
      Pedro Cuenca authored
      * Remove ONNX tests from PR.
      
      They are already a part of push_tests.yml.
      
      * Remove mps tests from PRs.
      
      They are already performed on push.
      
      * Fix workflow name for fast push tests.
      
      * Extract mps tests to a workflow.
      
      For better control/filtering.
      
      * Remove --extra-index-url from mps tests
      
      * Increase tolerance of mps test
      
      This test passes on my Mac (Ventura 13.3) but fails on the CI hardware
      (Ventura 13.2). I ran the local tests following the same steps that
      exist in the CI workflow.
      
      * Temporarily run mps tests on pr
      
      So we can test.
      
      * Revert "Temporarily run mps tests on pr"
      
      Tests passed, go back to running on push.
  5. 19 May, 2023 5 commits
  6. 18 May, 2023 2 commits
  7. 17 May, 2023 11 commits
  8. 16 May, 2023 1 commit