1. 02 Oct, 2023 6 commits
    • [WIP] Refactor UniDiffuser Pipeline and Tests (#4948) · cd1b8d7c
      dg845 authored
      
      
      * Add VAE slicing and tiling methods.
      
      * Switch to using VaeImageProcessor for preprocessing and postprocessing of images.
      
      * Rename the VaeImageProcessor to vae_image_processor to avoid a name clash with the CLIPImageProcessor (image_processor).
      
      * Remove the postprocess() function because we're using a VaeImageProcessor instead.
      
      * Remove UniDiffuserPipeline.decode_image_latents because we're using VaeImageProcessor instead.
      
      * Refactor generating text from text latents into a decode_text_latents method.
      
      * Add enable_full_determinism() to UniDiffuser tests.
      
      * make style
      
      * Add PipelineLatentTesterMixin to UniDiffuserPipelineFastTests.
      
      * Remove enable_model_cpu_offload since it is now part of DiffusionPipeline.
      
      * Rename the VaeImageProcessor instance to self.image_processor for consistency with other pipelines and rename the CLIPImageProcessor instance to clip_image_processor to avoid a name clash.
      
      * Update UniDiffuser conversion script.
      
      * Make safe_serialization configurable in UniDiffuser conversion script.
      
      * Rename image_processor to clip_image_processor in UniDiffuser tests.
      
      * Add PipelineKarrasSchedulerTesterMixin to UniDiffuserPipelineFastTests.
      
      * Add initial test for compiling the UniDiffuser model (not tested yet).
      
      * Update encode_prompt and _encode_prompt to match that of StableDiffusionPipeline.
      
      * Turn off standard classifier-free guidance for now.
      
      * make style
      
      * make fix-copies
      
      * apply suggestions from review
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
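
      A minimal usage sketch of the VAE slicing and tiling switches this refactor adds to UniDiffuserPipeline. The checkpoint id, prompt, and generation settings below are illustrative, not taken from the PR.

        import torch
        from diffusers import UniDiffuserPipeline

        # Load the public UniDiffuser checkpoint (illustrative choice).
        pipe = UniDiffuserPipeline.from_pretrained(
            "thu-ml/unidiffuser-v1", torch_dtype=torch.float16
        ).to("cuda")

        # The new toggles decode the VAE in slices/tiles, trading a little speed
        # for lower peak memory; disable_* counterparts presumably mirror
        # StableDiffusionPipeline's.
        pipe.enable_vae_slicing()
        pipe.enable_vae_tiling()

        sample = pipe(prompt="an astronaut riding a horse", num_inference_steps=20)
        sample.images[0].save("unidiffuser_text2img.png")
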
    • make style · db91e710
      Patrick von Platen authored
    • Add docstrings in forward methods of adapter model (#5253) · 2a62aadc
      Nandika-A authored
      * added docstrings in forward methods of T2IAdapter model and FullAdapter model
      
      * added docstrings in forward methods of FullAdapterXL and AdapterBlock models
      
      * Added docstrings in forward methods of adapter models
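
      For context on the forward methods the new docstrings describe, a small self-contained sketch (not part of the PR) that builds a T2IAdapter and pushes a dummy conditioning image through forward(); the constructor values are assumed defaults rather than anything specified in the commit.

        import torch
        from diffusers import T2IAdapter

        # Build a "full_adapter" variant; the settings below are assumed defaults.
        adapter = T2IAdapter(
            in_channels=3,                    # RGB conditioning image
            channels=(320, 640, 1280, 1280),  # one AdapterBlock per entry
            num_res_blocks=2,
            downscale_factor=8,
            adapter_type="full_adapter",
        )

        cond = torch.randn(1, 3, 512, 512)    # dummy conditioning input
        features = adapter(cond)              # forward() returns one feature map per block
        print([tuple(f.shape) for f in features])
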
    • [PEFT warnings] Only show deprecation warnings in the future (#5240) · 4f74a5e1
      Patrick von Platen authored
      * [PEFT warnings] Only show deprecation warnings in the future
      
      * make style
    • Zanz2 authored
    • Flax: Ignore PyTorch, ONNX files when they coexist with Flax weights (#5237) · 0c7cb9a6
      Pedro Cuenca authored
      Ignore PyTorch, ONNX files when they coexist with Flax weights
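
      A hedged example of the behaviour this fix targets: loading a Flax pipeline from a Hub repo that hosts PyTorch/ONNX weights next to Flax *.msgpack files should now fetch only the Flax files. The repo id below is an assumption of such a mixed-format checkpoint.

        import jax.numpy as jnp
        from diffusers import FlaxStableDiffusionPipeline

        # With this change, only the *.msgpack weights should be downloaded even
        # though the (assumed) repo also ships PyTorch checkpoints.
        pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4",
            dtype=jnp.bfloat16,
        )
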
  2. 29 Sep, 2023 5 commits
  3. 28 Sep, 2023 3 commits
  4. 27 Sep, 2023 6 commits
  5. 26 Sep, 2023 6 commits
  6. 25 Sep, 2023 12 commits
  7. 23 Sep, 2023 1 commit
  8. 22 Sep, 2023 1 commit
    • SDXL flax (#4254) · 3651b14c
      Pedro Cuenca authored
      
      
      * support transformer_layers_per_block in flax UNet
      
      * add support for text_time additional embeddings to Flax UNet
      
      * rename attention layers for VAE
      
      * add shape asserts when renaming attention layers
      
      * transpose VAE attention layers
      
      * add pipeline flax SDXL code [WIP]
      
      * continue add pipeline flax SDXL code [WIP]
      
      * cleanup
      
      * Working on JIT support
      
      Fixed prompt embedding shapes so they work in parallel mode. Assuming we
      always have both text encoders for now, for simplicity.
      
      * Fixing embeddings (untested)
      
      * Remove spurious line
      
      * Shard guidance_scale when jitting.
      
      * Decode images
      
      * Fix sharding
      
      * style
      
      * Refiner UNet can be loaded.
      
      * Refiner / img2img pipeline
      
      * Allow latent outputs from base and latent inputs in refiner
      
      This makes it possible to chain base + refiner without using the VAE decoder
      in the base model or the VAE encoder in the refiner, skipping conversions
      to/from PIL and avoiding TPU <-> CPU memory copies.
      
      * Adapt to FlaxCLIPTextModelOutput
      
      * Update Flax XL pipeline to FlaxCLIPTextModelOutput
      
      * make fix-copies
      
      * make style
      
      * add euler scheduler
      
      * Fix import
      
      * Fix copies, comment unused code.
      
      * Fix SDXL Flax imports
      
      * Fix euler discrete begin
      
      * improve init import
      
      * finish
      
      * put discrete euler in init
      
      * fix flax euler
      
      * Fix more
      
      * make style
      
      * correct init
      
      * correct init
      
      * Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline
      
      * correct pipelines
      
      * finish
      
      ---------
      Co-authored-by: Martin Müller <martin.muller.me@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
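
      A rough usage sketch following the parallel-inference pattern of the existing Flax Stable Diffusion pipelines. The SDXL Flax pipeline added here is experimental, so treat the class name, checkpoint id, and call signature below as assumptions to check against the shipped code.

        import jax
        import jax.numpy as jnp
        from flax.jax_utils import replicate
        from flax.training.common_utils import shard
        from diffusers import FlaxStableDiffusionXLPipeline

        pipeline, params = FlaxStableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", dtype=jnp.bfloat16
        )

        # One prompt per device; prepare_inputs tokenizes for both SDXL text encoders.
        prompts = ["a photo of an astronaut riding a horse"] * jax.device_count()
        prompt_ids = pipeline.prepare_inputs(prompts)

        # Replicate params and shard inputs across devices, then run the jitted call.
        params = replicate(params)
        prompt_ids = shard(prompt_ids)
        rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

        images = pipeline(prompt_ids, params, rng, jit=True).images
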