- 04 Nov, 2022 1 commit
Patrick von Platen authored
* finish
* finish
* Update src/diffusers/modeling_utils.py
* Update src/diffusers/pipeline_utils.py
  Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* more fixes
* fix
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
- 02 Nov, 2022 1 commit
MatthieuTPHR authored
* 2x speedup using memory-efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code: remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one-letter variables to appropriate names
* Remove env variable dependency, remove the MemoryEfficientCrossAttention class and use the enable_xformers_memory_efficient_attention method instead
* Add memory-efficient attention toggle to the img2img and inpaint pipelines
* Clearer management of xformers' availability
* update the optimizations markdown to add info about memory-efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the memory-efficient attention benchmarks were run
* Remove autocast from the optimization markdown
* import_utils: import torch only if it is available
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
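
For reference, the toggle named in this commit is used roughly as follows. This is a minimal sketch assuming xformers is installed and a CUDA device is available; the checkpoint id is only an example, and the exact entry point (pipeline-level vs. UNet-level) has shifted between diffusers releases.

```python
# Hedged sketch: enable xformers memory-efficient attention on a pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Route attention through xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("an astronaut riding a horse").images[0]
```
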
- 31 Oct, 2022 1 commit
Anton Lozhkov authored
* Fix pipelines user_agent, ignore CI requests
* fix circular import
* N/A versions
* N/A versions
- 04 Oct, 2022 1 commit
Pi Esposito authored
* add accelerate to load models with a smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modeling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure RAM usage gets lower when using accelerate
* move accelerate logic to a single snippet under modeling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to "auto"
* move accelerate availability check to diffusers import utils
* format code
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
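
As a rough illustration of the loading path this commit wires up: passing device_map="auto" is what triggers the accelerate-based initialization (empty weights first, then loading in place). This is a sketch only; it assumes accelerate is installed, the checkpoint and subfolder are examples, and whether device_map is accepted at the model or pipeline level depends on the diffusers version.

```python
# Hedged sketch: low-memory weight loading via accelerate.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint
    subfolder="unet",
    device_map="auto",  # initialize empty weights, then load them in place
)
```
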
- 16 Sep, 2022 1 commit
SkyTNT authored
* Fix is_onnx_available: if a user installs only onnxruntime-gpu, is_onnx_available() returns False
* add more onnxruntime candidates
* Run `make style`
Co-authored-by: anton-l <anton@huggingface.co>
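
A minimal sketch of the availability check this fix implies: the onnxruntime-gpu and onnxruntime-directml wheels expose the same `onnxruntime` module but are distinct pip distributions, so the version lookup has to probe several distribution names. The candidate list below is an assumption; the exact list in diffusers may differ.

```python
# Hedged sketch of an is_onnx_available() that tolerates onnxruntime variants.
import importlib.util
from importlib.metadata import PackageNotFoundError, version

_onnxruntime_candidates = (
    "onnxruntime",
    "onnxruntime-gpu",
    "onnxruntime-directml",
    "onnxruntime-openvino",
)

_onnx_available = importlib.util.find_spec("onnxruntime") is not None
if _onnx_available:
    _onnxruntime_version = None
    # All variants provide the same module; find whichever distribution is installed.
    for candidate in _onnxruntime_candidates:
        try:
            _onnxruntime_version = version(candidate)
            break
        except PackageNotFoundError:
            continue
    _onnx_available = _onnxruntime_version is not None


def is_onnx_available() -> bool:
    return _onnx_available
```
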
- 08 Sep, 2022 1 commit
Anton Lozhkov authored
* initial export and design
* update imports
* custom provider, import fixes
* Update src/diffusers/onnx_utils.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/onnx_utils.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* remove push_to_hub
* Update src/diffusers/onnx_utils.py
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
* remove torch_device
* numpify the rest of the pipeline
* torchify the safety checker
* revert tensor
* Code review suggestions + quality
* fix tests
* fix provider, add an end-to-end test
* style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
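
For context, the exported pipeline is consumed roughly like this. This is a usage sketch only: the pipeline class name, the "onnx" revision, and the checkpoint id are assumptions that have varied across diffusers releases, and it requires an onnxruntime package to be installed.

```python
# Hedged sketch: run Stable Diffusion through the ONNX pipeline.
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # example checkpoint with an ONNX export
    revision="onnx",
    provider="CPUExecutionProvider",   # any onnxruntime execution provider
)
image = pipe("an astronaut riding a horse").images[0]
```
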
- 17 Aug, 2022 1 commit
Anton Lozhkov authored
* Add is_<framework>_available, refactor import utils
* deps
* quality
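
The pattern behind these helpers is roughly the following. This is an illustrative sketch, not the exact diffusers implementation (which also records package versions and covers more frameworks).

```python
# Hedged sketch of is_<framework>_available helpers in an import_utils module.
import importlib.util

# Probe for optional dependencies once at import time.
_torch_available = importlib.util.find_spec("torch") is not None
_flax_available = importlib.util.find_spec("flax") is not None


def is_torch_available() -> bool:
    return _torch_available


def is_flax_available() -> bool:
    return _flax_available
```
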