Unverified Commit dd1371c3 authored by Muyang Li, committed by GitHub

feat: support Qwen-Image-Edit-2509 (#721)

* update

* update

* docs: update README

* add a note
parent de6a75b6
......@@ -15,17 +15,18 @@ Join our user groups on [**Discord**](https://discord.gg/Wk6PnwX9Sm) and [**WeCh
## News
- **[2025-09-24]** 🔥 Released [**4-bit Qwen-Image-Edit-2509**](https://huggingface.co/Qwen/Qwen-Image-Edit-2509)! Models are available on [Hugging Face](https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit-2509). Try them out with our [example script](examples/v1/qwen-image-edit-2509.py).
- **[2025-09-09]** 🔥 Released [**4-bit Qwen-Image-Edit**](https://huggingface.co/Qwen/Qwen-Image-Edit) together with the [4/8-step Lightning](https://huggingface.co/lightx2v/Qwen-Image-Lightning) variants! Models are available on [Hugging Face](https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit). Try them out with our [example script](examples/v1/qwen-image-edit.py).
- **[2025-09-04]** 🚀 Official release of **Nunchaku v1.0.0**! Qwen-Image now supports **asynchronous offloading**, reducing VRAM usage to as little as **3 GiB** with no performance loss. Check out the [tutorial](https://nunchaku.tech/docs/nunchaku/usage/qwenimage.html) to get started.
- **[2025-08-27]** 🔥 Released **4-bit [4/8-step Lightning Qwen-Image](https://huggingface.co/lightx2v/Qwen-Image-Lightning)**! Download it on [Hugging Face](https://huggingface.co/nunchaku-tech/nunchaku-qwen-image) or [ModelScope](https://modelscope.cn/models/nunchaku-tech/nunchaku-qwen-image), and try it with our [example script](examples/v1/qwen-image-lightning.py).
- **[2025-08-15]** 🔥 Our **4-bit Qwen-Image** models are now live on [Hugging Face](https://huggingface.co/nunchaku-tech/nunchaku-qwen-image)! Get started with our [example script](examples/v1/qwen-image.py). *ComfyUI, LoRA, and CPU offloading support are coming soon!*
- **[2025-08-15]** 🚀 The **Python backend** is now available! Explore our Pythonic FLUX models [here](nunchaku/models/transformers/transformer_flux_v2.py) and see the modular **4-bit linear layer** [here](nunchaku/models/linear.py).
- **[2025-07-31]** 🚀 **[FLUX.1-Krea-dev](https://www.krea.ai/blog/flux-krea-open-source-release) is now supported!** Check out our new [example script](./examples/flux.1-krea-dev.py) to get started.
- **[2025-07-13]** 🚀 The official [**Nunchaku documentation**](https://nunchaku.tech/docs/nunchaku/) is now live! Explore comprehensive guides and resources to help you get started.
<details>
<summary>More</summary>
- **[2025-07-13]** 🚀 The official [**Nunchaku documentation**](https://nunchaku.tech/docs/nunchaku/) is now live! Explore comprehensive guides and resources to help you get started.
- **[2025-06-29]** 🔥 Support **FLUX.1-Kontext**! Try out our [example script](./examples/flux.1-kontext-dev.py) to see it in action! Our demo is available at this [link](https://svdquant.mit.edu/kontext/)!
- **[2025-06-01]** 🚀 **Release v0.3.0!** This update adds support for multiple-batch inference, [**ControlNet-Union-Pro 2.0**](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0), initial integration of [**PuLID**](https://github.com/ToTheBeginning/PuLID), and introduces [**Double FB Cache**](examples/flux.1-dev-double_cache.py). You can now load Nunchaku FLUX models as a single file, and our upgraded [**4-bit T5 encoder**](https://huggingface.co/nunchaku-tech/nunchaku-t5) now matches **FP8 T5** in quality!
- **[2025-04-16]** 🎥 Released tutorial videos in both [**English**](https://youtu.be/YHAVe-oM7U8?si=cM9zaby_aEHiFXk0) and [**Chinese**](https://www.bilibili.com/video/BV1BTocYjEk5/?share_source=copy_web&vd_source=8926212fef622f25cc95380515ac74ee) to assist installation and usage.
......
......@@ -12,3 +12,4 @@
.. _hf_qwen-image: https://huggingface.co/Qwen/Qwen-Image
.. _hf_qwen-image-edit: https://huggingface.co/Qwen/Qwen-Image-Edit
.. _hf_qwen-image-lightning: https://huggingface.co/lightx2v/Qwen-Image-Lightning
.. _hf_qwen-image-edit-2509: https://huggingface.co/Qwen/Qwen-Image-Edit-2509
......@@ -34,4 +34,18 @@ See the example script below:
    :caption: Running Qwen-Image-Edit-Lightning (`examples/v1/qwen-image-edit-lightning.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-lightning.py>`__)
    :linenos:
Qwen-Image-Edit-2509
--------------------

Qwen-Image-Edit-2509 is a monthly iteration of Qwen-Image-Edit.
Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit-2509 <hf_qwen-image-edit-2509_>`_ model with Nunchaku.

.. literalinclude:: ../../../examples/v1/qwen-image-edit-2509.py
    :language: python
    :caption: Running Qwen-Image-Edit-2509 (`examples/v1/qwen-image-edit-2509.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py>`__)
    :linenos:

.. note::

    This example requires ``diffusers`` version 0.36.0 or higher.
    Custom LoRA support is under development.
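As a quick pre-flight check for the version requirement above, the installed ``diffusers`` can be verified before running the example. The snippet below is a minimal sketch; it assumes only the standard ``packaging`` helper available in most Python environments.

.. code-block:: python

    from importlib.metadata import version

    from packaging.version import Version

    installed = Version(version("diffusers"))
    if installed < Version("0.36.0"):
        raise RuntimeError(
            f"diffusers {installed} is installed, but this example needs >= 0.36.0; "
            "upgrade with `pip install -U diffusers`."
        )
    print(f"diffusers {installed} satisfies the requirement.")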
import torch
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image
from nunchaku import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_gpu_memory, get_precision
rank = 128  # you can also use the rank=32 model to reduce memory usage
# Load the model
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    f"nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-{get_precision()}_r{rank}-qwen-image-edit-2509.safetensors"
)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", transformer=transformer, torch_dtype=torch.bfloat16
)
if get_gpu_memory() > 18:
    pipeline.enable_model_cpu_offload()
else:
    # Use per-layer offloading for low VRAM. This only requires 3-4 GiB of VRAM.
    transformer.set_offload(
        True, use_pin_memory=False, num_blocks_on_gpu=1
    )  # increase num_blocks_on_gpu if you have more VRAM
    pipeline._exclude_from_cpu_offload.append("transformer")
    pipeline.enable_sequential_cpu_offload()
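# Note: the model-level CPU offload branch above moves whole sub-models between
# CPU and GPU, which is faster but needs more memory; the per-layer offload branch
# streams transformer blocks to the GPU on demand, which is what brings the
# footprint down to the roughly 3-4 GiB mentioned in the comment above.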
image1 = load_image("https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/man.png")
image1 = image1.convert("RGB")
image2 = load_image("https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/puppy.png")
image2 = image2.convert("RGB")
image3 = load_image("https://huggingface.co/datasets/nunchaku-tech/test-data/resolve/main/inputs/sofa.png")
image3 = image3.convert("RGB")
prompt = "Let the man in image 1 lie on the sofa in image 3, and let the puppy in image 2 lie on the floor to sleep."
inputs = {
    "image": [image1, image2, image3],
    "prompt": prompt,
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 40,
}
output = pipeline(**inputs)
output_image = output.images[0]
output_image.save(f"qwen-image-edit-2509-r{rank}.png")
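For reference, the example composes the quantized checkpoint filename from the detected precision and the chosen rank. The short sketch below only prints which file would be requested; the precision values noted in the comment are an assumption about how ``get_precision`` typically behaves, not something this commit specifies.

from nunchaku.utils import get_precision

rank = 128  # match the rank chosen in the example above
precision = get_precision()  # usually "int4"; "fp4" on GPUs with native FP4 support
print(
    "nunchaku-tech/nunchaku-qwen-image-edit-2509/"
    f"svdq-{precision}_r{rank}-qwen-image-edit-2509.safetensors"
)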
......@@ -34,7 +34,7 @@ dependencies = [
optional-dependencies.ci = [
"controlnet-aux==0.0.10",
"datasets==3.6",
"diffusers @ git+https://github.com/huggingface/diffusers@5796735",
"diffusers @ git+https://github.com/huggingface/diffusers@a72bc0c",
"facexlib==0.3",
"image-gen-aux @ git+https://github.com/asomoza/image_gen_aux.git",
"insightface==0.7.3",
......