Qwen-Image-Edit
===============

Original Qwen-Image-Edit
------------------------

`Qwen-Image-Edit <hf_qwen-image-edit_>`_ is the image-editing variant of Qwen-Image.
Below is a minimal example of running the 4-bit quantized `Qwen-Image-Edit <hf_qwen-image-edit_>`_ model with Nunchaku.
Nunchaku offers an API compatible with `Diffusers <github_diffusers_>`_, allowing for a familiar user experience.

.. literalinclude:: ../../../examples/v1/qwen-image-edit.py
    :language: python
    :caption: Running Qwen-Image-Edit (`examples/v1/qwen-image-edit.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit.py>`__)
    :linenos:

When using Nunchaku, replace the standard ``QwenImageTransformer2DModel`` with :class:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel`.
The :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel.from_pretrained` method loads quantized models from either Hugging Face or local file paths.
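
As a rough sketch of this swap, assuming ``QwenImageEditPipeline`` is the matching Diffusers pipeline class and using a placeholder checkpoint path (the example script above shows the exact names):

.. code-block:: python

    import torch
    from diffusers import QwenImageEditPipeline

    from nunchaku.models.transformers.transformer_qwenimage import (
        NunchakuQwenImageTransformer2DModel,
    )
    from nunchaku.utils import get_precision

    # Detect whether this GPU should use the INT4 or the FP4 checkpoint.
    precision = get_precision()

    # Load the quantized transformer in place of the stock
    # QwenImageTransformer2DModel. The repository/file name is illustrative only.
    transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
        f"nunchaku-tech/nunchaku-qwen-image-edit/svdq-{precision}_r32-qwen-image-edit.safetensors"
    )

    # Everything else is the usual Diffusers pipeline.
    pipeline = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", transformer=transformer, torch_dtype=torch.bfloat16
    ).to("cuda")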

.. note::

   - The :func:`~nunchaku.utils.get_precision` function automatically detects whether your GPU supports INT4 or FP4 quantization.
     Use FP4 models for Blackwell GPUs (RTX 50-series) and INT4 models for other architectures.
   - Increasing the rank (e.g., to 128) can improve output quality.
   - To reduce VRAM usage, enable Nunchaku's asynchronous CPU offloading with :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel.set_offload`. For further savings, you can also enable Diffusers' ``pipeline.enable_sequential_cpu_offload()``, but be sure to exclude ``transformer`` from that offloading, since Nunchaku's offloading mechanism differs from Diffusers'. With these settings, VRAM usage can drop to roughly 3 GB (see the sketch after this note).
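
A minimal sketch of that low-VRAM setup, assuming the ``transformer`` and ``pipeline`` objects from the sketch above (``_exclude_from_cpu_offload`` is the Diffusers pipeline attribute listing components that ``enable_sequential_cpu_offload`` skips):

.. code-block:: python

    # Nunchaku's asynchronous CPU offloading for the quantized transformer.
    transformer.set_offload(True)

    # Let Diffusers offload the remaining components, but keep the transformer
    # out of its offloading, since Nunchaku already manages it.
    # (Build the pipeline without calling .to("cuda") when using sequential offloading.)
    pipeline._exclude_from_cpu_offload.append("transformer")
    pipeline.enable_sequential_cpu_offload()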

Distilled Qwen-Image-Edit (Qwen-Image-Lightning)
------------------------------------------------

For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image-Edit models that integrate the `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning_>`_.
See the example script below:

.. literalinclude:: ../../../examples/v1/qwen-image-edit-lightning.py
    :language: python
    :caption: Running Qwen-Image-Edit-Lightning (`examples/v1/qwen-image-edit-lightning.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-lightning.py>`__)
    :linenos:
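
In outline, the distilled variant only changes the checkpoint and the step count relative to the earlier sketch; the file name below is a placeholder, so prefer the paths used in the example script:

.. code-block:: python

    import torch
    from diffusers import QwenImageEditPipeline
    from diffusers.utils import load_image

    from nunchaku.models.transformers.transformer_qwenimage import (
        NunchakuQwenImageTransformer2DModel,
    )
    from nunchaku.utils import get_precision

    precision = get_precision()

    # Placeholder path: the pre-quantized Lightning checkpoints already fold the
    # 4-step / 8-step LoRA into the transformer weights.
    transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
        f"nunchaku-tech/nunchaku-qwen-image-edit/"
        f"svdq-{precision}_r32-qwen-image-edit-lightning-8steps.safetensors"
    )
    pipeline = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", transformer=transformer, torch_dtype=torch.bfloat16
    ).to("cuda")

    # The distilled model needs only 4 or 8 denoising steps.
    image = pipeline(
        image=load_image("input.png"),
        prompt="replace the background with a snowy mountain",
        num_inference_steps=8,
    ).images[0]
    image.save("output.png")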

Qwen-Image-Edit-2509
--------------------

.. image:: https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/ComfyUI-nunchaku/workflows/nunchaku-qwen-image-edit-2509.png
   :alt: Nunchaku-Qwen-Image-Edit-2509

Qwen-Image-Edit-2509 is the monthly iteration of Qwen-Image-Edit.
Below is a minimal example of running the 4-bit quantized `Qwen-Image-Edit-2509 <hf_qwen-image-edit-2509_>`_ model with Nunchaku.

.. literalinclude:: ../../../examples/v1/qwen-image-edit-2509.py
    :language: python
    :caption: Running Qwen-Image-Edit-2509 (`examples/v1/qwen-image-edit-2509.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py>`__)
    :linenos:

.. note::
   This example requires ``diffusers`` version 0.36.0 or higher.
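
In outline, and assuming Diffusers 0.36+ exposes a ``QwenImageEditPlusPipeline`` class for this checkpoint (the repository and file names below are placeholders, so prefer the paths in the example script):

.. code-block:: python

    import torch
    from diffusers import QwenImageEditPlusPipeline
    from diffusers.utils import load_image

    from nunchaku.models.transformers.transformer_qwenimage import (
        NunchakuQwenImageTransformer2DModel,
    )
    from nunchaku.utils import get_precision

    precision = get_precision()

    # Placeholder checkpoint path for the quantized 2509 transformer.
    transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
        f"nunchaku-tech/nunchaku-qwen-image-edit-2509/"
        f"svdq-{precision}_r32-qwen-image-edit-2509.safetensors"
    )
    pipeline = QwenImageEditPlusPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit-2509", transformer=transformer, torch_dtype=torch.bfloat16
    ).to("cuda")

    # Edit-2509 is assumed to accept one or more reference images.
    image = pipeline(
        image=[load_image("input.png")],
        prompt="turn the photo into a watercolor painting",
    ).images[0]
    image.save("output.png")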

Custom LoRA support is under development.