Qwen-Image-Edit
===============

Original Qwen-Image-Edit
------------------------

`Qwen-Image-Edit <hf_qwen-image-edit_>`_ is the image-editing variant of Qwen-Image.
Below is a minimal example of running the 4-bit quantized Qwen-Image-Edit model with Nunchaku.
Nunchaku offers an API compatible with `Diffusers <github_diffusers_>`_, allowing for a familiar user experience.

.. literalinclude:: ../../../examples/v1/qwen-image-edit.py
    :language: python
    :caption: Running Qwen-Image-Edit (`examples/v1/qwen-image-edit.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit.py>`__)
    :linenos:

When using Nunchaku, replace the standard ``QwenImageTransformer2DModel`` with :class:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel`.
The :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel.from_pretrained` method loads quantized models from either Hugging Face or local file paths.

.. note::

   - The :func:`~nunchaku.utils.get_precision` function automatically detects whether your GPU supports INT4 or FP4 quantization.
     Use FP4 models for Blackwell GPUs (RTX 50-series) and INT4 models for other architectures.
   - Increasing the rank (e.g., to 128) can improve output quality.
   - To reduce VRAM usage, enable asynchronous CPU offloading with :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImageTransformer2DModel.set_offload`. For further savings, you may also enable Diffusers' ``pipeline.enable_sequential_cpu_offload()``, but be sure to exclude ``transformer`` from offloading, as Nunchaku's offloading mechanism differs from Diffusers'. With these settings, VRAM usage can be reduced to approximately 3GB.
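The precision rule from the note above can be sketched as a simple function. This is a hypothetical illustration of the selection logic, not nunchaku's actual :func:`~nunchaku.utils.get_precision` implementation, and ``pick_precision`` is an invented name:

.. code-block:: python

    # Illustrative sketch only: FP4 requires Blackwell-generation tensor cores
    # (compute capability 12.x on RTX 50-series consumer GPUs); other
    # architectures fall back to INT4. nunchaku's get_precision() detects
    # this automatically from the active GPU.
    def pick_precision(compute_capability: tuple[int, int]) -> str:
        major, _minor = compute_capability
        return "fp4" if major >= 12 else "int4"

    print(pick_precision((12, 0)))  # fp4  (RTX 50-series, Blackwell)
    print(pick_precision((8, 9)))   # int4 (e.g. RTX 40-series)

The resulting string can then be used to pick the matching quantized checkpoint, e.g. via an f-string in the model filename, as the example script does.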

Distilled Qwen-Image-Edit (Qwen-Image-Lightning)
------------------------------------------------

For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image-Edit models that integrate the `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning_>`_.
See the example script below:

.. literalinclude:: ../../../examples/v1/qwen-image-edit-lightning.py
    :language: python
    :caption: Running Qwen-Image-Edit-Lightning (`examples/v1/qwen-image-edit-lightning.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-lightning.py>`__)
    :linenos:

Custom LoRA support is under development.