Unverified Commit 955b5523 authored by Amchii, committed by GitHub

fix: qwen-image-edit-2509 docs and examples (#731)

* fix: add missing scheduler to qwen-image-edit-2509 lightning example

* docs: add documentation for qwen-image-edit-2509 lightning

* docs: add Distilled Qwen-Image-Edit-2509-Lightning section and fix Hugging Face link refs

  - Add a new "Distilled Qwen-Image-Edit-2509 (Qwen-Image-Edit-2509-Lightning)" section to
    docs/source/usage/qwen-image-edit.rst, referencing
    examples/v1/qwen-image-edit-2509-lightning.py.
  - Fix incorrect reStructuredText named-reference syntax that produced relative links:
    changed occurrences like <hf_qwen-image>/<hf_qwen-image-edit>/<hf_qwen-image-lightning>
    to the correct named-reference form with trailing underscore (e.g. <hf_qwen-image_>).
    This makes those links resolve to the external URLs defined in
    docs/source/links/huggingface.txt instead of becoming relative links.
  - Files modified:
      - docs/source/usage/qwen-image-edit.rst
      - docs/source/usage/qwen-image.rst

* style: make linter happy

* fix the repo 
parent 06a989ff
docs/source/usage/qwen-image-edit.rst
@@ -4,8 +4,8 @@ Qwen-Image-Edit
Original Qwen-Image-Edit
------------------------
-`Qwen-Image-Edit <hf_qwen-image-edit>`_ is the image editing version of Qwen-Image.
-Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit <hf_qwen-image-edit>`_ model with Nunchaku.
+`Qwen-Image-Edit <hf_qwen-image-edit_>`_ is the image editing version of Qwen-Image.
+Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit <hf_qwen-image-edit_>`_ model with Nunchaku.
Nunchaku offers an API compatible with `Diffusers <github_diffusers_>`_, allowing for a familiar user experience.
.. literalinclude:: ../../../examples/v1/qwen-image-edit.py
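
For orientation, since the included script is not reproduced on this page, here is a minimal sketch of the pattern it follows. The checkpoint file name, the nunchaku.utils import path, and the input image/prompt are assumptions; the authoritative code is examples/v1/qwen-image-edit.py.

import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

from nunchaku.models.transformers.transformer_qwenimage import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision  # assumed import path for the helper used in the examples

# Assumed checkpoint location; check the nunchaku-tech Hugging Face org for the exact file name.
model_path = f"nunchaku-tech/nunchaku-qwen-image-edit/svdq-{get_precision()}_r32-qwen-image-edit.safetensors"

# Load the 4-bit quantized transformer and drop it into the stock Diffusers pipeline.
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(model_path)
pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("https://example.com/input.png")  # placeholder input image
result = pipeline(image=image, prompt="Turn the cat's fur blue", num_inference_steps=50).images[0]
result.save("qwen-image-edit.png")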
@@ -26,7 +26,7 @@ The :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImage
Distilled Qwen-Image-Edit (Qwen-Image-Lightning)
------------------------------------------------
-For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image-Edit models by integrating `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning>`_.
+For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image-Edit models by integrating `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning_>`_.
See the example script below:
.. literalinclude:: ../../../examples/v1/qwen-image-edit-lightning.py
@@ -41,7 +41,7 @@ Qwen-Image-Edit-2509
:alt: Nunchaku-Qwen-Image-Edit-2509
Qwen-Image-Edit-2509 is a monthly iteration of Qwen-Image-Edit.
-Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit-2509 <hf_qwen-image-edit-2509>`_ model with Nunchaku.
+Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit-2509 <hf_qwen-image-edit-2509_>`_ model with Nunchaku.
.. literalinclude:: ../../../examples/v1/qwen-image-edit-2509.py
:language: python
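
A similar sketch for the 2509 checkpoint, which uses QwenImageEditPlusPipeline (diffusers 0.36.0 or higher, as the doc notes below). The checkpoint file name and the multi-image call are assumptions for illustration; the real script is examples/v1/qwen-image-edit-2509.py.

import torch
from diffusers import QwenImageEditPlusPipeline  # available in diffusers >= 0.36.0
from diffusers.utils import load_image

from nunchaku.models.transformers.transformer_qwenimage import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision  # assumed import path

# Assumed checkpoint path, following the naming pattern visible in the Lightning example below.
model_path = f"nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-{get_precision()}_r32-qwen-image-edit-2509.safetensors"

transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(model_path)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

# Edit-2509 can take more than one reference image.
images = [load_image("https://example.com/person.png"), load_image("https://example.com/scene.png")]
result = pipeline(image=images, prompt="Place the person inside the scene", num_inference_steps=40).images[0]
result.save("qwen-image-edit-2509.png")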
@@ -52,3 +52,14 @@ Below is a minimal example for running the 4-bit quantized `Qwen-Image-Edit-2509
This example requires ``diffusers`` version 0.36.0 or higher.
Custom LoRA support is under development.
+Distilled Qwen-Image-Edit-2509 (Qwen-Image-Edit-2509-Lightning)
+---------------------------------------------------------------
+For faster inference of the 2509 branch, we provide pre-quantized Lightning variants by integrating `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning_>`_.
+See the example script below:
+.. literalinclude:: ../../../examples/v1/qwen-image-edit-2509-lightning.py
+    :language: python
+    :caption: Running Qwen-Image-Edit-2509-Lightning (`examples/v1/qwen-image-edit-2509-lightning.py <https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509-lightning.py>`__)
+    :linenos:
docs/source/usage/qwen-image.rst
@@ -7,8 +7,8 @@ Original Qwen-Image
.. image:: https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/qwen-image.jpg
:alt: Qwen-Image with Nunchaku
-`Qwen-Image <hf_qwen-image>`_ is an image generation foundation model in the Qwen series that achieves significant advances in complex text rendering.
-Below is a minimal example for running the 4-bit quantized `Qwen-Image <hf_qwen-image>`_ model with Nunchaku.
+`Qwen-Image <hf_qwen-image_>`_ is an image generation foundation model in the Qwen series that achieves significant advances in complex text rendering.
+Below is a minimal example for running the 4-bit quantized `Qwen-Image <hf_qwen-image_>`_ model with Nunchaku.
Nunchaku offers an API compatible with `Diffusers <github_diffusers_>`_, allowing for a familiar user experience.
.. literalinclude:: ../../../examples/v1/qwen-image.py
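
As above, a hedged sketch of what the referenced text-to-image example roughly does; the checkpoint file name and helper import path are assumptions, and examples/v1/qwen-image.py is authoritative.

import torch
from diffusers import QwenImagePipeline

from nunchaku.models.transformers.transformer_qwenimage import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision  # assumed import path

# Assumed checkpoint location for the 4-bit quantized Qwen-Image transformer.
model_path = f"nunchaku-tech/nunchaku-qwen-image/svdq-{get_precision()}_r32-qwen-image.safetensors"

transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(model_path)
pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipeline(
    prompt='A coffee shop storefront with a hand-painted sign that reads "Nunchaku"',
    num_inference_steps=50,
).images[0]
image.save("qwen-image.png")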
@@ -29,7 +29,7 @@ The :meth:`~nunchaku.models.transformers.transformer_qwenimage.NunchakuQwenImage
Distilled Qwen-Image (Qwen-Image-Lightning)
-------------------------------------------
-For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image models by integrating `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning>`_.
+For faster inference, we provide pre-quantized 4-step and 8-step Qwen-Image models by integrating `Qwen-Image-Lightning LoRAs <hf_qwen-image-lightning_>`_.
See the example script below:
.. literalinclude:: ../../../examples/v1/qwen-image-lightning.py
examples/v1/qwen-image-edit-2509-lightning.py
@@ -28,13 +28,13 @@ scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)
num_inference_steps = 4 # you can also use the 8-step model to improve the quality
rank = 32 # you can also use the rank=128 model to improve the quality
model_path = f"nunchaku-tech/nunchaku-qwen-image-edit-2509-lightning/svdq-{get_precision()}_r{rank}-qwen-image-edit-2509-lightningv2.0-{num_inference_steps}steps.safetensors"
model_path = f"nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-{get_precision()}_r{rank}-qwen-image-edit-2509-lightningv2.0-{num_inference_steps}steps.safetensors"
# Load the model
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(model_path)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
"Qwen/Qwen-Image-Edit-2509", transformer=transformer, torch_dtype=torch.bfloat16
"Qwen/Qwen-Image-Edit-2509", transformer=transformer, scheduler=scheduler, torch_dtype=torch.bfloat16
)
if get_gpu_memory() > 18:
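
Putting the two fixes together — the corrected repository path and the scheduler passed to the pipeline — the patched example roughly looks like the sketch below. The scheduler_config values are an assumption based on the commonly published Qwen-Image-Lightning settings, not taken from this commit, and the GPU-memory-dependent offloading guarded by get_gpu_memory() in the real script is omitted here; examples/v1/qwen-image-edit-2509-lightning.py is authoritative.

import math

import torch
from diffusers import FlowMatchEulerDiscreteScheduler, QwenImageEditPlusPipeline

from nunchaku.models.transformers.transformer_qwenimage import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision  # assumed import path

# Lightning LoRAs need a specific flow-match scheduler; the values below are
# assumed from the usual Qwen-Image-Lightning setup, not copied from this commit.
scheduler_config = {
    "base_image_seq_len": 256,
    "base_shift": math.log(3),
    "invert_sigmas": False,
    "max_image_seq_len": 8192,
    "max_shift": math.log(3),
    "num_train_timesteps": 1000,
    "shift": 1.0,
    "shift_terminal": None,
    "stochastic_sampling": False,
    "time_shift_type": "exponential",
    "use_beta_sigmas": False,
    "use_dynamic_shifting": True,
    "use_exponential_sigmas": False,
    "use_karras_sigmas": False,
}
scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)

num_inference_steps = 4  # an 8-step checkpoint is also published
rank = 32                # a rank-128 checkpoint is also published

# Lightning checkpoints live in the nunchaku-qwen-image-edit-2509 repository (the repo fix in this commit).
model_path = (
    "nunchaku-tech/nunchaku-qwen-image-edit-2509/"
    f"svdq-{get_precision()}_r{rank}-qwen-image-edit-2509-lightningv2.0-{num_inference_steps}steps.safetensors"
)

transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(model_path)
# The scheduler must be handed to the pipeline (the missing-scheduler fix in this commit).
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", transformer=transformer, scheduler=scheduler, torch_dtype=torch.bfloat16
).to("cuda")

Whether the scheduler is passed at construction time, as the example does, or assigned to pipeline.scheduler afterwards makes little practical difference; either way it replaces the default scheduler bundled with Qwen/Qwen-Image-Edit-2509.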