.. Adapted from https://docs.sglang.ai/references/contribution_guide.html

Contribution Guide
==================

Welcome to **Nunchaku**! We appreciate your interest in contributing.
This guide outlines how to set up your environment, run tests, and submit a Pull Request (PR).
Whether you're fixing a minor bug or implementing a major feature, we encourage you to
follow these steps for a smooth and efficient contribution process.

🚀 Setting Up & Building from Source
------------------------------------

1. Fork and Clone the Repository

   .. note::

      As a new contributor, you won't have write access to the `Nunchaku repository <nunchaku_repo_>`_.
      Please fork the repository to your own GitHub account, then clone your fork locally:

   .. code-block:: shell

      git clone https://github.com/<your_username>/nunchaku.git

2. Install Dependencies & Build

   To install dependencies and build the project, follow the instructions in :doc:`Installation <../installation/installation>`.

🧹 Code Formatting with Pre-Commit
----------------------------------

We use `pre-commit <https://pre-commit.com/>`_ hooks to ensure code style consistency. Please install and run it before submitting your changes:

.. code-block:: shell

   pip install pre-commit
   pre-commit install
   pre-commit run --all-files

- ``pre-commit run --all-files`` manually triggers all checks and automatically fixes issues where possible. If it fails initially, re-run until all checks pass.

- ✅ **Ensure your code passes all checks before opening a PR.**

- 🚫 **Do not commit directly to the** ``main`` **branch.**
  Always create a feature branch (e.g., ``feat/my-new-feature``), commit your changes there,
  and open a PR from that branch.

🧪 Running Unit Tests & Integrating with CI
-------------------------------------------

Nunchaku uses ``pytest`` for unit testing. If you're adding a new feature,
please include corresponding test cases in the ``tests`` directory.
**Please avoid modifying existing tests.**

Running the Tests
~~~~~~~~~~~~~~~~~

.. code-block:: shell

   HF_TOKEN=$YOUR_HF_TOKEN pytest -v tests/flux/test_flux_memory.py
   HF_TOKEN=$YOUR_HF_TOKEN pytest -v tests/flux --ignore=tests/flux/test_flux_memory.py
   HF_TOKEN=$YOUR_HF_TOKEN pytest -v tests/sana

.. note::

   ``$YOUR_HF_TOKEN`` refers to your Hugging Face access token, required to download models and datasets.
   You can create one at https://huggingface.co/settings/tokens.
   If you've already logged in using ``huggingface-cli login``,
   you can skip setting this environment variable.
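
If you prefer to authenticate programmatically instead, ``huggingface_hub`` provides a ``login``
helper (a minimal sketch; the token string below is a placeholder):

.. code-block:: python

   from huggingface_hub import login

   login(token="hf_xxx")  # paste your actual access token here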

Some tests generate images using the original 16-bit models. You can cache these results to speed up future test runs by setting the environment variable ``NUNCHAKU_TEST_CACHE_ROOT``. If not set, the images will be saved in ``test_results/ref``.
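
A minimal sketch of the fallback this implies (the exact lookup in the test suite may differ):

.. code-block:: python

   import os

   # Use NUNCHAKU_TEST_CACHE_ROOT when set; otherwise fall back to the default location.
   cache_root = os.environ.get("NUNCHAKU_TEST_CACHE_ROOT", os.path.join("test_results", "ref"))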

Writing Tests
~~~~~~~~~~~~~

To test visual output correctness, you can (an end-to-end sketch follows this list):

1. **Generate reference images:** Use the original 16-bit model to produce a small number of reference images (e.g., 4).

2. **Generate comparison images:** Run your method using the **same inputs and seeds** to ensure deterministic outputs. You can control the seed by setting the ``generator`` parameter in the diffusers pipeline.

3. **Compute similarity:** Evaluate the similarity between your outputs and the reference images using the `LPIPS <https://arxiv.org/abs/1801.03924>`_ metric. Use the ``compute_lpips`` function provided in ``tests/flux/utils.py``:

   .. code-block:: python

      lpips = compute_lpips(dir1, dir2)

   Here, ``dir1`` should point to the directory containing the reference images, and ``dir2`` should contain the images generated by your method.
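
Putting the steps together, an end-to-end check might look like the sketch below.
It is illustrative only: the model ID, prompts, and output directories are placeholders,
and the ``compute_lpips`` import assumes you run from the repository root.

.. code-block:: python

   import os

   import torch
   from diffusers import FluxPipeline

   from tests.flux.utils import compute_lpips

   # A small, fixed prompt set; a handful of images (e.g., 4) is usually enough.
   PROMPTS = ["a photo of an astronaut riding a horse", "a watercolor mountain landscape"]

   def generate(pipe, out_dir):
       """Render the prompt set with fixed seeds so both runs are deterministic."""
       os.makedirs(out_dir, exist_ok=True)
       for i, prompt in enumerate(PROMPTS):
           generator = torch.Generator("cpu").manual_seed(i)  # same seed in both runs
           image = pipe(prompt, generator=generator).images[0]
           image.save(os.path.join(out_dir, f"{i}.png"))

   # Step 1: reference images from the original 16-bit model.
   ref_pipe = FluxPipeline.from_pretrained(
       "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
   ).to("cuda")
   generate(ref_pipe, "test_results/ref")

   # Step 2: comparison images from your method, same prompts and seeds.
   # my_pipe = ...  # build the pipeline that exercises your change
   # generate(my_pipe, "test_results/mine")

   # Step 3: LPIPS between the two directories (lower means more similar).
   print(compute_lpips("test_results/ref", "test_results/mine"))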

Setting the LPIPS Threshold
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To pass the test, the LPIPS score must be below a predefined threshold (typically **< 0.3**).
We recommend first running the comparison locally to observe the LPIPS value,
and then setting the threshold slightly above that value to allow for minor variations.
Since the test is based on a small sample of images, slight fluctuations are expected;
a margin of **+0.04** is generally sufficient.
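
As a hypothetical example, if you observe an LPIPS of about 0.24 locally, the check in your
test could look like this (the test name and directories are placeholders):

.. code-block:: python

   from tests.flux.utils import compute_lpips

   # Observed ~0.24 locally; +0.04 margin for small-sample fluctuation.
   LPIPS_THRESHOLD = 0.28

   def test_my_feature_visual_quality():
       lpips = compute_lpips("test_results/ref", "test_results/mine")
       assert lpips < LPIPS_THRESHOLD, f"LPIPS {lpips:.4f} exceeds {LPIPS_THRESHOLD}"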