Unverified Commit 14b46061 authored by YiYi Xu, committed by GitHub

[doc] add link to training script (#3271)
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
parent 4d35d7fe
@@ -33,7 +33,12 @@ cd diffusers
 pip install -e .
 ```
-Then navigate into the example folder and run:
+Then navigate into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/controlnet)
+```bash
+cd examples/controlnet
+```
+Now run:
 ```bash
 pip install -r requirements.txt
 ```
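For reference, here is how the resulting setup flow for the ControlNet example reads end to end. This is a minimal sketch assuming the `diffusers` repository has already been cloned and you are at its root; the individual commands are simply the ones shown in the hunk above.

```bash
# Install diffusers from source in editable mode (run from the repository root).
pip install -e .

# Move into the ControlNet example folder linked above.
cd examples/controlnet

# Install the example-specific dependencies.
pip install -r requirements.txt
```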
@@ -33,7 +33,13 @@ cd diffusers
 pip install -e .
 ```
-Then cd in the example folder and run
+Then cd into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion)
+```
+cd examples/custom_diffusion
+```
+Now run
 ```bash
 pip install -r requirements.txt
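The Custom Diffusion example follows the same pattern. If you want to verify the editable install before pulling in the example requirements, a quick import check like the one below is enough; the version string printed will depend on your checkout.

```bash
# Confirm the editable diffusers install is importable and see which version is picked up.
python -c "import diffusers; print(diffusers.__version__)"
```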
@@ -24,7 +24,7 @@ The output is an "edited" image that reflects the edit instruction applied on the input image:
 <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/output-gs%407-igs%401-steps%4050.png" alt="instructpix2pix-output" width=600/>
 </p>
-The `train_instruct_pix2pix.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.
+The `train_instruct_pix2pix.py` script (you can find it [here](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py)) shows how to implement the training procedure and adapt it for Stable Diffusion.
 ***Disclaimer: Even though `train_instruct_pix2pix.py` implements the InstructPix2Pix
 training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix) we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.***
@@ -44,7 +44,12 @@ cd diffusers
 pip install -e .
 ```
-Then cd in the example folder and run
+Then cd into the example folder
+```bash
+cd examples/instruct_pix2pix
+```
+Now run:
 ```bash
 pip install -r requirements.txt
 ```
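Once the requirements are installed, the usual next steps are to configure Hugging Face Accelerate and launch the `train_instruct_pix2pix.py` script described above. The sketch below is illustrative only: the model and dataset identifiers are placeholders (the dataset is the small-scale one mentioned in the disclaimer), and the flag names mirror the other diffusers training examples, so confirm them against the script's `--help` output.

```bash
# One-time interactive setup of Accelerate for your hardware.
accelerate config

# Illustrative training launch; flag names and values are assumptions modeled on
# the other diffusers examples, so verify with `python train_instruct_pix2pix.py --help`.
accelerate launch train_instruct_pix2pix.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="fusing/instructpix2pix-1000-samples" \
  --resolution=256 \
  --train_batch_size=4 \
  --max_train_steps=15000 \
  --output_dir="instruct-pix2pix-model"
```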