Unverified Commit fddbab79 authored by Sayak Paul, committed by GitHub

[research_projects] Update README.md to include a note about NF4 T5-xxl (#9775)

Update README.md
parent 298ab6eb
@@ -6,6 +6,7 @@
This example shows how to fine-tune [Flux.1 Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) with LoRA and quantization. We demonstrate this on the [`Norod78/Yarn-art-style`](https://huggingface.co/datasets/Norod78/Yarn-art-style) dataset. The steps below summarize the workflow:
* We precompute the text embeddings in `compute_embeddings.py` and serialize them into a parquet file.
* Though optional, we load the T5-xxl text encoder in NF4 to further reduce the memory footprint (see the first sketch after this list).
* `train_dreambooth_lora_flux_miniature.py` takes care of training:
* Since we already precomputed the text embeddings, we don't load the text encoders.
  * We load the VAE, use it to precompute the image latents, and then delete it (see the second sketch after this list).
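
A minimal sketch of what loading the T5-xxl text encoder in NF4 can look like with `transformers` and `bitsandbytes`. This is an illustrative snippet, not the repository's exact code; the checkpoint path and dtype choices are assumptions:

```python
# Sketch: load the Flux T5-xxl text encoder quantized to NF4 via bitsandbytes.
# Assumes `transformers` and `bitsandbytes` are installed; paths are illustrative.
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# FLUX.1-dev stores its T5-xxl encoder under the `text_encoder_2` subfolder.
text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
```

Since the embeddings are serialized once and reused for training, the quality impact of NF4 here is limited to a single precomputation pass.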
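
Similarly, a minimal sketch of precomputing the image latents with the VAE and freeing it afterwards. Names like `image_tensor` are placeholders, not identifiers from the training script:

```python
# Sketch: encode images to latents with the Flux VAE, then delete the VAE
# to reclaim memory. `image_tensor` is a placeholder for a batch of images
# normalized to [-1, 1] with shape (N, 3, H, W).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")

with torch.no_grad():
    latents = vae.encode(image_tensor.to("cuda", torch.bfloat16)).latent_dist.sample()
    # Apply the VAE's shift and scale so the latents match what Flux expects.
    latents = (latents - vae.config.shift_factor) * vae.config.scaling_factor

del vae
torch.cuda.empty_cache()
```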