"git@developer.sourcefind.cn:renzhc/diffusers_dcu.git" did not exist on "ea39cd7e644b1d7a5c8ca65a1ab893f1e75c544c"
Unverified commit 6aaa0518, authored by Aryan, committed by GitHub

Community hosted weights for diffusers format HunyuanVideo weights (#10344)

update docs and example to use community weights
parent 233dffdc
@@ -18,7 +18,7 @@ The model can be loaded with the following code snippet.
 ```python
 from diffusers import AutoencoderKLHunyuanVideo
-vae = AutoencoderKLHunyuanVideo.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.float16)
+vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16)
 ```
 ## AutoencoderKLHunyuanVideo
...
@@ -18,7 +18,7 @@ The model can be loaded with the following code snippet.
 ```python
 from diffusers import HunyuanVideoTransformer3DModel
-transformer = HunyuanVideoTransformer3DModel.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.bfloat16)
+transformer = HunyuanVideoTransformer3DModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16)
 ```
 ## HunyuanVideoTransformer3DModel
...
@@ -29,7 +29,7 @@ Recommendations for inference:
 - Transformer should be in `torch.bfloat16`.
 - VAE should be in `torch.float16`.
 - `num_frames` should be of the form `4 * k + 1`, for example `49` or `129`.
-- For smaller resolution images, try lower values of `shift` (between `2.0` to `5.0`) in the [Scheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler.shift). For larger resolution images, try higher values (between `7.0` and `12.0`). The default value is `7.0` for HunyuanVideo.
+- For smaller resolution videos, try lower values of `shift` (between `2.0` and `5.0`) in the [Scheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler.shift). For larger resolution videos, try higher values (between `7.0` and `12.0`). The default value is `7.0` for HunyuanVideo.
 - For more information about supported resolutions and other details, please refer to the original repository [here](https://github.com/Tencent/HunyuanVideo/).
 ## HunyuanVideoPipeline
...
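The `4 * k + 1` frame-count rule from the recommendations above can be sketched as a small validation helper. This is a minimal illustration; the function names `is_valid_num_frames` and `nearest_valid_num_frames` are hypothetical and not part of diffusers:

```python
def is_valid_num_frames(num_frames: int) -> bool:
    # HunyuanVideo expects frame counts of the form 4 * k + 1 (e.g. 49, 129).
    return num_frames >= 1 and (num_frames - 1) % 4 == 0


def nearest_valid_num_frames(requested: int) -> int:
    # Round the requested count to the nearest value of the form 4 * k + 1.
    k = max(0, round((requested - 1) / 4))
    return 4 * k + 1


print(is_valid_num_frames(49))          # True: 49 = 4 * 12 + 1
print(nearest_valid_num_frames(50))     # 49
```

A check like this makes the constraint explicit before launching a long video-generation run, rather than relying on the pipeline to reject an invalid `num_frames`.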
@@ -39,7 +39,7 @@ EXAMPLE_DOC_STRING = """
 >>> from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
 >>> from diffusers.utils import export_to_video
->>> model_id = "tencent/HunyuanVideo"
+>>> model_id = "hunyuanvideo-community/HunyuanVideo"
 >>> transformer = HunyuanVideoTransformer3DModel.from_pretrained(
 ... model_id, subfolder="transformer", torch_dtype=torch.bfloat16
 ... )
...
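Assembled into a standalone script, the updated docstring example might look like the following. This is a sketch assuming the `hunyuanvideo-community/HunyuanVideo` weights introduced by this commit; the heavy download-and-generate step is kept behind a flag so the structure can be read without running it, and the prompt, step count, and fps are illustrative values, not defaults from the library:

```python
# Frame count follows the 4 * k + 1 rule from the inference recommendations.
num_frames = 4 * 15 + 1  # 61 frames

RUN_PIPELINE = False  # set to True to actually download weights and generate video

if RUN_PIPELINE:
    import torch

    from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
    from diffusers.utils import export_to_video

    model_id = "hunyuanvideo-community/HunyuanVideo"
    # Transformer in bfloat16, per the inference recommendations.
    transformer = HunyuanVideoTransformer3DModel.from_pretrained(
        model_id, subfolder="transformer", torch_dtype=torch.bfloat16
    )
    pipe = HunyuanVideoPipeline.from_pretrained(
        model_id, transformer=transformer, torch_dtype=torch.float16
    )
    # VAE in float16, per the recommendations; offload to reduce peak VRAM.
    pipe.vae.to(torch.float16)
    pipe.enable_model_cpu_offload()

    output = pipe(
        prompt="A cat walks on the grass, realistic style.",
        num_frames=num_frames,
        num_inference_steps=30,
    ).frames[0]
    export_to_video(output, "output.mp4", fps=15)
```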