Unverified Commit f00a9957 authored by co63oc, committed by GitHub

Fix typos in strings and comments (#11407)

parent e8312e7c
@@ -24,7 +24,7 @@
## Generating Videos with Wan 2.1
-We will first need to install some addtional dependencies.
+We will first need to install some additional dependencies.
```shell
pip install -u ftfy imageio-ffmpeg imageio
...
@@ -216,7 +216,7 @@ Setting the `<ID_TOKEN>` is not necessary. From some limited experimentation, we
> - The original repository uses a `lora_alpha` of `1`. We found this not suitable in many runs, possibly due to difference in modeling backends and training settings. Our recommendation is to set to the `lora_alpha` to either `rank` or `rank // 2`.
> - If you're training on data whose captions generate bad results with the original model, a `rank` of 64 and above is good and also the recommendation by the team behind CogVideoX. If the generations are already moderately good on your training captions, a `rank` of 16/32 should work. We found that setting the rank too low, say `4`, is not ideal and doesn't produce promising results.
> - The authors of CogVideoX recommend 4000 training steps and 100 training videos overall to achieve the best result. While that might yield the best results, we found from our limited experimentation that 2000 steps and 25 videos could also be sufficient.
-> - When using the Prodigy opitimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
+> - When using the Prodigy optimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
> - The recommended learning rate by the CogVideoX authors and from our experimentation with Adam/AdamW is between `1e-3` and `1e-4` for a dataset of 25+ videos.
>
> Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.
...
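The `rank`/`lora_alpha` recommendations in the note above map onto a standard `peft` `LoraConfig`. The following is only an illustrative sketch, not the training script's code; the `target_modules` names are an assumption about which attention projections get adapted.

```py
# Minimal sketch (assumed, not taken from the training script): configure LoRA
# with lora_alpha == rank, as the note above recommends.
from peft import LoraConfig

rank = 64
lora_config = LoraConfig(
    r=rank,
    lora_alpha=rank,  # or rank // 2, per the recommendation above
    init_lora_weights=True,
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # assumed module names
)
```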
@@ -589,7 +589,7 @@ For stage 2 of DeepFloyd IF with DreamBooth, pay attention to these parameters:
* `--learning_rate=5e-6`, use a lower learning rate with a smaller effective batch size
* `--resolution=256`, the expected resolution for the upscaler
-* `--train_batch_size=2` and `--gradient_accumulation_steps=6`, to effectively train on images wiht faces requires larger batch sizes
+* `--train_batch_size=2` and `--gradient_accumulation_steps=6`, to effectively train on images with faces requires larger batch sizes
```bash
export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
...
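For context on the `--train_batch_size=2` / `--gradient_accumulation_steps=6` pairing above, the effective batch size is the product of the per-device batch size, the accumulation steps, and the process count. A small illustrative calculation, assuming a single-GPU run:

```py
# Illustrative arithmetic only: effective batch size for the stage-2 settings above.
train_batch_size = 2            # per-device batch size
gradient_accumulation_steps = 6
num_processes = 1               # assumed single-GPU run

effective_batch_size = train_batch_size * gradient_accumulation_steps * num_processes
print(effective_batch_size)     # 12
```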
@@ -89,7 +89,7 @@ Many of the basic and important parameters are described in the [Text-to-image](
As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script.
-The training script begins by preparing the dataset. This incudes [tokenizing](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L674) the prompt and [applying transforms](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L714) to the images and conditioning images.
+The training script begins by preparing the dataset. This includes [tokenizing](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L674) the prompt and [applying transforms](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L714) to the images and conditioning images.
```py
conditioning_image_transforms = transforms.Compose(
...
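The `conditioning_image_transforms` pipeline referenced above is a torchvision `Compose`. A minimal sketch of what such a preprocessing chain typically looks like; the exact steps and resolution in the training script may differ:

```py
# Sketch of a conditioning-image preprocessing chain (assumed, not copied from
# the training script): resize, center-crop, and convert to a tensor.
from torchvision import transforms

resolution = 1024  # assumed training resolution

conditioning_image_transforms = transforms.Compose(
    [
        transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(resolution),
        transforms.ToTensor(),
    ]
)
```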
@@ -2181,7 +2181,7 @@ def main(args):
# Predict the noise residual
model_pred = transformer(
hidden_states=packed_noisy_model_input,
-# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transforme rmodel (we should not keep it but I want to keep the inputs same for the model for testing)
+# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
timestep=timesteps / 1000,
guidance=guidance,
pooled_projections=pooled_prompt_embeds,
...
@@ -5381,7 +5381,7 @@ pipe = DiffusionPipeline.from_pretrained(
# Here we need use pipeline internal unet model
pipe.unet = pipe.unet_model.from_pretrained(model_id, subfolder="unet", variant="fp16", use_safetensors=True)
-# Load aditional layers to the model
+# Load additional layers to the model
pipe.unet.load_additional_layers(weight_path="proc_data/faithdiff/FaithDiff.bin", dtype=dtype)
# Enable vae tiling
...
@@ -312,9 +312,9 @@ if __name__ == "__main__":
# These are the coordinates of the output image
out_coordinates = np.arange(1, out_length + 1)
-# since both scale-factor and output size can be provided simulatneously, perserving the center of the image requires shifting
-# the output coordinates. the deviation is because out_length doesn't necesary equal in_length*scale.
-# to keep the center we need to subtract half of this deivation so that we get equal margins for boths sides and center is preserved.
+# since both scale-factor and output size can be provided simultaneously, preserving the center of the image requires shifting
+# the output coordinates. the deviation is because out_length doesn't necessary equal in_length*scale.
+# to keep the center we need to subtract half of this deviation so that we get equal margins for both sides and center is preserved.
shifted_out_coordinates = out_coordinates - (out_length - in_length * scale) / 2
# These are the matching positions of the output-coordinates on the input image coordinates.
...
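The center-preserving shift described in those comments is easiest to see with concrete numbers: when `out_length` and `in_length * scale` disagree, half of the mismatch is removed from every output coordinate so the margins stay equal on both sides. A small standalone illustration with assumed values:

```py
# Numeric illustration of the center-preserving shift described above.
import numpy as np

in_length, scale = 10, 0.33
out_length = 4                      # chosen output size; 10 * 0.33 = 3.3, so the two disagree
out_coordinates = np.arange(1, out_length + 1)

# Subtract half of the deviation (4 - 3.3 = 0.7, so 0.35) from every output
# coordinate so the margins on both sides stay equal and the center is preserved.
shifted_out_coordinates = out_coordinates - (out_length - in_length * scale) / 2
print(shifted_out_coordinates)      # [0.65 1.65 2.65 3.65]
```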
@@ -351,7 +351,7 @@ def my_forward(
cross_attention_kwargs (`dict`, *optional*):
A kwargs dictionary that if specified is passed along to the [`AttnProcessor`].
added_cond_kwargs: (`dict`, *optional*):
-A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that
+A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
are passed along to the UNet blocks.
Returns:
@@ -864,9 +864,9 @@ def get_flow_and_interframe_paras(flow_model, imgs):
class AttentionControl:
"""
Control FRESCO-based attention
-* enable/diable spatial-guided attention
-* enable/diable temporal-guided attention
-* enable/diable cross-frame attention
+* enable/disable spatial-guided attention
+* enable/disable temporal-guided attention
+* enable/disable cross-frame attention
* collect intermediate attention feature (for spatial-guided attention)
"""
...
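The `AttentionControl` docstring above describes a controller whose job is mostly toggling attention variants on and off and stashing intermediate features. A bare-bones sketch of that shape, with names and structure assumed for illustration rather than taken from the FRESCO code:

```py
# Minimal sketch of an attention controller of the kind described above.
# Names and structure are assumptions, not the FRESCO implementation.
class SimpleAttentionControl:
    def __init__(self):
        self.use_spatial_guidance = False
        self.use_temporal_guidance = False
        self.use_cross_frame_attention = False
        self.stored_features = []  # intermediate features for spatial guidance

    def enable_spatial_guidance(self, enabled: bool = True):
        self.use_spatial_guidance = enabled

    def enable_temporal_guidance(self, enabled: bool = True):
        self.use_temporal_guidance = enabled

    def enable_cross_frame_attention(self, enabled: bool = True):
        self.use_cross_frame_attention = enabled

    def collect(self, feature):
        # Called from an attention processor to stash features for later reuse.
        self.stored_features.append(feature)
```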
@@ -34,7 +34,7 @@ class RASGAttnProcessor:
temb: Optional[torch.Tensor] = None,
scale: float = 1.0,
) -> torch.Tensor:
-# Same as the default AttnProcessor up untill the part where similarity matrix gets saved
+# Same as the default AttnProcessor up until the part where similarity matrix gets saved
downscale_factor = self.mask_resoltuion // hidden_states.shape[1]
residual = hidden_states
...
@@ -889,7 +889,7 @@ def main(args):
mixed_precision=args.mixed_precision,
log_with=args.report_to,
project_config=accelerator_project_config,
-split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be devide by the number of processes assuming batches are multiplied by the number of processes
+split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be divided by the number of processes assuming batches are multiplied by the number of processes
)
# Make one log on every process with the configuration for debugging.
...
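The `split_batches` comment above (repeated in the hunks that follow) boils down to step-count arithmetic: with `split_batches=True` each dataloader batch is split across processes, so the number of optimizer steps the lr scheduler sees does not depend on the process count; with `False` each process treats a loader batch as its own, so the per-epoch step count shrinks by the number of processes. An illustrative calculation with assumed numbers:

```py
# Illustrative arithmetic only (assumed numbers): how split_batches affects
# the number of steps the lr scheduler sees per epoch.
samples_per_epoch = 10_000
global_batch_size = 32          # batch size yielded by the webdataset loader
num_processes = 4

# split_batches=True: each loader batch is split across processes,
# so the per-epoch step count is independent of the process count.
steps_split_true = samples_per_epoch // global_batch_size                      # 312

# split_batches=False: each process consumes a full loader batch, so the
# effective global batch grows and the step count is divided by num_processes.
steps_split_false = samples_per_epoch // (global_batch_size * num_processes)   # 78
print(steps_split_true, steps_split_false)
```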
@@ -721,7 +721,7 @@ def main(args):
mixed_precision=args.mixed_precision,
log_with=args.report_to,
project_config=accelerator_project_config,
-split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be devide by the number of processes assuming batches are multiplied by the number of processes
+split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be divided by the number of processes assuming batches are multiplied by the number of processes
)
# Make one log on every process with the configuration for debugging.
...
@@ -884,7 +884,7 @@ def main(args):
mixed_precision=args.mixed_precision,
log_with=args.report_to,
project_config=accelerator_project_config,
-split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be devide by the number of processes assuming batches are multiplied by the number of processes
+split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be divided by the number of processes assuming batches are multiplied by the number of processes
)
# Make one log on every process with the configuration for debugging.
...
@@ -854,7 +854,7 @@ def main(args):
mixed_precision=args.mixed_precision,
log_with=args.report_to,
project_config=accelerator_project_config,
-split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be devide by the number of processes assuming batches are multiplied by the number of processes
+split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be divided by the number of processes assuming batches are multiplied by the number of processes
)
# Make one log on every process with the configuration for debugging.
...
@@ -894,7 +894,7 @@ def main(args):
mixed_precision=args.mixed_precision,
log_with=args.report_to,
project_config=accelerator_project_config,
-split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be devide by the number of processes assuming batches are multiplied by the number of processes
+split_batches=True, # It's important to set this to True when using webdataset to get the right number of steps for lr scheduling. If set to False, the number of steps will be divided by the number of processes assuming batches are multiplied by the number of processes
)
# Make one log on every process with the configuration for debugging.
...
@@ -1634,7 +1634,7 @@ def main(args):
# Predict the noise residual
model_pred = transformer(
hidden_states=packed_noisy_model_input,
-# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transforme rmodel (we should not keep it but I want to keep the inputs same for the model for testing)
+# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
timestep=timesteps / 1000,
guidance=guidance,
pooled_projections=pooled_prompt_embeds,
...
@@ -1749,7 +1749,7 @@ def main(args):
# Predict the noise residual
model_pred = transformer(
hidden_states=packed_noisy_model_input,
-# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transforme rmodel (we should not keep it but I want to keep the inputs same for the model for testing)
+# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
timestep=timesteps / 1000,
guidance=guidance,
pooled_projections=pooled_prompt_embeds,
...
@@ -1088,7 +1088,7 @@ def main(args):
text_ids = batch["text_ids"].to(device=accelerator.device, dtype=weight_dtype)
model_pred = transformer(
hidden_states=packed_noisy_model_input,
-# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transforme rmodel (we should not keep it but I want to keep the inputs same for the model for testing)
+# YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
timestep=timesteps / 1000,
guidance=guidance,
pooled_projections=pooled_prompt_embeds,
...
@@ -286,7 +286,7 @@ class KDownsample2D(nn.Module):
class CogVideoXDownsample3D(nn.Module):
-# Todo: Wait for paper relase.
+# Todo: Wait for paper release.
r"""
A 3D Downsampling layer using in [CogVideoX]() by Tsinghua University & ZhipuAI
...
@@ -358,7 +358,7 @@ class KUpsample2D(nn.Module):
class CogVideoXUpsample3D(nn.Module):
r"""
-A 3D Upsample layer using in CogVideoX by Tsinghua University & ZhipuAI # Todo: Wait for paper relase.
+A 3D Upsample layer using in CogVideoX by Tsinghua University & ZhipuAI # Todo: Wait for paper release.
Args:
in_channels (`int`):
...
@@ -514,7 +514,7 @@ class AllegroPipeline(DiffusionPipeline):
# &amp
caption = re.sub(r"&amp", "", caption)
-# ip adresses:
+# ip addresses:
caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption)
# article ids:
...