Unverified Commit 76d492ea authored by Yuta Hayashibe, committed by GitHub

Fix typos and add Typo check GitHub Action (#483)

* Fix typos

* Add a typo check action

* Fix a bug

* Changed to manual typo check for now

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
parent c0493723
@@ -85,7 +85,7 @@ class LDMTextToImagePipeline(DiffusionPipeline):
deterministic.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*):
Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
@@ -50,7 +50,7 @@ class LDMPipeline(DiffusionPipeline):
expense of slower inference.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
@@ -63,7 +63,7 @@ class PNDMPipeline(DiffusionPipeline):
generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
deterministic.
output_type (`str`, `optional`, defaults to `"pil"`): The output format of the generate image. Choose
- between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, `optional`, defaults to `True`): Whether or not to return a
[`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
@@ -43,7 +43,7 @@ class ScoreSdeVePipeline(DiffusionPipeline):
deterministic.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
@@ -12,7 +12,7 @@ The summary of the model is the following:
- Stable Diffusion has the same architecture as [Latent Diffusion](https://arxiv.org/abs/2112.10752) but uses a frozen CLIP Text Encoder instead of training the text encoder jointly with the diffusion model.
- An in-detail explanation of the Stable Diffusion model can be found under [Stable Diffusion with 🧨 Diffusers](https://huggingface.co/blog/stable_diffusion).
- - If you don't want to rely on the Hugging Face Hub and having to pass a authentification token, you can
+ - If you don't want to rely on the Hugging Face Hub and having to pass a authentication token, you can
download the weights with `git lfs install; git clone https://huggingface.co/CompVis/stable-diffusion-v1-4` and instead pass the local path to the cloned folder to `from_pretrained` as shown below.
- Stable Diffusion can work with a variety of different samplers as is shown below.
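For reference, a minimal sketch of that local-path loading (the prompt, the `.to("cuda")` call, and the save step below are illustrative assumptions, not part of this commit):

```python
# Sketch: load Stable Diffusion from a local clone instead of the Hub.
# Assumes the weights were fetched as described above:
#   git lfs install
#   git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # optional, if a GPU is available

# Illustrative prompt; `.images` follows the StableDiffusionPipelineOutput
# referenced elsewhere in this diff.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```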
@@ -136,7 +136,7 @@ class StableDiffusionPipeline(DiffusionPipeline):
tensor will ge generated by sampling using the supplied random `generator`.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
plain tuple.
@@ -224,7 +224,7 @@ class StableDiffusionPipeline(DiffusionPipeline):
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
- # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
if isinstance(self.scheduler, LMSDiscreteScheduler):
    latents = latents * self.scheduler.sigmas[0]
@@ -146,7 +146,7 @@ class StableDiffusionImg2ImgPipeline(DiffusionPipeline):
deterministic.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
plain tuple.
@@ -249,7 +249,7 @@ class StableDiffusionImg2ImgPipeline(DiffusionPipeline):
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
if isinstance(self.scheduler, LMSDiscreteScheduler):
    sigma = self.scheduler.sigmas[t_index]
    # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
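The hunk above cuts off at the comment; as a hedged reconstruction (the continuation is not shown in this diff), the line that follows divides the input by sqrt(sigma^2 + 1):

```python
# Reconstruction, not part of this diff: the scaling step that follows
# the comment above, matching the continuous ODE formulation of K-LMS.
latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
```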
@@ -169,7 +169,7 @@ class StableDiffusionInpaintPipeline(DiffusionPipeline):
deterministic.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
plain tuple.
@@ -107,7 +107,7 @@ class StableDiffusionOnnxPipeline(DiffusionPipeline):
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
- # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
if isinstance(self.scheduler, LMSDiscreteScheduler):
    latents = latents * self.scheduler.sigmas[0]
@@ -55,7 +55,7 @@ class KarrasVePipeline(DiffusionPipeline):
expense of slower inference.
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
# Schedulers
- Schedulers are the algorithms to use diffusion models in inference as well as for training. They include the noise schedules and define algorithm-specific diffusion steps.
- - Schedulers can be used interchangable between diffusion models in inference to find the preferred trade-off between speed and generation quality.
+ - Schedulers can be used interchangeable between diffusion models in inference to find the preferred trade-off between speed and generation quality.
- Schedulers are available in numpy, but can easily be transformed into PyTorch.
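As a hedged illustration of the interchangeability mentioned above (the model id and the K-LMS parameters are the commonly documented defaults of that era, not taken from this commit):

```python
# Sketch: swap a pipeline's default scheduler for K-LMS at load time.
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

lms = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=lms)
```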
## API
@@ -34,7 +34,7 @@ class KarrasVeOutput(BaseOutput):
Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
derivative (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Derivate of predicted original image sample (x_0).
+ Derivative of predicted original image sample (x_0).
"""
prev_sample: torch.FloatTensor
@@ -14,7 +14,7 @@
# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch
- # TODO(Patrick, Anton, Suraj) - make scheduler framework indepedent and clean-up a bit
+ # TODO(Patrick, Anton, Suraj) - make scheduler framework independent and clean-up a bit
import numpy as np
import torch
@@ -145,7 +145,7 @@ class ModelTesterMixin:
new_model.to(torch_device)
new_model.eval()
- # check if all paramters shape are the same
+ # check if all parameters shape are the same
for param_name in model.state_dict().keys():
    param_1 = model.state_dict()[param_name]
    param_2 = new_model.state_dict()[param_name]
@@ -288,7 +288,7 @@ def check_submodules():
if len(module_not_registered) > 0:
    list_of_modules = "\n".join(f"- {module}" for module in module_not_registered)
    raise ValueError(
-       "The following submodules are not properly registed in the main init of Transformers:\n"
+       "The following submodules are not properly registered in the main init of Transformers:\n"
        f"{list_of_modules}\n"
        "Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value."
    )
@@ -53,7 +53,7 @@ def _find_text_in_file(filename, start_prompt, end_prompt):
return "".join(lines[start_index:end_index]), start_index, end_index, lines
- # Add here suffixes that are used to identify models, seperated by |
+ # Add here suffixes that are used to identify models, separated by |
ALLOWED_MODEL_SUFFIXES = "Model|Encoder|Decoder|ForConditionalGeneration"
# Regexes that match TF/Flax/PT model names.
_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
@@ -88,11 +88,11 @@ def _center_text(text, width):
def get_model_table_from_auto_modules():
    """Generates an up-to-date model table from the content of the auto modules."""
    # Dictionary model names to config.
-   config_maping_names = diffusers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES
+   config_mapping_names = diffusers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES
    model_name_to_config = {
-       name: config_maping_names[code]
+       name: config_mapping_names[code]
        for code, name in diffusers_module.MODEL_NAMES_MAPPING.items()
-       if code in config_maping_names
+       if code in config_mapping_names
    }
    model_name_to_prefix = {name: config.replace("ConfigMixin", "") for name, config in model_name_to_config.items()}
@@ -41,7 +41,7 @@ INTERNAL_OPS = [
]
- def onnx_compliancy(saved_model_path, strict, opset):
+ def onnx_compliance(saved_model_path, strict, opset):
    saved_model = SavedModel()
    onnx_ops = []
@@ -98,4 +98,4 @@ if __name__ == "__main__":
args = parser.parse_args()
if args.framework == "onnx":
-   onnx_compliancy(args.saved_model_path, args.strict, args.opset)
+   onnx_compliance(args.saved_model_path, args.strict, args.opset)
@@ -178,7 +178,7 @@ def sort_imports(file, check_only=True):
    code, start_prompt="_import_structure = {", end_prompt="if TYPE_CHECKING:"
)
- # We ignore block 0 (everything untils start_prompt) and the last block (everything after end_prompt).
+ # We ignore block 0 (everything until start_prompt) and the last block (everything after end_prompt).
for block_idx in range(1, len(main_blocks) - 1):
    # Check if the block contains some `_import_structure`s thingy to sort.
    block = main_blocks[block_idx]
@@ -202,7 +202,7 @@ def sort_imports(file, check_only=True):
internal_blocks = split_code_in_indented_blocks(internal_block_code, indent_level=indent)
# We have two categories of import key: list or _import_structu[key].append/extend
pattern = _re_direct_key if "_import_structure" in block_lines[0] else _re_indirect_key
- # Grab the keys, but there is a trap: some lines are empty or jsut comments.
+ # Grab the keys, but there is a trap: some lines are empty or just comments.
keys = [(pattern.search(b).groups()[0] if pattern.search(b) is not None else None) for b in internal_blocks]
# We only sort the lines with a key.
keys_to_sort = [(i, key) for i, key in enumerate(keys) if key is not None]