Unverified commit 76d492ea, authored by Yuta Hayashibe, committed by GitHub

Fix typos and add Typo check GitHub Action (#483)

* Fix typos

* Add a typo check action

* Fix a bug

* Changed to manual typo check currently

Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Removed a confusing message

* Renamed "nin_shortcut" to "in_shortcut"

* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
parent c0493723
name: Check typos

on:
  workflow_dispatch:
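  # Manual trigger only for now (see the PR review discussion linked in the commit message).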
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: typos-action
        uses: crate-ci/typos@v1.12.4
@@ -21,7 +21,7 @@ as a modular toolbox for inference and training of diffusion models.
 More precisely, 🤗 Diffusers offers:
 - State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers.
-- Various noise schedulers that can be used interchangeably for the prefered speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
+- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
 - Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
 - Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
@@ -297,7 +297,7 @@ with autocast("cuda"):
 image.save("ddpm_generated_image.png")
 ```
 - [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
-- [Unconditional Diffusion with continous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
+- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
 **Other Notebooks**:
 * [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
@@ -346,8 +346,8 @@ The class provides functionality to compute previous image according to alpha, b
 ## Philosophy
-- Readability and clarity is prefered over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
+- Readability and clarity is preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
-- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continous outputs**, *e.g.* vision and audio.
+- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
 - Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
 ## In the works
# Files for typos
# Instruction: https://github.com/marketplace/actions/typos-action#getting-started
[default.extend-identifiers]
[default.extend-words]
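# Each entry maps a token to its accepted spelling; mapping a word to itself whitelists it.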
NIN_="NIN" # NIN is used in scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
nd="np" # nd may be np (numpy)
[files]
extend-exclude = ["_typos.toml"]
@@ -44,7 +44,7 @@ To this end, the design of schedulers is such that:
 The core API for any new scheduler must follow a limited structure.
 - Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
 - Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
-- Schedulers should be framework-agonstic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
+- Schedulers should be framework-agnostic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
 with a `set_format(...)` method.
 The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
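For concreteness, a minimal sketch of that three-method surface. The class name, timestep spacing, and update rule are invented for illustration; this is not a real diffusers scheduler:

```python
import numpy as np
import torch


class ToyScheduler:
    """Toy scheduler exposing the step / set_timesteps / set_format surface."""

    def __init__(self, num_train_timesteps: int = 1000):
        self.num_train_timesteps = num_train_timesteps
        self.tensor_format = "np"
        self.timesteps = np.arange(num_train_timesteps)[::-1]

    def set_timesteps(self, num_inference_steps: int):
        # Configure the schedule parameters for a specific inference task.
        stride = self.num_train_timesteps // num_inference_steps
        self.timesteps = np.arange(0, self.num_train_timesteps, stride)[::-1]

    def step(self, model_output, timestep, sample):
        # Update the generated sample iteratively (placeholder update rule).
        return sample - 0.1 * model_output

    def set_format(self, tensor_format: str = "pt"):
        # Convert internal state to a specific framework, such as PyTorch.
        self.tensor_format = tensor_format
        if tensor_format == "pt":
            self.timesteps = torch.from_numpy(np.ascontiguousarray(self.timesteps))
        return self
```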
@@ -53,7 +53,7 @@ The base class [`SchedulerMixin`] implements low level utilities used by multipl
 [[autodoc]] SchedulerMixin
 ### SchedulerOutput
-The class [`SchedulerOutput`] contains the ouputs from any schedulers `step(...)` call.
+The class [`SchedulerOutput`] contains the outputs from any schedulers `step(...)` call.
 [[autodoc]] schedulers.scheduling_utils.SchedulerOutput
@@ -71,7 +71,7 @@ Original paper can be found [here](https://arxiv.org/abs/2010.02502).
 [[autodoc]] DDPMScheduler
-#### Varience exploding, stochastic sampling from Karras et. al
+#### Variance exploding, stochastic sampling from Karras et. al
 Original paper can be found [here](https://arxiv.org/abs/2006.11239).
@@ -86,11 +86,11 @@ just like we did before only that now you need to pass your `AUTH_TOKEN`:
 >>> generator = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=AUTH_TOKEN)
 ```
-If you do not pass your authentification token you will see that the diffusion system will not be correctly
+If you do not pass your authentication token you will see that the diffusion system will not be correctly
-downloaded. Forcing the user to pass an authentification token ensures that it can be verified that the
+downloaded. Forcing the user to pass an authentication token ensures that it can be verified that the
 user has indeed read and accepted the license, which also means that an internet connection is required.
-**Note**: If you do not want to be forced to pass an authentification token, you can also simply download
+**Note**: If you do not want to be forced to pass an authentication token, you can also simply download
 the weights locally via:
 ```
@@ -98,7 +98,7 @@ git lfs install
 git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
 ```
-and then load locally saved weights into the pipeline. This way, you do not need to pass an authentification
+and then load locally saved weights into the pipeline. This way, you do not need to pass an authentication
 token. Assuming that `"./stable-diffusion-v1-4"` is the local path to the cloned stable-diffusion-v1-4 repo,
 you can also load the pipeline as follows:
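A minimal sketch of loading from the local clone, assuming the clone above succeeded (`from_pretrained` accepts a local directory path):

```python
from diffusers import DiffusionPipeline

# Loading from a local path needs no authentication token.
generator = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
```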
@@ -49,7 +49,7 @@ The `textual_inversion.py` script [here](https://github.com/huggingface/diffuser
 ### Installing the dependencies
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 ```bash
 pip install diffusers[training] accelerate transformers
@@ -68,7 +68,7 @@ You need to accept the model license before downloading or using the weights. In
 You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
-Run the following command to autheticate your token
+Run the following command to authenticate your token
 ```bash
 huggingface-cli login
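In a notebook, the same authentication can be done programmatically; a sketch using `huggingface_hub`, which diffusers already depends on:

```python
from huggingface_hub import notebook_login

# Opens a widget for pasting the access token.
notebook_login()
```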
@@ -18,7 +18,7 @@ distribution.
 ## Installing the dependencies
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 ```bash
 pip install diffusers[training] accelerate datasets
@@ -117,7 +117,7 @@ from datasets import load_dataset
 # example 1: local folder
 dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
-# example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd)
+# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
 dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
 # example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
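Example 3 presumably mirrors example 2 with a remote URL; a sketch with a placeholder address:

```python
# example 3 (sketch): remote archive, placeholder URL
dataset = load_dataset("imagefolder", data_files="https://example.com/images.zip")
```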
@@ -14,7 +14,7 @@ Colab for inference
 ## Running locally
 ### Installing the dependencies
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 ```bash
 pip install diffusers[training] accelerate transformers
@@ -33,7 +33,7 @@ You need to accept the model license before downloading or using the weights. In
 You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
-Run the following command to autheticate your token
+Run the following command to authenticate your token
 ```bash
 huggingface-cli login
@@ -422,7 +422,7 @@ def main():
 eps=args.adam_epsilon,
 )
-# TODO (patil-suraj): laod scheduler using args
+# TODO (patil-suraj): load scheduler using args
 noise_scheduler = DDPMScheduler(
 beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000, tensor_format="pt"
 )
@@ -4,7 +4,7 @@ Creating a training image set is [described in a different document](https://hug
 ### Installing the dependencies
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 ```bash
 pip install diffusers[training] accelerate datasets
@@ -102,7 +102,7 @@ from datasets import load_dataset
 # example 1: local folder
 dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
-# example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd)
+# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
 dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
 # example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
@@ -22,7 +22,7 @@ def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
 new_item = old_item
 new_item = new_item.replace("block.", "resnets.")
 new_item = new_item.replace("conv_shorcut", "conv1")
-new_item = new_item.replace("nin_shortcut", "conv_shortcut")
+new_item = new_item.replace("in_shortcut", "conv_shortcut")
 new_item = new_item.replace("temb_proj", "time_emb_proj")
 new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
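For orientation, these chained replacements map the original checkpoint's parameter names onto diffusers module names ("nin" refers to the 1x1 network-in-network shortcut convolution). A quick run on a hypothetical key:

```python
# Hypothetical checkpoint key, renamed the same way as above.
key = "down.0.block.0.temb_proj.weight"
key = key.replace("block.", "resnets.").replace("temb_proj", "time_emb_proj")
print(key)  # down.0.resnets.0.time_emb_proj.weight
```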
@@ -124,4 +124,4 @@ for mod in models:
 assert torch.allclose(
 logits[0, 0, 0, :30], results["_".join("_".join(mod.modelId.split("/")).split("-"))], atol=1e-3
 )
-print(f"{mod.modelId} has passed succesfully!!!")
+print(f"{mod.modelId} has passed successfully!!!")
@@ -45,9 +45,9 @@ class ConfigMixin:
 Class attributes:
 - **config_name** (`str`) -- A filename under which the config should stored when calling
-[`~ConfigMixin.save_config`] (should be overriden by parent class).
+[`~ConfigMixin.save_config`] (should be overridden by parent class).
 - **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be
-overriden by parent class).
+overridden by parent class).
 """
 config_name = None
 ignore_for_config = []
@@ -125,7 +125,7 @@ class ConfigMixin:
 A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
 output_loading_info(`bool`, *optional*, defaults to `False`):
-Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages.
+Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
 local_files_only(`bool`, *optional*, defaults to `False`):
 Whether or not to only look at local files (i.e., do not try to download the model).
 use_auth_token (`str` or *bool*, *optional*):
@@ -218,7 +218,7 @@ class ModelMixin(torch.nn.Module):
 A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
 output_loading_info(`bool`, *optional*, defaults to `False`):
-Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages.
+Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
 local_files_only(`bool`, *optional*, defaults to `False`):
 Whether or not to only look at local files (i.e., do not try to download the model).
 use_auth_token (`str` or *bool*, *optional*):
@@ -264,7 +264,7 @@ class ResnetBlock2D(nn.Module):
 time_embedding_norm="default",
 kernel=None,
 output_scale_factor=1.0,
-use_nin_shortcut=None,
+use_in_shortcut=None,
 up=False,
 down=False,
 ):
@@ -321,10 +321,10 @@ class ResnetBlock2D(nn.Module):
 else:
 self.downsample = Downsample2D(in_channels, use_conv=False, padding=1, name="op")
-self.use_nin_shortcut = self.in_channels != self.out_channels if use_nin_shortcut is None else use_nin_shortcut
+self.use_in_shortcut = self.in_channels != self.out_channels if use_in_shortcut is None else use_in_shortcut
 self.conv_shortcut = None
-if self.use_nin_shortcut:
+if self.use_in_shortcut:
 self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
 def forward(self, x, temb):
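Context for the rename: the shortcut here is a 1x1 convolution, the "network in network" (NIN) pattern, applied when input and output channel counts differ so the residual addition is shape-compatible. A toy illustration (shapes invented):

```python
import torch

x = torch.randn(1, 64, 32, 32)   # block input: 64 channels
h = torch.randn(1, 128, 32, 32)  # main-branch output: 128 channels

# The 1x1 "NIN" shortcut projects the input channels to match.
shortcut = torch.nn.Conv2d(64, 128, kernel_size=1, stride=1, padding=0)
out = h + shortcut(x)            # shapes match: (1, 128, 32, 32)
```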
@@ -820,7 +820,7 @@ class AttnSkipDownBlock2D(nn.Module):
 non_linearity=resnet_act_fn,
 output_scale_factor=output_scale_factor,
 pre_norm=resnet_pre_norm,
-use_nin_shortcut=True,
+use_in_shortcut=True,
 down=True,
 kernel="fir",
 )
@@ -900,7 +900,7 @@ class SkipDownBlock2D(nn.Module):
 non_linearity=resnet_act_fn,
 output_scale_factor=output_scale_factor,
 pre_norm=resnet_pre_norm,
-use_nin_shortcut=True,
+use_in_shortcut=True,
 down=True,
 kernel="fir",
 )
@@ -1355,7 +1355,7 @@ class AttnSkipUpBlock2D(nn.Module):
 non_linearity=resnet_act_fn,
 output_scale_factor=output_scale_factor,
 pre_norm=resnet_pre_norm,
-use_nin_shortcut=True,
+use_in_shortcut=True,
 up=True,
 kernel="fir",
 )
@@ -1452,7 +1452,7 @@ class SkipUpBlock2D(nn.Module):
 non_linearity=resnet_act_fn,
 output_scale_factor=output_scale_factor,
 pre_norm=resnet_pre_norm,
-use_nin_shortcut=True,
+use_in_shortcut=True,
 up=True,
 kernel="fir",
 )
@@ -86,7 +86,7 @@ class DiffusionPipeline(ConfigMixin):
 Class attributes:
 - **config_name** ([`str`]) -- name of the config file that will store the class and module names of all
-compenents of the diffusion pipeline.
+components of the diffusion pipeline.
 """
 config_name = "model_index.json"
@@ -95,7 +95,7 @@ class DiffusionPipeline(ConfigMixin):
 from diffusers import pipelines
 for name, module in kwargs.items():
-# retrive library
+# retrieve library
 library = module.__module__.split(".")[0]
 # check if the module is a pipeline module
@@ -109,7 +109,7 @@ class DiffusionPipeline(ConfigMixin):
 if library not in LOADABLE_CLASSES or is_pipeline_module:
 library = pipeline_dir
-# retrive class_name
+# retrieve class_name
 class_name = module.__class__.__name__
 register_dict = {name: (library, class_name)}
@@ -217,7 +217,7 @@ class DiffusionPipeline(ConfigMixin):
 A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
 output_loading_info(`bool`, *optional*, defaults to `False`):
-Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages.
+Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
 local_files_only(`bool`, *optional*, defaults to `False`):
 Whether or not to only look at local files (i.e., do not try to download the model).
 use_auth_token (`str` or *bool*, *optional*):
@@ -234,7 +234,7 @@ class DiffusionPipeline(ConfigMixin):
 kwargs (remaining dictionary of keyword arguments, *optional*):
 Can be used to overwrite load - and saveable variables - *i.e.* the pipeline components - of the
-speficic pipeline class. The overritten components are then directly passed to the pipelines `__init__`
+specific pipeline class. The overritten components are then directly passed to the pipelines `__init__`
 method. See example below for more information.
 <Tip>
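A sketch of the component-overwrite behavior described above (assumes access to the gated CompVis/stable-diffusion-v1-4 weights; the DDIMScheduler parameters are illustrative):

```python
from diffusers import DDIMScheduler, DiffusionPipeline

# A component passed as a kwarg replaces the one named in model_index.json
# and is forwarded to the pipeline's __init__.
scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler)
```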
@@ -70,7 +70,7 @@ not be used for training. If you want to store the gradients during the forward
 ## Contribution
-We are more than happy about any contribution to the offically supported pipelines 🤗. We aspire
+We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire
 all of our pipelines to be **self-contained**, **easy-to-tweak**, **beginner-friendly** and for **one-purpose-only**.
 - **Self-contained**: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file iteslf, should be inherited from (and only from) the [`DiffusionPipeline` class](https://github.com/huggingface/diffusers/blob/5cbed8e0d157f65d3ddc2420dfd09f2df630e978/src/diffusers/pipeline_utils.py#L56) or be directly attached to the model and scheduler components of the pipeline.
@@ -64,7 +64,7 @@ class DDIMPipeline(DiffusionPipeline):
 expense of slower inference.
 output_type (`str`, *optional*, defaults to `"pil"`):
 The output format of the generate image. Choose between
-[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
 return_dict (`bool`, *optional*, defaults to `True`):
 Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
@@ -57,7 +57,7 @@ class DDPMPipeline(DiffusionPipeline):
 deterministic.
 output_type (`str`, *optional*, defaults to `"pil"`):
 The output format of the generate image. Choose between
-[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
+[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
 return_dict (`bool`, *optional*, defaults to `True`):
 Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.