renzhc / diffusers_dcu · Commits · f0725c58

Unverified commit f0725c58, authored Aug 07, 2023 by George He, committed by GitHub on Aug 07, 2023.

Fix misc typos (#4479)

Fix typos

Parent: aef11cbf
Showing 7 changed files with 9 additions and 9 deletions (+9, -9).
src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py (+1, -1)
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py (+2, -2)
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py (+2, -2)
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py (+1, -1)
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py (+1, -1)
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py (+1, -1)
src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py (+1, -1)
src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py

@@ -797,7 +797,7 @@ class StableDiffusionControlNetImg2ImgPipeline(
                 instead.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,:
                     `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
-                The initial image will be used as the starting point for the image generation process. Can also accpet
+                The initial image will be used as the starting point for the image generation process. Can also accept
                 image latents as `image`, if passing latents directly, it will not be encoded again.
             control_image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,:
                     `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
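For reference, a minimal usage sketch of the pipeline whose docstring is touched above. It is not part of the commit; the checkpoints (runwayml/stable-diffusion-v1-5, lllyasviel/sd-controlnet-canny) and the local file names are assumptions, used only to illustrate how `image` (a PIL image, tensor, or pre-computed latents) and `control_image` are passed.

import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# Example checkpoints and file names below are assumptions, not part of this commit.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("init.png").convert("RGB")    # starting image (hypothetical local file)
canny_image = Image.open("canny.png").convert("RGB")  # ControlNet conditioning image (hypothetical local file)

# `image` is encoded to latents internally; passing latents directly skips that encode step.
result = pipe(
    "a photorealistic rendering of a living room",
    image=init_image,
    control_image=canny_image,
    strength=0.8,
).images[0]
result.save("out.png")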
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py

@@ -474,7 +474,7 @@ class KandinskyImg2ImgCombinedPipeline(DiffusionPipeline):
                 The prompt or prompts to guide the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             negative_prompt (`str` or `List[str]`, *optional*):
                 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored

@@ -720,7 +720,7 @@ class KandinskyInpaintCombinedPipeline(DiffusionPipeline):
                 The prompt or prompts to guide the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             mask_image (`np.array`):
                 Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
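The combined Kandinsky pipelines wrap the prior and the decoder in a single object, so one call takes both the prompt and the starting image. A minimal sketch, assuming the kandinsky-community/kandinsky-2-1 checkpoint loaded through AutoPipelineForImage2Image (which should resolve to KandinskyImg2ImgCombinedPipeline for that checkpoint); the file name is hypothetical.

import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# Example checkpoint and file name are assumptions, not part of this commit.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("sketch.png").convert("RGB")  # hypothetical local file

# `image` may be a PIL image, a tensor, or pre-computed latents (latents are not re-encoded).
out = pipe(
    prompt="a fantasy landscape, highly detailed",
    negative_prompt="low quality",
    image=init_image,
    strength=0.75,
).images[0]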
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py

@@ -455,7 +455,7 @@ class KandinskyV22Img2ImgCombinedPipeline(DiffusionPipeline):
                 The prompt or prompts to guide the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             negative_prompt (`str` or `List[str]`, *optional*):
                 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored

@@ -692,7 +692,7 @@ class KandinskyV22InpaintCombinedPipeline(DiffusionPipeline):
                 The prompt or prompts to guide the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             mask_image (`np.array`):
                 Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
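A sketch of the inpaint variant touched by the second hunk, assuming the kandinsky-community/kandinsky-2-2-decoder-inpaint checkpoint loaded through AutoPipelineForInpainting (which should resolve to KandinskyV22InpaintCombinedPipeline); the starting image is a hypothetical local file. As the docstring context above notes, white pixels in `mask_image` are the ones repainted.

import numpy as np
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

# Example checkpoint and file name are assumptions, not part of this commit.
pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("room.png").convert("RGB").resize((768, 768))  # hypothetical local file

# White (1.0) pixels in the mask are repainted; black (0.0) pixels are preserved.
mask = np.zeros((768, 768), dtype=np.float32)
mask[200:500, 200:500] = 1.0

out = pipe(prompt="a red armchair", image=init_image, mask_image=mask).images[0]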
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py

@@ -258,7 +258,7 @@ class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline):
                 The clip image embeddings for text prompt, that will be used to condition the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             strength (`float`, *optional*, defaults to 0.8):
                 Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
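Unlike the combined pipelines, this decoder-level pipeline expects the CLIP image embeddings mentioned in the docstring to be produced separately by a prior. A rough sketch, assuming the kandinsky-community/kandinsky-2-2-prior and kandinsky-community/kandinsky-2-2-controlnet-depth checkpoints; the local file name and the all-zeros depth `hint` are placeholders for illustration only.

import torch
from PIL import Image
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetImg2ImgPipeline

# Example checkpoints, file name, and placeholder depth hint are assumptions, not part of this commit.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

# The prior turns the text prompt into the CLIP image embeddings used to condition generation.
image_embeds, negative_image_embeds = prior(
    "a robot walking through a forest", negative_prompt="low quality"
).to_tuple()

init_image = Image.open("photo.png").convert("RGB")  # hypothetical local file
depth_hint = torch.zeros(1, 3, 768, 768, dtype=torch.float16, device="cuda")  # placeholder depth map

out = pipe(
    image=init_image,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    hint=depth_hint,
    strength=0.5,  # how much to transform the reference image (0..1)
    height=768,
    width=768,
).images[0]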
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py

@@ -230,7 +230,7 @@ class KandinskyV22Img2ImgPipeline(DiffusionPipeline):
                 The clip image embeddings for text prompt, that will be used to condition the image generation.
             image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
                 `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded
+                process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded
                 again.
             strength (`float`, *optional*, defaults to 0.8):
                 Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
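The plain decoder img2img variant follows the same prior-then-decoder flow, just without a ControlNet hint. A minimal sketch, assuming the kandinsky-community/kandinsky-2-2-prior and kandinsky-community/kandinsky-2-2-decoder checkpoints and a hypothetical local file.

import torch
from PIL import Image
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline

# Example checkpoints and file name are assumptions, not part of this commit.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

image_embeds, negative_image_embeds = prior("a watercolor painting of mountains").to_tuple()
init_image = Image.open("mountains.png").convert("RGB")  # hypothetical local file

# `image` can also be pre-computed latents, in which case it is not encoded again.
out = pipe(
    image=init_image,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    strength=0.3,
    height=768,
    width=768,
).images[0]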
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py

@@ -1095,7 +1095,7 @@ class StableDiffusionPix2PixZeroPipeline(DiffusionPipeline):
                 The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
                 instead.
             image (`torch.FloatTensor` `np.ndarray`, `PIL.Image.Image`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
-                `Image`, or tensor representing an image batch which will be used for conditioning. Can also accpet
+                `Image`, or tensor representing an image batch which will be used for conditioning. Can also accept
                 image latents as `image`, if passing latents directly, it will not be encoded again.
             num_inference_steps (`int`, *optional*, defaults to 50):
                 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
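This hunk sits in the inversion path of the pix2pix-zero pipeline, where `image` is the real picture being conditioned on. A rough, heavily abbreviated sketch of that step, assuming the CompVis/stable-diffusion-v1-4 and Salesforce/blip-image-captioning-base checkpoints and a hypothetical local file; the exact setup (captioner, DDIM inverse scheduler) follows the pipeline's documented usage and is only approximated here.

import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline

# Example checkpoints and file name are assumptions, not part of this commit.
captioner_id = "Salesforce/blip-image-captioning-base"
caption_processor = BlipProcessor.from_pretrained(captioner_id)
caption_generator = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16)

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    caption_generator=caption_generator,
    caption_processor=caption_processor,
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

raw_image = Image.open("cat.png").convert("RGB").resize((512, 512))  # hypothetical local file

# `invert` conditions on `image` (or on pre-computed image latents) and returns inverted latents
# that can later be fed back into the editing call.
inv_latents = pipe.invert("a photo of a cat", image=raw_image, num_inference_steps=50).latents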
src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py

@@ -561,7 +561,7 @@ class VideoToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin, Lor
                 The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
             video (`List[np.ndarray]` or `torch.FloatTensor`):
                 `video` frames or tensor representing a video batch to be used as the starting point for the process.
-                Can also accpet video latents as `image`, if passing latents directly, it will not be encoded again.
+                Can also accept video latents as `image`, if passing latents directly, it will not be encoded again.
             strength (`float`, *optional*, defaults to 0.8):
                 Indicates extent to transform the reference `video`. Must be between 0 and 1. `video` is used as a
                 starting point, adding more noise to it the larger the `strength`. The number of denoising steps
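A rough sketch of the video-to-video flow whose docstring this hunk fixes: generate a low-resolution clip first, then pass its frames as `video` with a moderate `strength` so only part of the content is redrawn. The damo-vilab/text-to-video-ms-1.7b and cerspense/zeroscope_v2_XL checkpoints, the prompt, and the output path are assumptions for illustration, not part of this commit.

import torch
from PIL import Image
from diffusers import TextToVideoSDPipeline, VideoToVideoSDPipeline
from diffusers.utils import export_to_video

# Example checkpoints are assumptions, not part of this commit.
t2v = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")
low_res_frames = t2v("a panda surfing a wave", num_inference_steps=25).frames  # first, low-resolution pass

v2v = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16
).to("cuda")

# `video` is used as the starting point; `strength` controls how much of it is redrawn.
video = [Image.fromarray(frame).resize((1024, 576)) for frame in low_res_frames]
refined_frames = v2v("a panda surfing a wave", video=video, strength=0.6).frames
export_to_video(refined_frames, "panda.mp4")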