OpenDAS / diffusers

Commit 3028089e (unverified)
Authored Mar 21, 2024 by M. Tolga Cangöz; committed by GitHub Mar 20, 2024
Fix typos (#7411)

* Fix typos
* Fix typo in SVD.md
Parent: b536f398
Changes: 26

Showing 20 changed files with 26 additions and 26 deletions
docs/source/en/using-diffusers/svd.md  +2 −2
examples/community/unclip_text_interpolation.py  +1 −1
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py  +3 −3
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py  +1 −1
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py  +3 −3
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py  +1 −1
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py  +1 −1
src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py  +1 −1
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py  +2 −2
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py  +1 −1
src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py  +1 −1
src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py  +1 −1
src/diffusers/schedulers/scheduling_consistency_models.py  +1 −1
src/diffusers/schedulers/scheduling_deis_multistep.py  +1 −1
src/diffusers/schedulers/scheduling_dpmsolver_multistep.py  +1 −1
src/diffusers/schedulers/scheduling_dpmsolver_sde.py  +1 −1
src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py  +1 −1
src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py  +1 −1
src/diffusers/schedulers/scheduling_edm_euler.py  +1 −1
src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py  +1 −1
docs/source/en/using-diffusers/svd.md
@@ -86,7 +86,7 @@ Video generation is very memory intensive because you're essentially generating
 + frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
 ```
-Using all these tricks togethere should lower the memory requirement to less than 8GB VRAM.
+Using all these tricks together should lower the memory requirement to less than 8GB VRAM.
 ## Micro-conditioning
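For readers landing on this hunk: the surrounding svd.md section describes chunked VAE decoding as one of several memory-saving tricks for video generation. A minimal sketch of the full reduced-memory call, assuming the stabilityai/stable-video-diffusion-img2vid-xt checkpoint (the input image URL is purely illustrative):

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
# Offload idle submodules to the CPU instead of keeping the whole pipeline on the GPU.
pipe.enable_model_cpu_offload()

image = load_image("https://example.com/input-frame.png")  # illustrative conditioning image
generator = torch.manual_seed(42)

# decode_chunk_size=2 decodes two frames at a time instead of all 25 at once,
# which is the main VRAM saving the doc section above is describing.
frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```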
examples/community/unclip_text_interpolation.py
@@ -48,7 +48,7 @@ class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         text_proj ([`UnCLIPTextProjModel`]):
             Utility class to prepare and combine the embeddings before they are passed to the decoder.
         decoder ([`UNet2DConditionModel`]):
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py
@@ -129,7 +129,7 @@ class KandinskyCombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -346,7 +346,7 @@ class KandinskyImg2ImgCombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -586,7 +586,7 @@ class KandinskyInpaintCombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
@@ -134,7 +134,7 @@ class KandinskyPriorPipeline(DiffusionPipeline):
     Args:
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         text_encoder ([`CLIPTextModelWithProjection`]):
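Since the same docstring line recurs across these prior pipelines, here is a hedged sketch of what the corrected sentence describes in practice: the prior pipeline approximates CLIP image embeddings from a text prompt, and a decoder pipeline turns those embeddings into an image. The kandinsky-community checkpoint names are the public ones; treat the rest as illustrative.

```python
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

# The PriorTransformer documented above lives inside this prior pipeline.
pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a portrait of a red fox, watercolor"
# The prior approximates CLIP *image* embeddings from the text embedding,
# which is exactly what the corrected docstring line says.
image_embeds, negative_image_embeds = pipe_prior(prompt).to_tuple()

image = pipe(
    prompt=prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
```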
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py
@@ -119,7 +119,7 @@ class KandinskyV22CombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -346,7 +346,7 @@ class KandinskyV22Img2ImgCombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -594,7 +594,7 @@ class KandinskyV22InpaintCombinedPipeline(DiffusionPipeline):
         movq ([`VQModel`]):
             MoVQ Decoder to generate the image from the latents.
         prior_prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py
@@ -90,7 +90,7 @@ class KandinskyV22PriorPipeline(DiffusionPipeline):
     Args:
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         text_encoder ([`CLIPTextModelWithProjection`]):
src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py
@@ -108,7 +108,7 @@ class KandinskyV22PriorEmb2EmbPipeline(DiffusionPipeline):
     Args:
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         image_encoder ([`CLIPVisionModelWithProjection`]):
             Frozen image-encoder.
         text_encoder ([`CLIPTextModelWithProjection`]):
src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
@@ -86,7 +86,7 @@ class ShapEImg2ImgPipeline(DiffusionPipeline):
     Args:
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         image_encoder ([`~transformers.CLIPVisionModel`]):
             Frozen image-encoder.
         image_processor ([`~transformers.CLIPImageProcessor`]):
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
@@ -700,8 +700,8 @@ class StableDiffusionDepth2ImgPipeline(DiffusionPipeline, TextualInversionLoader
         >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
         >>> init_image = Image.open(requests.get(url, stream=True).raw)
         >>> prompt = "two tigers"
-        >>> n_propmt = "bad, deformed, ugly, bad anotomy"
-        >>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0]
+        >>> n_prompt = "bad, deformed, ugly, bad anotomy"
+        >>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
         ```
         Returns:
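For anyone who wants to run the corrected snippet, the setup that precedes it in the same docstring looks roughly like this (checkpoint name as published by Stability AI; treat the whole block as a sketch, and note the "anotomy" spelling is kept verbatim from the docstring):

```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anotomy"  # negative prompt, as in the docstring
# strength=0.7 leaves ~70% of the denoising to the new prompt while the depth
# map extracted from init_image constrains the layout.
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```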
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
@@ -194,7 +194,7 @@ class StableDiffusionInstructPix2PixPipeline(
                 A higher guidance scale value encourages the model to generate images closely linked to the text
                 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
             image_guidance_scale (`float`, *optional*, defaults to 1.5):
-                Push the generated image towards the inital `image`. Image guidance scale is enabled by setting
+                Push the generated image towards the initial `image`. Image guidance scale is enabled by setting
                 `image_guidance_scale > 1`. Higher image guidance scale encourages generated images that are closely
                 linked to the source `image`, usually at the expense of lower image quality. This pipeline requires a
                 value of at least `1`.
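The two scales in this docstring pull in opposite directions, which a short usage sketch makes concrete (timbrooks/instruct-pix2pix is the reference checkpoint for this pipeline; the image URL is illustrative):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://example.com/photo.png")  # illustrative input image

# guidance_scale pulls toward the edit instruction; image_guidance_scale
# (enabled when > 1, per the docstring) pulls back toward the initial image.
edited = pipe(
    "turn the sky into a sunset",
    image=image,
    guidance_scale=7.5,
    image_guidance_scale=1.5,  # the documented default
).images[0]
```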
src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
@@ -76,7 +76,7 @@ class StableUnCLIPPipeline(DiffusionPipeline, StableDiffusionMixin, TextualInver
         prior_text_encoder ([`CLIPTextModelWithProjection`]):
             Frozen [`CLIPTextModelWithProjection`] text-encoder.
         prior ([`PriorTransformer`]):
-            The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+            The canonical unCLIP prior to approximate the image embedding from the text embedding.
         prior_scheduler ([`KarrasDiffusionSchedulers`]):
             Scheduler used in the prior denoising process.
         image_normalizer ([`StableUnCLIPImageNormalizer`]):
src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py
@@ -659,7 +659,7 @@ class StableDiffusionXLInstructPix2PixPipeline(
                 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
                 usually at the expense of lower image quality.
             image_guidance_scale (`float`, *optional*, defaults to 1.5):
-                Image guidance scale is to push the generated image towards the inital image `image`. Image guidance
+                Image guidance scale is to push the generated image towards the initial image `image`. Image guidance
                 scale is enabled by setting `image_guidance_scale > 1`. Higher image guidance scale encourages to
                 generate images that are closely linked to the source image `image`, usually at the expense of lower
                 image quality. This pipeline requires a value of at least `1`.
src/diffusers/schedulers/scheduling_consistency_models.py
@@ -438,7 +438,7 @@ class CMStochasticIterativeScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
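The same two-branch fix recurs in every scheduler below, so it is worth spelling out once. Paraphrasing the surrounding add_noise logic (a sketch, not the verbatim diffusers source): when denoising has already started, noise is re-added at the scheduler's current step (the inpainting case); before the first step, everything maps to begin_index to build the initial noisy latent for img2img.

```python
import torch

def resolve_step_indices(step_index, begin_index, timesteps: torch.Tensor) -> list:
    """Paraphrase of the branch these hunks annotate; not the verbatim scheduler code."""
    if step_index is not None:
        # add_noise is called after the first denoising step (inpainting):
        # every timestep maps to the scheduler's current position.
        return [step_index] * timesteps.shape[0]
    # add_noise is called before the first denoising step (img2img):
    # everything maps to begin_index, where denoising will start.
    return [begin_index] * timesteps.shape[0]

# Example: an img2img call adds noise before any denoising step has run.
timesteps = torch.tensor([981, 981])
print(resolve_step_indices(step_index=None, begin_index=0, timesteps=timesteps))  # [0, 0]
```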
src/diffusers/schedulers/scheduling_deis_multistep.py
@@ -775,7 +775,7 @@ class DEISMultistepScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
@@ -1018,7 +1018,7 @@ class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_dpmsolver_sde.py
@@ -547,7 +547,7 @@ class DPMSolverSDEScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py
@@ -968,7 +968,7 @@ class DPMSolverSinglestepScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py
@@ -673,7 +673,7 @@ class EDMDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_edm_euler.py
@@ -371,7 +371,7 @@ class EDMEulerScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()
src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
@@ -471,7 +471,7 @@ class EulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
             # add_noise is called after first denoising step (for inpainting)
             step_indices = [self.step_index] * timesteps.shape[0]
         else:
-            # add noise is called bevore first denoising step to create inital latent(img2img)
+            # add noise is called before first denoising step to create initial latent(img2img)
             step_indices = [self.begin_index] * timesteps.shape[0]
         sigma = sigmas[step_indices].flatten()