Unverified Commit cc5b31ff authored by Steven Liu, committed by GitHub

[docs] Migrate syntax (#12390)

* change syntax

* make style
parent d7a1a036
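The migration recorded in the hunks below can be sketched as a small regex pass that rewrites an HTML-style `<Tip>`/`<Tip warning={true}>` block into a GitHub-style `> [!TIP]`/`> [!WARNING]` callout. This is purely illustrative; the actual commit may have used different tooling or manual edits.

```python
import re

# Hypothetical sketch of the syntax migration this commit performs.
TIP_RE = re.compile(
    r"<Tip(?P<warn>\s+warning=\{true\})?>\s*(?P<body>.*?)\s*</Tip>",
    re.DOTALL,
)

def migrate_tips(text: str) -> str:
    def repl(m: re.Match) -> str:
        # Choose the callout marker based on the warning attribute.
        marker = "[!WARNING]" if m.group("warn") else "[!TIP]"
        quoted = "\n".join(
            "> " + line.strip()
            for line in m.group("body").splitlines()
            if line.strip()
        )
        return "> " + marker + "\n" + quoted

    return TIP_RE.sub(repl, text)
```

The real migration also has to re-wrap long lines to the repo's line-length limit, which this sketch does not attempt.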
@@ -276,12 +276,8 @@ class FlaxDiffusionPipeline(ConfigMixin, PushToHubMixin):
     Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline
     class. The overwritten components are passed directly to the pipelines `__init__` method.

-    <Tip>
-
-    To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
-    auth login`.
-
-    </Tip>
+    > [!TIP]
+    > To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
+    > auth login`.

     Examples:
@@ -372,12 +372,8 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
     arguments of `self.to(*args, **kwargs).`

-    <Tip>
-
-    If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise,
-    the returned pipeline is a copy of self with the desired torch.dtype and torch.device.
-
-    </Tip>
+    > [!TIP]
+    > If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise,
+    > the returned pipeline is a copy of self with the desired torch.dtype and torch.device.

     Here are the ways to call `to`:
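The conversion rule stated in this hunk's tip (return `self` if the dtype and device already match, otherwise return a converted copy) can be sketched with a toy stand-in. This is not the real `DiffusionPipeline`; the names and string-typed `dtype`/`device` are placeholders for illustration only.

```python
import copy
from dataclasses import dataclass

@dataclass
class ToyPipeline:
    dtype: str = "float32"
    device: str = "cpu"

    def to(self, device=None, dtype=None):
        want_device = device or self.device
        want_dtype = dtype or self.dtype
        if (want_device, want_dtype) == (self.device, self.dtype):
            return self  # already correct: returned as is
        # otherwise: a copy of self with the desired settings
        new = copy.copy(self)
        new.device, new.dtype = want_device, want_dtype
        return new
```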
@@ -627,11 +623,7 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
         `torch.float32` is used.
     custom_pipeline (`str`, *optional*):

-        <Tip warning={true}>
-
-        🧪 This is an experimental feature and may change in the future.
-
-        </Tip>
+        > [!WARNING]
+        > 🧪 This is an experimental feature and may change in the future.

         Can be either:
@@ -716,12 +708,8 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     dduf_file(`str`, *optional*):
         Load weights from the specified dduf file.

-    <Tip>
-
-    To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf
-    auth login`.
-
-    </Tip>
+    > [!TIP]
+    > To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf
+    > auth login`.

     Examples:
@@ -1508,11 +1496,7 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     - A path to a *directory* (`./my_pipeline_directory/`) containing a custom pipeline. The directory
       must contain a file called `pipeline.py` that defines the custom pipeline.

-    <Tip warning={true}>
-
-    🧪 This is an experimental feature and may change in the future.
-
-    </Tip>
+    > [!WARNING]
+    > 🧪 This is an experimental feature and may change in the future.

     For more information on how to load and create custom pipelines, take a look at [How to contribute a
     community pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline).
@@ -1566,12 +1550,8 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     `os.PathLike`:
         A path to the downloaded pipeline.

-    <Tip>
-
-    To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
-    auth login`.
-
-    </Tip>
+    > [!TIP]
+    > To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
+    > auth login`.
     """
     cache_dir = kwargs.pop("cache_dir", None)
@@ -1944,12 +1924,8 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
     up during training is not guaranteed.

-    <Tip warning={true}>
-
-    ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
-    precedent.
-
-    </Tip>
+    > [!WARNING]
+    > ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
+    > precedent.

     Parameters:
         attention_op (`Callable`, *optional*):
@@ -2005,13 +1981,10 @@ class DiffusionPipeline(ConfigMixin, PushToHubMixin):
     in slices to compute attention in several steps. For more than one attention head, the computation is performed
     sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

-    <Tip warning={true}>
-
-    ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
-    2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
-    this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!
-
-    </Tip>
+    > [!WARNING]
+    > ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
+    > 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
+    > this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!

     Args:
         slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
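Why slicing is safe: softmax attention is computed independently per query row, so evaluating the rows slice by slice changes only peak memory, not the result. A minimal pure-Python sketch with toy shapes (not the diffusers implementation):

```python
import math

def attn_rows(q_rows, k, v):
    # Plain softmax(Q K^T) V, computed one query row at a time.
    out = []
    for q in q_rows:
        scores = [sum(a * b for a, b in zip(q, krow)) for krow in k]
        m = max(scores)  # subtract max for numerical stability
        e = [math.exp(s - m) for s in scores]
        z = sum(e)
        w = [x / z for x in e]
        out.append(
            [sum(wi * vrow[j] for wi, vrow in zip(w, v)) for j in range(len(v[0]))]
        )
    return out

def attn_sliced(q, k, v, slice_size):
    # Process the queries in slices; each slice is independent of the others.
    out = []
    for start in range(0, len(q), slice_size):
        out.extend(attn_rows(q[start:start + slice_size], k, v))
    return out
```

Only `slice_size` query rows' worth of attention weights need to exist at once, which is the memory saving the docstring describes.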
@@ -2288,11 +2261,7 @@ class StableDiffusionMixin:
     Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
     are fused. For cross-attention modules, key and value projection matrices are fused.

-    <Tip warning={true}>
-
-    This API is 🧪 experimental.
-
-    </Tip>
+    > [!WARNING]
+    > This API is 🧪 experimental.

     Args:
         unet (`bool`, defaults to `True`): To apply fusion on the UNet.
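The fusion described here can be illustrated with a toy projection: stacking the query, key, and value weight matrices lets one matrix multiply replace three, and splitting the output recovers the separate projections exactly. A sketch under toy assumptions, not the diffusers code:

```python
def matvec(w, x):
    # Apply a weight matrix (list of rows) to a vector.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def fused_qkv(wq, wk, wv, x):
    w_fused = wq + wk + wv      # stack the three projection matrices
    out = matvec(w_fused, x)    # one projection instead of three
    d = len(wq)
    return out[:d], out[d:2 * d], out[2 * d:]
```

The benefit on real hardware comes from launching one large matmul kernel instead of three smaller ones; the math is unchanged.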
@@ -2317,11 +2286,7 @@ class StableDiffusionMixin:
 def unfuse_qkv_projections(self, unet: bool = True, vae: bool = True):
     """Disable QKV projection fusion if enabled.

-    <Tip warning={true}>
-
-    This API is 🧪 experimental.
-
-    </Tip>
+    > [!WARNING]
+    > This API is 🧪 experimental.

     Args:
         unet (`bool`, defaults to `True`): To apply fusion on the UNet.
@@ -349,12 +349,8 @@ class FlaxStableDiffusionPipeline(FlaxDiffusionPipeline):
     jit (`bool`, defaults to `False`):
         Whether to run `pmap` versions of the generation and safety scoring functions.

-        <Tip warning={true}>
-
-        This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
-        future release.
-
-        </Tip>
+        > [!WARNING]
+        > This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
+        > future release.

     return_dict (`bool`, *optional*, defaults to `True`):
         Whether or not to return a [`~pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`] instead of
@@ -389,12 +389,8 @@ class FlaxStableDiffusionImg2ImgPipeline(FlaxDiffusionPipeline):
     jit (`bool`, defaults to `False`):
         Whether to run `pmap` versions of the generation and safety scoring functions.

-        <Tip warning={true}>
-
-        This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
-        future release.
-
-        </Tip>
+        > [!WARNING]
+        > This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
+        > future release.

     Examples:
@@ -103,11 +103,7 @@ class FlaxStableDiffusionInpaintPipeline(FlaxDiffusionPipeline):
     r"""
     Flax-based pipeline for text-guided image inpainting using Stable Diffusion.

-    <Tip warning={true}>
-
-    🧪 This is an experimental feature!
-
-    </Tip>
+    > [!WARNING]
+    > 🧪 This is an experimental feature!

     This model inherits from [`FlaxDiffusionPipeline`]. Check the superclass documentation for the generic methods
     implemented for all pipelines (downloading, saving, running on a particular device, etc.).
@@ -435,12 +431,8 @@ class FlaxStableDiffusionInpaintPipeline(FlaxDiffusionPipeline):
     jit (`bool`, defaults to `False`):
         Whether to run `pmap` versions of the generation and safety scoring functions.

-        <Tip warning={true}>
-
-        This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
-        future release.
-
-        </Tip>
+        > [!WARNING]
+        > This argument exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a
+        > future release.

     return_dict (`bool`, *optional*, defaults to `True`):
         Whether or not to return a [`~pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`] instead of
@@ -249,11 +249,7 @@ class StableDiffusionDiffEditPipeline(
     StableDiffusionLoraLoaderMixin,
 ):
     r"""

-    <Tip warning={true}>
-
-    This is an experimental feature!
-
-    </Tip>
+    > [!WARNING]
+    > This is an experimental feature!

     Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit.
@@ -81,11 +81,7 @@ class StableDiffusionKDiffusionPipeline(
     - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights
     - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights

-    <Tip warning={true}>
-
-    This is an experimental pipeline and is likely to change in the future.
-
-    </Tip>
+    > [!WARNING]
+    > This is an experimental pipeline and is likely to change in the future.

     Args:
         vae ([`AutoencoderKL`]):
@@ -53,13 +53,9 @@ class KarrasVeScheduler(SchedulerMixin, ConfigMixin):
     This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
     methods the library implements for all schedulers such as loading and saving.

-    <Tip>
-
-    For more details on the parameters, see [Appendix E](https://huggingface.co/papers/2206.00364). The grid search
-    values used to find the optimal `{s_noise, s_churn, s_min, s_max}` for a specific model are described in Table 5 of
-    the paper.
-
-    </Tip>
+    > [!TIP]
+    > For more details on the parameters, see [Appendix E](https://huggingface.co/papers/2206.00364). The grid search
+    > values used to find the optimal `{s_noise, s_churn, s_min, s_max}` for a specific model are described in
+    > Table 5 of the paper.

     Args:
         sigma_min (`float`, defaults to 0.02):
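For context, the sigma grid these parameters act on comes from the same Karras et al. paper (Eq. 5): sigmas are interpolated between `sigma_max` and `sigma_min` in `1/rho` space. A hedged pure-Python sketch using the docstring's defaults `sigma_min=0.02` and `sigma_max=100`, with the paper's `rho=7`; the `s_churn`/`s_min`/`s_max`/`s_noise` values from Appendix E then control how much stochastic churn is injected at each of these sigmas:

```python
def karras_sigmas(n, sigma_min=0.02, sigma_max=100.0, rho=7.0):
    # Interpolate n noise levels from sigma_max down to sigma_min
    # in sigma^(1/rho) space (Karras et al. 2022, Eq. 5).
    ramp = [i / (n - 1) for i in range(n)]
    inv_max = sigma_max ** (1 / rho)
    inv_min = sigma_min ** (1 / rho)
    return [(inv_max + t * (inv_min - inv_max)) ** rho for t in ramp]
```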
@@ -268,11 +268,7 @@ class CMStochasticIterativeScheduler(SchedulerMixin, ConfigMixin):
     Gets the scalings used in the consistency model parameterization (from Appendix C of the
     [paper](https://huggingface.co/papers/2303.01469)) to enforce boundary condition.

-    <Tip>
-
-    `epsilon` in the equations for `c_skip` and `c_out` is set to `sigma_min`.
-
-    </Tip>
+    > [!TIP]
+    > `epsilon` in the equations for `c_skip` and `c_out` is set to `sigma_min`.

     Args:
         sigma (`torch.Tensor`):
@@ -304,12 +304,8 @@ class CosineDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
     designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
     integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
-    prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
+    > prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -630,12 +630,8 @@ class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
     designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
     integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
-    prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
+    > prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -491,12 +491,8 @@ class DPMSolverMultistepInverseScheduler(SchedulerMixin, ConfigMixin):
     designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
     integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
-    prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
+    > prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -568,12 +568,8 @@ class DPMSolverSinglestepScheduler(SchedulerMixin, ConfigMixin):
     designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
     integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
-    prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
+    > prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -370,12 +370,8 @@ class EDMDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
     designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
     integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
-    prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
+    > prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -500,12 +500,8 @@ class SASolverScheduler(SchedulerMixin, ConfigMixin):
     Noise_prediction is designed to discretize an integral of the noise prediction model, and data_prediction is
     designed to discretize an integral of the data prediction model.

-    <Tip>
-
-    The algorithm and model type are decoupled. You can use either data_prediction or noise_prediction for both
-    noise prediction and data prediction models.
-
-    </Tip>
+    > [!TIP]
+    > The algorithm and model type are decoupled. You can use either data_prediction or noise_prediction for both
+    > noise prediction and data prediction models.

     Args:
         model_output (`torch.Tensor`):
@@ -138,15 +138,11 @@ class SchedulerMixin(PushToHubMixin):
     The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
     allowed by Git.

-    <Tip>
-
-    To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
-    auth login`. You can also activate the special
-    ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
-    firewalled environment.
-
-    </Tip>
+    > [!TIP]
+    > To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf
+    > auth login`. You can also activate the special
+    > ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
+    > firewalled environment.
     """
     config, kwargs, commit_hash = cls.load_config(
         pretrained_model_name_or_path=pretrained_model_name_or_path,
@@ -120,19 +120,12 @@ class FlaxSchedulerMixin(PushToHubMixin):
     git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
     identifier allowed by git.

-    <Tip>
-
-    It is required to be logged in (`hf auth login`) when you want to use private or [gated
-    models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-    </Tip>
-
-    <Tip>
-
-    Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to
-    use this method in a firewalled environment.
-
-    </Tip>
+    > [!TIP]
+    > It is required to be logged in (`hf auth login`) when you want to use private or [gated
+    > models](https://huggingface.co/docs/hub/models-gated#gated-models).
+
+    > [!TIP]
+    > Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to
+    > use this method in a firewalled environment.
     """
     logger.warning(
@@ -290,12 +290,8 @@ def get_cached_module_file(
     local_files_only (`bool`, *optional*, defaults to `False`):
         If `True`, will only try to load the tokenizer configuration from local files.

-    <Tip>
-
-    You may pass a token in `token` if you are not logged in (`hf auth login`) and want to use private or [gated
-    models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-    </Tip>
+    > [!TIP]
+    > You may pass a token in `token` if you are not logged in (`hf auth login`) and want to use private or [gated
+    > models](https://huggingface.co/docs/hub/models-gated#gated-models).

     Returns:
         `str`: The path to the module inside the cache.
@@ -440,12 +436,8 @@ def get_class_from_dynamic_module(
     """
     Extracts a class from a module file, present in the local folder or repository of a model.

-    <Tip warning={true}>
-
-    Calling this function will execute the code in the module file found locally or downloaded from the Hub. It should
-    therefore only be called on trusted repos.
-
-    </Tip>
+    > [!WARNING]
+    > Calling this function will execute the code in the module file found locally or downloaded from the Hub. It should
+    > therefore only be called on trusted repos.

     Args:
         pretrained_model_name_or_path (`str` or `os.PathLike`):
@@ -480,12 +472,8 @@ def get_class_from_dynamic_module(
     local_files_only (`bool`, *optional*, defaults to `False`):
         If `True`, will only try to load the tokenizer configuration from local files.

-    <Tip>
-
-    You may pass a token in `token` if you are not logged in (`hf auth login`) and want to use private or [gated
-    models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-    </Tip>
+    > [!TIP]
+    > You may pass a token in `token` if you are not logged in (`hf auth login`) and want to use private or [gated
+    > models](https://huggingface.co/docs/hub/models-gated#gated-models).

     Returns:
         `type`: The class, dynamically imported from the module.
@@ -43,12 +43,8 @@ class BaseOutput(OrderedDict):
     tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular
     Python dictionary.

-    <Tip warning={true}>
-
-    You can't unpack a [`BaseOutput`] directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a tuple
-    first.
-
-    </Tip>
+    > [!WARNING]
+    > You can't unpack a [`BaseOutput`] directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a
+    > tuple first.
     """

     def __init_subclass__(cls) -> None:
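The behavior described in this docstring can be modeled with a toy class. This is not the real `BaseOutput`; it is a minimal sketch assuming only the two properties the tip states: direct unpacking is blocked, while `to_tuple()` returns the non-`None` values.

```python
from collections import OrderedDict

class ToyOutput(OrderedDict):
    def __iter__(self):
        # Block direct tuple-unpacking of the output object.
        raise TypeError("Use to_tuple() to unpack this output.")

    def to_tuple(self):
        # Drop None-valued fields, mirroring the docstring's description.
        return tuple(v for v in self.values() if v is not None)
```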