<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Load pipelines, models, and schedulers

[[open-in-colab]]

Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the [`DiffusionPipeline`] to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system.

Everything you need for inference or training is accessible with the `from_pretrained()` method.

This guide will show you how to load:

- pipelines from the Hub and locally
- different components into a pipeline
- checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights
- models and schedulers

## Diffusion Pipeline

<Tip>

💡 Skip to the [DiffusionPipeline explained](#diffusionpipeline-explained) section if you are interested in learning more about how the [`DiffusionPipeline`] class works.

</Tip>

The [`DiffusionPipeline`] class is the simplest and most generic way to load the latest trending diffusion model from the [Hub](https://huggingface.co/models?library=diffusers&sort=trending). The [`DiffusionPipeline.from_pretrained`] method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference.

```python
from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```

You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the [`StableDiffusionPipeline`] class:

```python
from diffusers import StableDiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```

A checkpoint (such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class:

```python
from diffusers import StableDiffusionImg2ImgPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id)
```

You can use the Space below to gauge the memory requirements of a pipeline before downloading its checkpoints:

<div class="block dark:hidden">
	<iframe 
        src="https://diffusers-compute-pipeline-size.hf.space?__theme=light"
        width="850"
        height="1600"
    ></iframe>
</div>
<div class="hidden dark:block">
    <iframe 
        src="https://diffusers-compute-pipeline-size.hf.space?__theme=dark"
        width="850"
        height="1600"
    ></iframe>
</div>

### Local pipeline

To load a diffusion pipeline locally, use [`git-lfs`](https://git-lfs.github.com/) to manually download the checkpoint (in this case, [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) to your local disk. This creates a local folder, `./stable-diffusion-v1-5`, on your disk:

```bash
git-lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```

Then pass the local path to [`~DiffusionPipeline.from_pretrained`]:

```python
from diffusers import DiffusionPipeline

repo_id = "./stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```

The [`~DiffusionPipeline.from_pretrained`] method won't download any files from the Hub when it detects a local path, but this also means it won't download and cache the latest changes to a checkpoint.

### Swap components in a pipeline

You can customize the default components of any pipeline with another compatible component. Customization is important because:

- Changing the scheduler is important for exploring the trade-off between generation speed and quality.
- Different components of a model are typically trained independently and you can swap out a component with a better-performing one.
- During finetuning, usually only some components - like the UNet or text encoder - are trained.

To find out which schedulers are compatible for customization, you can use the `compatibles` property:

```py
from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
stable_diffusion.scheduler.compatibles
```

Let's use the [`SchedulerMixin.from_pretrained`] method to replace the default [`PNDMScheduler`] with a more performant scheduler, [`EulerDiscreteScheduler`]. The `subfolder="scheduler"` argument is required to load the scheduler configuration from the correct [subfolder](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main/scheduler) of the pipeline repository.

Then you can pass the new [`EulerDiscreteScheduler`] instance to the `scheduler` argument in [`DiffusionPipeline`]:

```python
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

repo_id = "runwayml/stable-diffusion-v1-5"
scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True)
```

### Safety checker

Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a [safety checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to check generated outputs against known hardcoded NSFW content. If you'd like to disable the safety checker for whatever reason, pass `None` to the `safety_checker` argument:

```python
from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True)
"""
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
"""
```

### Reuse components across pipelines

You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the [`~DiffusionPipeline.components`] property to get the components:

```python
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)

components = stable_diffusion_txt2img.components
```

Then you can pass the `components` to another pipeline without reloading the weights into RAM:

```py
stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components)
```

You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline:

```py
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(
    vae=stable_diffusion_txt2img.vae,
    text_encoder=stable_diffusion_txt2img.text_encoder,
    tokenizer=stable_diffusion_txt2img.tokenizer,
    unet=stable_diffusion_txt2img.unet,
    scheduler=stable_diffusion_txt2img.scheduler,
    safety_checker=None,
    feature_extractor=None,
    requires_safety_checker=False,
)
```

## Checkpoint variants

A checkpoint variant is usually a checkpoint whose weights are:

- Stored in a different floating point type for lower precision and lower storage, such as [`torch.float16`](https://pytorch.org/docs/stable/tensors.html#data-types), because it only requires half the bandwidth and storage to download. You can't use this variant if you're continuing training or using a CPU.
- Non-exponential mean averaged (EMA) weights, which shouldn't be used for inference. You should use these to continue fine-tuning a model.

<Tip>

💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of as variants (for example, [`stable-diffusion-v1-4`] and [`stable-diffusion-v1-5`]).

</Tip>

Otherwise, a variant is **identical** to the original checkpoint. They have exactly the same serialization format (like [Safetensors](./using_safetensors)), model structure, and weights that have identical tensor shapes.

| **checkpoint type** | **weight name**                     | **argument for loading weights** |
|---------------------|-------------------------------------|----------------------------------|
| original            | diffusion_pytorch_model.bin         |                                  |
| floating point      | diffusion_pytorch_model.fp16.bin    | `variant`, `torch_dtype`         |
| non-EMA             | diffusion_pytorch_model.non_ema.bin | `variant`                        |

There are two important arguments to know for loading variants:

- `torch_dtype` defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a `fp16` variant, you should specify `torch_dtype=torch.float16` to *convert the weights* to `fp16`. Otherwise, the `fp16` weights are converted to the default `fp32` precision. You can also load the original checkpoint without defining the `variant` argument, and convert it to `fp16` with `torch_dtype=torch.float16`. In this case, the default `fp32` weights are downloaded first, and then they're converted to `fp16` after loading.

- `variant` defines which files should be loaded from the repository. For example, if you want to load a `non_ema` variant from the [`diffusers/stable-diffusion-variants`](https://huggingface.co/diffusers/stable-diffusion-variants/tree/main/unet) repository, you should specify `variant="non_ema"` to download the `non_ema` files.
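
The bandwidth saving behind the `fp16` variant comes purely from element size. A minimal PyTorch sketch (a toy tensor standing in for checkpoint weights, not a real model):

```python
import torch

# toy stand-in for one weight tensor; a real checkpoint holds many of these
weights_fp32 = torch.randn(1024, 1024)         # torch.randn defaults to float32
weights_fp16 = weights_fp32.to(torch.float16)  # what `torch_dtype=torch.float16` does after loading

# float16 stores 2 bytes per element instead of 4, so files are half the size
print(weights_fp32.element_size())  # 4
print(weights_fp16.element_size())  # 2
```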

```python
from diffusers import DiffusionPipeline
import torch

# load fp16 variant
stable_diffusion = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
# load non_ema variant
stable_diffusion = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
)
```

To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the [`DiffusionPipeline.save_pretrained`] method and specify the `variant` argument. Try to save a variant to the same folder as the original checkpoint, so you can load both from the same folder:

```python
from diffusers import DiffusionPipeline

# save as fp16 variant
stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16")
# save as non_ema variant
stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema")
```

If you don't save the variant to an existing folder, you must specify the `variant` argument; otherwise, it'll throw an `Exception` because it can't find the original checkpoint:

```python
# 👎 this won't work
stable_diffusion = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
# 👍 this works
stable_diffusion = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
```

<!--
TODO(Patrick) - Make sure to uncomment this part as soon as things are deprecated.

#### Using `revision` to load pipeline variants is deprecated

Previously the `revision` argument of [`DiffusionPipeline.from_pretrained`] was heavily used to
load model variants, e.g.:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", use_safetensors=True)
```

However, this behavior is now deprecated because the `revision` argument should instead be used (as on GitHub) to load model checkpoints from a specific commit or branch in development.

The above example is therefore deprecated and won't be supported anymore for `diffusers >= 1.0.0`.

<Tip warning={true}>

If you load diffusers pipelines or models with `revision="fp16"` or `revision="non_ema"`,
please make sure to update the code and use `variant="fp16"` or `variant="non_ema"` respectively
instead.

</Tip>
-->

## Models

Models are loaded with the [`ModelMixin.from_pretrained`] method, which downloads and caches the latest version of the model weights and configuration. If the latest files are available in the local cache, [`~ModelMixin.from_pretrained`] reuses the files in the cache instead of re-downloading them.

Models can be loaded from a subfolder with the `subfolder` argument. For example, the model weights for `runwayml/stable-diffusion-v1-5` are stored in the [`unet`](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main/unet) subfolder:

```python
from diffusers import UNet2DConditionModel

repo_id = "runwayml/stable-diffusion-v1-5"
model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True)
```

Or directly from a repository's [directory](https://huggingface.co/google/ddpm-cifar10-32/tree/main):

```python
from diffusers import UNet2DModel

repo_id = "google/ddpm-cifar10-32"
model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True)
```

You can also load and save model variants by specifying the `variant` argument in [`ModelMixin.from_pretrained`] and [`ModelMixin.save_pretrained`]:

```python
from diffusers import UNet2DConditionModel

model = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True
)
model.save_pretrained("./local-unet", variant="non_ema")
```

## Schedulers

Schedulers are loaded from the [`SchedulerMixin.from_pretrained`] method, and unlike models, schedulers are **not parameterized** or **trained**; they are defined by a configuration file.
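
A scheduler configuration is just a small JSON file. As an illustration, a `scheduler_config.json` for a [`PNDMScheduler`] looks roughly like this (field names follow the real format; the values here are only indicative):

```json
{
  "_class_name": "PNDMScheduler",
  "_diffusers_version": "0.6.0",
  "beta_start": 0.00085,
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "num_train_timesteps": 1000,
  "skip_prk_steps": true
}
```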

Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers.
For example, the following schedulers are compatible with [`StableDiffusionPipeline`], which means you can load the same scheduler configuration file in any of these classes:

```python
from diffusers import StableDiffusionPipeline
from diffusers import (
    DDPMScheduler,
    DDIMScheduler,
    PNDMScheduler,
    LMSDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

repo_id = "runwayml/stable-diffusion-v1-5"

ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler")
lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")

# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True)
```

## DiffusionPipeline explained

As a class method, [`DiffusionPipeline.from_pretrained`] is responsible for two things:

- Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, [`DiffusionPipeline.from_pretrained`] reuses the cache and won't redownload the files.
- Load the cached weights into the correct pipeline [class](../api/pipelines/overview#diffusers-summary) - retrieved from the `model_index.json` file - and return an instance of it.

The pipelines' underlying folder structure corresponds directly with their class instances. For example, the [`StableDiffusionPipeline`] corresponds to the folder structure in [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5).

```python
from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
print(pipeline)
```

You'll see the pipeline is an instance of [`StableDiffusionPipeline`], which consists of seven components:

- `"feature_extractor"`: a [`~transformers.CLIPImageProcessor`] from 🤗 Transformers.
- `"safety_checker"`: a [component](https://github.com/huggingface/diffusers/blob/e55687e1e15407f60f32242027b7bb8170e58266/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L32) for screening against harmful content.
- `"scheduler"`: an instance of [`PNDMScheduler`].
- `"text_encoder"`: a [`~transformers.CLIPTextModel`] from 🤗 Transformers.
- `"tokenizer"`: a [`~transformers.CLIPTokenizer`] from 🤗 Transformers.
- `"unet"`: an instance of [`UNet2DConditionModel`].
- `"vae"`: an instance of [`AutoencoderKL`].

```json
StableDiffusionPipeline {
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
```

Compare the components of the pipeline instance to the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main) folder structure, and you'll see there is a separate folder for each of the components in the repository:

```
.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors
```

You can access each of the components of the pipeline as an attribute to view its configuration:

```py
pipeline.tokenizer
CLIPTokenizer(
    name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
    vocab_size=49408,
    model_max_length=77,
    is_fast=False,
    padding_side="right",
    truncation_side="right",
    special_tokens={
        "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "pad_token": "<|endoftext|>",
    },
    clean_up_tokenization_spaces=True
)
```

Every pipeline expects a [`model_index.json`](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json) file that tells the [`DiffusionPipeline`]:

- which pipeline class to load from `_class_name`
- which version of 🧨 Diffusers was used to create the model in `_diffusers_version`
- what components from which library are stored in the subfolders (`name` corresponds to the component and subfolder name, `library` corresponds to the name of the library to load the class from, and `class` corresponds to the class name)

```json
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
```
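
The component dispatch that `model_index.json` drives can be sketched with just the standard library (using a trimmed, illustrative index, not the full file):

```python
import json

# a trimmed, illustrative model_index.json (field names match the real format)
model_index = json.loads("""
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
""")

# the pipeline class to instantiate
print(model_index["_class_name"])  # StableDiffusionPipeline

# each remaining entry maps a subfolder name to the (library, class) to load from it
components = {k: v for k, v in model_index.items() if not k.startswith("_")}
print(components["unet"])  # ['diffusers', 'UNet2DConditionModel']
```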