Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config
write_basic_config()
```
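In either case, 🤗 Accelerate writes a default configuration file that `accelerate launch` picks up later, so this setup only needs to happen once per environment.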
...

Aside from setting up the LoRA layers, the training script is more or less the same as the `train_text_to_image.py` script.
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate our own Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:
- saved model checkpoints
- `pytorch_lora_weights.safetensors` (the trained LoRA weights)
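
As a minimal sketch of the launch (assuming the example script is named `train_text_to_image_lora.py` and accepts the standard Diffusers example flags; check the script's `--help` for the exact arguments, and treat the values below as placeholders):

```bash
# Hypothetical values; substitute your own model, dataset, output path, and Hub repository.
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"
export OUTPUT_DIR="./pokemon-lora"
export HUB_MODEL_ID="pokemon-lora"

# Launch training with 🤗 Accelerate.
accelerate launch train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --output_dir=$OUTPUT_DIR \
  --hub_model_id=$HUB_MODEL_ID \
  --push_to_hub
```

With these (assumed) flags, `--push_to_hub` uploads the checkpoints and trained LoRA weights to the repository named by `--hub_model_id`.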