Unverified Commit edcbe803 authored by Álvaro Somoza, committed by GitHub

Fix huggingface-hub failing tests (#11994)

* login

* more logins

* uploads

* missed login

* another missed login

* downloads

* examples and more logins

* fix

* setup

* Apply style fixes

* fix

* Apply style fixes
parent c02c4a6d
@@ -31,7 +31,7 @@ pip install -r requirements.txt
We need to be authenticated to access some of the checkpoints used during benchmarking:
```sh
-huggingface-cli login
+hf auth login
```
We use an L40 GPU with 128GB RAM to run the benchmark CI. As such, the benchmarks are configured to run on NVIDIA GPUs. So, make sure you have access to a similar machine (or modify the benchmarking scripts accordingly).
......
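As a pre-flight check on the benchmark machine, a minimal Python sketch (assuming `huggingface_hub` is installed and `hf auth login` has already been run) that confirms valid credentials are available before any gated checkpoint is requested:

```python
# `whoami()` uses the locally cached token (or HF_TOKEN) and raises if no valid
# credentials are found, so authentication problems surface before downloads start.
from huggingface_hub import whoami

user = whoami()
print(f"Authenticated to the Hugging Face Hub as: {user['name']}")
```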
@@ -16,7 +16,7 @@ Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from
<Tip>
-To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `huggingface-cli login`.
+To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `hf auth login`.
</Tip>
......
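Where the CLI is inconvenient (notebooks, remote jobs), a hedged sketch of the same login done from Python; the repo id below is a placeholder, not a real checkpoint:

```python
# Log in programmatically, then load a private or gated checkpoint.
# login() with no arguments prompts for a token; login(token="hf_...") also works.
from huggingface_hub import login
from diffusers import DiffusionPipeline

login()
pipe = DiffusionPipeline.from_pretrained("your-username/your-private-model")  # placeholder repo id
```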
@@ -31,7 +31,7 @@ _As the model is gated, before using it with diffusers you first need to go to t
Use the command below to log in:
```bash
-huggingface-cli login
+hf auth login
```
<Tip>
......
@@ -145,10 +145,10 @@ When running `accelerate config`, if you use torch.compile, there can be dramati
If you would like to push your model to the Hub after training is completed with a neat model card, make sure you're logged in:
```bash
-huggingface-cli login
+hf auth login
# Alternatively, you could upload your model manually using:
-# huggingface-cli upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
+# hf upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
```
Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!
......
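For completeness, a sketch of the same manual upload done through the Python API instead of `hf upload`, reusing the placeholder repo id and path from the snippet above:

```python
# Create the repo if it does not exist yet, then upload the trained LoRA folder.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("my-cool-account-name/my-cool-lora-name", exist_ok=True)
api.upload_folder(
    repo_id="my-cool-account-name/my-cool-lora-name",
    folder_path="/path/to/awesome/lora",
)
```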
@@ -67,7 +67,7 @@ dataset = load_dataset(
Then use the [`~datasets.Dataset.push_to_hub`] method to upload the dataset to the Hub:
```python
-# assuming you have run the huggingface-cli login command in a terminal
+# assuming you have run the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")
# if you want to push to a private repo, simply pass private=True:
......
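A small, self-contained sketch of the `private=True` variant mentioned in the comment above; the toy columns are made up and stand in for the real dataset built earlier:

```python
# Push a dataset to a private repo on the Hub (requires being logged in).
from datasets import Dataset

dataset = Dataset.from_dict({"caption": ["a toy example", "another toy example"]})
dataset.push_to_hub("name_of_your_dataset", private=True)
```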
@@ -42,7 +42,7 @@ We encourage you to share your model with the community, and in order to do that
Or log in from the terminal:
```bash
-huggingface-cli login
+hf auth login
```
Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files:
......
@@ -37,7 +37,7 @@ Diffusers uses PyTorch `mps` for Stable Diffusion inference on Apple
```python
-# make sure you are logged in with `huggingface-cli login`
+# make sure you are logged in with `hf auth login`
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
......
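The hunk above only shows the pipeline being loaded; a hedged sketch of the remaining Apple-silicon steps (moving the pipeline to the `mps` device and optionally enabling attention slicing to reduce memory pressure), where the prompt and output filename are arbitrary examples:

```python
# Run Stable Diffusion on Apple silicon via the PyTorch `mps` backend.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # optional: trades a little speed for lower memory use

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```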
@@ -75,7 +75,7 @@ dataset = load_dataset(
Use [push_to_hub](https://huggingface.co/docs/datasets/v2.13.1/en/package_reference/main_classes#datasets.Dataset.push_to_hub) to upload the dataset to the Hub:
```python
-# assuming you have already run the huggingface-cli login command in a terminal
+# assuming you have already run the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")
# if you want to push to a private repo, pass `private=True`:
......
@@ -39,7 +39,7 @@ specific language governing permissions and limitations under the License.
To save your model or share it with the community, log in to your Hugging Face account (if you don't have one yet, [create](https://huggingface.co/join) one):
```bash
-huggingface-cli login
+hf auth login
```
## Text-to-image
......
@@ -42,7 +42,7 @@ Unconditional image generation produces images similar to the dataset used for training
Or you can log in from the terminal:
```bash
-huggingface-cli login
+hf auth login
```
Since the model checkpoints are quite large, you can use [Git-LFS](https://git-lfs.com/) to version these large files.
......
@@ -42,7 +42,7 @@ Stable Diffusion models differ by the framework they were trained and saved in and where they are downloaded from
Before you begin, make sure you have a local clone of 🤗 Diffusers to run the script from, and log in to your Hugging Face account so you can open pull requests and push converted models to the Hub.
```bash
-huggingface-cli login
+hf auth login
```
To use the script:
......
@@ -69,7 +69,7 @@ Note also that we use PEFT library as backend for LoRA training, make sure to ha
Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
```bash
-huggingface-cli login
+hf auth login
```
This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
......
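On headless training machines where the interactive prompt is awkward, a hedged sketch of a non-interactive alternative, assuming a recent `huggingface_hub` and that the `HF_TOKEN` environment variable has been exported beforehand:

```python
# Non-interactive alternatives to the interactive token prompt.
import os
from huggingface_hub import login

# Option 1: log in programmatically with a token from settings/tokens.
login(token=os.environ["HF_TOKEN"])  # assumes HF_TOKEN was exported beforehand

# Option 2: skip the explicit login entirely; recent huggingface_hub versions
# read HF_TOKEN from the environment automatically.
```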
@@ -67,7 +67,7 @@ Note also that we use PEFT library as backend for LoRA training, make sure to ha
Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
```bash
-huggingface-cli login
+hf auth login
```
This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
......
@@ -1321,7 +1321,7 @@ def main(args):
    if args.report_to == "wandb" and args.hub_token is not None:
        raise ValueError(
            "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
-            " Please use `huggingface-cli login` to authenticate with the Hub."
+            " Please use `hf auth login` to authenticate with the Hub."
        )
    if torch.backends.mps.is_available() and args.mixed_precision == "bf16":
......
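The guard above exists because wandb can record the full command line, including a token passed via `--hub_token`. A minimal Python sketch of the safer pattern it recommends, assuming `huggingface_hub`'s `get_token` helper: log in ahead of time so the token is read from the local credential cache instead of a flag.

```python
# Check for cached credentials instead of accepting a token on the command line.
from huggingface_hub import get_token

token = get_token()  # cached login token or HF_TOKEN, else None
if token is None:
    raise SystemExit("Not authenticated; run `hf auth login` before launching training.")
```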
@@ -1050,7 +1050,7 @@ def main(args):
    if args.report_to == "wandb" and args.hub_token is not None:
        raise ValueError(
            "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
-            " Please use `huggingface-cli login` to authenticate with the Hub."
+            " Please use `hf auth login` to authenticate with the Hub."
        )
    logging_dir = Path(args.output_dir, args.logging_dir)
......
@@ -1292,7 +1292,7 @@ def main(args):
    if args.report_to == "wandb" and args.hub_token is not None:
        raise ValueError(
            "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
-            " Please use `huggingface-cli login` to authenticate with the Hub."
+            " Please use `hf auth login` to authenticate with the Hub."
        )
    if args.do_edm_style_training and args.snr_gamma is not None:
......
@@ -125,10 +125,10 @@ When running `accelerate config`, if we specify torch compile mode to True there
If you would like to push your model to the HF Hub after training is completed with a neat model card, make sure you're logged in:
```
-huggingface-cli login
+hf auth login
# Alternatively, you could upload your model manually using:
-# huggingface-cli upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
+# hf upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
```
Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!
......
@@ -962,7 +962,7 @@ def main(args):
    if args.report_to == "wandb" and args.hub_token is not None:
        raise ValueError(
            "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
-            " Please use `huggingface-cli login` to authenticate with the Hub."
+            " Please use `hf auth login` to authenticate with the Hub."
        )
    if torch.backends.mps.is_available() and args.mixed_precision == "bf16":
......
@@ -984,7 +984,7 @@ def main(args):
    if args.report_to == "wandb" and args.hub_token is not None:
        raise ValueError(
            "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
-            " Please use `huggingface-cli login` to authenticate with the Hub."
+            " Please use `hf auth login` to authenticate with the Hub."
        )
    if torch.backends.mps.is_available() and args.mixed_precision == "bf16":
......
@@ -10,7 +10,7 @@ To incorporate additional condition latents, we expand the input features of Cog
> As the model is gated, before using it with diffusers you first need to go to the [CogView4 Hugging Face page](https://huggingface.co/THUDM/CogView4-6B), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you’ve accepted the gate. Use the command below to log in:
```bash
-huggingface-cli login
+hf auth login
```
The example command below shows how to launch fine-tuning for pose conditions. The dataset ([`raulc0399/open_pose_controlnet`](https://huggingface.co/datasets/raulc0399/open_pose_controlnet)) being used here already has the pose conditions of the original images, so we don't have to compute them.
......