renzhc / diffusers_dcu / Commits / 5fbb4d32

Unverified commit 5fbb4d32, authored Jul 25, 2024 by Dhruv Nair, committed by GitHub on Jul 25, 2024

[CI] Slow Test Updates (#8870)

* update
* update
* update

Parent: d8bcb33f
Changes: 7 changed files, with 29 additions and 148 deletions (+29 −148)

.github/workflows/nightly_tests.yml                +17 −84
.github/workflows/push_tests.yml                    +8 −64
docker/diffusers-onnxruntime-cuda/Dockerfile        +1 −0
docker/diffusers-pytorch-compile-cuda/Dockerfile    +1 −0
docker/diffusers-pytorch-cuda/Dockerfile            +1 −0
docker/diffusers-pytorch-xformers-cuda/Dockerfile   +1 −0
utils/log_reports.py (moved)                        +0 −0
.github/workflows/nightly_tests.yml

@@ -7,7 +7,7 @@ on:
 env:
   DIFFUSERS_IS_CI: yes
-  HF_HOME: /mnt/cache
+  HF_HUB_ENABLE_HF_TRANSFER: 1
   OMP_NUM_THREADS: 8
   MKL_NUM_THREADS: 8
   PYTEST_TIMEOUT: 600
@@ -27,10 +27,6 @@ jobs:
         uses: actions/checkout@v3
         with:
           fetch-depth: 2
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: "3.8"
       - name: Install dependencies
         run: |
           pip install -e .
@@ -50,16 +46,17 @@ jobs:
           path: reports

   run_nightly_tests_for_torch_pipelines:
-    name: Torch Pipelines CUDA Nightly Tests
+    name: Nightly Torch Pipelines CUDA Tests
     needs: setup_torch_cuda_pipeline_matrix
     strategy:
       fail-fast: false
+      max-parallel: 8
       matrix:
         module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }}
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus 0
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -67,19 +64,16 @@ jobs:
         with:
           fetch-depth: 2
       - name: NVIDIA-SMI
         run: nvidia-smi
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
           python -m uv pip install pytest-reportlog
       - name: Environment
         run: |
           python utils/print_env.py
-      - name: Pipeline CUDA Test
+      - name: Nightly PyTorch CUDA checkpoint (pipelines) tests
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -90,38 +84,36 @@ jobs:
             --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
             --report-log=tests_pipeline_${{ matrix.module }}_cuda.log \
             tests/pipelines/${{ matrix.module }}
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
           cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt
           cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt
       - name: Test suite reports artifacts
         if: ${{ always() }}
         uses: actions/upload-artifact@v2
         with:
           name: pipeline_${{ matrix.module }}_test_reports
           path: reports
       - name: Generate Report and Notify Channel
         if: always()
         run: |
           pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_nightly_tests_for_other_torch_modules:
-    name: Torch Non-Pipelines CUDA Nightly Tests
+    name: Nightly Torch CUDA Tests
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus 0
     defaults:
       run:
         shell: bash
     strategy:
+      max-parallel: 2
       matrix:
-        module: [models, schedulers, others, examples]
+        module: [models, schedulers, lora, others, single_file, examples]
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -133,8 +125,8 @@ jobs:
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
+          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
           python -m uv pip install pytest-reportlog
       - name: Environment
         run: python utils/print_env.py
@@ -158,7 +150,6 @@ jobs:
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
-          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v --make-reports=examples_torch_cuda \
             --report-log=examples_torch_cuda.log \
@@ -181,64 +172,7 @@ jobs:
         if: always()
         run: |
           pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

-  run_lora_nightly_tests:
-    name: Nightly LoRA Tests with PEFT and TORCH
-    runs-on: [single-gpu, nvidia-gpu, t4, ci]
-    container:
-      image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
-    defaults:
-      run:
-        shell: bash
-    steps:
-      - name: Checkout diffusers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-      - name: Install dependencies
-        run: |
-          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install -e [quality,test]
-          python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
-          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
-          python -m uv pip install pytest-reportlog
-      - name: Environment
-        run: python utils/print_env.py
-      - name: Run nightly LoRA tests with PEFT and Torch
-        env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
-          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
-          CUBLAS_WORKSPACE_CONFIG: :16:8
-        run: |
-          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-            -s -v -k "not Flax and not Onnx" \
-            --make-reports=tests_torch_lora_cuda \
-            --report-log=tests_torch_lora_cuda.log \
-            tests/lora
-      - name: Failure short reports
-        if: ${{ failure() }}
-        run: |
-          cat reports/tests_torch_lora_cuda_stats.txt
-          cat reports/tests_torch_lora_cuda_failures_short.txt
-      - name: Test suite reports artifacts
-        if: ${{ always() }}
-        uses: actions/upload-artifact@v2
-        with:
-          name: torch_lora_cuda_test_reports
-          path: reports
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_flax_tpu_tests:
     name: Nightly Flax TPU Tests
@@ -294,14 +228,14 @@ jobs:
         if: always()
         run: |
           pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_nightly_onnx_tests:
     name: Nightly ONNXRuntime CUDA tests on Ubuntu
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-onnxruntime-cuda
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
+      options: --gpus 0 --shm-size "16gb" --ipc host
     steps:
       - name: Checkout diffusers
@@ -318,11 +252,10 @@ jobs:
           python -m uv pip install -e [quality,test]
           python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
           python -m uv pip install pytest-reportlog
       - name: Environment
         run: python utils/print_env.py
-      - name: Run nightly ONNXRuntime CUDA tests
+      - name: Run Nightly ONNXRuntime CUDA tests
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
         run: |
@@ -349,7 +282,7 @@ jobs:
         if: always()
         run: |
           pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_nightly_tests_apple_m1:
     name: Nightly PyTorch MPS tests on MacOS
@@ -411,4 +344,4 @@ jobs:
         if: always()
         run: |
           pip install slack_sdk tabulate
-          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
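Every reporting step above now calls `utils/log_reports.py` (moved from `scripts/`) on the `--report-log` files, which pytest-reportlog writes as one JSON object per line. The script's contents are not shown in this diff; as an illustration only, a summarizer over that JSONL format might look like:

```python
import json
from collections import Counter
from pathlib import Path

def summarize_report_log(path):
    """Tally final pytest outcomes from a pytest-reportlog JSONL file.

    Only "call"-phase TestReport records are counted, so setup/teardown
    entries for the same test do not inflate the totals.
    """
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("$report_type") == "TestReport" and record.get("when") == "call":
            counts[record["outcome"]] += 1
    return counts
```

A dict like `{"passed": 120, "failed": 2}` could then be rendered with `tabulate` and appended to `$GITHUB_STEP_SUMMARY`, which is what the workflow's `pip install slack_sdk tabulate` hints at.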
.github/workflows/push_tests.yml

@@ -11,11 +11,9 @@ on:
 env:
   DIFFUSERS_IS_CI: yes
-  HF_HOME: /mnt/cache
   OMP_NUM_THREADS: 8
   MKL_NUM_THREADS: 8
   PYTEST_TIMEOUT: 600
-  RUN_SLOW: yes
   PIPELINE_USAGE_CUTOFF: 50000

 jobs:
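The deleted `RUN_SLOW: yes` entry is the environment switch the diffusers test suite reads to opt into slow tests. As a sketch of that pattern (the helper name and exact accepted values here are illustrative, not the library's verbatim code):

```python
import os

def parse_flag_from_env(key, default=False):
    """Read a boolean CI flag from the environment.

    Treats "yes"/"true"/"1" (any case) as enabled; a missing variable
    falls back to `default`.
    """
    value = os.environ.get(key)
    if value is None:
        return default
    return value.lower() in {"yes", "true", "1"}

# e.g. a @slow test decorator would skip unless this is True
run_slow = parse_flag_from_env("RUN_SLOW", default=False)
```

With the flag removed from this workflow's `env:` block, enabling slow tests becomes the job's responsibility rather than a file-wide default.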
@@ -52,7 +50,7 @@ jobs:
           path: reports

   torch_pipelines_cuda_tests:
-    name: Torch Pipelines CUDA Slow Tests
+    name: Torch Pipelines CUDA Tests
     needs: setup_torch_cuda_pipeline_matrix
     strategy:
       fail-fast: false
@@ -62,7 +60,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus 0
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -106,7 +104,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus 0
     defaults:
       run:
         shell: bash
@@ -124,12 +122,13 @@ jobs:
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
+          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
       - name: Environment
         run: |
           python utils/print_env.py
-      - name: Run slow PyTorch CUDA tests
+      - name: Run PyTorch CUDA tests
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -153,61 +152,6 @@ jobs:
           name: torch_cuda_test_reports
           path: reports

-  peft_cuda_tests:
-    name: PEFT CUDA Tests
-    runs-on: [single-gpu, nvidia-gpu, t4, ci]
-    container:
-      image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
-    defaults:
-      run:
-        shell: bash
-    steps:
-      - name: Checkout diffusers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-      - name: Install dependencies
-        run: |
-          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install -e [quality,test]
-          python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
-          python -m pip install -U peft@git+https://github.com/huggingface/peft.git
-      - name: Environment
-        run: |
-          python utils/print_env.py
-      - name: Run slow PEFT CUDA tests
-        env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
-          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
-          CUBLAS_WORKSPACE_CONFIG: :16:8
-        run: |
-          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-            -s -v -k "not Flax and not Onnx and not PEFTLoRALoading" \
-            --make-reports=tests_peft_cuda \
-            tests/lora/
-          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-            -s -v -k "lora and not Flax and not Onnx and not PEFTLoRALoading" \
-            --make-reports=tests_peft_cuda_models_lora \
-            tests/models/
-      - name: Failure short reports
-        if: ${{ failure() }}
-        run: |
-          cat reports/tests_peft_cuda_stats.txt
-          cat reports/tests_peft_cuda_failures_short.txt
-          cat reports/tests_peft_cuda_models_lora_failures_short.txt
-      - name: Test suite reports artifacts
-        if: ${{ always() }}
-        uses: actions/upload-artifact@v2
-        with:
-          name: torch_peft_test_reports
-          path: reports

   flax_tpu_tests:
     name: Flax TPU Tests
     runs-on: docker-tpu
@@ -309,7 +253,7 @@ jobs:
     container:
       image: diffusers/diffusers-pytorch-compile-cuda
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+      options: --gpus 0 --shm-size "16gb" --ipc host
     steps:
       - name: Checkout diffusers
@@ -351,7 +295,7 @@ jobs:
     container:
       image: diffusers/diffusers-pytorch-xformers-cuda
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+      options: --gpus 0 --shm-size "16gb" --ipc host
     steps:
       - name: Checkout diffusers
@@ -392,7 +336,7 @@ jobs:
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+      options: --gpus 0 --shm-size "16gb" --ipc host
     steps:
       - name: Checkout diffusers
docker/diffusers-onnxruntime-cuda/Dockerfile

@@ -38,6 +38,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
     datasets \
     hf-doc-builder \
     huggingface-hub \
+    hf_transfer \
     Jinja2 \
     librosa \
     numpy==1.26.4 \
docker/diffusers-pytorch-compile-cuda/Dockerfile

@@ -38,6 +38,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
     datasets \
     hf-doc-builder \
     huggingface-hub \
+    hf_transfer \
     Jinja2 \
     librosa \
     numpy==1.26.4 \
docker/diffusers-pytorch-cuda/Dockerfile

@@ -38,6 +38,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
     datasets \
     hf-doc-builder \
     huggingface-hub \
+    hf_transfer \
     Jinja2 \
     librosa \
     numpy==1.26.4 \
docker/diffusers-pytorch-xformers-cuda/Dockerfile

@@ -38,6 +38,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
     datasets \
     hf-doc-builder \
     huggingface-hub \
+    hf_transfer \
     Jinja2 \
     librosa \
     numpy==1.26.4 \
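Each CI image now installs the `hf_transfer` package, which huggingface_hub only activates when `HF_HUB_ENABLE_HF_TRANSFER` is set truthy (matching the env change in nightly_tests.yml): the download path is opt-in, so installing the package alone changes nothing. A minimal sketch of that two-condition check, with the truthy set approximated rather than copied from the library:

```python
import os

# Approximation of the values huggingface_hub treats as "enabled"
TRUTHY = {"1", "on", "yes", "true"}

def hf_transfer_active(environ=os.environ):
    """True when hf_transfer downloads are both requested and installed."""
    if environ.get("HF_HUB_ENABLE_HF_TRANSFER", "").lower() not in TRUTHY:
        return False
    try:
        import hf_transfer  # noqa: F401  # the Rust-backed download accelerator
        return True
    except ImportError:
        return False
```

In these images both conditions hold, so large checkpoint downloads in the nightly jobs should go through the accelerated path.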
scripts/log_reports.py → utils/log_reports.py (file moved)