chenpangpang / transformers

Unverified commit 056723ad
Multi-GPU setup (#7453)
Authored Sep 30, 2020 by Lysandre Debut; committed by GitHub on Sep 30, 2020
parent 4ba24874
Changes: 2 changed files, with 114 additions and 2 deletions (+114 -2)

.github/workflows/self-push.yml       +51 -1
.github/workflows/self-scheduled.yml  +63 -1
.github/workflows/self-push.yml  (view file @ 056723ad)

@@ -14,7 +14,7 @@ on:
 jobs:
   run_tests_torch_and_tf_gpu:
-    runs-on: self-hosted
+    runs-on: [self-hosted, single-gpu]
     steps:
     - uses: actions/checkout@v2
     - name: Python version
@@ -62,3 +62,53 @@ jobs:
       run: |
         source .env/bin/activate
         python -m pytest -n 2 --dist=loadfile -s ./tests/
+
+  run_tests_torch_and_tf_multiple_gpu:
+    runs-on: [self-hosted, multi-gpu]
+    steps:
+    - uses: actions/checkout@v2
+    - name: Python version
+      run: |
+        which python
+        python --version
+        pip --version
+    - name: Current dir
+      run: pwd
+    - run: nvidia-smi
+    - name: Loading cache.
+      uses: actions/cache@v2
+      id: cache
+      with:
+        path: .env
+        key: v0-tests_tf_torch_multiple_gpu-${{ hashFiles('setup.py') }}
+    - name: Create new python env (on self-hosted runners we have to handle isolation ourselves)
+      run: |
+        python -m venv .env
+        source .env/bin/activate
+        which python
+        python --version
+        pip --version
+    - name: Install dependencies
+      run: |
+        source .env/bin/activate
+        pip install --upgrade pip
+        pip install torch!=1.6.0
+        pip install .[sklearn,testing,onnxruntime]
+        pip install git+https://github.com/huggingface/datasets
+    - name: Are GPUs recognized by our DL frameworks
+      run: |
+        source .env/bin/activate
+        python -c "import torch; print(torch.cuda.is_available())"
+    - name: Run all non-slow tests on GPU
+      env:
+        TF_FORCE_GPU_ALLOW_GROWTH: "true"
+        # TF_GPU_MEMORY_LIMIT: 4096
+        OMP_NUM_THREADS: 1
+        USE_CUDA: yes
+      run: |
+        source .env/bin/activate
+        python -m pytest -n 2 --dist=loadfile -s ./tests/
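Note: the new multi-GPU job reuses the single-GPU framework check above, which only prints torch.cuda.is_available(). A minimal sketch of a check that would also confirm more than one device is visible to PyTorch (an illustration only, not part of this commit; it assumes torch is installed in the .env virtualenv):

    # Hypothetical check, not part of this commit: verify that the multi-gpu
    # runner really exposes more than one CUDA device to PyTorch.
    import torch

    print("CUDA available:", torch.cuda.is_available())
    print("Visible GPUs:", torch.cuda.device_count())
    assert torch.cuda.device_count() > 1, "expected a multi-GPU runner"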
.github/workflows/self-scheduled.yml  (view file @ 056723ad)

@@ -10,7 +10,7 @@ on:
 jobs:
   run_all_tests_torch_and_tf_gpu:
-    runs-on: self-hosted
+    runs-on: [self-hosted, single-gpu]
     steps:
     - uses: actions/checkout@v2
@@ -70,3 +70,65 @@ jobs:
         source .env/bin/activate
         pip install -r examples/requirements.txt
         python -m pytest -n 1 --dist=loadfile -s examples
+
+  run_all_tests_torch_and_tf_multiple_gpu:
+    runs-on: [self-hosted, multi-gpu]
+    steps:
+    - uses: actions/checkout@v2
+    - name: Loading cache.
+      uses: actions/cache@v2
+      id: cache
+      with:
+        path: .env
+        key: v0-slow_tests_tf_torch_multi_gpu-${{ hashFiles('setup.py') }}
+    - name: Python version
+      run: |
+        which python
+        python --version
+        pip --version
+    - name: Current dir
+      run: pwd
+    - run: nvidia-smi
+    - name: Create new python env (on self-hosted runners we have to handle isolation ourselves)
+      if: steps.cache.outputs.cache-hit != 'true'
+      run: |
+        python -m venv .env
+        source .env/bin/activate
+        which python
+        python --version
+        pip --version
+    - name: Install dependencies
+      run: |
+        source .env/bin/activate
+        pip install --upgrade pip
+        pip install torch!=1.6.0
+        pip install .[sklearn,testing,onnxruntime]
+        pip install git+https://github.com/huggingface/datasets
+    - name: Are GPUs recognized by our DL frameworks
+      run: |
+        source .env/bin/activate
+        python -c "import torch; print(torch.cuda.is_available())"
+    - name: Run all tests on GPU
+      env:
+        TF_FORCE_GPU_ALLOW_GROWTH: "true"
+        OMP_NUM_THREADS: 1
+        RUN_SLOW: yes
+        USE_CUDA: yes
+      run: |
+        source .env/bin/activate
+        python -m pytest -n 1 --dist=loadfile -s ./tests/
+    - name: Run examples tests on GPU
+      env:
+        TF_FORCE_GPU_ALLOW_GROWTH: "true"
+        OMP_NUM_THREADS: 1
+        RUN_SLOW: yes
+        USE_CUDA: yes
+      run: |
+        source .env/bin/activate
+        pip install -r examples/requirements.txt
+        python -m pytest -n 1 --dist=loadfile -s examples
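Note: the scheduled multi-GPU job sets RUN_SLOW: yes so that tests normally skipped on push builds are executed here. As a rough sketch of that convention (an illustration of an environment-gated skip, not code taken from the repository):

    # Hypothetical illustration of an env-gated skip, mirroring the RUN_SLOW
    # variable exported by this workflow; not code from the transformers repo.
    import os
    import unittest

    RUN_SLOW = os.environ.get("RUN_SLOW", "").lower() in ("yes", "true", "1")

    def slow(test_case):
        # Skip the decorated test unless RUN_SLOW is set in the environment.
        return unittest.skipUnless(RUN_SLOW, "slow test; set RUN_SLOW=yes to run")(test_case)

    class ExampleGpuTests(unittest.TestCase):
        @slow
        def test_expensive_multi_gpu_path(self):
            self.assertTrue(True)  # placeholder for an expensive test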