chenpangpang / transformers

Commit 2a501ac9 (unverified)
Authored Jul 01, 2021 by Lysandre Debut; committed by GitHub on Jul 01, 2021
Comment fast GPU TF tests (#12452)
parent 27d348f2

Showing 1 changed file with 84 additions and 84 deletions.
.github/workflows/self-push.yml (+84, -84) @ 2a501ac9
@@ -61,47 +61,47 @@ jobs:
         name: run_all_tests_torch_gpu_test_reports
         path: reports
 
-  run_tests_tf_gpu:
-    runs-on: [self-hosted, docker-gpu, single-gpu]
-    timeout-minutes: 120
-    container:
-      image: tensorflow/tensorflow:2.4.1-gpu
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-    - name: Launcher docker
-      uses: actions/checkout@v2
-
-    - name: NVIDIA-SMI
-      run: |
-        nvidia-smi
-
-    - name: Install dependencies
-      run: |
-        pip install --upgrade pip
-        pip install .[sklearn,testing,onnxruntime,sentencepiece]
-
-    - name: Are GPUs recognized by our DL frameworks
-      run: |
-        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('TF GPUs available:', bool(tf.config.list_physical_devices('GPU')))"
-        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('Number of TF GPUs available:', len(tf.config.list_physical_devices('GPU')))"
-
-    - name: Run all non-slow tests on GPU
-      env:
-        TF_NUM_INTRAOP_THREADS: 8
-        TF_NUM_INTEROP_THREADS: 1
-      run: |
-        python -m pytest -n 1 --dist=loadfile --make-reports=tests_tf_gpu tests
-
-    - name: Failure short reports
-      if: ${{ always() }}
-      run: cat reports/tests_tf_gpu_failures_short.txt
-
-    - name: Test suite reports artifacts
-      if: ${{ always() }}
-      uses: actions/upload-artifact@v2
-      with:
-        name: run_all_tests_tf_gpu_test_reports
-        path: reports
+#  run_tests_tf_gpu:
+#    runs-on: [self-hosted, docker-gpu, single-gpu]
+#    timeout-minutes: 120
+#    container:
+#      image: tensorflow/tensorflow:2.4.1-gpu
+#      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+#    steps:
+#    - name: Launcher docker
+#      uses: actions/checkout@v2
+#
+#    - name: NVIDIA-SMI
+#      run: |
+#        nvidia-smi
+#
+#    - name: Install dependencies
+#      run: |
+#        pip install --upgrade pip
+#        pip install .[sklearn,testing,onnxruntime,sentencepiece]
+#
+#    - name: Are GPUs recognized by our DL frameworks
+#      run: |
+#        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('TF GPUs available:', bool(tf.config.list_physical_devices('GPU')))"
+#        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('Number of TF GPUs available:', len(tf.config.list_physical_devices('GPU')))"
+#
+#    - name: Run all non-slow tests on GPU
+#      env:
+#        TF_NUM_INTRAOP_THREADS: 8
+#        TF_NUM_INTEROP_THREADS: 1
+#      run: |
+#        python -m pytest -n 1 --dist=loadfile --make-reports=tests_tf_gpu tests
+#
+#    - name: Failure short reports
+#      if: ${{ always() }}
+#      run: cat reports/tests_tf_gpu_failures_short.txt
+#
+#    - name: Test suite reports artifacts
+#      if: ${{ always() }}
+#      uses: actions/upload-artifact@v2
+#      with:
+#        name: run_all_tests_tf_gpu_test_reports
+#        path: reports
 
   run_tests_torch_multi_gpu:
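For reference, the "Are GPUs recognized by our DL frameworks" step in the job above amounts to the two python -c one-liners shown in the diff. Below is a minimal standalone sketch of the same check, not part of this commit; it assumes a TensorFlow 2.x install and runs with or without a visible GPU.

import os

# Mirror the workflow's TF_CPP_MIN_LOG_LEVEL=3: silence TensorFlow's C++ logs
# (level 3 keeps only fatal messages). Must be set before importing TensorFlow.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

import tensorflow as tf

# Same call the one-liners use: list the physical GPUs TensorFlow can see.
gpus = tf.config.list_physical_devices("GPU")
print("TF GPUs available:", bool(gpus))
print("Number of TF GPUs available:", len(gpus))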
@@ -147,47 +147,47 @@ jobs:
         name: run_all_tests_torch_multi_gpu_test_reports
         path: reports
 
-  run_tests_tf_multi_gpu:
-    runs-on: [self-hosted, docker-gpu, multi-gpu]
-    timeout-minutes: 120
-    container:
-      image: tensorflow/tensorflow:2.4.1-gpu
-      options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-    - name: Launcher docker
-      uses: actions/checkout@v2
-
-    - name: NVIDIA-SMI
-      run: |
-        nvidia-smi
-
-    - name: Install dependencies
-      run: |
-        pip install --upgrade pip
-        pip install .[sklearn,testing,onnxruntime,sentencepiece]
-
-    - name: Are GPUs recognized by our DL frameworks
-      run: |
-        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('TF GPUs available:', bool(tf.config.list_physical_devices('GPU')))"
-        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('Number of TF GPUs available:', len(tf.config.list_physical_devices('GPU')))"
-
-    - name: Run all non-slow tests on GPU
-      env:
-        TF_NUM_INTRAOP_THREADS: 8
-        TF_NUM_INTEROP_THREADS: 1
-      run: |
-        python -m pytest -n 1 --dist=loadfile --make-reports=tests_tf_multi_gpu tests
-
-    - name: Failure short reports
-      if: ${{ always() }}
-      run: cat reports/tests_tf_multi_gpu_failures_short.txt
-
-    - name: Test suite reports artifacts
-      if: ${{ always() }}
-      uses: actions/upload-artifact@v2
-      with:
-        name: run_all_tests_tf_multi_gpu_test_reports
-        path: reports
+#  run_tests_tf_multi_gpu:
+#    runs-on: [self-hosted, docker-gpu, multi-gpu]
+#    timeout-minutes: 120
+#    container:
+#      image: tensorflow/tensorflow:2.4.1-gpu
+#      options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+#    steps:
+#    - name: Launcher docker
+#      uses: actions/checkout@v2
+#
+#    - name: NVIDIA-SMI
+#      run: |
+#        nvidia-smi
+#
+#    - name: Install dependencies
+#      run: |
+#        pip install --upgrade pip
+#        pip install .[sklearn,testing,onnxruntime,sentencepiece]
+#
+#    - name: Are GPUs recognized by our DL frameworks
+#      run: |
+#        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('TF GPUs available:', bool(tf.config.list_physical_devices('GPU')))"
+#        TF_CPP_MIN_LOG_LEVEL=3 python -c "import tensorflow as tf; print('Number of TF GPUs available:', len(tf.config.list_physical_devices('GPU')))"
+#
+#    - name: Run all non-slow tests on GPU
+#      env:
+#        TF_NUM_INTRAOP_THREADS: 8
+#        TF_NUM_INTEROP_THREADS: 1
+#      run: |
+#        python -m pytest -n 1 --dist=loadfile --make-reports=tests_tf_multi_gpu tests
+#
+#    - name: Failure short reports
+#      if: ${{ always() }}
+#      run: cat reports/tests_tf_multi_gpu_failures_short.txt
+#
+#    - name: Test suite reports artifacts
+#      if: ${{ always() }}
+#      uses: actions/upload-artifact@v2
+#      with:
+#        name: run_all_tests_tf_multi_gpu_test_reports
+#        path: reports
 
   run_tests_torch_cuda_extensions_gpu:
     runs-on: [self-hosted, docker-gpu, single-gpu]
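The multi-GPU job above pins TensorFlow's thread pools for the pytest step through the TF_NUM_INTRAOP_THREADS and TF_NUM_INTEROP_THREADS environment variables. As an illustrative sketch only (not part of this commit), the same limits can be expressed with TensorFlow's public threading API, provided the calls run before any op initializes the runtime.

import tensorflow as tf

# Threads used inside a single op (e.g. one large matmul),
# matching the workflow's TF_NUM_INTRAOP_THREADS: 8.
tf.config.threading.set_intra_op_parallelism_threads(8)

# Threads used to run independent ops concurrently,
# matching the workflow's TF_NUM_INTEROP_THREADS: 1.
tf.config.threading.set_inter_op_parallelism_threads(1)

print("intra-op threads:", tf.config.threading.get_intra_op_parallelism_threads())
print("inter-op threads:", tf.config.threading.get_inter_op_parallelism_threads())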
@@ -278,9 +278,9 @@ jobs:
     if: always()
     needs: [
         run_tests_torch_gpu,
-        run_tests_tf_gpu,
+#        run_tests_tf_gpu,
         run_tests_torch_multi_gpu,
-        run_tests_tf_multi_gpu,
+#        run_tests_tf_multi_gpu,
         run_tests_torch_cuda_extensions_gpu,
         run_tests_torch_cuda_extensions_multi_gpu
     ]