chenpangpang / transformers · Commits

Unverified commit e2b6e73f, authored Feb 03, 2022 by Patrick von Platen, committed by GitHub on Feb 03, 2022

[Flax tests] Disable scheduled GPU tests (#15503)

parent f5d98da2
Showing 1 changed file with 44 additions and 44 deletions:

.github/workflows/self-scheduled.yml
...
...
@@ -98,50 +98,50 @@ jobs:
         name: run_all_tests_torch_gpu_test_reports
         path: reports
 
-  run_all_tests_flax_gpu:
-    runs-on: [self-hosted, docker-gpu-test, single-gpu]
-    container:
-      image: tensorflow/tensorflow:2.4.1-gpu
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-    - name: Launcher docker
-      uses: actions/checkout@v2
-
-    - name: NVIDIA-SMI
-      continue-on-error: true
-      run: |
-        nvidia-smi
-
-    - name: Install dependencies
-      run: |
-        pip install --upgrade pip
-        pip install --upgrade "jax[cuda111]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
-        pip install .[flax,integrations,sklearn,testing,sentencepiece,flax-speech,vision]
-        pip install https://github.com/kpu/kenlm/archive/master.zip
-
-    - name: Are GPUs recognized by our DL frameworks
-      run: |
-        python -c "from jax.lib import xla_bridge; print('GPU available:', xla_bridge.get_backend().platform)"
-        python -c "import jax; print('Number of GPUs available:', len(jax.local_devices()))"
-
-    - name: Run all tests on GPU
-      run: |
-        python -m pytest -n 1 -v --dist=loadfile --make-reports=tests_flax_gpu tests
-
-    - name: Failure short reports
-      if: ${{ always() }}
-      run: cat reports/tests_flax_gpu_failures_short.txt
-
-    - name: Test durations
-      if: ${{ always() }}
-      run: cat reports/tests_flax_gpu_durations.txt
-
-    - name: Test suite reports artifacts
-      if: ${{ always() }}
-      uses: actions/upload-artifact@v2
-      with:
-        name: run_all_tests_flax_gpu_test_reports
-        path: reports
+#  run_all_tests_flax_gpu:
+#    runs-on: [self-hosted, docker-gpu-test, single-gpu]
+#    container:
+#      image: tensorflow/tensorflow:2.4.1-gpu
+#      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+#    steps:
+#    - name: Launcher docker
+#      uses: actions/checkout@v2
+#
+#    - name: NVIDIA-SMI
+#      continue-on-error: true
+#      run: |
+#        nvidia-smi
+#
+#    - name: Install dependencies
+#      run: |
+#        pip install --upgrade pip
+#        pip install --upgrade "jax[cuda111]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
+#        pip install .[flax,integrations,sklearn,testing,sentencepiece,flax-speech,vision]
+#        pip install https://github.com/kpu/kenlm/archive/master.zip
+#
+#    - name: Are GPUs recognized by our DL frameworks
+#      run: |
+#        python -c "from jax.lib import xla_bridge; print('GPU available:', xla_bridge.get_backend().platform)"
+#        python -c "import jax; print('Number of GPUs available:', len(jax.local_devices()))"
+#
+#    - name: Run all tests on GPU
+#      run: |
+#        python -m pytest -n 1 -v --dist=loadfile --make-reports=tests_flax_gpu tests
+#
+#    - name: Failure short reports
+#      if: ${{ always() }}
+#      run: cat reports/tests_flax_gpu_failures_short.txt
+#
+#    - name: Test durations
+#      if: ${{ always() }}
+#      run: cat reports/tests_flax_gpu_durations.txt
+#
+#    - name: Test suite reports artifacts
+#      if: ${{ always() }}
+#      uses: actions/upload-artifact@v2
+#      with:
+#        name: run_all_tests_flax_gpu_test_reports
+#        path: reports
 
   run_all_tests_tf_gpu:
     runs-on: [self-hosted, docker-gpu, single-gpu]
...
...
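The diff above follows a common pattern for temporarily disabling a scheduled CI job: the entire job block is commented out rather than deleted, so the workflow file stays valid YAML and the job can be restored later by uncommenting. A minimal sketch of the pattern (the `keep_running` and `temporarily_disabled` job names and their steps are illustrative, not taken from this workflow):

```yaml
jobs:
  # This job remains active and continues to run on the schedule.
  keep_running:
    runs-on: ubuntu-latest
    steps:
    - name: Still scheduled
      run: echo "this job still runs"

  # Commenting out the whole block removes the job from the workflow
  # without deleting its definition from the file.
#  temporarily_disabled:
#    runs-on: ubuntu-latest
#    steps:
#    - name: Not scheduled for now
#      run: echo "this job is skipped until uncommented"
```

Keeping the commented definition in the file makes re-enabling the job a mechanical one-commit change and preserves the configuration in plain sight for reviewers, at the cost of some dead text in the workflow.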