Commit 13afb712 authored by Julien Chaumond

[ci] Ensure that TF does not preempt all GPU memory for itself

see https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth

Co-Authored-By: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
Co-Authored-By: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
parent c0135194
@@ -40,6 +40,7 @@ jobs:
       - name: Run all non-slow tests on GPU
         env:
+          TF_FORCE_GPU_ALLOW_GROWTH: yes
           OMP_NUM_THREADS: 1
           USE_CUDA: yes
         run: |
......
@@ -41,6 +41,7 @@ jobs:
       - name: Run all tests on GPU
         env:
+          TF_FORCE_GPU_ALLOW_GROWTH: yes
           OMP_NUM_THREADS: 1
           RUN_SLOW: yes
           USE_CUDA: yes
......
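
For context, TF_FORCE_GPU_ALLOW_GROWTH toggles the same allocator behavior that the linked TensorFlow guide shows how to enable from Python. A minimal sketch of the programmatic equivalent, assuming TensorFlow 2.x and that it runs before any GPU has been initialized:

import tensorflow as tf

# Programmatic equivalent of TF_FORCE_GPU_ALLOW_GROWTH: allocate GPU memory
# on demand instead of reserving nearly all of it at process start.
# Must run before any GPU is initialized, otherwise it raises RuntimeError.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

The environment-variable route is used in CI because it requires no change to the test code and applies to every TensorFlow process the step spawns.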