OpenDAS / dgl

Commit a47ab71d (unverified)
Authored Oct 14, 2021 by mszarma, committed by GitHub on Oct 14, 2021

[DOCS] Add training on CPU sections to docs (#3398)
Parent: 18863069

Showing 4 changed files with 58 additions and 2 deletions (+58 -2)
docs/source/conf.py                   +4  -2
docs/source/index.rst                 +1  -0
tutorials/cpu/README.txt              +2  -0
tutorials/cpu/cpu_best_practises.py   +51 -0
docs/source/conf.py

@@ -200,12 +200,14 @@ examples_dirs = ['../../tutorials/blitz',
                   '../../tutorials/large',
                   '../../tutorials/dist',
                   '../../tutorials/models',
-                  '../../tutorials/multi']
+                  '../../tutorials/multi',
+                  '../../tutorials/cpu']
  # path to find sources
  gallery_dirs = ['tutorials/blitz/',
                  'tutorials/large/',
                  'tutorials/dist/',
                  'tutorials/models/',
-                 'tutorials/multi/']
+                 'tutorials/multi/',
+                 'tutorials/cpu']
  # path to generate docs
  reference_url = {
      'dgl': None,
      'numpy': 'http://docs.scipy.org/doc/numpy/',
docs/source/index.rst

@@ -25,6 +25,7 @@ Welcome to Deep Graph Library Tutorials and Documentation
    guide/index
    guide_cn/index
    tutorials/large/index
+   tutorials/cpu/index
    tutorials/multi/index
    tutorials/dist/index
    tutorials/models/index
tutorials/cpu/README.txt (new file)

Training on CPUs
=========================
tutorials/cpu/cpu_best_practises.py (new file)

"""
CPU Best Pratices
=====================================================
This chapter focus on providing best practises for environment setup
to get the best performance during training and inference on the CPU.
Intel
`````````````````````````````
Hyper-threading
---------------------------
For workloads typical of the GNN domain, the suggested default setting for best performance
is to turn off hyper-threading.
Turning off hyper-threading can be done at the BIOS [#f1]_ or operating system level [#f2]_ [#f3]_.
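A minimal sketch for Linux, assuming a kernel that exposes the SMT control interface under ``/sys``
(a persistent change is usually made in the BIOS or via the ``nosmt`` kernel parameter):

.. code:: bash

   # check whether SMT (hyper-threading) is currently active (1 = active, 0 = off)
   cat /sys/devices/system/cpu/smt/active
   # turn hyper-threading off until the next reboot (requires root)
   echo off | sudo tee /sys/devices/system/cpu/smt/control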
OpenMP settings
---------------------------
During training on CPU, the training and dataloading parts need to run simultaneously.
The best OpenMP parallelization performance
is achieved by setting the optimal number of worker threads and dataloading workers.

**GNU OpenMP**

The default BKM (best-known method) for setting the number of OpenMP threads with the PyTorch backend is:

``OMP_NUM_THREADS`` = number of physical cores - ``num_workers``

The number of physical cores can be checked with ``lscpu`` ("Core(s) per socket")
or with the ``nproc`` command on the Linux command line.
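For example, a hedged sketch that derives the physical core count from ``lscpu`` output
(unlike ``nproc``, this stays correct even while hyper-threading is still enabled):

.. code:: bash

   # physical cores = cores per socket x number of sockets
   cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket:/ {print $NF}')
   sockets=$(lscpu | awk '/^Socket\(s\):/ {print $NF}')
   echo $((cores_per_socket * sockets))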
Below is a simple bash example that sets the OpenMP threads and the ``pytorch`` backend dataloader workers:
.. code:: bash

   cores=`nproc`
   num_workers=4
   export OMP_NUM_THREADS=$(($cores-$num_workers))
   python script.py --gpu -1 --num_workers=$num_workers
Depending on the dataset, model, and CPU, the optimal number of dataloader workers and OpenMP threads may vary,
but it should stay close to the general default advice presented above [#f4]_.
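As a usage illustration, a small hedged sweep over a few ``num_workers`` values
(reusing the hypothetical ``script.py`` from the example above) is one way to find the best split
for a particular dataset, model, and CPU:

.. code:: bash

   # try a few dataloader worker counts and compare the epoch times reported by the script
   for num_workers in 0 2 4 8; do
       export OMP_NUM_THREADS=$(($(nproc) - num_workers))
       python script.py --gpu -1 --num_workers=$num_workers
   done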
.. rubric:: Footnotes
.. [#f1] https://www.intel.com/content/www/us/en/support/articles/000007645/boards-and-kits/desktop-boards.html
.. [#f2] https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-linux/
.. [#f3] https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-ec2-windows-instances/
.. [#f4] https://software.intel.com/content/www/us/en/develop/articles/how-to-get-better-performance-on-pytorchcaffe2-with-intel-acceleration.html
"""