<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Efficient Training on CPU

This guide focuses on training large models efficiently on CPU.

## Mixed precision with IPEX
Mixed precision uses single (fp32) and half-precision (bf16/fp16) data types in a model to accelerate training or inference while still preserving much of the single-precision accuracy. Modern CPUs such as 3rd and 4th Gen Intel® Xeon® Scalable processors natively support bf16, so you should get more performance out of the box by enabling mixed precision training with bf16.

To further maximize training performance, you can use Intel® Extension for PyTorch (IPEX), a library built on PyTorch that adds support for extra CPU instruction set architecture (ISA) features such as Intel® Advanced Vector Extensions 512 Vector Neural Network Instructions (Intel® AVX512-VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) for an additional performance boost on Intel CPUs. However, CPUs with only AVX2 (e.g., AMD or older Intel CPUs) are not guaranteed to perform better with IPEX.

Auto Mixed Precision (AMP) for CPU backends has been available since PyTorch 1.10. AMP for bf16 on CPUs and bf16 operator optimization are also supported in IPEX and partially upstreamed to the main PyTorch branch. You can get better performance and user experience with IPEX AMP.

See [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html) for more details.
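The bf16 autocast behavior described above can be sketched with plain PyTorch (a minimal illustration; the toy `Linear` model and tensor shapes here are arbitrary assumptions, and IPEX is not required for this snippet):

```python
import torch

# A toy fp32 model; weights stay in fp32, while autocast-eligible ops
# (e.g. linear/matmul) run in bf16 inside the context.
model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16 — the linear layer ran in bf16
```

In Trainer, the equivalent switch is the `bf16=True` training argument shown later in this guide.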

### IPEX installation:

IPEX releases follow PyTorch releases. To install via pip:

| PyTorch Version   | IPEX version   |
| :---------------: | :------------: |
| 2.1.x             |  2.1.100+cpu   |
| 2.0.x             |  2.0.100+cpu   |
| 1.13              |  1.13.0+cpu    |
| 1.12              |  1.12.300+cpu  |

Please run `pip list | grep torch` to get your `pytorch_version`, so you can get the `IPEX version_name`.
```
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
You can check the latest versions in [ipex-whl-stable-cpu](https://developer.intel.com/ipex-whl-stable-cpu) if needed.

See [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html) for more installation approaches.
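The version mapping in the table above can be expressed in code; the sketch below is a hypothetical helper (not part of IPEX or pip) that derives the wheel version to request from an installed PyTorch version string:

```python
# Hypothetical helper: pick the IPEX wheel matching a PyTorch version,
# per the compatibility table above (assumed mapping, not an official API).
IPEX_FOR_TORCH = {
    "2.1": "2.1.100+cpu",
    "2.0": "2.0.100+cpu",
    "1.13": "1.13.0+cpu",
    "1.12": "1.12.300+cpu",
}

def ipex_version_for(torch_version: str) -> str:
    """Map e.g. '2.1.2+cpu' -> '2.1.100+cpu' using the major.minor prefix."""
    major_minor = ".".join(torch_version.split("+")[0].split(".")[:2])
    if major_minor not in IPEX_FOR_TORCH:
        raise ValueError(f"No known IPEX wheel for PyTorch {torch_version}")
    return IPEX_FOR_TORCH[major_minor]

print(ipex_version_for("2.1.2+cpu"))  # 2.1.100+cpu
```

The returned string is what you would pass as `<version_name>` in the `pip install` command above.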

### Usage in Trainer
To enable auto mixed precision with IPEX in Trainer, users should add `use_ipex`, `bf16` and `use_cpu` to the training command arguments.

Take the [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) use case as an example.

- Training with IPEX using BF16 auto mixed precision on CPU:
<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex</b> \
<b>--bf16</b> \
<b>--use_cpu</b></pre> 

If you want to enable `use_ipex` and `bf16` in your script, add these parameters to `TrainingArguments` like this:
```diff
training_args = TrainingArguments(
    output_dir=args.output_path,
+   bf16=True,
+   use_ipex=True,
+   use_cpu=True,
    **kwargs
)
```

### Practice example

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)