<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Distributed training with 🤗 Accelerate

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and substantially speeding up training. To help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines, Hugging Face created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

はじめãĢ 🤗 Accelerate ã‚’ã‚¤ãƒŗã‚šãƒˆãƒŧãƒĢしぞしょう:

```bash
pip install accelerate
```
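
If you want to double-check the installation, the package version is available from Python:

```py
>>> import accelerate

>>> print(accelerate.__version__)
```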

ãã—ãŸã‚‰ã‚¤ãƒŗãƒãƒŧトしãĻ [`~accelerate.Accelerator`] ã‚ĒブジェクトをäŊœæˆã—ぞしょう。[`~accelerate.Accelerator`] ã¯åˆ†æ•Ŗå‡Ļᐆã‚ģットã‚ĸップをč‡Ēå‹•įš„ãĢ検å‡ēã—ã€č¨“įˇ´ãŽãŸã‚ãĢåŋ…čρãĒ全ãĻãŽã‚ŗãƒŗãƒãƒŧãƒãƒŗãƒˆã‚’åˆæœŸåŒ–ã—ãžã™ã€‚ãƒĸデãƒĢをデバイ゚ãĢ明į¤ēįš„ãĢ配įŊŽã™ã‚‹åŋ…čĻã¯ã‚ã‚Šãžã›ã‚“ã€‚

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```
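
Even though prepared objects are placed on the right device automatically, the device selected for the current process is still exposed if you need it, for example to move an extra tensor yourself:

```py
>>> # The device 🤗 Accelerate chose for this process (e.g. a specific GPU)
>>> device = accelerator.device
```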

## Prepare to accelerate

æŦĄãĢ、é–ĸé€Ŗã™ã‚‹å…¨ãĻãŽč¨“įˇ´ã‚Ēブジェクトを [`~accelerate.Accelerator.prepare`] ãƒĄã‚ŊッドãĢæ¸Ąã—ãžã™ã€‚ã“ã‚ŒãĢã¯ã€č¨“įˇ´ã¨čŠ•äžĄãã‚Œãžã‚ŒãŽDataloader、ãƒĸデãƒĢ、optimizer がåĢぞれぞす:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```
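
The objects are returned in the same order they are passed in. If your loop also uses a learning rate scheduler (like the `lr_scheduler` created later in this guide), it can be passed to [`~accelerate.Accelerator.prepare`] as well; a sketch:

```py
>>> # Preparing the scheduler ensures it is stepped correctly in each process
>>> train_dataloader, eval_dataloader, model, optimizer, lr_scheduler = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer, lr_scheduler
... )
```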

## Backward

最垌ãĢ荓ᎴãƒĢãƒŧプ内ぎ `loss.backward()` を 🤗 Accelerate ぎ [`~accelerate.Accelerator.backward`] ãƒĄã‚ŊッドでįŊŽãæ›ãˆãžã™īŧš

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```
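
Operations that act on gradients still work after [`~accelerate.Accelerator.backward`]. For example, if your loop clips gradients, you can use the accelerator's clipping helper so clipping behaves correctly under distributed and mixed-precision setups (the max norm of `1.0` below is just an illustrative value):

```py
>>> # After accelerator.backward(loss) and before optimizer.step()
>>> accelerator.clip_grad_norm_(model.parameters(), 1.0)
```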

äģĨä¸‹ãŽã‚ŗãƒŧドでįĸēčĒã§ãã‚‹é€šã‚Šã€č¨“įˇ´ãƒĢãƒŧプãĢ4čĄŒãŽã‚ŗãƒŧドをčŋŊåŠ ã™ã‚‹ã ã‘ã§åˆ†æ•Ŗå­Ļįŋ’が可čƒŊですīŧ

```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```
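
For reference, here is the same loop assembled into one self-contained script. The random pre-tokenized batches and the `bert-base-uncased` checkpoint are placeholders so the example runs without a real dataset (and `torch.optim.AdamW` stands in for the optimizer import above); substitute your own data and checkpoint in practice:

```py
import torch
from torch.utils.data import DataLoader
from tqdm.auto import tqdm

from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification, get_scheduler

accelerator = Accelerator()

checkpoint = "bert-base-uncased"  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Dummy pre-tokenized samples; the default collate function stacks each key
samples = [
    {
        "input_ids": torch.randint(0, model.config.vocab_size, (128,)),
        "attention_mask": torch.ones(128, dtype=torch.long),
        "labels": torch.randint(0, 2, ()),
    }
    for _ in range(64)
]
train_dataloader = DataLoader(samples, batch_size=8, shuffle=True)

train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

progress_bar = tqdm(range(num_training_steps))

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```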

## Train

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```
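
If you'd rather skip the interactive questionnaire, 🤗 Accelerate also provides a utility that writes a basic default configuration file from Python:

```py
>>> from accelerate.utils import write_basic_config

>>> write_basic_config()  # writes a default config file for this machine
```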

そしãĻæŦĄãŽã‚ˆã†ãĢしãĻč¨“įˇ´ã‚’é–‹å§‹ã—ãžã™:

```bash
accelerate launch train.py
```

### ノãƒŧãƒˆãƒ–ãƒƒã‚¯ã§č¨“įˇ´ã™ã‚‹

🤗 Accelerate can also run in a notebook, which is useful if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function and pass it to [`~accelerate.notebook_launcher`]:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```
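
[`~accelerate.notebook_launcher`] also accepts positional arguments for the training function and the number of processes to launch; the value of 8 below is an assumption matching a typical Colab TPU runtime:

```py
>>> # `args` is forwarded to training_function; adjust num_processes to your setup
>>> notebook_launcher(training_function, num_processes=8)
```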

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).