OpenDAS / ColossalAI · Commits

Commit ad00894f, authored Jan 06, 2023 by Ziyue Jiang

polish

Parent: 9ae9e740
Showing 2 changed files with 2 additions and 13 deletions (+2 -13):

examples/language/gpt/experiments/pipeline_parallel/README.md   +2 -1
examples/language/gpt/experiments/pipeline_parallel/utils.py    +0 -12
examples/language/gpt/experiments/pipeline_parallel/README.md (view file @ ad00894f)
-# Auto-Parallelism with GPT2
+# Pipeline Parallelism Demo with GPT2
## Requirements
...
...
@@ -33,5 +33,6 @@ For simplicity, the input data is randomly generated here.
```bash
# Run pipeline parallelism on GPT with the default settings and a dummy dataset.
# You can change the number of GPUs or the microbatch count in run.sh.
bash run.sh
```
examples/language/gpt/experiments/pipeline_parallel/utils.py (deleted, 100644 → 0; view file @ 9ae9e740)
import torch


# Randomly generated data
def get_data(batch_size, seq_len, vocab_size):
    input_ids = torch.randint(0, vocab_size, (batch_size, seq_len), device=torch.cuda.current_device())
    attention_mask = torch.ones_like(input_ids)
    return input_ids, attention_mask


def get_tflops(model_numel, batch_size, seq_len, step_time):
    # Estimated model TFLOPS: the factor 8 is the usual FLOPs-per-parameter-per-token
    # budget for forward (2) + backward (4) + activation recomputation (2);
    # the 1e-12 guards against division by zero for very short steps.
    return model_numel * batch_size * seq_len * 8 / 1e12 / (step_time + 1e-12)
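
For reference, a minimal sketch of how these two removed helpers are typically used around a training step. The model size, step time, vocabulary size, and the training step itself are illustrative assumptions, not part of this commit:

```python
import time

import torch

from utils import get_data, get_tflops  # the module removed by this commit

# Illustrative sizes (assumptions, not from the commit): a ~1.6B-parameter
# GPT2-XL-scale model, batch size 8, sequence length 1024, GPT2 vocabulary.
BATCH_SIZE, SEQ_LEN, VOCAB_SIZE = 8, 1024, 50257
model_numel = 1.6e9

# get_data places tensors on the current CUDA device, so a GPU is required.
input_ids, attention_mask = get_data(BATCH_SIZE, SEQ_LEN, VOCAB_SIZE)

start = time.time()
# ... one forward/backward/optimizer step on (input_ids, attention_mask) goes here ...
torch.cuda.synchronize()
step_time = time.time() - start

# With the 8 * numel * tokens estimate, a hypothetical 1.2 s step gives
# 1.6e9 * 8 * 1024 * 8 / 1e12 / 1.2 ≈ 87 TFLOPS.
print(f"throughput: {get_tflops(model_numel, BATCH_SIZE, SEQ_LEN, step_time):.2f} TFLOPS")
```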