OpenDAS / ColossalAI · Commits

Commit 31fe8423 (unverified), authored Dec 29, 2022 by HELSON, committed by GitHub on Dec 29, 2022

[example] fix benchmark.sh for gpt example (#2229)

parent 78483a9f
Changes: 2 files changed, 10 additions and 10 deletions (+10 −10)

- examples/language/gpt/benchmark.sh  +4 −4
- examples/language/gpt/run.sh  +6 −6
examples/language/gpt/benchmark.sh

```diff
-for MODEL_NAME in "GPT2small"
+for MODEL_TYPE in "gpt2_medium"
 do
-for BATCH_SIZE in 8
+for BATCH_SIZE in 16
 do
 for GPUNUM in 1 2 4 8
 do
...
@@ -11,8 +11,8 @@ then
 continue
 fi
 echo "****************** Begin ***************************"
-echo "* benchmrking MODEL_NAME ${MODEL_NAME} BS ${BS} GPUNUM ${GPUNUM} TPDEGREE ${TPDEGREE}"
-bash ./run.sh
+echo "* benchmrking MODEL_TYPE ${MODEL_TYPE} BS ${BATCH_SIZE} GPUNUM ${GPUNUM} TPDEGREE ${TPDEGREE}"
+MODEL_TYPE=${MODEL_TYPE} BATCH_SIZE=${BATCH_SIZE} GPUNUM=${GPUNUM} TPDEGREE=${TPDEGREE} bash ./run.sh
 echo "****************** Finished ***************************"
 echo ""
 echo ""
...
```
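The key change in this hunk is that benchmark.sh now hands the loop variables to run.sh as one-shot environment variables on the command line, instead of calling `bash ./run.sh` bare (where the child never saw them). A minimal sketch of that pattern, using a hypothetical helper script path that is not part of the commit:

```shell
# Start from a clean slate so the defaults are observable.
unset MODEL_TYPE BATCH_SIZE

# A tiny child script that reads its configuration from the environment,
# the same way run.sh does with ${VAR:-default} expansions.
cat > /tmp/child.sh <<'EOF'
echo "type=${MODEL_TYPE:-unset} bs=${BATCH_SIZE:-unset}"
EOF

# Plain invocation: the child sees no variables, so the fallbacks apply.
sh /tmp/child.sh                                        # -> type=unset bs=unset

# Prefixed invocation: "VAR=value cmd" exports the variables for that one
# command only, without polluting the caller's environment.
MODEL_TYPE=gpt2_medium BATCH_SIZE=16 sh /tmp/child.sh   # -> type=gpt2_medium bs=16
```

The prefix form keeps the benchmark loop's state out of the parent shell, so each iteration of the loop starts from a clean environment.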
examples/language/gpt/run.sh

```diff
 # distplan in ["colossalai", "zero1", "zero2", "torch_ddp", "torch_zero"]
-export DISTPAN={$DISTPAN:-"colossalai"}
+export DISTPAN=${DISTPAN:-"colossalai"}

 # The following options only valid when DISTPAN="colossalai"
-export TPDEGREE=${TPDEGREE:-1}
 export GPUNUM=${GPUNUM:-1}
-export PLACEMENT=${PLACEMENT:'const'}
-export USE_SHARD_INIT=${USE_SHARD_INIT:False}
-export BATCH_SIZE=${BATCH_SIZE:-8}
-export MODEL_TYPE=${MODEL_TYPE:"gpt2_medium"}
+export TPDEGREE=${TPDEGREE:-1}
+export PLACEMENT=${PLACEMENT:-"const"}
+export USE_SHARD_INIT=${USE_SHARD_INIT:-False}
+export BATCH_SIZE=${BATCH_SIZE:-16}
+export MODEL_TYPE=${MODEL_TYPE:-"gpt2_medium"}

 mkdir -p logs
 torchrun --standalone --nproc_per_node=${GPUNUM} train_gpt_demo.py --tp_degree=${TPDEGREE} --model_type=${MODEL_TYPE} --batch_size=${BATCH_SIZE} --placement ${PLACEMENT} --shardinit ${USE_SHARD_INIT} --distplan ${DISTPAN} 2>&1 | tee ./logs/${MODEL_TYPE}_${DISTPAN}_gpu_${GPUNUM}_bs_${BATCH_SIZE}_tp_${TPDEGREE}.log
```
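The run.sh hunk is mostly correcting shell default-value expansions: `{$VAR:-d}` puts the brace before the `$`, so the braces become literal text; and `${VAR:d}` / `${VAR:'d'}` are substring expansions, not defaults. Only `${VAR:-d}` substitutes the default when the variable is unset or empty. A small illustration of the difference, not part of the commit:

```shell
unset DISTPAN

# Buggy form from the old run.sh: the brace precedes the $, so the shell sees
# literal braces plus a plain $DISTPAN expansion. With DISTPAN unset, the
# punctuation survives into the value.
broken={$DISTPAN:-colossalai}
echo "$broken"                  # -> {:-colossalai}

# Fixed form: the POSIX "use default value" expansion. The default applies
# only when DISTPAN is unset or empty.
fixed=${DISTPAN:-colossalai}
echo "$fixed"                   # -> colossalai

# Once the variable is set, the default no longer applies.
DISTPAN=zero2
echo "${DISTPAN:-colossalai}"   # -> zero2
```

This is why the old script silently ignored any `DISTPAN`, `PLACEMENT`, `USE_SHARD_INIT`, or `MODEL_TYPE` the caller exported: the malformed expansions never fell back correctly.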