gaoqiong / lm-evaluation-harness · Commit 1de7e4a5

accelerator multidevice setup

Authored Jun 07, 2023 by Benjamin Fattori; committed Jun 22, 2023 by lintangsutawika
Parent: 2af4f9e0

Showing 1 changed file with 21 additions and 1 deletion:
lm_eval/models/seq2seq.py (+21, -1)
@@ -71,7 +71,27 @@ class Seq2SeqHFLM(LM):
         self.batch_size_per_gpu = batch_size
         if gpus > 1:
-            raise NotImplementedError
+            accelerator = Accelerator()
+            if gpus > accelerator.num_processes:
+                warning = (
+                    "WARNING: The number of total system GPUs does not match the number of spawned processes. "
+                    "If you would like to use data parallelism, please launch the script "
+                    "with 'accelerate launch *script*'. "
+                    f"Current run will proceed with {accelerator.num_processes} devices."
+                )
+                print(warning)
+                self._rank = accelerator.local_process_index
+                self._world_size = accelerator.num_processes
+            else:
+                self.model = accelerator.prepare(self.model)
+                self._device = torch.device(f"cuda:{accelerator.local_process_index}")
+                self.accelerator = accelerator
+                if self.accelerator.is_local_main_process:
+                    print(f"Using {gpus} devices with data parallelism")
+                self._rank = self.accelerator.local_process_index
+                self._world_size = self.accelerator.num_processes

     @property
     def eot_token_id(self):
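The commit's control flow can be sketched in isolation. This is a minimal sketch, not the harness's actual code: the hypothetical `setup_data_parallel` helper and the `FakeAccelerator` stand-in are illustration only, with the Accelerator state passed in as plain values so the branch logic runs without GPUs or the `accelerate` package installed.

```python
from dataclasses import dataclass


@dataclass
class FakeAccelerator:
    """Stand-in exposing only the two fields the commit reads
    from accelerate.Accelerator."""
    num_processes: int
    local_process_index: int


def setup_data_parallel(gpus: int, accelerator: FakeAccelerator):
    """Mirror the commit's branch: if the launcher did not spawn one
    process per GPU, warn and proceed with however many processes exist;
    either way, adopt the accelerator's rank and world size."""
    if gpus > accelerator.num_processes:
        # Script was not started via `accelerate launch`, so data
        # parallelism cannot cover every GPU.
        print(
            "WARNING: GPU count exceeds spawned processes; "
            f"proceeding with {accelerator.num_processes} devices."
        )
    rank = accelerator.local_process_index
    world_size = accelerator.num_processes
    return rank, world_size


# A single process that can see 4 GPUs takes the warning path:
rank, world_size = setup_data_parallel(
    4, FakeAccelerator(num_processes=1, local_process_index=0)
)
print(rank, world_size)  # → 0 1
```

Note that in the diff both branches end up setting `_rank` and `_world_size` from the accelerator; the warning branch only skips `prepare()` and the per-process `cuda:{index}` device assignment.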