chenpangpang / transformers

Commit bb6a664e (unverified)
Authored Feb 17, 2023 by Yoshinari Fujinuma; committed by GitHub, Feb 17, 2023

Fix multi-gpu training error for LayoutLMv2 (#21675)

Co-authored-by: Yoshinari Fujinuma <fujinuy@amazon.com>

parent a8eb4f79
Changes: 1 changed file, with 1 addition and 1 deletion

src/transformers/models/layoutlmv2/modeling_layoutlmv2.py (+1, -1)
@@ -604,7 +604,7 @@ class LayoutLMv2VisualBackbone(nn.Module):
         self_rank = torch.distributed.get_rank()
         node_size = torch.cuda.device_count()
         world_size = torch.distributed.get_world_size()
-        if not (world_size & node_size == 0):
+        if not (world_size % node_size == 0):
             raise RuntimeError("Make sure the number of processes can be divided by the number of nodes")
         node_global_ranks = [list(range(i * node_size, (i + 1) * node_size)) for i in range(world_size // node_size)]
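The one-character change above is the whole fix: `&` is Python's bitwise AND, not a divisibility test, and since bitwise operators bind tighter than comparisons, the old line parsed as `(world_size & node_size) == 0`. The following standalone sketch (the function names here are illustrative, not from the model code) shows how the old check rejected a perfectly valid multi-node setup:

```python
def buggy_check(world_size: int, node_size: int) -> bool:
    # Old code: `world_size & node_size == 0` is `(world_size & node_size) == 0`,
    # a bitwise AND against zero. Returns True when the guard would raise.
    return not (world_size & node_size == 0)

def fixed_check(world_size: int, node_size: int) -> bool:
    # New code: a real divisibility test. Returns True when the guard would raise.
    return not (world_size % node_size == 0)

# 12 processes over 3 nodes with 4 GPUs each is valid (12 % 4 == 0),
# but bitwise AND says otherwise: 12 & 4 == 0b1100 & 0b0100 == 4 != 0.
print(buggy_check(12, 4))  # True  -> old guard raised RuntimeError on a valid setup
print(fixed_check(12, 4))  # False -> fixed guard accepts it

# Both versions agree on a genuinely invalid setup (6 is not divisible by 4):
print(buggy_check(6, 4), fixed_check(6, 4))  # True True
```

The bug was intermittent in practice: for some shapes (e.g. `world_size=8`, `node_size=4`, where `8 & 4 == 0`) the bitwise check happened to give the right answer, which is why it could survive until a configuration like 12 processes on 4-GPU nodes hit it.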