OpenDAS / fairscale · Commit 428110b8 (unverified)

[docs] minor doc update (#459)

Authored Mar 02, 2021 by Min Xu; committed by GitHub on Mar 02, 2021
Parent: 8f77255b

Showing 3 changed files with 5 additions and 6 deletions (+5 -6)
Files changed:

  README.md                                    +1 -1
  tests/nn/data_parallel/test_fsdp.py          +3 -4
  tests/nn/data_parallel/test_fsdp_uneven.py   +1 -1
README.md

```diff
@@ -17,7 +17,7 @@ FairScale supports:
 * Sharded training:
    * Optimizer state sharding (`fairscale.optim.OSS`)
    * Sharded Data Parallel (SDP) (`fairscale.nn.ShardedDataParallel`)
-   * Fully Sharded Data Parallel (FSDP) (`fairscale.nn.FullyShardedDataParallel`)
+   * Fully Sharded Data Parallel (FSDP) (`fairscale.nn.FullyShardedDataParallel`) (PyTorch >= 1.6)
 * Optimization at scale:
    * AdaScale SGD (`fairscale.optim.AdaScale`)
 * GPU memory optimization:
```
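The amended bullet points at `fairscale.nn.FullyShardedDataParallel`, which the commit now marks as requiring PyTorch >= 1.6. For orientation, a minimal sketch of wrapping a module with it; this assumes the script runs under a distributed launcher with one process per GPU, and the backend and layer sizes are illustrative, not taken from this diff:

```python
# Minimal FSDP sketch; assumes a distributed launcher (e.g. one process
# per GPU) and PyTorch >= 1.6, per the README note above.
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")  # FSDP relies on reduce_scatter, added in 1.6
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = FSDP(torch.nn.Linear(16, 16).cuda())  # parameters are sharded across ranks
output = model(torch.randn(4, 16, device="cuda"))
```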
tests/nn/data_parallel/test_fsdp.py
```diff
@@ -25,6 +25,7 @@ from fairscale.utils.testing import (
     get_cycles_per_ms,
     objects_are_equal,
     spawn_for_all_world_sizes,
+    torch_version,
 )

 # How to use remote-pdb: https://gist.github.com/sshleifer/9d43351957179c13606e015b072927d4
@@ -33,10 +34,8 @@ from fairscale.utils.testing import (

 class DistributedTest(unittest.TestCase):
     def setUp(self):
-        major, minor = torch.__version__.split(".")[:2]
-        major, minor = int(major), int(minor)
-        if major < 1 or (major == 1 and minor < 6):
-            raise unittest.SkipTest("Need pytorch version >= 1.6 due to autocast")
+        if torch_version() < (1, 6, 0):
+            raise unittest.SkipTest("Need pytorch version >= 1.6 due to lack of reduce_scatter")
         if not torch.cuda.is_available():
             raise unittest.SkipTest("CUDA not available, skipping test")
         if sys.platform == "win32":
```
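The second hunk replaces hand-rolled parsing of `torch.__version__` with the `torch_version()` helper newly imported from `fairscale.utils.testing`. That helper's body is not part of this diff; a plausible minimal sketch, assuming it reduces the version string to a tuple of ints so plain tuple comparison works:

```python
# Hypothetical stand-in for fairscale.utils.testing.torch_version;
# the real implementation is not shown in this commit.
import re

import torch

def torch_version() -> tuple:
    # Drop local/dev suffixes such as "a0" or "+cu102" before converting.
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", torch.__version__)
    assert match is not None, f"unparseable torch version: {torch.__version__}"
    return tuple(int(part) for part in match.groups())

# Tuple comparison then expresses the gate directly:
# torch_version() < (1, 6, 0) is True for 1.5.1 and False for 1.6.0.
```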
tests/nn/data_parallel/test_fsdp_uneven.py
```diff
@@ -85,7 +85,7 @@ def _test_func(rank, world_size, model, fsdp_config, tempfile_name, unused, test

 def test_one_iteration(world_size, test_case, fsdp_config):
     """Test FSDP with uneven divide of parameter shards."""
     if torch_version() < (1, 6, 0):
-        pytest.skip("older pytorch doesn't support reduce_scatter in gloo backend")
+        pytest.skip("older pytorch doesn't support reduce_scatter")
     if world_size > torch.cuda.device_count():
         pytest.skip("Not enough GPUs.")
```
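The guards above follow a standard pytest pattern: call `pytest.skip()` at the top of the test body when a runtime precondition fails, so the test is reported as skipped rather than failed. A self-contained sketch of the same pattern; the test name and GPU count here are illustrative, not code from the fairscale test suite:

```python
# Illustrative skip-guard pattern, not from the fairscale tests.
import pytest
import torch

def test_needs_two_gpus_and_recent_torch():
    major, minor = (int(x) for x in torch.__version__.split(".")[:2])
    if (major, minor) < (1, 6):
        pytest.skip("older pytorch doesn't support reduce_scatter")
    if torch.cuda.device_count() < 2:
        pytest.skip("Not enough GPUs.")
    assert torch.cuda.is_available()
```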