ModelZoo / SOLOv2-pytorch · Commits

Commit 7b4cc7bf, authored Apr 02, 2020 by taokong
parent f90f3671

using mmcv==0.2.16
Showing 5 changed files with 5 additions and 21 deletions (+5, -21):
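Context, inferred from the diff rather than stated anywhere in the commit: pinning mmcv back to 0.2.16 means the newer wrapper call signature (`device_ids`, `broadcast_buffers`, `find_unused_parameters`) is no longer available, so every call site falls back to the single-argument form `MMDistributedDataParallel(model.cuda())`.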
docs/INSTALL.md             +1 -1
mmdet/apis/train.py         +1 -9
requirements/runtime.txt    +1 -1
tools/test.py               +1 -5
tools/test_robustness.py    +1 -5
docs/INSTALL.md
@@ -8,7 +8,7 @@
 - CUDA 9.0 or higher
 - NCCL 2
 - GCC 4.9 or higher
-- [mmcv](https://github.com/open-mmlab/mmcv)
+- [mmcv 0.2.16](https://github.com/open-mmlab/mmcv/tree/v0.2.16)

 We have tested the following versions of OS and softwares:
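A quick way to confirm that the version documented here is the one actually importable (an illustrative check, not part of the repository):

```python
# Illustrative sanity check, not part of this repo: verify that the
# importable mmcv matches the version pinned by this commit.
import mmcv

assert mmcv.__version__ == '0.2.16', (
    'expected mmcv 0.2.16, got {}'.format(mmcv.__version__))
```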
mmdet/apis/train.py
@@ -205,15 +205,7 @@ def _dist_train(model,
         for ds in dataset
     ]
     # put model on gpus
-    # model = MMDistributedDataParallel(model.cuda())
-    find_unused_parameters = True
-    # Sets the `find_unused_parameters` parameter in
-    # torch.nn.parallel.DistributedDataParallel
-    model = MMDistributedDataParallel(
-        model.cuda(),
-        device_ids=[torch.cuda.current_device()],
-        broadcast_buffers=False,
-        find_unused_parameters=find_unused_parameters)
+    model = MMDistributedDataParallel(model.cuda())

     # build runner
     optimizer = build_optimizer(model, cfg.optimizer)
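For reference, the deleted lines were forwarding standard `torch.nn.parallel.DistributedDataParallel` options through mmcv's wrapper. A minimal standalone sketch of those same options applied with torch's DDP directly; it assumes a process group is already initialized and uses a toy `nn.Linear` as a stand-in for the detector:

```python
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

# Assumes torch.distributed.init_process_group() has already run,
# one process per GPU; nn.Linear stands in for the detector.
model = nn.Linear(4, 2).cuda()
ddp_model = DistributedDataParallel(
    model,
    device_ids=[torch.cuda.current_device()],  # this process's GPU
    broadcast_buffers=False,      # do not re-broadcast buffers each forward
    find_unused_parameters=True)  # tolerate params unused in a given step
```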
requirements/runtime.txt
 matplotlib
-mmcv>=0.3.1
+mmcv==0.2.16
 numpy
 scipy
 # need older pillow until torchvision is fixed
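Note the operator change as well as the version: `mmcv>=0.3.1` was a floor that let pip install any newer (and here incompatible) release, while `mmcv==0.2.16` pins the exact version this code expects.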
tools/test.py
@@ -240,11 +240,7 @@ def main():
         model = MMDataParallel(model, device_ids=[0])
         outputs = single_gpu_test(model, data_loader, args.show)
     else:
-        # model = MMDistributedDataParallel(model.cuda())
-        model = MMDistributedDataParallel(
-            model.cuda(),
-            device_ids=[torch.cuda.current_device()],
-            broadcast_buffers=False)
+        model = MMDistributedDataParallel(model.cuda())
         outputs = multi_gpu_test(model, data_loader, args.tmpdir,
                                  args.gpu_collect)
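From the arguments visible above, the distributed branch hands results back through `multi_gpu_test`, which in mmdetection of this vintage gathers per-rank outputs either across GPUs (`args.gpu_collect`) or via a shared directory (`args.tmpdir`); the commit only changes how the model is wrapped, not how results are collected.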
tools/test_robustness.py
@@ -375,11 +375,7 @@ def main():
             model = MMDataParallel(model, device_ids=[0])
             outputs = single_gpu_test(model, data_loader, args.show)
         else:
-            # model = MMDistributedDataParallel(model.cuda())
-            model = MMDistributedDataParallel(
-                model.cuda(),
-                device_ids=[torch.cuda.current_device()],
-                broadcast_buffers=False)
+            model = MMDistributedDataParallel(model.cuda())
             outputs = multi_gpu_test(model, data_loader, args.tmpdir)
         rank, _ = get_dist_info()
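The surviving `rank, _ = get_dist_info()` context points at the usual rank-0 pattern around `multi_gpu_test`: every process runs inference, but only rank 0 handles the gathered results. A minimal sketch, assuming `get_dist_info` comes from `mmcv.runner` as it does elsewhere in mmdetection:

```python
# Minimal sketch of the rank-0 guard; get_dist_info falls back to
# (rank 0, world size 1) when no process group is initialized.
from mmcv.runner import get_dist_info

rank, world_size = get_dist_info()
if rank == 0:
    # Only one process reports or saves the aggregated results.
    print('aggregating results from {} processes'.format(world_size))
```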