Commit 54a066bf — ModelZoo / OOTDiffusion_pytorch
Authored May 20, 2024 by mashun1

    ootdiffusion

Pipeline #1004 canceled with stages · Changes: 331 · Pipelines: 1

Showing 20 changed files with 881 additions and 0 deletions (+881 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/linter.sh (+46 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/README.md (+17 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/build_all_wheels.sh (+57 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/build_wheel.sh (+32 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/gen_wheel_index.sh (+27 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/pkg_helpers.bash (+57 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/parse_results.sh (+45 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/run_inference_tests.sh (+44 −0)
- preprocess/humanparsing/mhp_extension/detectron2/dev/run_instant_tests.sh (+27 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docker/Dockerfile (+49 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docker/Dockerfile-circleci (+17 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docker/README.md (+36 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docker/docker-compose.yml (+18 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/.gitignore (+1 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/Makefile (+19 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/README.md (+16 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/conf.py (+335 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/index.rst (+14 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/modules/checkpoint.rst (+7 −0)
- preprocess/humanparsing/mhp_extension/detectron2/docs/modules/config.rst (+17 −0)
preprocess/humanparsing/mhp_extension/detectron2/dev/linter.sh (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# Run this script at project root by "./dev/linter.sh" before you commit

vergte() {
  [ "$2" = "$(echo -e "$1\n$2" | sort -V | head -n1)" ]
}

{
  black --version | grep -E "(19.3b0.*6733274)|(19.3b0\+8)" > /dev/null
} || {
  echo "Linter requires 'black @ git+https://github.com/psf/black@673327449f86fce558adde153bb6cbe54bfebad2' !"
  exit 1
}

ISORT_TARGET_VERSION="4.3.21"
ISORT_VERSION=$(isort -v | grep VERSION | awk '{print $2}')
vergte "$ISORT_VERSION" "$ISORT_TARGET_VERSION" || {
  echo "Linter requires isort>=${ISORT_TARGET_VERSION} !"
  exit 1
}

set -v

echo "Running isort ..."
isort -y -sp . --atomic

echo "Running black ..."
black -l 100 .

echo "Running flake8 ..."
if [ -x "$(command -v flake8-3)" ]; then
  flake8-3 .
else
  python3 -m flake8 .
fi

# echo "Running mypy ..."
# Pytorch does not have enough type annotations
# mypy detectron2/solver detectron2/structures detectron2/config

echo "Running clang-format ..."
find . -regex ".*\.\(cpp\|c\|cc\|cu\|cxx\|h\|hh\|hpp\|hxx\|tcc\|mm\|m\)" -print0 | xargs -0 clang-format -i

command -v arc > /dev/null && arc lint
```
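The `vergte` helper above compares versions by feeding both strings to `sort -V` and checking that the target sorts first. A minimal Python sketch of the same check, assuming purely numeric dotted versions (unlike full `sort -V`, which also handles letter suffixes):

```python
def vergte(installed: str, target: str) -> bool:
    """True when installed >= target, mirroring the shell test
    [ "$2" = "$(echo -e "$1\n$2" | sort -V | head -n1)" ]
    (the check passes when the target is the smaller version)."""
    def key(v: str):
        # compare components numerically, not lexically: 4.3.9 < 4.3.21
        return tuple(int(part) for part in v.split("."))
    return key(installed) >= key(target)

print(vergte("4.3.21", "4.3.21"))  # True: equal versions pass the >= check
print(vergte("4.3.9", "4.3.21"))   # False: 9 < 21 numerically
```

The numeric tuple comparison is what makes `4.3.21` newer than `4.3.9`; a plain string comparison would get this wrong.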
preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/README.md (new file, mode 100644)

## To build a cu101 wheel for release:
```
$ nvidia-docker run -it --storage-opt "size=20GB" --name pt pytorch/manylinux-cuda101
# inside the container:
# git clone https://github.com/facebookresearch/detectron2/
# cd detectron2
# export CU_VERSION=cu101 D2_VERSION_SUFFIX= PYTHON_VERSION=3.7 PYTORCH_VERSION=1.4
# ./dev/packaging/build_wheel.sh
```

## To build all wheels for `CUDA {9.2,10.0,10.1}` x `Python {3.6,3.7,3.8}`:
```
./dev/packaging/build_all_wheels.sh
./dev/packaging/gen_wheel_index.sh /path/to/wheels
```
preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/build_all_wheels.sh (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

PYTORCH_VERSION=1.5

build_for_one_cuda() {
  cu=$1

  case "$cu" in
    cu*)
      container_name=manylinux-cuda${cu/cu/}
      ;;
    cpu)
      container_name=manylinux-cuda101
      ;;
    *)
      echo "Unrecognized cu=$cu"
      exit 1
      ;;
  esac

  echo "Launching container $container_name ..."

  for py in 3.6 3.7 3.8; do
    docker run -itd \
      --name $container_name \
      --mount type=bind,source="$(pwd)",target=/detectron2 \
      pytorch/$container_name

    cat <<EOF | docker exec -i $container_name sh
export CU_VERSION=$cu D2_VERSION_SUFFIX=+$cu PYTHON_VERSION=$py
export PYTORCH_VERSION=$PYTORCH_VERSION
cd /detectron2 && ./dev/packaging/build_wheel.sh
EOF

    # if [[ "$cu" == "cu101" ]]; then
    #   # build wheel without local version
    #   cat <<EOF | docker exec -i $container_name sh
    # export CU_VERSION=$cu D2_VERSION_SUFFIX= PYTHON_VERSION=$py
    # export PYTORCH_VERSION=$PYTORCH_VERSION
    # cd /detectron2 && ./dev/packaging/build_wheel.sh
    # EOF
    # fi

    docker exec -i $container_name rm -rf /detectron2/build/$cu
    docker container stop $container_name
    docker container rm $container_name
  done
}

if [[ -n "$1" ]]; then
  build_for_one_cuda "$1"
else
  for cu in cu102 cu101 cu92 cpu; do
    build_for_one_cuda "$cu"
  done
fi
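The script above walks a CUDA x Python build matrix, mapping each CUDA tag to a `pytorch/manylinux-cuda*` container (CPU builds reuse the cuda101 image). A small Python sketch of that mapping and the resulting matrix, using the same names as the script:

```python
def container_for(cu: str) -> str:
    """Mirror the case statement: cu102 -> manylinux-cuda102, etc.;
    cpu builds run in the cuda101 image."""
    if cu.startswith("cu") and cu != "cpu":
        return "manylinux-cuda" + cu[len("cu"):]
    if cu == "cpu":
        return "manylinux-cuda101"
    raise ValueError(f"Unrecognized cu={cu}")

# 4 CUDA variants x 3 Python versions = 12 build jobs
matrix = [(cu, py, container_for(cu))
          for cu in ["cu102", "cu101", "cu92", "cpu"]
          for py in ["3.6", "3.7", "3.8"]]
print(len(matrix))  # 12
```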
preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/build_wheel.sh (new file, mode 100644)

```bash
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
set -ex

ldconfig  # https://github.com/NVIDIA/nvidia-docker/issues/854

script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" > /dev/null 2>&1 && pwd )"
. "$script_dir/pkg_helpers.bash"

echo "Build Settings:"
echo "CU_VERSION: $CU_VERSION"                # e.g. cu101
echo "D2_VERSION_SUFFIX: $D2_VERSION_SUFFIX"  # e.g. +cu101 or ""
echo "PYTHON_VERSION: $PYTHON_VERSION"        # e.g. 3.6
echo "PYTORCH_VERSION: $PYTORCH_VERSION"      # e.g. 1.4

setup_cuda
setup_wheel_python
yum install ninja-build -y && ln -sv /usr/bin/ninja-build /usr/bin/ninja

export TORCH_VERSION_SUFFIX="+$CU_VERSION"
if [[ "$CU_VERSION" == "cu102" ]]; then
  export TORCH_VERSION_SUFFIX=""
fi
pip_install pip numpy -U
pip_install "torch==$PYTORCH_VERSION$TORCH_VERSION_SUFFIX" \
  -f https://download.pytorch.org/whl/$CU_VERSION/torch_stable.html

# use separate directories to allow parallel build
BASE_BUILD_DIR=build/$CU_VERSION/$PYTHON_VERSION
python setup.py \
  build -b $BASE_BUILD_DIR \
  bdist_wheel -b $BASE_BUILD_DIR/build_dist -d wheels/$CU_VERSION
```
preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/gen_wheel_index.sh (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

root=$1
if [[ -z "$root" ]]; then
  echo "Usage: ./gen_wheel_index.sh /path/to/wheels"
  exit
fi

index=$root/index.html

cd "$root"
for cu in cpu cu92 cu100 cu101 cu102; do
  cd $cu
  echo "Creating $PWD/index.html ..."
  for whl in *.whl; do
    echo "<a href=\"${whl/+/%2B}\">$whl</a><br>"
  done > index.html
  cd "$root"
done

echo "Creating $index ..."
for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort); do
  echo "<a href=\"${whl/+/%2B}\">$whl</a><br>"
done > "$index"
```
preprocess/humanparsing/mhp_extension/detectron2/dev/packaging/pkg_helpers.bash (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

# Function to retry functions that sometimes timeout or have flaky failures
retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

# Install with pip a bit more robustly than the default
pip_install() {
  retry pip install --progress-bar off "$@"
}

setup_cuda() {
  # Now work out the CUDA settings
  # Like other torch domain libraries, we choose common GPU architectures only.
  export FORCE_CUDA=1
  case "$CU_VERSION" in
    cu102)
      export CUDA_HOME=/usr/local/cuda-10.2/
      export TORCH_CUDA_ARCH_LIST="3.5;3.7;5.0;5.2;6.0+PTX;6.1+PTX;7.0+PTX;7.5+PTX"
      ;;
    cu101)
      export CUDA_HOME=/usr/local/cuda-10.1/
      export TORCH_CUDA_ARCH_LIST="3.5;3.7;5.0;5.2;6.0+PTX;6.1+PTX;7.0+PTX;7.5+PTX"
      ;;
    cu100)
      export CUDA_HOME=/usr/local/cuda-10.0/
      export TORCH_CUDA_ARCH_LIST="3.5;3.7;5.0;5.2;6.0+PTX;6.1+PTX;7.0+PTX;7.5+PTX"
      ;;
    cu92)
      export CUDA_HOME=/usr/local/cuda-9.2/
      export TORCH_CUDA_ARCH_LIST="3.5;3.7;5.0;5.2;6.0+PTX;6.1+PTX;7.0+PTX"
      ;;
    cpu)
      unset FORCE_CUDA
      export CUDA_VISIBLE_DEVICES=
      ;;
    *)
      echo "Unrecognized CU_VERSION=$CU_VERSION"
      exit 1
      ;;
  esac
}

setup_wheel_python() {
  case "$PYTHON_VERSION" in
    3.6) python_abi=cp36-cp36m ;;
    3.7) python_abi=cp37-cp37m ;;
    3.8) python_abi=cp38-cp38 ;;
    *)
      echo "Unrecognized PYTHON_VERSION=$PYTHON_VERSION"
      exit 1
      ;;
  esac
  export PATH="/opt/python/$python_abi/bin:$PATH"
}
```
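The `retry` helper above runs a command once and, on failure, retries after sleeping 1, 2, 4, then 8 seconds. The same doubling-backoff pattern as a self-contained Python sketch (the `delays` parameter is an addition here so the example can run with zero delay):

```python
import time

def retry(fn, delays=(1, 2, 4, 8)):
    """Run fn; on failure sleep and retry with doubling delays, mirroring
    `$* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)`."""
    for attempt, delay in enumerate((0,) + tuple(delays)):
        time.sleep(delay)  # no sleep before the first attempt
        try:
            return fn()
        except Exception:
            if attempt == len(delays):  # last attempt: give up
                raise
```

For example, `retry(flaky, delays=(0, 0, 0, 0))` makes up to five attempts back to back; with the defaults a command gets roughly 15 seconds of grace across its retries.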
preprocess/humanparsing/mhp_extension/detectron2/dev/parse_results.sh (new file, mode 100644)

```bash
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

# A shell script that parses metrics from the log file.
# Make it easier for developers to track performance of models.

LOG="$1"

if [[ -z "$LOG" ]]; then
  echo "Usage: $0 /path/to/log/file"
  exit 1
fi

# [12/15 11:47:32] trainer INFO: Total training time: 12:15:04.446477 (0.4900 s / it)
# [12/15 11:49:03] inference INFO: Total inference time: 0:01:25.326167 (0.13652186737060548 s / demo per device, on 8 devices)
# [12/15 11:49:03] inference INFO: Total inference pure compute time: .....

# training time
trainspeed=$(grep -o 'Overall training.*' "$LOG" | grep -Eo '\(.*\)' | grep -o '[0-9\.]*')
echo "Training speed: $trainspeed s/it"

# inference time: there could be multiple inference during training
inferencespeed=$(grep -o 'Total inference pure.*' "$LOG" | tail -n1 | grep -Eo '\(.*\)' | grep -o '[0-9\.]*' | head -n1)
echo "Inference speed: $inferencespeed s/it"

# [12/15 11:47:18] trainer INFO: eta: 0:00:00 iter: 90000 loss: 0.5407 (0.7256) loss_classifier: 0.1744 (0.2446) loss_box_reg: 0.0838 (0.1160) loss_mask: 0.2159 (0.2722) loss_objectness: 0.0244 (0.0429) loss_rpn_box_reg: 0.0279 (0.0500) time: 0.4487 (0.4899) data: 0.0076 (0.0975) lr: 0.000200 max mem: 4161
memory=$(grep -o 'max[_ ]mem: [0-9]*' "$LOG" | tail -n1 | grep -o '[0-9]*')
echo "Training memory: $memory MB"

echo "Easy to copypaste:"
echo "$trainspeed","$inferencespeed","$memory"

echo "------------------------------"
# [12/26 17:26:32] engine.coco_evaluation: copypaste: Task: bbox
# [12/26 17:26:32] engine.coco_evaluation: copypaste: AP,AP50,AP75,APs,APm,APl
# [12/26 17:26:32] engine.coco_evaluation: copypaste: 0.0017,0.0024,0.0017,0.0005,0.0019,0.0011
# [12/26 17:26:32] engine.coco_evaluation: copypaste: Task: segm
# [12/26 17:26:32] engine.coco_evaluation: copypaste: AP,AP50,AP75,APs,APm,APl
# [12/26 17:26:32] engine.coco_evaluation: copypaste: 0.0014,0.0021,0.0016,0.0005,0.0016,0.0011
echo "COCO Results:"
num_tasks=$(grep -o 'copypaste:.*Task.*' "$LOG" | sort -u | wc -l)
# each task has 3 lines
grep -o 'copypaste:.*' "$LOG" | cut -d ' ' -f 2- | tail -n $((num_tasks * 3))
```
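The memory extraction above (`grep -o 'max[_ ]mem: [0-9]*' | tail -n1 | grep -o '[0-9]*'`) keeps the last `max mem:` figure in the log. A Python sketch of the same pipeline over the sample log line quoted in the script's comments:

```python
import re

log = ("[12/15 11:47:18] trainer INFO: eta: 0:00:00 iter: 90000 "
      "time: 0.4487 (0.4899) lr: 0.000200 max mem: 4161")

# grep -o 'max[_ ]mem: [0-9]*'  ->  findall; tail -n1  ->  last match
matches = re.findall(r"max[_ ]mem: ([0-9]+)", log)
memory = matches[-1] if matches else None
print(memory)  # 4161
```

Taking the last match matters because a training log reports `max mem` on every logging step and only the final value reflects the peak for the whole run.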
preprocess/humanparsing/mhp_extension/detectron2/dev/run_inference_tests.sh (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

BIN="python tools/train_net.py"
OUTPUT="inference_test_output"
NUM_GPUS=2

CFG_LIST=( "${@:1}" )

if [ ${#CFG_LIST[@]} -eq 0 ]; then
  CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml )
fi

echo "========================================================================"
echo "Configs to run:"
echo "${CFG_LIST[@]}"
echo "========================================================================"

for cfg in "${CFG_LIST[@]}"; do
  echo "========================================================================"
  echo "Running $cfg ..."
  echo "========================================================================"
  $BIN \
    --eval-only \
    --num-gpus $NUM_GPUS \
    --config-file "$cfg" \
    OUTPUT_DIR $OUTPUT
  rm -rf $OUTPUT
done

echo "========================================================================"
echo "Running demo.py ..."
echo "========================================================================"
DEMO_BIN="python demo/demo.py"
COCO_DIR=datasets/coco/val2014
mkdir -pv $OUTPUT

set -v

$DEMO_BIN --config-file ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml \
  --input $COCO_DIR/COCO_val2014_0000001933* --output $OUTPUT
rm -rf $OUTPUT
```
preprocess/humanparsing/mhp_extension/detectron2/dev/run_instant_tests.sh (new file, mode 100644)

```bash
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

BIN="python tools/train_net.py"
OUTPUT="instant_test_output"
NUM_GPUS=2

CFG_LIST=( "${@:1}" )
if [ ${#CFG_LIST[@]} -eq 0 ]; then
  CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml )
fi

echo "========================================================================"
echo "Configs to run:"
echo "${CFG_LIST[@]}"
echo "========================================================================"

for cfg in "${CFG_LIST[@]}"; do
  echo "========================================================================"
  echo "Running $cfg ..."
  echo "========================================================================"
  $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \
    SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2)) \
    OUTPUT_DIR "$OUTPUT"
  rm -rf "$OUTPUT"
done
```
preprocess/humanparsing/mhp_extension/detectron2/docker/Dockerfile (new file, mode 100644)

```dockerfile
FROM nvidia/cuda:10.1-cudnn7-devel

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
	python3-opencv ca-certificates python3-dev git wget sudo \
	cmake ninja-build protobuf-compiler libprotobuf-dev && \
	rm -rf /var/lib/apt/lists/*
RUN ln -sv /usr/bin/python3 /usr/bin/python

# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser

ENV PATH="/home/appuser/.local/bin:${PATH}"
RUN wget https://bootstrap.pypa.io/get-pip.py && \
	python3 get-pip.py --user && \
	rm get-pip.py

# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install --user tensorboard cython
RUN pip install --user torch==1.5+cu101 torchvision==0.6+cu101 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install --user 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'

# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"

RUN pip install --user -e detectron2_repo

# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo

# run detectron2 under user "appuser":
# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg
# python3 demo/demo.py \
#	--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
#	--input input.jpg --output outputs/ \
#	--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
```
preprocess/humanparsing/mhp_extension/detectron2/docker/Dockerfile-circleci (new file, mode 100644)

```dockerfile
FROM nvidia/cuda:10.1-cudnn7-devel
# This dockerfile only aims to provide an environment for unittest on CircleCI

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
	python3-opencv ca-certificates python3-dev git wget sudo ninja-build && \
	rm -rf /var/lib/apt/lists/*

RUN wget -q https://bootstrap.pypa.io/get-pip.py && \
	python3 get-pip.py && \
	rm get-pip.py

# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install tensorboard cython
RUN pip install torch==1.5+cu101 torchvision==0.6+cu101 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
preprocess/humanparsing/mhp_extension/detectron2/docker/README.md (new file, mode 100644)

## Use the container (with docker ≥ 19.03)

```
cd docker/
# Build:
docker build --build-arg USER_ID=$UID -t detectron2:v0 .
# Run:
docker run --gpus all -it \
  --shm-size=8gb --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  --name=detectron2 detectron2:v0

# Grant docker access to host X server to show images
xhost +local:`docker inspect --format='{{ .Config.Hostname }}' detectron2`
```

## Use the container (with docker < 19.03)

Install docker-compose and nvidia-docker2, then run:
```
cd docker && USER_ID=$UID docker-compose run detectron2
```

#### Using a persistent cache directory

You can prevent models from being re-downloaded on every run,
by storing them in a cache directory.
To do this, add `--volume=$HOME/.torch/fvcore_cache:/tmp:rw` in the run command.

## Install new dependencies

Add the following to `Dockerfile` to make persistent changes.
```
RUN sudo apt-get update && sudo apt-get install -y vim
```
Or run them in the container to make temporary changes.
preprocess/humanparsing/mhp_extension/detectron2/docker/docker-compose.yml (new file, mode 100644)

```yaml
version: "2.3"
services:
  detectron2:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        USER_ID: ${USER_ID:-1000}
    runtime: nvidia  # TODO: Exchange with "gpu: all" in the future (see https://github.com/facebookresearch/detectron2/pull/197/commits/00545e1f376918db4a8ce264d427a07c1e896c5a).
    shm_size: "8gb"
    ulimits:
      memlock: -1
      stack: 67108864
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:ro
    environment:
      - DISPLAY=$DISPLAY
      - NVIDIA_VISIBLE_DEVICES=all
```
preprocess/humanparsing/mhp_extension/detectron2/docs/.gitignore (new file, mode 100644)

```
_build
```
preprocess/humanparsing/mhp_extension/detectron2/docs/Makefile (new file, mode 100644)

```makefile
# Minimal makefile for Sphinx documentation
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
```
preprocess/humanparsing/mhp_extension/detectron2/docs/README.md (new file, mode 100644)

# Read the docs:

The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/).
Documents in this directory are not meant to be read on github.

# Build the docs:

1. Install detectron2 according to [INSTALL.md](INSTALL.md).
2. Install additional libraries required to build docs:
  - docutils==0.16
  - Sphinx==3.0.0
  - recommonmark==0.6.0
  - sphinx_rtd_theme
  - mock
3. Run `make html` from this directory.
preprocess/humanparsing/mhp_extension/detectron2/docs/conf.py (new file, mode 100644)

```python
# -*- coding: utf-8 -*-
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# flake8: noqa

# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import mock
from sphinx.domains import Domain
from typing import Dict, List, Tuple

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
import sphinx_rtd_theme


class GithubURLDomain(Domain):
    """
    Resolve certain links in markdown files to github source.
    """

    name = "githuburl"
    ROOT = "https://github.com/facebookresearch/detectron2/blob/master/"
    LINKED_DOC = ["tutorials/install", "tutorials/getting_started"]

    def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode):
        github_url = None
        if not target.endswith("html") and target.startswith("../../"):
            url = target.replace("../", "")
            github_url = url
        if fromdocname in self.LINKED_DOC:
            # unresolved links in these docs are all github links
            github_url = target

        if github_url is not None:
            if github_url.endswith("MODEL_ZOO") or github_url.endswith("README"):
                # bug of recommonmark.
                # https://github.com/readthedocs/recommonmark/blob/ddd56e7717e9745f11300059e4268e204138a6b1/recommonmark/parser.py#L152-L155
                github_url += ".md"
            print("Ref {} resolved to github:{}".format(target, github_url))
            contnode["refuri"] = self.ROOT + github_url
            return [("githuburl:any", contnode)]
        else:
            return []


# to support markdown
from recommonmark.parser import CommonMarkParser

sys.path.insert(0, os.path.abspath("../"))
os.environ["DOC_BUILDING"] = "True"
DEPLOY = os.environ.get("READTHEDOCS") == "True"


# -- Project information -----------------------------------------------------

# fmt: off
try:
    import torch  # noqa
except ImportError:
    for m in [
        "torch", "torchvision", "torch.nn", "torch.nn.parallel",
        "torch.distributed", "torch.multiprocessing", "torch.autograd",
        "torch.autograd.function", "torch.nn.modules", "torch.nn.modules.utils",
        "torch.utils", "torch.utils.data", "torch.onnx",
        "torchvision", "torchvision.ops",
    ]:
        sys.modules[m] = mock.Mock(name=m)
    sys.modules['torch'].__version__ = "1.5"  # fake version

for m in [
    "cv2", "scipy", "portalocker", "detectron2._C",
    "pycocotools", "pycocotools.mask", "pycocotools.coco", "pycocotools.cocoeval",
    "google", "google.protobuf", "google.protobuf.internal", "onnx",
    "caffe2", "caffe2.proto", "caffe2.python", "caffe2.python.utils",
    "caffe2.python.onnx", "caffe2.python.onnx.backend",
]:
    sys.modules[m] = mock.Mock(name=m)
# fmt: on
sys.modules["cv2"].__version__ = "3.4"

import detectron2  # isort: skip

project = "detectron2"
copyright = "2019-2020, detectron2 contributors"
author = "detectron2 contributors"

# The short X.Y version
version = detectron2.__version__
# The full version, including alpha/beta/rc tags
release = version


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = "3.0"

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "recommonmark",
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",
    "sphinx.ext.intersphinx",
    "sphinx.ext.todo",
    "sphinx.ext.coverage",
    "sphinx.ext.mathjax",
    "sphinx.ext.viewcode",
    "sphinx.ext.githubpages",
]

# -- Configurations for plugins ------------
napoleon_google_docstring = True
napoleon_include_init_with_doc = True
napoleon_include_special_with_doc = True
napoleon_numpy_docstring = False
napoleon_use_rtype = False
autodoc_inherit_docstrings = False
autodoc_member_order = "bysource"

if DEPLOY:
    intersphinx_timeout = 10
else:
    # skip this when building locally
    intersphinx_timeout = 0.1
intersphinx_mapping = {
    "python": ("https://docs.python.org/3.6", None),
    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
    "torch": ("https://pytorch.org/docs/master/", None),
}
# -------------------------

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

source_suffix = [".rst", ".md"]

# The master toctree document.
master_doc = "index"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "build", "README.md", "tutorials/README.md"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"


# -- Options for HTML output -------------------------------------------------

html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself.  Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}


# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "detectron2doc"


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, "detectron2.tex", "detectron2 Documentation", "detectron2 contributors", "manual")
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "detectron2", "detectron2 Documentation", [author], 1)]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "detectron2",
        "detectron2 Documentation",
        author,
        "detectron2",
        "One line description of project.",
        "Miscellaneous",
    )
]


# -- Options for todo extension ----------------------------------------------

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True


_DEPRECATED_NAMES = set()


def autodoc_skip_member(app, what, name, obj, skip, options):
    # we hide something deliberately
    if getattr(obj, "__HIDE_SPHINX_DOC__", False):
        return True
    # Hide some names that are deprecated or not intended to be used
    if name in _DEPRECATED_NAMES:
        return True
    return None


_PAPER_DATA = {
    "resnet": ("1512.03385", "Deep Residual Learning for Image Recognition"),
    "fpn": ("1612.03144", "Feature Pyramid Networks for Object Detection"),
    "mask r-cnn": ("1703.06870", "Mask R-CNN"),
    "faster r-cnn": (
        "1506.01497",
        "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
    ),
    "deformconv": ("1703.06211", "Deformable Convolutional Networks"),
    "deformconv2": ("1811.11168", "Deformable ConvNets v2: More Deformable, Better Results"),
    "panopticfpn": ("1901.02446", "Panoptic Feature Pyramid Networks"),
    "retinanet": ("1708.02002", "Focal Loss for Dense Object Detection"),
    "cascade r-cnn": ("1712.00726", "Cascade R-CNN: Delving into High Quality Object Detection"),
    "lvis": ("1908.03195", "LVIS: A Dataset for Large Vocabulary Instance Segmentation"),
    "rrpn": ("1703.01086", "Arbitrary-Oriented Scene Text Detection via Rotation Proposals"),
    "in1k1h": ("1706.02677", "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"),
}


def paper_ref_role(
    typ: str,
    rawtext: str,
    text: str,
    lineno: int,
    inliner,
    options: Dict = {},
    content: List[str] = [],
):
    """
    Parse :paper:`xxx`. Similar to the "extlinks" sphinx extension.
    """
    from docutils import nodes, utils
    from sphinx.util.nodes import split_explicit_title

    text = utils.unescape(text)
    has_explicit_title, title, link = split_explicit_title(text)
    link = link.lower()
    if link not in _PAPER_DATA:
        inliner.reporter.warning("Cannot find paper " + link)
        paper_url, paper_title = "#", link
    else:
        paper_url, paper_title = _PAPER_DATA[link]
        if "/" not in paper_url:
            paper_url = "https://arxiv.org/abs/" + paper_url
    if not has_explicit_title:
        title = paper_title
    pnode = nodes.reference(title, title, internal=False, refuri=paper_url)
    return [pnode], []


def setup(app):
    from recommonmark.transform import AutoStructify

    app.add_domain(GithubURLDomain)
    app.connect("autodoc-skip-member", autodoc_skip_member)
    app.add_role("paper", paper_ref_role)
    app.add_config_value(
        "recommonmark_config",
        {"enable_math": True, "enable_inline_math": True, "enable_eval_rst": True},
        True,
    )
    app.add_transform(AutoStructify)
```
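The `:paper:` role in conf.py resolves a short key through `_PAPER_DATA`: bare arXiv IDs are prefixed with `https://arxiv.org/abs/`, and unknown keys fall back to `"#"` with the key itself as the title. The lookup logic can be sketched standalone (the `resolve_paper` helper name is for illustration; the real code builds a docutils reference node instead of returning a tuple):

```python
_PAPER_DATA = {
    # one representative entry from the conf.py table above
    "resnet": ("1512.03385", "Deep Residual Learning for Image Recognition"),
}

def resolve_paper(link: str):
    """Map a :paper:`key` to (url, title), mirroring paper_ref_role's lookup."""
    link = link.lower()  # keys are matched case-insensitively
    if link not in _PAPER_DATA:
        return "#", link  # unknown key: dead link, key as title
    url, title = _PAPER_DATA[link]
    if "/" not in url:  # bare arXiv ID, not a full URL
        url = "https://arxiv.org/abs/" + url
    return url, title

print(resolve_paper("ResNet"))
```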
preprocess/humanparsing/mhp_extension/detectron2/docs/index.rst (new file, mode 100644)

```rst
.. detectron2 documentation master file, created by
   sphinx-quickstart on Sat Sep 21 13:46:45 2019.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to detectron2's documentation!
======================================

.. toctree::
  :maxdepth: 2

  tutorials/index
  notes/index
  modules/index
```
preprocess/humanparsing/mhp_extension/detectron2/docs/modules/checkpoint.rst (new file, mode 100644)

```rst
detectron2.checkpoint package
=============================

.. automodule:: detectron2.checkpoint
   :members:
   :undoc-members:
   :show-inheritance:
```
preprocess/humanparsing/mhp_extension/detectron2/docs/modules/config.rst (new file, mode 100644)

```rst
detectron2.config package
=========================

.. automodule:: detectron2.config
   :members:
   :undoc-members:
   :show-inheritance:
   :inherited-members:

Config References
-----------------

.. literalinclude:: ../../detectron2/config/defaults.py
   :language: python
   :linenos:
   :lines: 4-
```