Commit e0750f8c authored by Jirka Borovec's avatar Jirka Borovec Committed by Kai Chen

move github doc and documentation to particular folders (#1233)

parent cf0ef86e
@@ -27,7 +27,7 @@ We use the following tools for linting and formatting:
 - [yapf](https://github.com/google/yapf): formatter
 - [isort](https://github.com/timothycrosley/isort): sort imports
-Style configurations of yapf and isort can be found in [.style.yapf](.style.yapf) and [.isort.cfg](.isort.cfg).
+Style configurations of yapf and isort can be found in [.style.yapf](../.style.yapf) and [.isort.cfg](../.isort.cfg).
 >Before you create a PR, make sure that your code lints and is formatted by yapf.
...
@@ -85,7 +85,7 @@ v0.5.1 (20/10/2018)
 ## Benchmark and model zoo
 Supported methods and backbones are shown in the below table.
-Results and models are available in the [Model zoo](MODEL_ZOO.md).
+Results and models are available in the [Model zoo](docs/MODEL_ZOO.md).
 |                    | ResNet | ResNeXt | SENet | VGG | HRNet |
 |--------------------|:------:|:-------:|:-----:|:---:|:-----:|
@@ -119,16 +119,16 @@ Other features
 ## Installation
-Please refer to [INSTALL.md](INSTALL.md) for installation and dataset preparation.
+Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.
 ## Get Started
-Please see [GETTING_STARTED.md](GETTING_STARTED.md) for the basic usage of MMDetection.
+Please see [GETTING_STARTED.md](docs/GETTING_STARTED.md) for the basic usage of MMDetection.
 ## Contributing
-We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guideline.
+We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
 ## Acknowledgement
...
@@ -103,7 +103,7 @@ for frame in video:
     show_result(frame, result, model.CLASSES, wait_time=1)
 ```
-A notebook demo can be found in [demo/inference_demo.ipynb](demo/inference_demo.ipynb).
+A notebook demo can be found in [demo/inference_demo.ipynb](../demo/inference_demo.ipynb).
 ## Train a model
@@ -133,7 +133,7 @@ If you want to specify the working directory in the command, you can add an argument
 Optional arguments are:
-- `--validate` (**strongly recommended**): Perform evaluation at every k (default value is 1, which can be modified like [this](configs/mask_rcnn_r50_fpn_1x.py#L174)) epochs during the training.
+- `--validate` (**strongly recommended**): Perform evaluation at every k (default value is 1, which can be modified like [this](../configs/mask_rcnn_r50_fpn_1x.py#L174)) epochs during the training.
 - `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
 - `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
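As a sketch of how the optional arguments above combine with a training launch (the config path, work directory, and GPU count are illustrative placeholders, not part of this commit):

```shell
# Sketch only: build the command string so the pieces are visible;
# CONFIG, WORK_DIR and the GPU count (8) are placeholder values.
CONFIG=configs/mask_rcnn_r50_fpn_1x.py
WORK_DIR=work_dirs/mask_rcnn_r50_fpn_1x
CMD="./tools/dist_train.sh $CONFIG 8 --validate --work_dir $WORK_DIR"
echo "$CMD"
```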
@@ -155,7 +155,7 @@ Here is an example of using 16 GPUs to train Mask R-CNN on the dev partition.
 ./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x.py /nfs/xxxx/mask_rcnn_r50_fpn_1x 16
 ```
-You can check [slurm_train.sh](tools/slurm_train.sh) for full arguments and environment variables.
+You can check [slurm_train.sh](../tools/slurm_train.sh) for full arguments and environment variables.
 If you have just multiple machines connected with ethernet, you can refer to
 pytorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
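A rough sketch of what a launch-utility invocation could look like for two ethernet-connected machines; the IP address, port, GPU count, and config path are placeholders, and the exact flags should be checked against the PyTorch documentation linked above:

```shell
# Run once per machine; NODE_RANK is 0 on the master node and 1 on the other.
MASTER_ADDR=192.168.1.1   # placeholder IP of the rank-0 machine
NODE_RANK=0
CMD="python -m torch.distributed.launch \
  --nnodes=2 --node_rank=$NODE_RANK \
  --master_addr=$MASTER_ADDR --master_port=29500 \
  --nproc_per_node=8 \
  tools/train.py configs/mask_rcnn_r50_fpn_1x.py --launcher pytorch"
echo "$CMD"
```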
@@ -168,7 +168,7 @@ Usually it is slow if you do not have high speed networking like infiniband.
 You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.
-![loss curve image](demo/loss_curve.png)
+![loss curve image](../demo/loss_curve.png)
 ```shell
 python tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
@@ -324,12 +324,12 @@ There are two ways to work with custom datasets.
 You can write a new Dataset class inherited from `CustomDataset`, and overwrite two methods
 `load_annotations(self, ann_file)` and `get_ann_info(self, idx)`,
-like [CocoDataset](mmdet/datasets/coco.py) and [VOCDataset](mmdet/datasets/voc.py).
+like [CocoDataset](../mmdet/datasets/coco.py) and [VOCDataset](../mmdet/datasets/voc.py).
 - offline conversion
 You can convert the annotation format to the expected format above and save it to
-a pickle or json file, like [pascal_voc.py](tools/convert_datasets/pascal_voc.py).
+a pickle or json file, like [pascal_voc.py](../tools/convert_datasets/pascal_voc.py).
 Then you can simply use `CustomDataset`.
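The subclassing route can be sketched as follows. This is an illustrative stub only: the real base class is `mmdet.datasets.CustomDataset`, which is faked here as a minimal plain class so the sketch runs standalone, and the annotations are toy in-memory data rather than a parsed file.

```python
import numpy as np

class CustomDataset:
    # Stand-in for mmdet.datasets.CustomDataset, for illustration only;
    # in real code you would subclass the mmdet class instead.
    def __init__(self, ann_file):
        self.img_infos = self.load_annotations(ann_file)

class MyDataset(CustomDataset):
    CLASSES = ('person', 'car')  # hypothetical class names

    def load_annotations(self, ann_file):
        # Parse your own annotation format here; two fake images for illustration.
        self._anns = {
            'img_0001.jpg': dict(bboxes=np.array([[10, 10, 50, 80]], dtype=np.float32),
                                 labels=np.array([0], dtype=np.int64)),
            'img_0002.jpg': dict(bboxes=np.zeros((0, 4), dtype=np.float32),
                                 labels=np.zeros((0,), dtype=np.int64)),
        }
        # Return one info dict per image (filename plus image size).
        return [dict(filename=name, width=1333, height=800)
                for name in sorted(self._anns)]

    def get_ann_info(self, idx):
        # Look up the annotation dict for the idx-th image in self.img_infos.
        return self._anns[self.img_infos[idx]['filename']]

ds = MyDataset(ann_file=None)
```

The key contract is that `load_annotations` builds the list of per-image info dicts and `get_ann_info` returns boxes and labels for one image by index.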
 ### Develop new components
...
@@ -55,7 +55,7 @@ It is recommended that you run step d each time you pull some updates from github
 ### Another option: Docker Image
-We provide a [Dockerfile](docker/Dockerfile) to build an image.
+We provide a [Dockerfile](../docker/Dockerfile) to build an image.
 ```shell
 # build an image with PyTorch 1.1, CUDA 10.0 and CUDNN 7.5
...
@@ -197,7 +197,7 @@ More models with different backbones will be added to the model zoo.
 **Notes:**
-- Please refer to [Hybrid Task Cascade](configs/htc/README.md) for details and a more powerful model (50.7/43.9).
+- Please refer to [Hybrid Task Cascade](../configs/htc/README.md) for details and a more powerful model (50.7/43.9).
 ### SSD
@@ -214,54 +214,54 @@ More models with different backbones will be added to the model zoo.
 ### Group Normalization (GN)
-Please refer to [Group Normalization](configs/gn/README.md) for details.
+Please refer to [Group Normalization](../configs/gn/README.md) for details.
 ### Weight Standardization
-Please refer to [Weight Standardization](configs/gn+ws/README.md) for details.
+Please refer to [Weight Standardization](../configs/gn+ws/README.md) for details.
 ### Deformable Convolution v2
-Please refer to [Deformable Convolutional Networks](configs/dcn/README.md) for details.
+Please refer to [Deformable Convolutional Networks](../configs/dcn/README.md) for details.
 ### Libra R-CNN
-Please refer to [Libra R-CNN](configs/libra_rcnn/README.md) for details.
+Please refer to [Libra R-CNN](../configs/libra_rcnn/README.md) for details.
 ### Guided Anchoring
-Please refer to [Guided Anchoring](configs/guided_anchoring/README.md) for details.
+Please refer to [Guided Anchoring](../configs/guided_anchoring/README.md) for details.
 ### FCOS
-Please refer to [FCOS](configs/fcos/README.md) for details.
+Please refer to [FCOS](../configs/fcos/README.md) for details.
 ### Grid R-CNN (plus)
-Please refer to [Grid R-CNN](configs/grid_rcnn/README.md) for details.
+Please refer to [Grid R-CNN](../configs/grid_rcnn/README.md) for details.
 ### GHM
-Please refer to [GHM](configs/ghm/README.md) for details.
+Please refer to [GHM](../configs/ghm/README.md) for details.
 ### GCNet
-Please refer to [GCNet](configs/gcnet/README.md) for details.
+Please refer to [GCNet](../configs/gcnet/README.md) for details.
 ### HRNet
-Please refer to [HRNet](configs/hrnet/README.md) for details.
+Please refer to [HRNet](../configs/hrnet/README.md) for details.
 ### Mask Scoring R-CNN
-Please refer to [Mask Scoring R-CNN](configs/ms_rcnn/README.md) for details.
+Please refer to [Mask Scoring R-CNN](../configs/ms_rcnn/README.md) for details.
 ### Train from Scratch
-Please refer to [Rethinking ImageNet Pre-training](configs/scratch/README.md) for details.
+Please refer to [Rethinking ImageNet Pre-training](../configs/scratch/README.md) for details.
 ### Other datasets
-We also benchmark some methods on [PASCAL VOC](configs/pascal_voc/README.md), [Cityscapes](configs/cityscapes/README.md) and [WIDER FACE](configs/wider_face/README.md).
+We also benchmark some methods on [PASCAL VOC](../configs/pascal_voc/README.md), [Cityscapes](../configs/cityscapes/README.md) and [WIDER FACE](../configs/wider_face/README.md).
 ## Comparison with Detectron and maskrcnn-benchmark
...
@@ -18,7 +18,7 @@ This page provides basic tutorials how to use the benchmark.
 }
 ```
-![image corruption example](demo/corruptions_sev_3.png)
+![image corruption example](../demo/corruptions_sev_3.png)
 ## About the benchmark
...