Commit e0750f8c authored by Jirka Borovec's avatar Jirka Borovec Committed by Kai Chen

move github doc and documentation to particular folders (#1233)

parent cf0ef86e
......@@ -27,7 +27,7 @@ We use the following tools for linting and formatting:
- [yapf](https://github.com/google/yapf): formatter
- [isort](https://github.com/timothycrosley/isort): sort imports
Style configurations of yapf and isort can be found in [.style.yapf](.style.yapf) and [.isort.cfg](.isort.cfg).
Style configurations of yapf and isort can be found in [.style.yapf](../.style.yapf) and [.isort.cfg](../.isort.cfg).
> Before you create a PR, make sure that your code lints and is formatted by yapf.
......
......@@ -85,7 +85,7 @@ v0.5.1 (20/10/2018)
## Benchmark and model zoo
Supported methods and backbones are shown in the table below.
Results and models are available in the [Model zoo](MODEL_ZOO.md).
Results and models are available in the [Model zoo](docs/MODEL_ZOO.md).
| | ResNet | ResNeXt | SENet | VGG | HRNet |
|--------------------|:--------:|:--------:|:--------:|:--------:|:-----:|
......@@ -119,16 +119,16 @@ Other features
## Installation
Please refer to [INSTALL.md](INSTALL.md) for installation and dataset preparation.
Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.
## Get Started
Please see [GETTING_STARTED.md](GETTING_STARTED.md) for the basic usage of MMDetection.
Please see [GETTING_STARTED.md](docs/GETTING_STARTED.md) for the basic usage of MMDetection.
## Contributing
We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guideline.
We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
## Acknowledgement
......
......@@ -103,7 +103,7 @@ for frame in video:
show_result(frame, result, model.CLASSES, wait_time=1)
```
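Only the tail of the video-inference snippet is visible in this diff; a self-contained sketch of the same loop might look like the following, assuming the high-level helpers from `mmdet.apis` and hypothetical config/checkpoint paths.

```python
import mmcv
from mmdet.apis import init_detector, inference_detector, show_result

# hypothetical config / checkpoint paths -- substitute your own
config_file = 'configs/faster_rcnn_r50_fpn_1x.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x.pth'

# build the detector once and reuse it for every frame
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# iterate over the frames of a video and visualize the detections
video = mmcv.VideoReader('video.mp4')
for frame in video:
    result = inference_detector(model, frame)
    show_result(frame, result, model.CLASSES, wait_time=1)
```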
A notebook demo can be found in [demo/inference_demo.ipynb](demo/inference_demo.ipynb).
A notebook demo can be found in [demo/inference_demo.ipynb](../demo/inference_demo.ipynb).
## Train a model
......@@ -133,7 +133,7 @@ If you want to specify the working directory in the command, you can add an argu
Optional arguments are:
- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training (the default value of k is 1, which can be modified like [this](configs/mask_rcnn_r50_fpn_1x.py#L174)).
- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training (the default value of k is 1, which can be modified like [this](../configs/mask_rcnn_r50_fpn_1x.py#L174)).
- `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
......@@ -155,7 +155,7 @@ Here is an example of using 16 GPUs to train Mask R-CNN on the dev partition.
./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x.py /nfs/xxxx/mask_rcnn_r50_fpn_1x 16
```
You can check [slurm_train.sh](tools/slurm_train.sh) for full arguments and environment variables.
You can check [slurm_train.sh](../tools/slurm_train.sh) for full arguments and environment variables.
If you simply have multiple machines connected via Ethernet, you can refer to
the PyTorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
......@@ -168,7 +168,7 @@ Usually it is slow if you do not have high speed networking like infiniband.
You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.
![loss curve image](demo/loss_curve.png)
![loss curve image](../demo/loss_curve.png)
```shell
python tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
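# e.g. plot the classification and bbox regression losses of a run
# (the run directory and log file name below are hypothetical):
# python tools/analyze_logs.py plot_curve work_dirs/mask_rcnn_r50_fpn_1x/20190801_000000.log.json --keys loss_cls loss_bbox --legend loss_cls loss_bbox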
......@@ -324,12 +324,12 @@ There are two ways to work with custom datasets.
You can write a new Dataset class inherited from `CustomDataset`, and override two methods
`load_annotations(self, ann_file)` and `get_ann_info(self, idx)`,
like [CocoDataset](mmdet/datasets/coco.py) and [VOCDataset](mmdet/datasets/voc.py).
like [CocoDataset](../mmdet/datasets/coco.py) and [VOCDataset](../mmdet/datasets/voc.py), as in the sketch after this list.
- offline conversion
You can convert the annotation format to the expected format above and save it to
a pickle or json file, like [pascal_voc.py](tools/convert_datasets/pascal_voc.py).
a pickle or json file, like [pascal_voc.py](../tools/convert_datasets/pascal_voc.py).
Then you can simply use `CustomDataset`.
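For the online-conversion route above, a minimal sketch of such a subclass could look as follows. The single-JSON annotation layout, field names, and class names are hypothetical; only the two methods mentioned above are overridden, and they produce the list-of-dicts format (one entry per image with `filename`, `width`, `height`, and an `ann` dict of numpy arrays) that `CustomDataset` expects.

```python
import json

import numpy as np

from mmdet.datasets import CustomDataset


class MyDataset(CustomDataset):
    """Hypothetical dataset whose annotations live in one JSON file."""

    CLASSES = ('person', 'car')  # illustrative class names

    def load_annotations(self, ann_file):
        # Build one dict per image with its file name, size and annotations.
        img_infos = []
        for item in json.load(open(ann_file)):
            img_infos.append(
                dict(
                    filename=item['filename'],
                    width=item['width'],
                    height=item['height'],
                    ann=dict(
                        bboxes=np.array(item['bboxes'], dtype=np.float32),
                        labels=np.array(item['labels'], dtype=np.int64))))
        return img_infos

    def get_ann_info(self, idx):
        # Return the annotation dict (bboxes, labels, ...) of image `idx`.
        return self.img_infos[idx]['ann']
```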
### Develop new components
......
......@@ -55,7 +55,7 @@ It is recommended that you run step d each time you pull some updates from githu
### Another option: Docker Image
We provide a [Dockerfile](docker/Dockerfile) to build an image.
We provide a [Dockerfile](../docker/Dockerfile) to build an image.
```shell
# build an image with PyTorch 1.1, CUDA 10.0 and CUDNN 7.5
......
......@@ -197,7 +197,7 @@ More models with different backbones will be added to the model zoo.
**Notes:**
- Please refer to [Hybrid Task Cascade](configs/htc/README.md) for details and a more powerful model (50.7/43.9).
- Please refer to [Hybrid Task Cascade](../configs/htc/README.md) for details and a more powerful model (50.7/43.9).
### SSD
......@@ -214,54 +214,54 @@ More models with different backbones will be added to the model zoo.
### Group Normalization (GN)
Please refer to [Group Normalization](configs/gn/README.md) for details.
Please refer to [Group Normalization](../configs/gn/README.md) for details.
### Weight Standardization
Please refer to [Weight Standardization](configs/gn+ws/README.md) for details.
Please refer to [Weight Standardization](../configs/gn+ws/README.md) for details.
### Deformable Convolution v2
Please refer to [Deformable Convolutional Networks](configs/dcn/README.md) for details.
Please refer to [Deformable Convolutional Networks](../configs/dcn/README.md) for details.
### Libra R-CNN
Please refer to [Libra R-CNN](configs/libra_rcnn/README.md) for details.
Please refer to [Libra R-CNN](../configs/libra_rcnn/README.md) for details.
### Guided Anchoring
Please refer to [Guided Anchoring](configs/guided_anchoring/README.md) for details.
Please refer to [Guided Anchoring](../configs/guided_anchoring/README.md) for details.
### FCOS
Please refer to [FCOS](configs/fcos/README.md) for details.
Please refer to [FCOS](../configs/fcos/README.md) for details.
### Grid R-CNN (plus)
Please refer to [Grid R-CNN](configs/grid_rcnn/README.md) for details.
Please refer to [Grid R-CNN](../configs/grid_rcnn/README.md) for details.
### GHM
Please refer to [GHM](configs/ghm/README.md) for details.
Please refer to [GHM](../configs/ghm/README.md) for details.
### GCNet
Please refer to [GCNet](configs/gcnet/README.md) for details.
Please refer to [GCNet](../configs/gcnet/README.md) for details.
### HRNet
Please refer to [HRNet](configs/hrnet/README.md) for details.
Please refer to [HRNet](../configs/hrnet/README.md) for details.
### Mask Scoring R-CNN
Please refer to [Mask Scoring R-CNN](configs/ms_rcnn/README.md) for details.
Please refer to [Mask Scoring R-CNN](../configs/ms_rcnn/README.md) for details.
### Train from Scratch
Please refer to [Rethinking ImageNet Pre-training](configs/scratch/README.md) for details.
Please refer to [Rethinking ImageNet Pre-training](../configs/scratch/README.md) for details.
### Other datasets
We also benchmark some methods on [PASCAL VOC](configs/pascal_voc/README.md), [Cityscapes](configs/cityscapes/README.md) and [WIDER FACE](configs/wider_face/README.md).
We also benchmark some methods on [PASCAL VOC](../configs/pascal_voc/README.md), [Cityscapes](../configs/cityscapes/README.md) and [WIDER FACE](../configs/wider_face/README.md).
## Comparison with Detectron and maskrcnn-benchmark
......
......@@ -18,7 +18,7 @@ This page provides basic tutorials how to use the benchmark.
}
```
![image corruption example](demo/corruptions_sev_3.png)
![image corruption example](../demo/corruptions_sev_3.png)
## About the benchmark
......