Prepare datasets according to the [guidelines](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#prepare-datasets) in MMSegmentation.
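After preparation, the ADE20K data is expected to follow MMSegmentation's standard layout (sketched below; the `data/ade` root is the default assumed by the configs):

```
data/ade/ADEChallengeData2016/
├── annotations/
│   ├── training/
│   └── validation/
└── images/
    ├── training/
    └── validation/
```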
### Evaluation
To evaluate our `InternImage` on ADE20K val, run:
```bash
sh dist_test.sh <config-file> <checkpoint> <gpu-num> --eval mIoU
```
You can download checkpoint files from [here](https://huggingface.co/OpenGVLab/InternImage/tree/fc1e4e7e01c3e7a39a3875bdebb6577a7256ff91), then place them in `segmentation/checkpoint_dir/seg`.
For example, to evaluate the `InternImage-T` with a single GPU:
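A minimal sketch of that command, assuming the InternImage-T config and checkpoint follow the repository's naming scheme (both filenames here are assumptions; substitute the actual paths from the configs directory and the downloaded checkpoint):

```bash
# Evaluate InternImage-T (UperNet) on ADE20K val with a single GPU.
# Config and checkpoint names below are assumed, not confirmed by this README.
sh dist_test.sh configs/ade20k/upernet_internimage_t_512_160k_ade20k.py \
    checkpoint_dir/seg/upernet_internimage_t_512_160k_ade20k.pth \
    1 --eval mIoU
```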
### Image Demo
To run inference on a single image or on multiple images, use the command below. If you pass a directory instead of a single image, all images in that directory will be processed:
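A hedged sketch of the demo invocation, following the argument order of MMSegmentation's `image_demo.py` (image, config, checkpoint); the config and checkpoint filenames are assumptions:

```bash
# Run the demo on one image, or pass a directory to process every image in it.
# --palette ade20k selects the ADE20K color map for the rendered prediction.
python image_demo.py <image-or-directory> \
    configs/ade20k/upernet_internimage_t_512_160k_ade20k.py \
    checkpoint_dir/seg/upernet_internimage_t_512_160k_ade20k.pth \
    --palette ade20k
```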
Introduced by Zhou et al. in *Scene Parsing Through ADE20K Dataset*.
The ADE20K semantic segmentation dataset contains more than 20K scene-centric images, exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including stuff classes such as sky, road, and grass, and discrete objects such as person, car, and bed.