# Running DeepLab on Cityscapes Semantic Segmentation Dataset

This page walks through the steps required to run DeepLab on Cityscapes on a
local machine.

## Download dataset and convert to TFRecord

We have prepared the script (under the folder `datasets`) to convert the Cityscapes
dataset to TFRecord. The users are required to download the dataset beforehand
by registering on the [website](https://www.cityscapes-dataset.com/).
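For example, a minimal sketch of unpacking the downloaded archives into the layout
the conversion script expects, assuming the two standard packages
`leftImg8bit_trainvaltest.zip` and `gtFine_trainvaltest.zip`:

```bash
# A sketch; archive names assumed to be the standard Cityscapes packages.
# From the tensorflow/models/research/deeplab/datasets directory.
mkdir -p cityscapes
unzip leftImg8bit_trainvaltest.zip -d cityscapes
unzip gtFine_trainvaltest.zip -d cityscapes
# cityscapes/ should now contain the leftImg8bit and gtFine folders.
```

After the data is in place, run the conversion script: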

```bash
# From the tensorflow/models/research/deeplab/datasets directory.
sh convert_cityscapes.sh
```

The converted dataset will be saved at ./deeplab/datasets/cityscapes/tfrecord.

## Recommended Directory Structure for Training and Evaluation

```
+ datasets
  + cityscapes
    + leftImg8bit
    + gtFine
    + tfrecord
    + exp
      + train_on_train_set
        + train
        + eval
        + vis
```

where the folder `train_on_train_set` stores the train/eval/vis events and
results (when training DeepLab on the Cityscapes train set).
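For instance, a minimal sketch that creates the experiment folders above (paths
relative to `tensorflow/models/research`):

```bash
# A sketch; creates the recommended experiment folders next to the TFRecord data.
mkdir -p deeplab/datasets/cityscapes/exp/train_on_train_set/{train,eval,vis}
```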

## Running the train/eval/vis jobs

A local training job using `xception_65` can be run with the following command:

```bash
# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=90000 \
    --train_split="train_fine" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size="769,769" \
    --train_batch_size=1 \
    --dataset="cityscapes" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```

where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
(usually an ImageNet-pretrained checkpoint), ${PATH_TO_TRAIN_DIR} is the
directory to which training checkpoints and events will be written, and
${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
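For example, a sketch of setting these paths under the recommended directory
structure above (the initial checkpoint path is a placeholder and depends on
which pretrained model was downloaded):

```bash
# A sketch, assuming the recommended directory structure above; the checkpoint
# path below is a placeholder, not an actual file shipped with the repository.
PATH_TO_INITIAL_CHECKPOINT="/path/to/pretrained/xception_65/model.ckpt"
PATH_TO_TRAIN_DIR="deeplab/datasets/cityscapes/exp/train_on_train_set/train"
PATH_TO_DATASET="deeplab/datasets/cityscapes/tfrecord"
```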

**Note that for {train,eval,vis}.py**:

1.  In order to reproduce our results, one needs to use a large batch size (> 8)
    and set fine_tune_batch_norm = True. Here, we simply use a small batch size
    during training for the purpose of demonstration. If the users have limited
    GPU memory at hand, please fine-tune from our provided checkpoints whose
    batch norm parameters have been trained, and use a smaller learning rate
    with fine_tune_batch_norm = False.

2.  The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
    setting output_stride=8 (see the flag sketch after these notes).

3.  The users could skip the flag `decoder_output_stride` if they do not want
    to use the decoder structure.

4.  Change and add the following flags in order to use the provided dense
    prediction cell. Note that decoder_output_stride needs to be set if the
    users want to use the provided checkpoints, which include the decoder
    module.

```bash
--model_variant="xception_71"
--dense_prediction_cell_json="deeplab/core/dense_prediction_cell_branch5_top1_cityscapes.json"
--decoder_output_stride=4
```
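For reference, a sketch of the flag changes described in note 2 above
(output_stride=8 with the matching atrous rates); all other flags stay the same:

```bash
--atrous_rates=12
--atrous_rates=24
--atrous_rates=36
--output_stride=8
```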

A local evaluation job using `xception_65` can be run with the following
command:

```bash
# From tensorflow/models/research/
python deeplab/eval.py \
    --logtostderr \
    --eval_split="val_fine" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --eval_crop_size="1025,2049" \
    --dataset="cityscapes" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```

where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
path to train_logdir), ${PATH_TO_EVAL_DIR} is the directory to which evaluation
events will be written, and ${PATH_TO_DATASET} is the directory in which the
Cityscapes dataset resides.
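For example, a sketch of these paths under the recommended directory structure
above (the checkpoint directory is the train log directory from the training job):

```bash
# A sketch, assuming the recommended directory structure above.
PATH_TO_CHECKPOINT="deeplab/datasets/cityscapes/exp/train_on_train_set/train"
PATH_TO_EVAL_DIR="deeplab/datasets/cityscapes/exp/train_on_train_set/eval"
PATH_TO_DATASET="deeplab/datasets/cityscapes/tfrecord"
```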

A local visualization job using `xception_65` can be run with the following
command:

```bash
# From tensorflow/models/research/
python deeplab/vis.py \
    --logtostderr \
    --vis_split="val_fine" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --vis_crop_size="1025,2049" \
    --dataset="cityscapes" \
    --colormap_type="cityscapes" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```

where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
path to train_logdir), ${PATH_TO_VIS_DIR} is the directory to which
visualization results will be written, and ${PATH_TO_DATASET} is the directory
in which the Cityscapes dataset resides. Note that if the users would like to
save the segmentation results for the evaluation server, set
also_save_raw_predictions = True.
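For example, a sketch of adding that flag to the vis.py command above (boolean
flag syntax assumed):

```bash
# Append to the vis.py command above to save raw predictions for the evaluation server.
--also_save_raw_predictions=true
```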

## Running Tensorboard

Progress for training and evaluation jobs can be inspected using Tensorboard. If
using the recommended directory structure, Tensorboard can be run using the
following command:

```bash
tensorboard --logdir=${PATH_TO_LOG_DIRECTORY}
```

where `${PATH_TO_LOG_DIRECTORY}` points to the directory that contains the
train, eval, and vis directories (e.g., the folder `train_on_train_set` in the
above example). Please note it may take Tensorboard a couple of minutes to
populate with data.
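For example, with the recommended directory structure above:

```bash
tensorboard --logdir=deeplab/datasets/cityscapes/exp/train_on_train_set
```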