ModelZoo / ResNet50_tensorflow · Commits

Commit e302950d, authored Mar 14, 2018 by lcchen

update dataset examples

parent 137e750b
Changes: 2 changed files with 60 additions and 0 deletions

research/deeplab/g3doc/cityscapes.md  (+30, -0)
research/deeplab/g3doc/pascal.md      (+30, -0)
research/deeplab/g3doc/cityscapes.md
@@ -42,6 +42,10 @@ A local training job using `xception_65` can be run with the following command:
 # From tensorflow/models/research/
 python deeplab/train.py \
   --logtostderr \
+<<<<<<< HEAD
+=======
+  --training_number_of_steps=90000 \
+>>>>>>> origin/master
   --train_split="train" \
   --model_variant="xception_65" \
   --atrous_rates=6 \
@@ -52,6 +56,11 @@ python deeplab/train.py \
   --train_crop_size=769 \
   --train_crop_size=769 \
   --train_batch_size=1 \
+<<<<<<< HEAD
+=======
+  --dataset="cityscapes" \
+  --train_split="train" \
+>>>>>>> origin/master
   --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
   --train_logdir=${PATH_TO_TRAIN_DIR} \
   --dataset_dir=${PATH_TO_DATASET}
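The `${...}` placeholders in the commands above are ordinary shell variables expanded by the calling shell. A minimal sketch of how they are set and expanded (the paths here are illustrative assumptions, not values from the docs):

```shell
#!/bin/sh
# Illustrative placeholder values; substitute real paths on your machine.
PATH_TO_INITIAL_CHECKPOINT="/tmp/deeplab/init/model.ckpt"
PATH_TO_TRAIN_DIR="/tmp/deeplab/train_logs"
PATH_TO_DATASET="/tmp/cityscapes/tfrecord"

# Each flag expands exactly as written in the training command above.
echo "--tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT}"
echo "--train_logdir=${PATH_TO_TRAIN_DIR}"
echo "--dataset_dir=${PATH_TO_DATASET}"
```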
@@ -62,11 +71,22 @@ where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
 directory in which training checkpoints and events will be written to, and
 ${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
+<<<<<<< HEAD
 Note that for {train,eval,vis}.py:
 1.  We use small batch size during training. The users could change it based on
     the available GPU memory and also set `fine_tune_batch_norm` to be False or
     True depending on the use case.
+=======
+**Note that for {train,eval,vis}.py**:
+1.  In order to reproduce our results, one needs to use large batch size (> 8),
+    and set fine_tune_batch_norm = True. Here, we simply use small batch size
+    during training for the purpose of demonstration. If the users have limited
+    GPU memory at hand, please fine-tune from our provided checkpoints whose
+    batch norm parameters have been trained, and use smaller learning rate with
+    fine_tune_batch_norm = False.
+>>>>>>> origin/master
 2.  The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
     setting output_stride=8.
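Note that the hunks in this commit add unresolved merge-conflict markers (`<<<<<<< HEAD` … `>>>>>>> origin/master`) verbatim into the Markdown. A minimal pre-commit check that catches such leftovers (the sample file path is illustrative):

```shell
#!/bin/sh
# Write a sample file containing a leftover conflict block (illustrative).
cat > /tmp/sample.md <<'EOF'
python deeplab/train.py \
<<<<<<< HEAD
=======
  --training_number_of_steps=90000 \
>>>>>>> origin/master
EOF

# A line beginning with seven '<', '=', or '>' characters is a marker.
if grep -qE '^(<{7}|={7}|>{7})' /tmp/sample.md; then
  echo "conflict markers found"
fi
```

Running this against a file with markers prints "conflict markers found"; in a pre-commit hook one would instead exit non-zero to block the commit.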
@@ -90,6 +110,11 @@ python deeplab/eval.py \
   --decoder_output_stride=4 \
   --eval_crop_size=1025 \
   --eval_crop_size=2049 \
+<<<<<<< HEAD
+=======
+  --dataset="cityscapes" \
+  --eval_split="val" \
+>>>>>>> origin/master
   --checkpoint_dir=${PATH_TO_CHECKPOINT} \
   --eval_logdir=${PATH_TO_EVAL_DIR} \
   --dataset_dir=${PATH_TO_DATASET}
@@ -116,6 +141,11 @@ python deeplab/vis.py \
...
@@ -116,6 +141,11 @@ python deeplab/vis.py \
--decoder_output_stride
=
4
\
--decoder_output_stride
=
4
\
--vis_crop_size
=
1025
\
--vis_crop_size
=
1025
\
--vis_crop_size
=
2049
\
--vis_crop_size
=
2049
\
<<<<<<
< HEAD
=======
--dataset
=
"cityscapes"
\
--vis_split
=
"val"
\
>>>>>>>
origin/master
--colormap_type
=
"cityscapes"
\
--colormap_type
=
"cityscapes"
\
--checkpoint_dir
=
${
PATH_TO_CHECKPOINT
}
\
--checkpoint_dir
=
${
PATH_TO_CHECKPOINT
}
\
--vis_logdir
=
${
PATH_TO_VIS_DIR
}
\
--vis_logdir
=
${
PATH_TO_VIS_DIR
}
\
...
...
research/deeplab/g3doc/pascal.md
@@ -44,6 +44,10 @@ A local training job using `xception_65` can be run with the following command:
 # From tensorflow/models/research/
 python deeplab/train.py \
   --logtostderr \
+<<<<<<< HEAD
+=======
+  --training_number_of_steps=30000 \
+>>>>>>> origin/master
   --train_split="train" \
   --model_variant="xception_65" \
   --atrous_rates=6 \
@@ -54,6 +58,11 @@ python deeplab/train.py \
   --train_crop_size=513 \
   --train_crop_size=513 \
   --train_batch_size=1 \
+<<<<<<< HEAD
+=======
+  --dataset="pascal_voc_seg" \
+  --train_split="train" \
+>>>>>>> origin/master
   --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
   --train_logdir=${PATH_TO_TRAIN_DIR} \
   --dataset_dir=${PATH_TO_DATASET}
@@ -65,11 +74,22 @@ directory in which training checkpoints and events will be written to, and
 ${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
 resides.
+<<<<<<< HEAD
 Note that for {train,eval,vis}.py:
 1.  We use small batch size during training. The users could change it based on
     the available GPU memory and also set `fine_tune_batch_norm` to be False or
     True depending on the use case.
+=======
+**Note that for {train,eval,vis}.py:**
+1.  In order to reproduce our results, one needs to use large batch size (> 12),
+    and set fine_tune_batch_norm = True. Here, we simply use small batch size
+    during training for the purpose of demonstration. If the users have limited
+    GPU memory at hand, please fine-tune from our provided checkpoints whose
+    batch norm parameters have been trained, and use smaller learning rate with
+    fine_tune_batch_norm = False.
+>>>>>>> origin/master
 2.  The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
     setting output_stride=8.
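The rescaling in note 2 is a straight doubling: halving output_stride from 16 to 8 doubles the feature-map resolution, so each atrous rate doubles to cover the same effective field of view. A minimal sketch of the arithmetic:

```shell
#!/bin/sh
# atrous_rates used at output_stride=16, per the note above.
for rate in 6 12 18; do
  # Halving output_stride (16 -> 8) doubles the feature resolution,
  # so each rate doubles to span the same spatial extent.
  echo $((rate * 2))
done
# prints 12, 24, 36 on successive lines
```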
@@ -93,6 +113,11 @@ python deeplab/eval.py \
   --decoder_output_stride=4 \
   --eval_crop_size=513 \
   --eval_crop_size=513 \
+<<<<<<< HEAD
+=======
+  --dataset="pascal_voc_seg" \
+  --eval_split="val" \
+>>>>>>> origin/master
   --checkpoint_dir=${PATH_TO_CHECKPOINT} \
   --eval_logdir=${PATH_TO_EVAL_DIR} \
   --dataset_dir=${PATH_TO_DATASET}
@@ -119,6 +144,11 @@ python deeplab/vis.py \
   --decoder_output_stride=4 \
   --vis_crop_size=513 \
   --vis_crop_size=513 \
+<<<<<<< HEAD
+=======
+  --dataset="pascal_voc_seg" \
+  --vis_split="val" \
+>>>>>>> origin/master
   --checkpoint_dir=${PATH_TO_CHECKPOINT} \
   --vis_logdir=${PATH_TO_VIS_DIR} \
   --dataset_dir=${PATH_TO_DATASET}