ModelZoo / ResNet50_tensorflow
Commit 8f6c3708 (unverified)
Authored Mar 14, 2018 by Yukun Zhu, committed by GitHub on Mar 14, 2018

Merge pull request #3602 from aquariusjay/master

Update dataset examples

Parents: af79775b, 8e25697b
Showing 2 changed files with 28 additions and 8 deletions
research/deeplab/g3doc/cityscapes.md  +14 -4
research/deeplab/g3doc/pascal.md      +14 -4
research/deeplab/g3doc/cityscapes.md
@@ -42,6 +42,7 @@ A local training job using `xception_65` can be run with the following command:
 # From tensorflow/models/research/
 python deeplab/train.py \
     --logtostderr \
+    --training_number_of_steps=90000 \
     --train_split="train" \
     --model_variant="xception_65" \
     --atrous_rates=6 \
@@ -52,6 +53,8 @@ python deeplab/train.py \
     --train_crop_size=769 \
     --train_crop_size=769 \
     --train_batch_size=1 \
+    --dataset="cityscapes" \
+    --train_split="train" \
     --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
     --train_logdir=${PATH_TO_TRAIN_DIR} \
     --dataset_dir=${PATH_TO_DATASET}
@@ -62,11 +65,14 @@ where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
 directory in which training checkpoints and events will be written to, and
 ${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
 
-Note that for {train,eval,vis}.py:
+**Note that for {train,eval,vis}.py**:
 
-1. We use small batch size during training. The users could change it based on
-   the available GPU memory and also set `fine_tune_batch_norm` to be False or
-   True depending on the use case.
+1. In order to reproduce our results, one needs to use large batch size (> 8),
+   and set fine_tune_batch_norm = True. Here, we simply use small batch size
+   during training for the purpose of demonstration. If the users have limited
+   GPU memory at hand, please fine-tune from our provided checkpoints whose
+   batch norm parameters have been trained, and use smaller learning rate with
+   fine_tune_batch_norm = False.
 
 2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
    setting output_stride=8.
@@ -90,6 +96,8 @@ python deeplab/eval.py \
     --decoder_output_stride=4 \
     --eval_crop_size=1025 \
     --eval_crop_size=2049 \
+    --dataset="cityscapes" \
+    --eval_split="val" \
     --checkpoint_dir=${PATH_TO_CHECKPOINT} \
     --eval_logdir=${PATH_TO_EVAL_DIR} \
     --dataset_dir=${PATH_TO_DATASET}
@@ -116,6 +124,8 @@ python deeplab/vis.py \
     --decoder_output_stride=4 \
     --vis_crop_size=1025 \
     --vis_crop_size=2049 \
+    --dataset="cityscapes" \
+    --vis_split="val" \
     --colormap_type="cityscapes" \
     --checkpoint_dir=${PATH_TO_CHECKPOINT} \
     --vis_logdir=${PATH_TO_VIS_DIR} \
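As a worked illustration of note 1 in the cityscapes.md change above: when GPU memory is limited, the note recommends fine-tuning from a provided checkpoint with batch norm frozen and a smaller learning rate. A minimal sketch of such a Cityscapes training invocation follows. The --fine_tune_batch_norm flag is the one named in the note itself; --base_learning_rate and the concrete paths are assumptions added for illustration and are not part of this commit.

# Sketch only (limited-GPU-memory regime from note 1); paths are placeholders
# and any model flags not listed here stay as in the train.py command above.
export PATH_TO_INITIAL_CHECKPOINT=/path/to/provided/checkpoint/model.ckpt
export PATH_TO_TRAIN_DIR=/path/to/train_logdir
export PATH_TO_DATASET=/path/to/cityscapes/tfrecord

# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=90000 \
    --model_variant="xception_65" \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --train_batch_size=1 \
    --fine_tune_batch_norm=false \
    --base_learning_rate=0.0001 \
    --dataset="cityscapes" \
    --train_split="train" \
    --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}

To reproduce the reported results instead, the note says to use a batch size larger than 8 and set fine_tune_batch_norm to True.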
research/deeplab/g3doc/pascal.md
@@ -44,6 +44,7 @@ A local training job using `xception_65` can be run with the following command:
 # From tensorflow/models/research/
 python deeplab/train.py \
     --logtostderr \
+    --training_number_of_steps=30000 \
     --train_split="train" \
     --model_variant="xception_65" \
     --atrous_rates=6 \
@@ -54,6 +55,8 @@ python deeplab/train.py \
     --train_crop_size=513 \
     --train_crop_size=513 \
     --train_batch_size=1 \
+    --dataset="pascal_voc_seg" \
+    --train_split="train" \
     --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
     --train_logdir=${PATH_TO_TRAIN_DIR} \
     --dataset_dir=${PATH_TO_DATASET}
@@ -65,11 +68,14 @@ directory in which training checkpoints and events will be written to, and
...
@@ -65,11 +68,14 @@ directory in which training checkpoints and events will be written to, and
${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
resides.
resides.
Note that for {train,eval,vis}.py:
**
Note that for {train,eval,vis}.py:
**
1.
We use small batch size during training. The users could change it based on
1.
In order to reproduce our results, one needs to use large batch size (> 12),
the available GPU memory and also set
`fine_tune_batch_norm`
to be False or
and set fine_tune_batch_norm = True. Here, we simply use small batch size
True depending on the use case.
during training for the purpose of demonstration. If the users have limited
GPU memory at hand, please fine-tune from our provided checkpoints whose
batch norm parameters have been trained, and use smaller learning rate with
fine_tune_batch_norm = False.
2.
The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
2.
The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
setting output_stride=8.
setting output_stride=8.
@@ -93,6 +99,8 @@ python deeplab/eval.py \
     --decoder_output_stride=4 \
     --eval_crop_size=513 \
     --eval_crop_size=513 \
+    --dataset="pascal_voc_seg" \
+    --eval_split="val" \
     --checkpoint_dir=${PATH_TO_CHECKPOINT} \
     --eval_logdir=${PATH_TO_EVAL_DIR} \
     --dataset_dir=${PATH_TO_DATASET}
@@ -119,6 +127,8 @@ python deeplab/vis.py \
     --decoder_output_stride=4 \
     --vis_crop_size=513 \
     --vis_crop_size=513 \
+    --dataset="pascal_voc_seg" \
+    --vis_split="val" \
     --checkpoint_dir=${PATH_TO_CHECKPOINT} \
     --vis_logdir=${PATH_TO_VIS_DIR} \
     --dataset_dir=${PATH_TO_DATASET}
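Note 2 in both files says to switch atrous_rates from [6, 12, 18] to [12, 24, 36] when using output_stride=8. Assuming --output_stride is the flag that controls this and that --atrous_rates is passed once per rate (as in the full commands in these docs), the relevant part of a PASCAL VOC training invocation would look roughly like the following sketch; these exact flag values are an illustration, not part of this commit.

# Sketch only (output_stride=8 variant from note 2); other flags unchanged
# from the pascal.md train.py command shown above.
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=30000 \
    --model_variant="xception_65" \
    --atrous_rates=12 \
    --atrous_rates=24 \
    --atrous_rates=36 \
    --output_stride=8 \
    --train_crop_size=513 \
    --train_crop_size=513 \
    --train_batch_size=1 \
    --dataset="pascal_voc_seg" \
    --train_split="train" \
    --tf_initial_checkpoints=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}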