Commit 8e25697b authored by lcchen

update dataset examples

parent 576a37d1
@@ -42,14 +42,7 @@ A local training job using `xception_65` can be run with the following command:
# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=90000 \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
@@ -60,16 +53,8 @@ python deeplab/train.py \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --train_batch_size=1 \
    --dataset="cityscapes" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
@@ -80,16 +65,6 @@ where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
directory in which training checkpoints and events will be written to, and
${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
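For concreteness, the three variables can be defined before running the command. The paths below are placeholders for illustration, not locations the repository prescribes:

```shell
# Placeholder paths -- substitute your own checkpoint, log, and dataset locations.
PATH_TO_INITIAL_CHECKPOINT="/tmp/deeplab/init_models/xception_65/model.ckpt"
PATH_TO_TRAIN_DIR="/tmp/deeplab/exp/cityscapes/train"
PATH_TO_DATASET="/tmp/deeplab/datasets/cityscapes/tfrecord"

# Create the log directory up front so checkpoints and events have a home.
mkdir -p "${PATH_TO_TRAIN_DIR}"
echo "training checkpoints and events will be written to ${PATH_TO_TRAIN_DIR}"
```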
**Note that for {train,eval,vis}.py**:
1. In order to reproduce our results, one needs to use large batch size (> 8),
@@ -98,10 +73,6 @@ Note that for {train,eval,vis}.py:
   GPU memory at hand, please fine-tune from our provided checkpoints whose
   batch norm parameters have been trained, and use smaller learning rate with
   fine_tune_batch_norm = False.
2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
   setting output_stride=8.
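Both notes can be condensed into a small flag-assembly sketch. The flag names follow the commands on this page; the exact learning-rate value is an illustrative assumption, not a prescribed setting:

```shell
# Note 2: atrous rates must match the chosen output stride.
OUTPUT_STRIDE=8
if [ "${OUTPUT_STRIDE}" -eq 8 ]; then
  ATROUS_FLAGS="--atrous_rates=12 --atrous_rates=24 --atrous_rates=36"
else
  ATROUS_FLAGS="--atrous_rates=6 --atrous_rates=12 --atrous_rates=18"
fi

# Note 1: with limited GPU memory, fine-tune from a provided checkpoint
# using a small batch, frozen batch norm, and a smaller learning rate.
FINETUNE_FLAGS="--train_batch_size=1 --fine_tune_batch_norm=false --base_learning_rate=0.0001"

echo "${ATROUS_FLAGS} --output_stride=${OUTPUT_STRIDE} ${FINETUNE_FLAGS}"
```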
@@ -125,16 +96,8 @@ python deeplab/eval.py \
    --decoder_output_stride=4 \
    --eval_crop_size=1025 \
    --eval_crop_size=2049 \
    --dataset="cityscapes" \
    --eval_split="val" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
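A side note on the crop-size flags used throughout this page: with output_stride=16, DeepLab crop sizes are conventionally of the form 16 * k + 1, which 513, 769, 1025, and 2049 all satisfy. A quick arithmetic check, assuming only that rule:

```shell
# Verify each crop size has the form 16 * k + 1 (matching output_stride=16).
for SIZE in 513 769 1025 2049; do
  if [ $(( (SIZE - 1) % 16 )) -eq 0 ]; then
    echo "${SIZE} = 16 * $(( (SIZE - 1) / 16 )) + 1"
  else
    echo "${SIZE} does not fit 16k + 1"
  fi
done
```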
@@ -161,16 +124,8 @@ python deeplab/vis.py \
    --decoder_output_stride=4 \
    --vis_crop_size=1025 \
    --vis_crop_size=2049 \
    --dataset="cityscapes" \
    --vis_split="val" \
    --colormap_type="cityscapes" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
...
@@ -44,14 +44,7 @@ A local training job using `xception_65` can be run with the following command:
# From tensorflow/models/research/
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=30000 \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
@@ -62,16 +55,8 @@ python deeplab/train.py \
    --train_crop_size=513 \
    --train_crop_size=513 \
    --train_batch_size=1 \
    --dataset="pascal_voc_seg" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
@@ -83,16 +68,6 @@ directory in which training checkpoints and events will be written to, and
${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
resides.
**Note that for {train,eval,vis}.py:**
1. In order to reproduce our results, one needs to use large batch size (> 12),
@@ -101,10 +76,6 @@ Note that for {train,eval,vis}.py:
   GPU memory at hand, please fine-tune from our provided checkpoints whose
   batch norm parameters have been trained, and use smaller learning rate with
   fine_tune_batch_norm = False.
2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
   setting output_stride=8.
@@ -128,16 +99,8 @@ python deeplab/eval.py \
    --decoder_output_stride=4 \
    --eval_crop_size=513 \
    --eval_crop_size=513 \
    --dataset="pascal_voc_seg" \
    --eval_split="val" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
@@ -164,16 +127,8 @@ python deeplab/vis.py \
    --decoder_output_stride=4 \
    --vis_crop_size=513 \
    --vis_crop_size=513 \
    --dataset="pascal_voc_seg" \
    --vis_split="val" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
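The three tools share one experiment layout: eval.py and vis.py read the checkpoints that train.py writes. A minimal sketch of wiring the directories together, with placeholder paths:

```shell
# One experiment directory per run (placeholder path, not prescribed by the repo).
EXP_DIR="/tmp/deeplab/exp/pascal_voc_seg"
PATH_TO_TRAIN_DIR="${EXP_DIR}/train"
PATH_TO_EVAL_DIR="${EXP_DIR}/eval"
PATH_TO_VIS_DIR="${EXP_DIR}/vis"

# eval.py and vis.py read checkpoints from the directory train.py writes to.
PATH_TO_CHECKPOINT="${PATH_TO_TRAIN_DIR}"

mkdir -p "${PATH_TO_TRAIN_DIR}" "${PATH_TO_EVAL_DIR}" "${PATH_TO_VIS_DIR}"
echo "checkpoints are read from ${PATH_TO_CHECKPOINT}"
```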
...