"git@developer.sourcefind.cn:ox696c/ktransformers.git" did not exist on "19d4a50b1cb514f9090c5bcbf5d9893da8b48674"
Commit 576a37d1 authored by lcchen

update dataset examples

parent e302950d
@@ -43,6 +43,10 @@ A local training job using `xception_65` can be run with the following command:
 python deeplab/train.py \
 --logtostderr \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--training_number_of_steps=90000 \
+>>>>>>> origin/master
 =======
 --training_number_of_steps=90000 \
 >>>>>>> origin/master
@@ -57,6 +61,11 @@ python deeplab/train.py \
 --train_crop_size=769 \
 --train_batch_size=1 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="cityscapes" \
+--train_split="train" \
+>>>>>>> origin/master
 =======
 --dataset="cityscapes" \
 --train_split="train" \
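Resolved in favor of the incoming `origin/master` lines, the Cityscapes training command these two hunks are building toward would look roughly like the sketch below. Every flag not visible in the diff (`--model_variant`, the `--atrous_rates`/`--output_stride` pair, `--decoder_output_stride`, and the checkpoint/log/dataset paths) is an assumption carried over from the upstream DeepLab documentation, not something this commit shows.

```bash
# Sketch of the resolved command; flags outside this diff are assumed
# from the upstream DeepLab docs and may differ in this fork.
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=90000 \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --train_batch_size=1 \
    --dataset="cityscapes" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```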
@@ -71,6 +80,7 @@ where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
 directory in which training checkpoints and events will be written to, and
 ${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
+<<<<<<< HEAD
 <<<<<<< HEAD
 Note that for {train,eval,vis}.py:
@@ -78,6 +88,8 @@ Note that for {train,eval,vis}.py:
 the available GPU memory and also set `fine_tune_batch_norm` to be False or
 True depending on the use case.
 =======
+=======
+>>>>>>> origin/master
 **Note that for {train,eval,vis}.py**:
 1. In order to reproduce our results, one needs to use large batch size (> 8),
@@ -86,6 +98,9 @@ Note that for {train,eval,vis}.py:
 GPU memory at hand, please fine-tune from our provided checkpoints whose
 batch norm parameters have been trained, and use smaller learning rate with
 fine_tune_batch_norm = False.
+<<<<<<< HEAD
+>>>>>>> origin/master
+=======
 >>>>>>> origin/master
 2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
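The note above prescribes `fine_tune_batch_norm = False` plus a smaller learning rate whenever the batch is too small to train batch norm statistics, and a switch of atrous_rates from [6, 12, 18] to [12, 24, 36] (in the upstream docs that pairs with halving `output_stride` to 8). A sketch of those overrides with illustrative values; the flag names come from upstream `deeplab/train.py`:

```bash
# Fine-tuning from a provided checkpoint with a small batch: freeze batch
# norm and lower the learning rate. Values here are illustrative only.
python deeplab/train.py \
    --logtostderr \
    --train_batch_size=1 \
    --fine_tune_batch_norm=false \
    --base_learning_rate=0.0001 \
    --training_number_of_steps=90000 \
    --dataset="cityscapes" \
    --train_split="train" \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --tf_initial_checkpoint=${PATH_TO_PROVIDED_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```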
@@ -111,6 +126,11 @@ python deeplab/eval.py \
 --eval_crop_size=1025 \
 --eval_crop_size=2049 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="cityscapes" \
+--eval_split="val" \
+>>>>>>> origin/master
 =======
 --dataset="cityscapes" \
 --eval_split="val" \
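The matching evaluation command, again resolved toward the `origin/master` side. The model flags must mirror training, and everything not shown in this hunk is an assumption from the upstream docs:

```bash
# Sketch of the resolved eval command; unshown flags are assumptions.
python deeplab/eval.py \
    --logtostderr \
    --eval_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --eval_crop_size=1025 \
    --eval_crop_size=2049 \
    --dataset="cityscapes" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```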
@@ -142,6 +162,11 @@ python deeplab/vis.py \
 --vis_crop_size=1025 \
 --vis_crop_size=2049 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="cityscapes" \
+--vis_split="val" \
+>>>>>>> origin/master
 =======
 --dataset="cityscapes" \
 --vis_split="val" \
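And the visualization command; `--colormap_type` is an upstream `vis.py` flag assumed here so the Cityscapes palette is used, not something this diff adds:

```bash
# Sketch of the resolved vis command; unshown flags are assumptions.
python deeplab/vis.py \
    --logtostderr \
    --vis_split="val" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --vis_crop_size=1025 \
    --vis_crop_size=2049 \
    --dataset="cityscapes" \
    --colormap_type="cityscapes" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```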
...
@@ -45,6 +45,10 @@ A local training job using `xception_65` can be run with the following command:
 python deeplab/train.py \
 --logtostderr \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--training_number_of_steps=30000 \
+>>>>>>> origin/master
 =======
 --training_number_of_steps=30000 \
 >>>>>>> origin/master
@@ -59,6 +63,11 @@ python deeplab/train.py \
 --train_crop_size=513 \
 --train_batch_size=1 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="pascal_voc_seg" \
+--train_split="train" \
+>>>>>>> origin/master
 =======
 --dataset="pascal_voc_seg" \
 --train_split="train" \
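For the PASCAL VOC file the same resolution applies; only the dataset-specific values differ from the Cityscapes sketch above (fewer steps, 513 crops, `pascal_voc_seg`). Unshown flags are again assumptions from the upstream docs:

```bash
# Sketch of the resolved PASCAL VOC training command.
python deeplab/train.py \
    --logtostderr \
    --training_number_of_steps=30000 \
    --train_split="train" \
    --model_variant="xception_65" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size=513 \
    --train_crop_size=513 \
    --train_batch_size=1 \
    --dataset="pascal_voc_seg" \
    --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
    --train_logdir=${PATH_TO_TRAIN_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```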
@@ -74,6 +83,7 @@ directory in which training checkpoints and events will be written to, and
 ${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
 resides.
+<<<<<<< HEAD
 <<<<<<< HEAD
 Note that for {train,eval,vis}.py:
@@ -81,6 +91,8 @@ Note that for {train,eval,vis}.py:
 the available GPU memory and also set `fine_tune_batch_norm` to be False or
 True depending on the use case.
 =======
+=======
+>>>>>>> origin/master
 **Note that for {train,eval,vis}.py:**
 1. In order to reproduce our results, one needs to use large batch size (> 12),
@@ -89,6 +101,9 @@ Note that for {train,eval,vis}.py:
 GPU memory at hand, please fine-tune from our provided checkpoints whose
 batch norm parameters have been trained, and use smaller learning rate with
 fine_tune_batch_norm = False.
+<<<<<<< HEAD
+>>>>>>> origin/master
+=======
 >>>>>>> origin/master
 2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
@@ -114,6 +129,11 @@ python deeplab/eval.py \
 --eval_crop_size=513 \
 --eval_crop_size=513 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="pascal_voc_seg" \
+--eval_split="val" \
+>>>>>>> origin/master
 =======
 --dataset="pascal_voc_seg" \
 --eval_split="val" \
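The PASCAL VOC eval command differs from the Cityscapes one only in the crop size and dataset flags; this sketch keeps just those, with the rest assumed to mirror the training settings:

```bash
# Model flags (variant, atrous rates, strides) must mirror training and
# are omitted here; they are assumptions either way.
python deeplab/eval.py \
    --logtostderr \
    --eval_split="val" \
    --eval_crop_size=513 \
    --eval_crop_size=513 \
    --dataset="pascal_voc_seg" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --eval_logdir=${PATH_TO_EVAL_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```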
@@ -145,6 +165,11 @@ python deeplab/vis.py \
 --vis_crop_size=513 \
 --vis_crop_size=513 \
 <<<<<<< HEAD
+<<<<<<< HEAD
+=======
+--dataset="pascal_voc_seg" \
+--vis_split="val" \
+>>>>>>> origin/master
 =======
 --dataset="pascal_voc_seg" \
 --vis_split="val" \
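Likewise for visualization; `--colormap_type="pascal"` is the upstream default, shown here only to make the assumption explicit:

```bash
# Sketch of the resolved PASCAL VOC vis command; unshown model flags
# are assumed to mirror training.
python deeplab/vis.py \
    --logtostderr \
    --vis_split="val" \
    --vis_crop_size=513 \
    --vis_crop_size=513 \
    --dataset="pascal_voc_seg" \
    --colormap_type="pascal" \
    --checkpoint_dir=${PATH_TO_CHECKPOINT} \
    --vis_logdir=${PATH_TO_VIS_DIR} \
    --dataset_dir=${PATH_TO_DATASET}
```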
...
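Both files still carry conflict markers from an earlier merge, and this commit nests a second set inside them rather than resolving anything. A minimal cleanup sketch, assuming a standard git checkout of this fork; the pathspec and commit message are illustrative, since the diff view does not show the affected file names:

```bash
# Find leftover conflict markers in the documentation files.
git grep -nE '^(<{7} |={7}$|>{7} )' -- '*.md'

# After hand-editing each file to keep the intended origin/master lines:
git add -u
git commit -m "Remove leftover merge conflict markers"
```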