OpenDAS / dlib

Commit b9332698, authored May 23, 2016 by Davis King

    updated example

Parent: e5ad9590
Showing 1 changed file with 7 additions and 7 deletions.

examples/dnn_mnist_advanced_ex.cpp (+7, -7)
...
@@ -198,32 +198,32 @@ int main(int argc, char** argv) try
layer<2> avg_pool (nr=0, nc=0, stride_y=1, stride_x=1, padding_y=0, padding_x=0)
layer<3> prelu (initial_param_value=0.2)
layer<4> add_prev
layer<5> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<6> con (num_filters=8, nr=3, nc=3, stride_y=1, stride_x=1, padding_y=1, padding_x=1) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<7> prelu (initial_param_value=0.25)
layer<8> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<9> con (num_filters=8, nr=3, nc=3, stride_y=1, stride_x=1, padding_y=1, padding_x=1) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<10> tag1
...
layer<34> relu
layer<35> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<36> con (num_filters=8, nr=3, nc=3, stride_y=2, stride_x=2, padding_y=0, padding_x=0) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<37> tag1
layer<38> tag4
layer<39> prelu (initial_param_value=0.3)
layer<40> add_prev
layer<41> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
...
layer<118> relu
layer<119> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<120> con (num_filters=8, nr=3, nc=3, stride_y=2, stride_x=2, padding_y=0, padding_x=0) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<121> tag1
layer<122> relu
layer<123> add_prev
layer<124> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<125> con (num_filters=8, nr=3, nc=3, stride_y=1, stride_x=1, padding_y=1, padding_x=1) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<126> relu
layer<127> bn_con eps=1e-05 learning_rate_mult=1 weight_decay_mult=0 bias_learning_rate_mult=1 bias_weight_decay_mult=1
layer<128> con (num_filters=8, nr=3, nc=3, stride_y=1, stride_x=1, padding_y=1, padding_x=1) learning_rate_mult=1 weight_decay_mult=1 bias_learning_rate_mult=1 bias_weight_decay_mult=0
layer<129> tag1
layer<130> input<matrix>
...
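For context, a listing like the one above is what dlib prints when a network object is streamed to std::cout, which is what this example does after building its deep residual network. The sketch below is a minimal, hypothetical reduction of that idea, not the network defined in dnn_mnist_advanced_ex.cpp: the aliases mini_res and net_type are invented for illustration, but they are composed from the same layer templates that appear in the printout (con, bn_con, prelu, tag1, add_prev1, avg_pool_everything, fc, loss_multiclass_log).

#include <dlib/dnn.h>
#include <iostream>

using namespace dlib;

// Hypothetical miniature residual block built from the layer templates seen
// in the printout: two 3x3 convolutions with batch norm and prelu, plus a
// skip connection formed by tag1/add_prev1.
template <typename SUBNET>
using mini_res = prelu<add_prev1<bn_con<con<8,3,3,1,1,
                 prelu<bn_con<con<8,3,3,1,1,tag1<SUBNET>>>>>>>>;

// A tiny network: two residual blocks, global average pooling, and a 10-way
// classifier, reading the same MNIST-style input type as the example.
using net_type = loss_multiclass_log<fc<10,
                     avg_pool_everything<
                     mini_res<mini_res<
                     input<matrix<unsigned char>>>>>>>;

int main()
{
    net_type net;
    // Streaming the network prints one line per layer, from layer<0> down to
    // the input layer, in the same format as the listing in the diff above.
    std::cout << net << std::endl;
}

The real example stacks many such residual and downsampling blocks (the stride_y=2, stride_x=2 convolutions above are the downsampling variants), which is why its printout runs all the way to layer<130> input<matrix>.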