OpenDAS / nni · Commit 12410686 (unverified)

Authored Jun 21, 2019 by chicm-ms; committed by GitHub on Jun 21, 2019.

Merge pull request #20 from microsoft/master

pull code

Parents: 611a45fc, 61fec446
Showing 20 changed files with 432 additions and 42 deletions (+432, −42).
| File | + | − |
|------|---|---|
| examples/trials/kaggle-tgs-salt/predict.py | +4 | −4 |
| examples/trials/kaggle-tgs-salt/preprocess.py | +3 | −3 |
| examples/trials/kaggle-tgs-salt/train.py | +9 | −9 |
| examples/trials/kaggle-tgs-salt/utils.py | +1 | −1 |
| examples/trials/mnist-batch-tune-keras/mnist-keras.py | +1 | −1 |
| examples/trials/mnist-batch-tune-keras/search_space.json | +2 | −2 |
| examples/trials/mnist-distributed-pytorch/dist_mnist.py | +3 | −3 |
| examples/trials/mnist-nas/config.yml | +20 | −0 |
| examples/trials/mnist-nas/mnist.py | +253 | −0 |
| examples/trials/mnist-nas/operators.py | +109 | −0 |
| examples/trials/mnist-nested-search-space/mnist.py | +1 | −1 |
| examples/trials/nas_cifar10/README.md | +8 | −8 |
| examples/trials/nas_cifar10/README_zh_CN.md | +8 | −0 |
| examples/trials/network_morphism/FashionMNIST/config.yml | +1 | −1 |
| examples/trials/network_morphism/FashionMNIST/config_pai.yml | +1 | −1 |
| examples/trials/network_morphism/README.md | +3 | −3 |
| examples/trials/network_morphism/README_zh_CN.md | +2 | −2 |
| examples/trials/network_morphism/cifar10/config.yml | +1 | −1 |
| examples/trials/network_morphism/cifar10/config_pai.yml | +1 | −1 |
| examples/trials/sklearn/classification/main.py | +1 | −1 |
examples/trials/mnist-nas/config.yml (new file, mode 100644)
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
#choice: local, remote, pai
trainingServicePlatform: local
#choice: true, false
useAnnotation: true
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #SMAC (SMAC should be installed through nnictl)
  #codeDir: ~/nni/nni/examples/tuners/random_nas_tuner
  codeDir: ../../tuners/random_nas_tuner
  classFileName: random_nas_tuner.py
  className: RandomNASTuner
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
```
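Note that the `tuner` section loads a custom tuner from a local path (`codeDir`, `classFileName`, `className`) rather than naming a builtin tuner. As a hedged sketch of what such a class provides (hypothetical; the real `random_nas_tuner.py` ships under `examples/tuners/random_nas_tuner` and may differ, and the search-space schema assumed below is illustrative):

```python
# Hypothetical sketch of a file-loaded custom tuner such as RandomNASTuner.
# Assumes the NNI custom-tuner base class of this era (nni.tuner.Tuner);
# the schema of the extracted @nni.mutable_layers search space is assumed.
import random
from nni.tuner import Tuner

class RandomNASTuner(Tuner):
    """Samples one candidate op per mutable layer, uniformly at random."""

    def update_search_space(self, search_space):
        # Called once with the search space NNI extracted from the
        # @nni.mutable_layers annotations in mnist.py.
        self.search_space = search_space

    def generate_parameters(self, parameter_id):
        # One architecture = an independent random choice per mutable layer.
        return {
            layer: {'chosen_layer': random.choice(spec['layer_choice']),
                    'chosen_inputs': []}   # skip optional inputs in this sketch
            for layer, spec in self.search_space.items()
        }

    def receive_trial_result(self, parameter_id, parameters, value):
        # Pure random search learns nothing from trial results; a smarter
        # tuner would update its sampling distribution here.
        pass
```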
examples/trials/mnist-nas/mnist.py (new file, mode 100644)
"""A deep MNIST classifier using convolutional layers."""
import
argparse
import
logging
import
math
import
tempfile
import
time
import
tensorflow
as
tf
from
tensorflow.examples.tutorials.mnist
import
input_data
import
operators
as
op
FLAGS
=
None
logger
=
logging
.
getLogger
(
'mnist_AutoML'
)
class
MnistNetwork
(
object
):
'''
MnistNetwork is for initializing and building basic network for mnist.
'''
def
__init__
(
self
,
channel_1_num
,
channel_2_num
,
conv_size
,
hidden_size
,
pool_size
,
learning_rate
,
x_dim
=
784
,
y_dim
=
10
):
self
.
channel_1_num
=
channel_1_num
self
.
channel_2_num
=
channel_2_num
self
.
conv_size
=
conv_size
self
.
hidden_size
=
hidden_size
self
.
pool_size
=
pool_size
self
.
learning_rate
=
learning_rate
self
.
x_dim
=
x_dim
self
.
y_dim
=
y_dim
self
.
images
=
tf
.
placeholder
(
tf
.
float32
,
[
None
,
self
.
x_dim
],
name
=
'input_x'
)
self
.
labels
=
tf
.
placeholder
(
tf
.
float32
,
[
None
,
self
.
y_dim
],
name
=
'input_y'
)
self
.
keep_prob
=
tf
.
placeholder
(
tf
.
float32
,
name
=
'keep_prob'
)
self
.
train_step
=
None
self
.
accuracy
=
None
def
build_network
(
self
):
'''
Building network for mnist, meanwhile specifying its neural architecture search space
'''
# Reshape to use within a convolutional neural net.
# Last dimension is for "features" - there is only one here, since images are
# grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
with
tf
.
name_scope
(
'reshape'
):
try
:
input_dim
=
int
(
math
.
sqrt
(
self
.
x_dim
))
except
:
print
(
'input dim cannot be sqrt and reshape. input dim: '
+
str
(
self
.
x_dim
))
logger
.
debug
(
'input dim cannot be sqrt and reshape. input dim: %s'
,
str
(
self
.
x_dim
))
raise
x_image
=
tf
.
reshape
(
self
.
images
,
[
-
1
,
input_dim
,
input_dim
,
1
])
"""@nni.mutable_layers(
{
layer_choice: [op.conv2d(size=1, in_ch=1, out_ch=self.channel_1_num),
op.conv2d(size=3, in_ch=1, out_ch=self.channel_1_num),
op.twice_conv2d(size=3, in_ch=1, out_ch=self.channel_1_num),
op.twice_conv2d(size=7, in_ch=1, out_ch=self.channel_1_num),
op.dilated_conv(in_ch=1, out_ch=self.channel_1_num),
op.separable_conv(size=3, in_ch=1, out_ch=self.channel_1_num),
op.separable_conv(size=5, in_ch=1, out_ch=self.channel_1_num),
op.separable_conv(size=7, in_ch=1, out_ch=self.channel_1_num)],
fixed_inputs: [x_image],
layer_output: conv1_out
},
{
layer_choice: [op.post_process(ch_size=self.channel_1_num)],
fixed_inputs: [conv1_out],
layer_output: post1_out
},
{
layer_choice: [op.max_pool(size=3),
op.max_pool(size=5),
op.max_pool(size=7),
op.avg_pool(size=3),
op.avg_pool(size=5),
op.avg_pool(size=7)],
fixed_inputs: [post1_out],
layer_output: pool1_out
},
{
layer_choice: [op.conv2d(size=1, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.conv2d(size=3, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.twice_conv2d(size=3, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.twice_conv2d(size=7, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.dilated_conv(in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.separable_conv(size=3, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.separable_conv(size=5, in_ch=self.channel_1_num, out_ch=self.channel_2_num),
op.separable_conv(size=7, in_ch=self.channel_1_num, out_ch=self.channel_2_num)],
fixed_inputs: [pool1_out],
optional_inputs: [post1_out],
optional_input_size: [0, 1],
layer_output: conv2_out
},
{
layer_choice: [op.post_process(ch_size=self.channel_2_num)],
fixed_inputs: [conv2_out],
layer_output: post2_out
},
{
layer_choice: [op.max_pool(size=3),
op.max_pool(size=5),
op.max_pool(size=7),
op.avg_pool(size=3),
op.avg_pool(size=5),
op.avg_pool(size=7)],
fixed_inputs: [post2_out],
optional_inputs: [post1_out, pool1_out],
optional_input_size: [0, 1],
layer_output: pool2_out
}
)"""
# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image
# is down to 7x7x64 feature maps -- maps this to 1024 features.
last_dim_list
=
pool2_out
.
get_shape
().
as_list
()
assert
(
last_dim_list
[
1
]
==
last_dim_list
[
2
])
last_dim
=
last_dim_list
[
1
]
with
tf
.
name_scope
(
'fc1'
):
w_fc1
=
op
.
weight_variable
(
[
last_dim
*
last_dim
*
self
.
channel_2_num
,
self
.
hidden_size
])
b_fc1
=
op
.
bias_variable
([
self
.
hidden_size
])
h_pool2_flat
=
tf
.
reshape
(
pool2_out
,
[
-
1
,
last_dim
*
last_dim
*
self
.
channel_2_num
])
h_fc1
=
tf
.
nn
.
relu
(
tf
.
matmul
(
h_pool2_flat
,
w_fc1
)
+
b_fc1
)
# Dropout - controls the complexity of the model, prevents co-adaptation of features.
with
tf
.
name_scope
(
'dropout'
):
h_fc1_drop
=
tf
.
nn
.
dropout
(
h_fc1
,
self
.
keep_prob
)
# Map the 1024 features to 10 classes, one for each digit
with
tf
.
name_scope
(
'fc2'
):
w_fc2
=
op
.
weight_variable
([
self
.
hidden_size
,
self
.
y_dim
])
b_fc2
=
op
.
bias_variable
([
self
.
y_dim
])
y_conv
=
tf
.
matmul
(
h_fc1_drop
,
w_fc2
)
+
b_fc2
with
tf
.
name_scope
(
'loss'
):
cross_entropy
=
tf
.
reduce_mean
(
tf
.
nn
.
softmax_cross_entropy_with_logits
(
labels
=
self
.
labels
,
logits
=
y_conv
))
with
tf
.
name_scope
(
'adam_optimizer'
):
self
.
train_step
=
tf
.
train
.
AdamOptimizer
(
self
.
learning_rate
).
minimize
(
cross_entropy
)
with
tf
.
name_scope
(
'accuracy'
):
correct_prediction
=
tf
.
equal
(
tf
.
argmax
(
y_conv
,
1
),
tf
.
argmax
(
self
.
labels
,
1
))
self
.
accuracy
=
tf
.
reduce_mean
(
tf
.
cast
(
correct_prediction
,
tf
.
float32
))
def
download_mnist_retry
(
data_dir
,
max_num_retries
=
20
):
"""Try to download mnist dataset and avoid errors"""
for
_
in
range
(
max_num_retries
):
try
:
return
input_data
.
read_data_sets
(
data_dir
,
one_hot
=
True
)
except
tf
.
errors
.
AlreadyExistsError
:
time
.
sleep
(
1
)
raise
Exception
(
"Failed to download MNIST."
)
def
main
(
params
):
'''
Main function, build mnist network, run and send result to NNI.
'''
# Import data
mnist
=
download_mnist_retry
(
params
[
'data_dir'
])
print
(
'Mnist download data done.'
)
logger
.
debug
(
'Mnist download data done.'
)
# Create the model
# Build the graph for the deep net
mnist_network
=
MnistNetwork
(
channel_1_num
=
params
[
'channel_1_num'
],
channel_2_num
=
params
[
'channel_2_num'
],
conv_size
=
params
[
'conv_size'
],
hidden_size
=
params
[
'hidden_size'
],
pool_size
=
params
[
'pool_size'
],
learning_rate
=
params
[
'learning_rate'
])
mnist_network
.
build_network
()
logger
.
debug
(
'Mnist build network done.'
)
# Write log
graph_location
=
tempfile
.
mkdtemp
()
logger
.
debug
(
'Saving graph to: %s'
,
graph_location
)
train_writer
=
tf
.
summary
.
FileWriter
(
graph_location
)
train_writer
.
add_graph
(
tf
.
get_default_graph
())
test_acc
=
0.0
with
tf
.
Session
()
as
sess
:
sess
.
run
(
tf
.
global_variables_initializer
())
for
i
in
range
(
params
[
'batch_num'
]):
batch
=
mnist
.
train
.
next_batch
(
params
[
'batch_size'
])
mnist_network
.
train_step
.
run
(
feed_dict
=
{
mnist_network
.
images
:
batch
[
0
],
mnist_network
.
labels
:
batch
[
1
],
mnist_network
.
keep_prob
:
1
-
params
[
'dropout_rate'
]}
)
if
i
%
100
==
0
:
test_acc
=
mnist_network
.
accuracy
.
eval
(
feed_dict
=
{
mnist_network
.
images
:
mnist
.
test
.
images
,
mnist_network
.
labels
:
mnist
.
test
.
labels
,
mnist_network
.
keep_prob
:
1.0
})
"""@nni.report_intermediate_result(test_acc)"""
logger
.
debug
(
'test accuracy %g'
,
test_acc
)
logger
.
debug
(
'Pipe send intermediate result done.'
)
test_acc
=
mnist_network
.
accuracy
.
eval
(
feed_dict
=
{
mnist_network
.
images
:
mnist
.
test
.
images
,
mnist_network
.
labels
:
mnist
.
test
.
labels
,
mnist_network
.
keep_prob
:
1.0
})
"""@nni.report_final_result(test_acc)"""
logger
.
debug
(
'Final result is %g'
,
test_acc
)
logger
.
debug
(
'Send final result done.'
)
def
get_params
():
''' Get parameters from command line '''
parser
=
argparse
.
ArgumentParser
()
parser
.
add_argument
(
"--data_dir"
,
type
=
str
,
default
=
'/tmp/tensorflow/mnist/input_data'
,
help
=
"data directory"
)
parser
.
add_argument
(
"--dropout_rate"
,
type
=
float
,
default
=
0.5
,
help
=
"dropout rate"
)
parser
.
add_argument
(
"--channel_1_num"
,
type
=
int
,
default
=
32
)
parser
.
add_argument
(
"--channel_2_num"
,
type
=
int
,
default
=
64
)
parser
.
add_argument
(
"--conv_size"
,
type
=
int
,
default
=
5
)
parser
.
add_argument
(
"--pool_size"
,
type
=
int
,
default
=
2
)
parser
.
add_argument
(
"--hidden_size"
,
type
=
int
,
default
=
1024
)
parser
.
add_argument
(
"--learning_rate"
,
type
=
float
,
default
=
1e-4
)
parser
.
add_argument
(
"--batch_num"
,
type
=
int
,
default
=
2000
)
parser
.
add_argument
(
"--batch_size"
,
type
=
int
,
default
=
32
)
args
,
_
=
parser
.
parse_known_args
()
return
args
if
__name__
==
'__main__'
:
try
:
params
=
vars
(
get_params
())
main
(
params
)
except
Exception
as
exception
:
logger
.
exception
(
exception
)
raise
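The `"""@nni.mutable_layers(...)"""` docstrings are not dead strings: because `config.yml` sets `useAnnotation: true`, NNI rewrites them into real code for each trial, binding `conv1_out`, `post1_out`, `pool1_out`, `conv2_out`, `post2_out`, and `pool2_out`. That is why `pool2_out` can be referenced immediately after the annotation. As a rough illustration of what one sampled architecture could expand to (a hypothetical expansion; NNI's generated code differs in detail, but the call convention `inputs == [fixed_inputs, optional_inputs]` is exactly what `operators.py` indexes as `inputs[0][0]` and `inputs[1]`):

```python
# Hypothetical expansion of the first mutable layers for one sampled
# architecture (TF 1.x). Channel counts match the mnist.py defaults.
import tensorflow as tf
import operators as op

channel_1_num, channel_2_num = 32, 64
images = tf.placeholder(tf.float32, [None, 784], name='input_x')
x_image = tf.reshape(images, [-1, 28, 28, 1])

# Each operator receives [[fixed_inputs...], [chosen optional_inputs...]].
conv1_out = op.twice_conv2d([[x_image], []], size=3, in_ch=1, out_ch=channel_1_num)
post1_out = op.post_process([[conv1_out], []], ch_size=channel_1_num)
pool1_out = op.max_pool([[post1_out], []], size=5)

# If the tuner also selects post1_out as an optional input, conv2d first
# merges it into the fixed input via sum_op, then convolves:
conv2_out = op.conv2d([[pool1_out], [post1_out]],
                      size=3, in_ch=channel_1_num, out_ch=channel_2_num)
```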
examples/trials/mnist-nas/operators.py (new file, mode 100644)
```python
import math

import tensorflow as tf


def weight_variable(shape):
    """weight_variable generates a weight variable of a given shape."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)


def bias_variable(shape):
    """bias_variable generates a bias variable of a given shape."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)


def sum_op(inputs):
    """sum_op"""
    fixed_input = inputs[0][0]
    optional_input = inputs[1][0]
    fixed_shape = fixed_input.get_shape().as_list()
    optional_shape = optional_input.get_shape().as_list()
    assert fixed_shape[1] == fixed_shape[2]
    assert optional_shape[1] == optional_shape[2]
    pool_size = math.ceil(optional_shape[1] / fixed_shape[1])
    pool_out = tf.nn.avg_pool(optional_input,
                              ksize=[1, pool_size, pool_size, 1],
                              strides=[1, pool_size, pool_size, 1],
                              padding='SAME')
    conv_matrix = weight_variable([1, 1, optional_shape[3], fixed_shape[3]])
    conv_out = tf.nn.conv2d(pool_out, conv_matrix, strides=[1, 1, 1, 1], padding='SAME')
    return fixed_input + conv_out


def conv2d(inputs, size=-1, in_ch=-1, out_ch=-1):
    """conv2d returns a 2d convolution layer with full stride."""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size in [1, 3]:
        w_matrix = weight_variable([size, size, in_ch, out_ch])
        return tf.nn.conv2d(x_input, w_matrix, strides=[1, 1, 1, 1], padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def twice_conv2d(inputs, size=-1, in_ch=-1, out_ch=-1):
    """twice_conv2d"""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size in [3, 7]:
        w_matrix1 = weight_variable([1, size, in_ch, int(out_ch / 2)])
        out = tf.nn.conv2d(x_input, w_matrix1, strides=[1, 1, 1, 1], padding='SAME')
        w_matrix2 = weight_variable([size, 1, int(out_ch / 2), out_ch])
        return tf.nn.conv2d(out, w_matrix2, strides=[1, 1, 1, 1], padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def dilated_conv(inputs, size=3, in_ch=-1, out_ch=-1):
    """dilated_conv"""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size == 3:
        w_matrix = weight_variable([size, size, in_ch, out_ch])
        return tf.nn.atrous_conv2d(x_input, w_matrix, rate=2, padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def separable_conv(inputs, size=-1, in_ch=-1, out_ch=-1):
    """separable_conv"""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size in [3, 5, 7]:
        depth_matrix = weight_variable([size, size, in_ch, 1])
        point_matrix = weight_variable([1, 1, 1 * in_ch, out_ch])
        return tf.nn.separable_conv2d(x_input, depth_matrix, point_matrix,
                                      strides=[1, 1, 1, 1], padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def avg_pool(inputs, size=-1):
    """avg_pool downsamples a feature map."""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size in [3, 5, 7]:
        return tf.nn.avg_pool(x_input,
                              ksize=[1, size, size, 1],
                              strides=[1, size, size, 1],
                              padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def max_pool(inputs, size=-1):
    """max_pool downsamples a feature map."""
    if not inputs[1]:
        x_input = inputs[0][0]
    else:
        x_input = sum_op(inputs)
    if size in [3, 5, 7]:
        return tf.nn.max_pool(x_input,
                              ksize=[1, size, size, 1],
                              strides=[1, size, size, 1],
                              padding='SAME')
    else:
        raise Exception("Unknown filter size: %d." % size)


def post_process(inputs, ch_size=-1):
    """post_process"""
    x_input = inputs[0][0]
    bias_matrix = bias_variable([ch_size])
    return tf.nn.relu(x_input + bias_matrix)
```
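Every operator accepts the same `inputs` structure, and `sum_op` is what lets a candidate layer consume a skip connection whose spatial size and channel count do not match: it average-pools the optional input down to the fixed input's spatial size, then projects its channels with a 1x1 convolution before adding elementwise. A minimal usage sketch (TF 1.x; the placeholder shapes are illustrative, not taken from the diff):

```python
# Minimal sketch of sum_op aligning mismatched feature maps (TF 1.x).
import tensorflow as tf
import operators as op

fixed = tf.placeholder(tf.float32, [None, 14, 14, 64])  # e.g. a pooled layer
skip = tf.placeholder(tf.float32, [None, 28, 28, 32])   # e.g. an earlier layer

# sum_op pools `skip` 28x28 -> 14x14 (pool_size = ceil(28/14) = 2), then
# 1x1-convs 32 -> 64 channels so the two tensors can be added elementwise.
merged = op.sum_op([[fixed], [skip]])
print(merged.get_shape().as_list())                      # [None, 14, 14, 64]
```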
examples/trials/NAS/README.md → examples/trials/nas_cifar10/README.md (renamed)
examples/trials/nas_cifar10/README_zh_CN.md (new file, mode 100644)
**Run Neural Architecture Search in NNI**
===

Refer to [NNI-NAS-Example](https://github.com/Crysple/NNI-NAS-Example) to use the NAS interface provided by a contributor.

Thanks to our lovely contributor, and welcome more and more people to join us!

\ No newline at end of file
examples/trials/network_morphism/README_zh_CN.md
```diff
@@ -99,10 +99,10 @@ nnictl create --config config.yml
 `Fashion-MNIST` is a dataset of article images from [Zalando](https://jobs.zalando.com/tech/), with a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image with a label from one of 10 classes. Because plain MNIST has become too easy, this dataset is increasingly used as a drop-in replacement benchmark for MNIST.

-Here are two examples, [FashionMNIST-keras.py](../../../../examples/trials/network_morphism/FashionMNIST/FashionMNIST_keras.py) and [FashionMNIST-pytorch.py](../../../../examples/trials/network_morphism/FashionMNIST/FashionMNIST_pytorch.py). Note that for this dataset you need to change `input_width` to 28 and `input_channel` to 1 in `config.yml`.
+Here are two examples, [FashionMNIST-keras.py](./FashionMNIST/FashionMNIST_keras.py) and [FashionMNIST-pytorch.py](./FashionMNIST/FashionMNIST_pytorch.py). Note that for this dataset you need to change `input_width` to 28 and `input_channel` to 1 in `config.yml`.

 ### Cifar10

 The `CIFAR-10` dataset ([Canadian Institute For Advanced Research](https://www.cifar.ca/)) is widely used to train machine learning and computer vision algorithms, and is one of the most popular datasets in the field. It contains 60,000 32x32 color images in 10 classes.

-Here are two examples, [cifar10-keras.py](../../../../examples/trials/network_morphism/cifar10/cifar10_keras.py) and [cifar10-pytorch.py](../../../../examples/trials/network_morphism/cifar10/cifar10_pytorch.py). For this dataset, `input_width` is 32 and `input_channel` is 3 in `config.yml`.
\ No newline at end of file
+Here are two examples, [cifar10-keras.py](./cifar10/cifar10_keras.py) and [cifar10-pytorch.py](./cifar10/cifar10_pytorch.py). For this dataset, `input_width` is 32 and `input_channel` is 3 in `config.yml`.
\ No newline at end of file
```