ModelZoo / ResNet50_tensorflow · Commit da0d0d27

Fix typos

authored Nov 01, 2020 by anivegesana
parent 868cea8f

Showing 1 changed file with 73 additions and 74 deletions (+73 / -74):

official/vision/beta/projects/yolo/modeling/layers/nn_blocks.py
@@ -33,20 +33,20 @@ class DarkConv(ks.layers.Layer):
     strides: integer of tuple how much to move the kernel after each kernel use
     padding: string 'valid' or 'same', if same, then pad the image, else do not
     dialtion_rate: tuple to indicate how much to modulate kernel weights and the
-      how many pixels ina featur map to skip
-    use_bias: boolean to indicate wither to use bias in convolution layer
-    kernel_initializer: string to indicate which function to use to initialize weigths
+      how many pixels in a feature map to skip
+    use_bias: boolean to indicate whether to use bias in convolution layer
+    kernel_initializer: string to indicate which function to use to initialize weights
     bias_initializer: string to indicate which function to use to initialize bias
     kernel_regularizer: string to indicate which function to use to regularizer weights
     bias_regularizer: string to indicate which function to use to regularizer bias
     group_id: integer for which group of features to pass through the conv.
     groups: integer for how many splits there should be in the convolution feature stack input
-    grouping_only: skip the convolution and only return the group of featuresindicated by grouping_only
-    use_bn: boolean for wether to use batchnormalization
-    use_sync_bn: boolean for wether sync batch normalization statistics
+    grouping_only: skip the convolution and only return the group of features indicated by grouping_only
+    use_bn: boolean for whether to use batch normalization
+    use_sync_bn: boolean for whether sync batch normalization statistics
       of all batch norm layers to the models global statistics (across all input batches)
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     activation: string or None for activation function to use in layer,
       if None activation is replaced by linear
     leaky_alpha: float to use as alpha if activation function is leaky
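The docstring above enumerates DarkConv's arguments. A minimal instantiation sketch, assuming the constructor mirrors the documented names; kernel_size and the default values shown here are assumptions, not confirmed by this diff:

import tensorflow as tf
from official.vision.beta.projects.yolo.modeling.layers.nn_blocks import DarkConv

conv = DarkConv(
    filters=64,           # output depth, i.e. number of features to learn
    kernel_size=(3, 3),   # assumed argument name, standard for Keras convolutions
    strides=(1, 1),       # how much to move the kernel after each use
    padding='same',       # pad the image so spatial size is preserved
    use_bn=True,          # append batch normalization; bias is then disabled
    use_sync_bn=False,    # sync BN statistics across replicas when True
    norm_moment=0.99,     # assumed default momentum for batch normalization
    norm_epsilon=1e-3,    # assumed default epsilon for batch normalization
    activation='leaky',   # leaky_alpha is the slope when activation is leaky
    leaky_alpha=0.1)
y = conv(tf.zeros([1, 416, 416, 3]))  # e.g. a YOLO-sized input batch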
@@ -91,7 +91,7 @@ class DarkConv(ks.layers.Layer):
     self._kernel_regularizer = kernel_regularizer
     self._bias_regularizer = bias_regularizer
-    # batchnorm params
+    # batch normalization params
     self._use_bn = use_bn
     if self._use_bn:
       self._use_bias = False
@@ -137,7 +137,6 @@ class DarkConv(ks.layers.Layer):
         kernel_regularizer=self._kernel_regularizer,
         bias_regularizer=self._bias_regularizer)
-    #self.conv =tf.nn.convolution(filters=self._filters, strides=self._strides, padding=self._padding
     if self._use_bn:
       if self._use_sync_bn:
         self.bn = tf.keras.layers.experimental.SyncBatchNormalization(
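The hunk above shows DarkConv selecting tf.keras.layers.experimental.SyncBatchNormalization when self._use_sync_bn is set. A standalone sketch of that selection pattern; the helper name make_bn is hypothetical, and the diff truncates the file's actual branch:

import tensorflow as tf

def make_bn(use_sync_bn, norm_moment, norm_epsilon):
  # Sync BN aggregates statistics across all replicas, giving the
  # "global statistics (across all input batches)" the docstrings describe.
  if use_sync_bn:
    return tf.keras.layers.experimental.SyncBatchNormalization(
        momentum=norm_moment, epsilon=norm_epsilon)
  return tf.keras.layers.BatchNormalization(
      momentum=norm_moment, epsilon=norm_epsilon)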
@@ -207,18 +206,18 @@ class DarkTiny(ks.layers.Layer):
   Args:
     filters: integer for output depth, or the number of features to learn
-    use_bias: boolean to indicate wither to use bias in convolution layer
-    kernel_initializer: string to indicate which function to use to initialize weigths
+    use_bias: boolean to indicate whether to use bias in convolution layer
+    kernel_initializer: string to indicate which function to use to initialize weights
     bias_initializer: string to indicate which function to use to initialize bias
-    kernel_regularizer: string to indicate which function to use to regularizer weights
-    bias_regularizer: string to indicate which function to use to regularizer bias
-    use_bn: boolean for wether to use batchnormalization
-    use_sync_bn: boolean for wether sync batch normalization statistics
-      of all batch norm layers to the models global statistics (across all input batches)
+    kernel_regularizer: string to indicate which function to use to regularize weights
+    bias_regularizer: string to indicate which function to use to regularize bias
+    use_bn: boolean for whether to use batch normalization
+    use_sync_bn: boolean for whether to sync batch normalization statistics
+      of all batch norm layers to the models' global statistics (across all input batches)
     group_id: integer for which group of features to pass through the csp tiny stack.
     groups: integer for how many splits there should be in the convolution feature stack output
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     activation: string or None for activation function to use in layer,
       if None activation is replaced by linear
     **kwargs: Keyword Arguments
@@ -314,22 +313,22 @@ class DarkResidual(ks.layers.Layer):
   Args:
     filters: integer for output depth, or the number of features to learn
-    use_bias: boolean to indicate wither to use bias in convolution layer
-    kernel_initializer: string to indicate which function to use to initialize weigths
+    use_bias: boolean to indicate whether to use bias in convolution layer
+    kernel_initializer: string to indicate which function to use to initialize weights
     bias_initializer: string to indicate which function to use to initialize bias
     kernel_regularizer: string to indicate which function to use to regularizer weights
     bias_regularizer: string to indicate which function to use to regularizer bias
-    use_bn: boolean for wether to use batchnormalization
-    use_sync_bn: boolean for wether sync batch normalization statistics
+    use_bn: boolean for whether to use batch normalization
+    use_sync_bn: boolean for whether sync batch normalization statistics
       of all batch norm layers to the models global statistics (across all input batches)
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     conv_activation: string or None for activation function to use in layer,
       if None activation is replaced by linear
     leaky_alpha: float to use as alpha if activation function is leaky
     sc_activation: string for activation function to use in layer
     downsample: boolean for if image input is larger than layer output, set downsample to True
-      so the dimentions are forced to match
+      so the dimensions are forced to match
     **kwargs: Keyword Arguments
   '''
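The downsample flag in DarkResidual exists because a residual block adds its input to its output, so both branches must agree in shape. A hypothetical illustration of why a downsampling block must also reduce the shortcut, using plain Keras layers rather than the class's actual code:

import tensorflow as tf

x = tf.zeros([1, 64, 64, 128])
# Main path halves the spatial resolution.
main = tf.keras.layers.Conv2D(128, 3, strides=2, padding='same')(x)      # (1, 32, 32, 128)
# Shortcut must be downsampled the same way, or the add raises a shape error.
shortcut = tf.keras.layers.Conv2D(128, 1, strides=2, padding='same')(x)  # (1, 32, 32, 128)
out = tf.keras.layers.Add()([main, shortcut])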
@@ -472,24 +471,24 @@ class CSPTiny(ks.layers.Layer):
   Args:
     filters: integer for output depth, or the number of features to learn
-    use_bias: boolean to indicate wither to use bias in convolution layer
-    kernel_initializer: string to indicate which function to use to initialize weigths
+    use_bias: boolean to indicate whether to use bias in convolution layer
+    kernel_initializer: string to indicate which function to use to initialize weights
     bias_initializer: string to indicate which function to use to initialize bias
-    use_bn: boolean for wether to use batchnormalization
+    use_bn: boolean for whether to use batch normalization
     kernel_regularizer: string to indicate which function to use to regularizer weights
     bias_regularizer: string to indicate which function to use to regularizer bias
-    use_sync_bn: boolean for wether sync batch normalization statistics
+    use_sync_bn: boolean for whether sync batch normalization statistics
       of all batch norm layers to the models global statistics (across all input batches)
     group_id: integer for which group of features to pass through the csp tiny stack.
     groups: integer for how many splits there should be in the convolution feature stack output
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     conv_activation: string or None for activation function to use in layer,
       if None activation is replaced by linear
     leaky_alpha: float to use as alpha if activation function is leaky
     sc_activation: string for activation function to use in layer
     downsample: boolean for if image input is larger than layer output, set downsample to True
-      so the dimentions are forced to match
+      so the dimensions are forced to match
     **kwargs: Keyword Arguments
   """
   def __init__(
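CSPTiny, like DarkTiny and DarkConv, documents groups and group_id for routing only part of the feature stack. A hypothetical illustration of that split-and-select semantics using tf.split, not the class's actual implementation:

import tensorflow as tf

x = tf.zeros([1, 52, 52, 64])
groups, group_id = 2, 1
parts = tf.split(x, num_or_size_splits=groups, axis=-1)
selected = parts[group_id]  # keep only the second half of the channels
print(selected.shape)       # (1, 52, 52, 32)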
@@ -665,11 +664,11 @@ class CSPDownSample(ks.layers.Layer):
     bias_initializer: string to indicate which function to use to initialize bias
     kernel_regularizer: string to indicate which function to use to regularizer weights
     bias_regularizer: string to indicate which function to use to regularizer bias
-    use_bn: boolean for wether to use batchnormalization
-    use_sync_bn: boolean for wether sync batch normalization statistics
+    use_bn: boolean for whether to use batch normalization
+    use_sync_bn: boolean for whether sync batch normalization statistics
       of all batch norm layers to the models global statistics (across all input batches)
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     **kwargs: Keyword Arguments
   """
   def __init__(
@@ -767,11 +766,11 @@ class CSPConnect(ks.layers.Layer):
     bias_initializer: string to indicate which function to use to initialize bias
     kernel_regularizer: string to indicate which function to use to regularizer weights
     bias_regularizer: string to indicate which function to use to regularizer bias
-    use_bn: boolean for wether to use batchnormalization
-    use_sync_bn: boolean for wether sync batch normalization statistics
+    use_bn: boolean for whether to use batch normalization
+    use_sync_bn: boolean for whether sync batch normalization statistics
       of all batch norm layers to the models global statistics (across all input batches)
-    norm_moment: float for moment to use for batchnorm
-    norm_epsilon: float for batchnorm epsilon
+    norm_moment: float for moment to use for batch normalization
+    norm_epsilon: float for batch normalization epsilon
     **kwargs: Keyword Arguments
   """
   def __init__(
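CSPDownSample and CSPConnect share the same normalization arguments because they form the two halves of a Cross Stage Partial stage: one splits off a partial feature path while downsampling, the other merges the two paths back. A heavily hedged wiring sketch; the return values, call signatures, and the csp_stage helper are all assumptions, not confirmed by this diff:

from official.vision.beta.projects.yolo.modeling.layers.nn_blocks import (
    CSPConnect, CSPDownSample, DarkResidual)

def csp_stage(x, filters):
  # Assumed usage: the downsample layer yields the working path and the
  # partial (skip) path; the connect layer fuses them back together.
  x, x_partial = CSPDownSample(filters=filters, use_bn=True)(x)
  x = DarkResidual(filters=filters // 2)(x)
  x = CSPConnect(filters=filters)([x, x_partial])
  return x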