OpenDAS / dlib / Commits

Commit e179f410
authored Sep 28, 2015 by Davis King

clarified specs

parent 7f7ddcaa
Showing 3 changed files with 23 additions and 10 deletions
dlib/dnn/layers_abstract.h    +9  -0
dlib/dnn/loss_abstract.h      +10 -7
dlib/dnn/solvers_abstract.h   +4  -3
dlib/dnn/layers_abstract.h
...
@@ -274,6 +274,15 @@ namespace dlib
     class relu_
     {
+        /*!
+            WHAT THIS OBJECT REPRESENTS
+                This is an implementation of the EXAMPLE_LAYER_ interface defined above.
+                In particular, it defines a rectified linear layer. Therefore, it passes
+                its inputs through the function f(x)=max(x,0) where f() is applied pointwise
+                across the input tensor.
+        !*/
     public:

         relu_(
...
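The relu_ spec clarified above comes down to applying f(x)=max(x,0) to every element of the input tensor. As a rough, non-authoritative illustration, here is a minimal self-contained C++ sketch of that pointwise rule; the relu() helper and the use of std::vector<float> in place of dlib::tensor are assumptions of the sketch, not dlib's actual layer code.

#include <algorithm>
#include <cassert>
#include <vector>

// Pointwise rectified linear unit, f(x) = max(x,0), applied to every element
// of a flat buffer. std::vector<float> stands in for dlib::tensor here purely
// to keep the sketch self-contained; this is not dlib's relu_ implementation.
std::vector<float> relu(const std::vector<float>& input)
{
    std::vector<float> output(input.size());
    std::transform(input.begin(), input.end(), output.begin(),
                   [](float x) { return std::max(x, 0.0f); });
    return output;
}

int main()
{
    const std::vector<float> in  = {-2.0f, -0.5f, 0.0f, 1.5f, 3.0f};
    const std::vector<float> out = relu(in);
    assert(out[0] == 0.0f && out[3] == 1.5f); // negatives clamp to 0, positives pass through
    return 0;
}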
dlib/dnn/loss_abstract.h
...
@@ -23,7 +23,7 @@ namespace dlib
             needs. You do this by creating a class that defines an interface matching
             the one described by this EXAMPLE_LOSS_LAYER_ class. Note that there is no
             dlib::EXAMPLE_LOSS_LAYER_ type. It is shown here purely to document the
-            interface that a loss layer object must implement.
+            interface that a loss layer must implement.
             A loss layer can optionally provide a to_label() method that converts the
             output of a network into a user defined type. If to_label() is not
...
@@ -56,7 +56,7 @@ namespace dlib
         requires
             - SUBNET implements the SUBNET interface defined at the top of
               layers_abstract.h.
-            - sub.get_output().num_samples()%sample_expansion_factor == 0
+            - sub.get_output().num_samples()%sample_expansion_factor == 0.
             - All outputs in each layer of sub have the same number of samples. That
               is, for all valid i:
                 - sub.get_output().num_samples() == layer<i>(sub).get_output().num_samples()
...
@@ -67,7 +67,7 @@ namespace dlib
             - Converts the output of the provided network to label_type objects and
               stores the results into the range indicated by iter. In particular, for
               all valid i and j, it will be the case that:
-                *(truth+i/sample_expansion_factor) is the output corresponding to the
+                *(iter+i/sample_expansion_factor) is the output corresponding to the
                 ith sample in layer<j>(sub).get_output().
     !*/
...
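To make the to_label() contract above a bit more concrete, here is a small sketch of an iterator-based conversion in the spirit of the iter argument described in that hunk. The function name, the std::vector<float> input, and the thresholding-at-zero rule are all assumptions of this sketch; the diff does not say how any particular dlib loss layer maps raw outputs to labels.

#include <cstdio>
#include <vector>

// Walk the raw per-sample network outputs and write one label per sample into
// an output iterator, as the to_label() spec above describes. Thresholding at
// zero to produce +1/-1 is an assumption of this sketch.
template <typename output_iterator>
void to_label_sketch(const std::vector<float>& raw_outputs, output_iterator iter)
{
    for (float score : raw_outputs)
        *iter++ = (score > 0 ? +1.0f : -1.0f);
}

int main()
{
    const std::vector<float> outputs = { 0.7f, -1.2f, 0.05f };
    std::vector<float> labels(outputs.size());
    to_label_sketch(outputs, labels.begin());
    std::printf("%+.0f %+.0f %+.0f\n", labels[0], labels[1], labels[2]); // prints +1 -1 +1
    return 0;
}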
@@ -90,7 +90,8 @@ namespace dlib
             - input_tensor.num_samples()%sample_expansion_factor == 0.
             - for all valid i:
                 - layer<i>(sub).get_output().num_samples() == input_tensor.num_samples().
-                - layer<i>(sub).get_gradient_input() has the same dimensions as layer<i>(sub).get_output().
+                - layer<i>(sub).get_gradient_input() has the same dimensions as
+                  layer<i>(sub).get_output().
             - truth == an iterator pointing to the beginning of a range of
               input_tensor.num_samples()/sample_expansion_factor elements. In
               particular, they must be label_type elements.
...
@@ -98,7 +99,7 @@ namespace dlib
                 - *(truth+i/sample_expansion_factor) is the label of the ith sample in
                   layer<j>(sub).get_output().
         ensures
-            - This function computes the loss function that describes how well the output
+            - This function computes a loss function that describes how well the output
               of sub matches the expected labels given by truth. Let's write the loss
               function as L(input_tensor, truth, sub).
             - Then compute_loss() computes the gradient of L() with respect to the
...
@@ -125,8 +126,10 @@ namespace dlib
     {
         /*!
             WHAT THIS OBJECT REPRESENTS
-                You use this loss to perform binary classification with the hinge loss.
-                Therefore, the possible outputs/labels when using this loss are +1 and -1.
+                This object implements the loss layer interface defined above by
+                EXAMPLE_LOSS_LAYER_. In particular, you use this loss to perform binary
+                classification with the hinge loss. Therefore, the possible outputs/labels
+                when using this loss are +1 and -1.
         !*/
     public:
...
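Since loss_binary_hinge_ is described as binary classification with the hinge loss over labels +1 and -1, a worked sketch of that loss and its per-sample subgradient may help. The hinge_loss function, the std::vector arguments, and the averaging over samples are assumptions here, not a description of how compute_loss() is implemented in dlib.

#include <cstddef>
#include <cstdio>
#include <vector>

// Hinge loss for binary labels in {+1,-1}: L(label, score) = max(0, 1 - label*score).
// Returns the mean loss over the samples and fills 'grad' with the subgradient of
// that mean with respect to each score. Averaging is an assumption of this sketch.
double hinge_loss(const std::vector<double>& scores,
                  const std::vector<double>& labels, // each entry is +1 or -1
                  std::vector<double>& grad)
{
    grad.assign(scores.size(), 0.0);
    double total = 0.0;
    for (std::size_t i = 0; i < scores.size(); ++i)
    {
        const double margin = labels[i] * scores[i];
        if (margin < 1.0) // loss is active only inside the margin
        {
            total  += 1.0 - margin;
            grad[i] = -labels[i] / scores.size();
        }
    }
    return total / scores.size();
}

int main()
{
    const std::vector<double> scores = { 2.0, -0.5, 0.2 };
    const std::vector<double> labels = { +1 , +1  , -1  };
    std::vector<double> grad;
    std::printf("mean hinge loss = %f\n", hinge_loss(scores, labels, grad)); // prints 0.9
    return 0;
}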
dlib/dnn/solvers_abstract.h
...
@@ -25,7 +25,7 @@ namespace dlib
             apply it to its update rule.
             Note that there is no dlib::EXAMPLE_SOLVER type. It is shown here purely
-            to document the interface that a solver object must implement.
+            to document the interface a solver object must implement.
     !*/
public:
...
@@ -40,9 +40,10 @@ namespace dlib
         );
         /*!
             requires
-                - LAYER_DETAILS implements the EXAMPLE_LAYER_ interface defined in layers_abstract.h.
+                - LAYER_DETAILS implements the EXAMPLE_LAYER_ interface defined in
+                  layers_abstract.h.
                 - l.get_layer_params().size() != 0
-                - l.get_layer_params() and params_grad have the same dimensions.
+                - have_same_dimensions(l.get_layer_params(), params_grad) == true.
                 - When this function is invoked on a particular solver instance, it is
                   always supplied with the same LAYER_DETAILS object.
             ensures
...
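The solver contract above requires the layer's parameter blob and params_grad to have matching dimensions before an update can be formed. Below is a bare gradient-descent sketch of that idea; the sgd_step name, the std::vector<float> types standing in for dlib::tensor, and the in-place params -= learning_rate * params_grad rule are assumptions of this sketch rather than dlib's actual solver (whose ensures clause is truncated in this diff).

#include <cassert>
#include <cstddef>
#include <vector>

// A bare-bones gradient descent step in the spirit of the solver contract above.
// The in-place update rule is an assumption of this sketch, not dlib's solver.
void sgd_step(std::vector<float>& params,
              const std::vector<float>& params_grad,
              float learning_rate)
{
    // Mirrors the precondition have_same_dimensions(l.get_layer_params(), params_grad) == true.
    assert(params.size() == params_grad.size());
    for (std::size_t i = 0; i < params.size(); ++i)
        params[i] -= learning_rate * params_grad[i];
}

int main()
{
    std::vector<float> params      = { 1.0f, -2.0f, 0.5f };
    std::vector<float> params_grad = { 0.2f,  0.4f, -1.0f };
    sgd_step(params, params_grad, 0.1f); // params becomes approximately { 0.98, -2.04, 0.6 }
    return 0;
}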