Commit 0ce6ed5b in OpenDAS/dlib, authored Apr 04, 2018 by Davis King

    Cleanup of cuda code.

parent 8073f4b1

Showing 4 changed files with 26 additions and 4 deletions:

- dlib/dnn/cuda_data_ptr.h (+5 -1)
- dlib/dnn/cudnn_dlibapi.cpp (+2 -2)
- dlib/dnn/cudnn_dlibapi.h (+18 -0)
- dlib/dnn/layers_abstract.h (+1 -1)
dlib/dnn/cuda_data_ptr.h

@@ -160,8 +160,12 @@ namespace dlib
         cuda_data_void_ptr get(size_t size)
         /*!
             ensures
-                - This object will return the buffer of requested size of larger
+                - This object will return the buffer of requested size or larger.
                 - buffer.size() >= size
+                - Client code should not hold the returned cuda_data_void_ptr for long
+                  durations, but instead should call get() whenever the buffer is
+                  needed.  Doing so ensures that multiple buffers are not kept around
+                  in the event of a resize.
         !*/
         {
             if (buffer.size() < size)
dlib/dnn/cudnn_dlibapi.cpp

@@ -160,12 +160,12 @@ namespace dlib
            std::vector<std::weak_ptr<resizable_cuda_buffer>> buffers;
        };

-        std::shared_ptr<resizable_cuda_buffer> device_global_buffer()
+        static std::shared_ptr<resizable_cuda_buffer> device_global_buffer()
        {
            thread_local cudnn_device_buffer buffer;
            return buffer.get_buffer();
        }

    // ------------------------------------------------------------------------------------

        class cudnn_activation_descriptor
dlib/dnn/cudnn_dlibapi.h

@@ -17,6 +17,24 @@ namespace dlib
    namespace cuda
    {

+    // ----------------------------------------------------------------------------------------
+
+        std::shared_ptr<resizable_cuda_buffer> device_global_buffer(
+        );
+        /*!
+            ensures
+                - Returns a pointer to a globally shared CUDA memory buffer on the
+                  currently selected CUDA device.  The buffer is also thread local.  So
+                  each host thread will get its own buffer.  You can use this global buffer
+                  as scratch space for CUDA computations that all take place on the default
+                  stream.  Using it in this way ensures that there aren't any race conditions
+                  involving the use of the buffer.
+                - The global buffer is deallocated once all references to it are
+                  destructed.  It will be reallocated as required.  So if you want to avoid
+                  these reallocations then hold a copy of the shared_ptr returned by this
+                  function.
+        !*/
+
    // -----------------------------------------------------------------------------------

        class tensor_descriptor
dlib/dnn/layers_abstract.h

@@ -366,7 +366,7 @@ namespace dlib
            follows:
                ensures
-                    - calling clean() Causes this object to forget about everything except its
+                    - calling clean() causes this object to forget about everything except its
                      parameters.  This is useful if your layer caches information between
                      forward and backward passes and you want to clean out that cache
                      information before saving the network to disk.