- 17 Jan, 2016 1 commit
  - Davis King authored

- 04 Jan, 2016 1 commit
  - Davis King authored

- 03 Jan, 2016 1 commit
  - Davis King authored

- 24 Dec, 2015 3 commits
  - Davis King authored
    tensors with different sizes and it will zero pad them as needed. (See the sketch below.)
  - Davis King authored
  - Davis King authored

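The first commit in the 24 Dec group mentions an add operation that accepts tensors with different sizes and zero pads them as needed. The snippet below is only a rough illustration of that behavior on flat buffers, with hypothetical names (tensor4, padded_add), not dlib's actual tensor class or add routine: the source is treated as if it were zero padded out to the destination's size, so only the overlapping region contributes.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical 4-D tensor: flat row-major storage with NxKxRxC dimensions.
struct tensor4
{
    long n, k, nr, nc;
    std::vector<float> data;

    tensor4(long n_, long k_, long nr_, long nc_)
        : n(n_), k(k_), nr(nr_), nc(nc_), data(n_*k_*nr_*nc_, 0) {}

    float& at(long i, long j, long r, long c)
    { return data[((i*k + j)*nr + r)*nc + c]; }
    float  at(long i, long j, long r, long c) const
    { return data[((i*k + j)*nr + r)*nc + c]; }
};

// Adds src into dest.  Where the two tensors have different dimensions, src is
// treated as if it were zero padded out to dest's size, so only the overlapping
// region actually contributes anything.
void padded_add(tensor4& dest, const tensor4& src)
{
    const long n  = std::min(dest.n,  src.n);
    const long k  = std::min(dest.k,  src.k);
    const long nr = std::min(dest.nr, src.nr);
    const long nc = std::min(dest.nc, src.nc);
    for (long i = 0; i < n; ++i)
        for (long j = 0; j < k; ++j)
            for (long r = 0; r < nr; ++r)
                for (long c = 0; c < nc; ++c)
                    dest.at(i,j,r,c) += src.at(i,j,r,c);
}

int main()
{
    tensor4 a(1, 2, 3, 3);            // 1x2x3x3 destination, all zeros
    tensor4 b(1, 1, 2, 2);            // smaller source
    std::fill(b.data.begin(), b.data.end(), 1.0f);

    padded_add(a, b);                 // only the top-left 2x2 of channel 0 changes
    std::cout << a.at(0,0,0,0) << " " << a.at(0,1,2,2) << "\n";  // prints "1 0"
}
```
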
- 23 Dec, 2015 1 commit
  - Davis King authored
    since that's a little different in cuDNN. I also removed my CUDA code for doing batch normalization and replaced it with cuDNN's new batch normalization methods. Finally, I forgot to add a convolutional option to the bn_ object. Now it has one so you can set the mode however you like, either BATCH_NORM_FC or BATCH_NORM_CONV. (See the sketch below.)

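The commit above gives the bn_ object a mode, BATCH_NORM_FC or BATCH_NORM_CONV. The practical difference between the two flavors of batch normalization is which elements share statistics: the fully connected style keeps one mean/variance per individual activation, averaged over the batch, while the convolutional style keeps one per channel, averaged over the batch and every spatial position. The snippet below only computes the two kinds of means on a toy NxKxRxC array to show that difference; it is not dlib's implementation and all names are made up.

```cpp
#include <iostream>
#include <vector>

int main()
{
    // A tiny NxKxRxC batch of activations, stored flat in row-major order.
    const long N = 2, K = 3, R = 2, C = 2;
    std::vector<float> x(N*K*R*C);
    for (size_t i = 0; i < x.size(); ++i)
        x[i] = static_cast<float>(i);   // arbitrary test values
    auto at = [&](long n, long k, long r, long c)
    { return x[((n*K + k)*R + r)*C + c]; };

    // "FC" style statistics: one mean per individual activation (k,r,c),
    // averaged over the batch dimension only.
    std::vector<float> fc_mean(K*R*C, 0);
    for (long n = 0; n < N; ++n)
        for (long k = 0; k < K; ++k)
            for (long r = 0; r < R; ++r)
                for (long c = 0; c < C; ++c)
                    fc_mean[(k*R + r)*C + c] += at(n,k,r,c) / N;

    // "CONV" style statistics: one mean per channel k, averaged over the batch
    // and every spatial position, so all pixels of a feature map share the same
    // normalization.
    std::vector<float> conv_mean(K, 0);
    for (long n = 0; n < N; ++n)
        for (long k = 0; k < K; ++k)
            for (long r = 0; r < R; ++r)
                for (long c = 0; c < C; ++c)
                    conv_mean[k] += at(n,k,r,c) / (N*R*C);

    std::cout << "fc means:   " << fc_mean.size()   << " values\n";  // 12
    std::cout << "conv means: " << conv_mean.size() << " values\n";  // 3
}
```
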
- 12 Dec, 2015 2 commits
  - Davis King authored
  - Davis King authored
    cudnnAddTensor() function and updated the specs and asserts accordingly. (See the sketch below.)

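The second 12 Dec commit refers to cuDNN's cudnnAddTensor() routine. As I understand it, that function computes C = alpha*A + beta*C and broadcasts A across any dimension where A's size is 1, the typical case being a per-channel bias added to an NxKxRxC activation tensor. The sketch below reproduces only that broadcast arithmetic on the CPU with hypothetical names; the real handles, tensor descriptors, and device memory are omitted.

```cpp
#include <iostream>
#include <vector>

// CPU illustration of the bias-broadcast case: add a per-channel bias
// (conceptually a 1xKx1x1 tensor) into every element of an NxKxRxC tensor,
// i.e. C = alpha*bias + beta*C with bias broadcast over n, r, and c.
void add_channel_bias(
    std::vector<float>& data, long N, long K, long R, long C,
    const std::vector<float>& bias, float alpha = 1, float beta = 1)
{
    for (long n = 0; n < N; ++n)
        for (long k = 0; k < K; ++k)
            for (long r = 0; r < R; ++r)
                for (long c = 0; c < C; ++c)
                {
                    float& v = data[((n*K + k)*R + r)*C + c];
                    v = alpha*bias[k] + beta*v;
                }
}

int main()
{
    const long N = 1, K = 2, R = 2, C = 2;
    std::vector<float> data(N*K*R*C, 1.0f);
    std::vector<float> bias = {10.0f, 20.0f};    // one value per channel

    add_channel_bias(data, N, K, R, C, bias);
    std::cout << data[0] << " " << data[4] << "\n";  // prints "11 21"
}
```
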
- 09 Dec, 2015 1 commit
  - Davis King authored

- 08 Dec, 2015 4 commits
  - Davis King authored
    batch_normalize_conv.
  - Davis King authored
  - Davis King authored
  - Davis King authored
    of add to them so that it's consistent with how the layer interface expects this to be done. (See the sketch below.)

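The last 08 Dec fragment appears to be about making certain routines add to their output tensors rather than assign to them, because the layer interface expects gradients to be accumulated. If that reading is right, the point is the one illustrated below (hypothetical names, not dlib's code): when several consumers feed gradients into the same tensor, accumulation keeps every contribution, while assignment silently discards all but the last one.

```cpp
#include <iostream>
#include <vector>

// Assigning overwrites whatever gradient was already there.
void backward_assign(std::vector<float>& grad_out, const std::vector<float>& contrib)
{
    for (size_t i = 0; i < grad_out.size(); ++i)
        grad_out[i] = contrib[i];
}

// Accumulating adds the new contribution to the existing gradient, which is what
// you want when more than one consumer feeds gradients into the same tensor.
void backward_accumulate(std::vector<float>& grad_out, const std::vector<float>& contrib)
{
    for (size_t i = 0; i < grad_out.size(); ++i)
        grad_out[i] += contrib[i];
}

int main()
{
    std::vector<float> grad(3, 0.0f);
    std::vector<float> from_layer_a = {1, 1, 1};
    std::vector<float> from_layer_b = {2, 2, 2};

    backward_accumulate(grad, from_layer_a);
    backward_accumulate(grad, from_layer_b);
    std::cout << grad[0] << "\n";   // 3: both contributions survive

    backward_assign(grad, from_layer_a);
    backward_assign(grad, from_layer_b);
    std::cout << grad[0] << "\n";   // 2: the first contribution was lost
}
```
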
- 06 Dec, 2015 1 commit
  - Davis King authored
    for this object.

- 21 Nov, 2015 1 commit
  - Davis King authored

- 18 Nov, 2015 1 commit
  - Davis King authored
    runs on the GPU, and made affine_transform() take only tensors. (See the sketch below.)

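The 18 Nov commit mentions that affine_transform() now takes only tensors. A routine with that name conventionally applies an elementwise dest[i] = A*src[i] + B; the snippet below shows just that arithmetic on a flat buffer and is a sketch, not dlib's actual signature.

```cpp
#include <iostream>
#include <vector>

// Elementwise affine transform: dest[i] = A*src[i] + B.
void affine_transform(
    std::vector<float>& dest, const std::vector<float>& src, float A, float B)
{
    dest.resize(src.size());
    for (size_t i = 0; i < src.size(); ++i)
        dest[i] = A*src[i] + B;
}

int main()
{
    std::vector<float> src = {0, 1, 2, 3};
    std::vector<float> dest;
    affine_transform(dest, src, 2.0f, 1.0f);     // 2*x + 1
    for (float v : dest) std::cout << v << " ";  // prints "1 3 5 7"
    std::cout << "\n";
}
```
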
- 16 Nov, 2015 3 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored
    use either CPU or GPU. Fixed a bug in gemm(). (See the sketch below.)

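The last 16 Nov commit mentions a bug fix in gemm(). A gemm routine conventionally computes C = alpha*A*B + beta*C; the reference version below spells out that contract on plain row-major matrices and is only an illustration, not the code that was fixed.

```cpp
#include <iostream>
#include <vector>

// Reference gemm: C = alpha*A*B + beta*C, with A (MxK), B (KxN), and C (MxN)
// all stored row-major in flat vectors.
void gemm(
    float beta, std::vector<float>& Cm,
    float alpha, const std::vector<float>& A, const std::vector<float>& B,
    long M, long K, long N)
{
    for (long i = 0; i < M; ++i)
        for (long j = 0; j < N; ++j)
        {
            float sum = 0;
            for (long k = 0; k < K; ++k)
                sum += A[i*K + k] * B[k*N + j];
            Cm[i*N + j] = alpha*sum + beta*Cm[i*N + j];
        }
}

int main()
{
    // 2x2 identity times a 2x2 matrix should give the matrix back.
    std::vector<float> A = {1, 0, 0, 1};
    std::vector<float> B = {1, 2, 3, 4};
    std::vector<float> C(4, 0.0f);
    gemm(0.0f, C, 1.0f, A, B, 2, 2, 2);
    std::cout << C[0] << " " << C[1] << " "
              << C[2] << " " << C[3] << "\n";   // prints "1 2 3 4"
}
```
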
- 13 Nov, 2015 1 commit
  - Davis King authored
    form.

- 11 Nov, 2015 1 commit
  - Davis King authored

- 09 Nov, 2015 2 commits
  - Davis King authored
  - Davis King authored
    code based on how dlib was built. All the layer implementations will interact with these functions. (See the sketch below.)

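The second 09 Nov commit describes a set of functions whose implementation is selected by how dlib was built, with the layer code always calling one interface. dlib does gate its GPU code behind the DLIB_USE_CUDA macro; the fragment below sketches that compile-time dispatch pattern with made-up function names (the cuda branch is just a stub declaration here), rather than dlib's real API.

```cpp
#include <iostream>
#include <vector>

// Hypothetical device-specific back ends.  A real build would compile the CUDA
// version only when GPU support is enabled.
namespace cpu  { inline void scale(std::vector<float>& v, float a)
                 { for (auto& x : v) x *= a; } }
#ifdef DLIB_USE_CUDA
namespace cuda { void scale(std::vector<float>& v, float a); /* CUDA kernel launch */ }
#endif

// The single entry point the layer implementations would call.  Which back end
// runs is decided at compile time by the build configuration.
namespace tt
{
    inline void scale(std::vector<float>& v, float a)
    {
#ifdef DLIB_USE_CUDA
        cuda::scale(v, a);
#else
        cpu::scale(v, a);
#endif
    }
}

int main()
{
    std::vector<float> v = {1, 2, 3};
    tt::scale(v, 2.0f);                       // same call either way
    std::cout << v[0] << " " << v[2] << "\n"; // prints "2 6"
}
```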