- 06 Apr, 2018 1 commit
  Davis King authored
- 04 Apr, 2018 1 commit
  Davis King authored
- 26 Jan, 2018 1 commit
  Davis King authored
  … dot_prods() that can accumulate in addition to assign.
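The dot_prods() commit above distinguishes accumulating results into the output from overwriting it. A minimal sketch of that accumulate-vs-assign idea in plain C++ (the signature and types here are illustrative, not dlib's actual tensor API):

```cpp
#include <vector>
#include <cstddef>

// Compute the dot product of corresponding rows of m1 and m2.
// When add_to is true the results are accumulated into `out`;
// when false they overwrite it.  (Illustrative sketch only.)
void dot_prods(bool add_to,
               std::vector<float>& out,
               const std::vector<std::vector<float>>& m1,
               const std::vector<std::vector<float>>& m2)
{
    out.resize(m1.size(), 0.0f);
    for (std::size_t r = 0; r < m1.size(); ++r)
    {
        float dp = 0;
        for (std::size_t c = 0; c < m1[r].size(); ++c)
            dp += m1[r][c] * m2[r][c];
        if (add_to)
            out[r] += dp;   // accumulate into existing contents
        else
            out[r] = dp;    // assign, discarding existing contents
    }
}
```

Having both modes avoids a separate temporary buffer and add pass when a caller wants to sum dot products over several calls.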
- 25 Jan, 2018 1 commit
  Davis King authored
- 04 Sep, 2017 1 commit
  Davis King authored
  … stride values. This lets you run the tensor resizing routine on subwindows in a tensor.
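The commit above lets a resize routine work on a subwindow by passing stride values alongside the data pointer. A small sketch of the idea (the `view2d` struct and nearest-neighbor resize are hypothetical, not dlib's actual interface):

```cpp
#include <vector>
#include <cstddef>

// A non-owning 2D view described by a pointer and a row stride.  Passing the
// stride separately lets the same resize routine operate on a subwindow of a
// larger buffer without copying it out first.  (Illustrative sketch only.)
struct view2d
{
    float* data;
    std::size_t rows, cols, stride;  // stride = row pitch of the parent buffer
    float& at(std::size_t r, std::size_t c) { return data[r*stride + c]; }
};

// Nearest-neighbor resize from the src view into the dst view.
void resize_view(view2d dst, view2d src)
{
    for (std::size_t r = 0; r < dst.rows; ++r)
        for (std::size_t c = 0; c < dst.cols; ++c)
            dst.at(r, c) = src.at(r*src.rows/dst.rows, c*src.cols/dst.cols);
}
```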
- 14 Aug, 2017 2 commits
  Davis King authored
  … concat layer's backward() method. It was assigning the gradient to previous layers instead of adding the gradient, as required by the layer interface specification. This change also noticeably speeds up concat layers since only one CUDA kernel launch now happens per concat operation, rather than one kernel launch for each sample in a tensor.
  Davis King authored
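The concat backward() fix above restores the layer-interface convention that backward() must *add* its contribution to the previous layers' gradients, since several layers may feed gradients into the same child. A plain C++ sketch of that convention (names and signatures are illustrative, not dlib's actual classes):

```cpp
#include <vector>
#include <cstddef>

// Backward pass for a concat of two inputs.  The forward pass stacked a then
// b, so the incoming gradient splits at grad_a.size().  Note the += in both
// loops: accumulating, not assigning, preserves gradient contributions that
// other layers have already written into grad_a / grad_b.
void concat_backward(const std::vector<float>& gradient_input,
                     std::vector<float>& grad_a,   // gradient w.r.t. first input
                     std::vector<float>& grad_b)   // gradient w.r.t. second input
{
    for (std::size_t i = 0; i < grad_a.size(); ++i)
        grad_a[i] += gradient_input[i];
    for (std::size_t i = 0; i < grad_b.size(); ++i)
        grad_b[i] += gradient_input[grad_a.size() + i];
}
```

Assigning instead of adding here would silently discard whatever gradient other layers had already deposited in the child, which is exactly the bug described above.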
- 11 Aug, 2017 1 commit
  Davis King authored
- 04 Jul, 2017 1 commit
  Davis King authored
- 21 Apr, 2017 1 commit
  Davis King authored
- 02 Apr, 2017 1 commit
  Davis King authored
  … rather than the entire tensor.
- 18 Nov, 2016 1 commit
  Davis King authored
- 02 Nov, 2016 1 commit
  Davis King authored
  … versions were calling into cuDNN; however, the cuDNN functions for doing this are horrifically slow, well over 100x slower than they should be, which is surprising since these functions are so trivial.
- 27 Oct, 2016 1 commit
  Davis King authored
- 26 Oct, 2016 1 commit
  Davis King authored
  … little.
- 23 Oct, 2016 1 commit
  Davis King authored
- 01 Oct, 2016 1 commit
  Davis King authored
- 28 Aug, 2016 1 commit
  Davis King authored
- 22 Aug, 2016 1 commit
  Davis King authored
- 16 Aug, 2016 1 commit
  Davis King authored
- 16 Jul, 2016 1 commit
  Davis King authored
- 26 May, 2016 1 commit
  Fm authored
- 22 May, 2016 2 commits
  Davis King authored
  … layers. Updated the solvers to support this.
  Davis King authored
- 17 May, 2016 1 commit
  Fm authored
- 14 May, 2016 1 commit
  Davis King authored
  … skip layers and add_prev style layers. In particular, in-place layers now only overwrite the gradient information in their child layer when they are operating in in-place mode; otherwise, they add their gradients to their child layers. It should also be noted that it is safe for in-place layers to overwrite gradients when in in-place mode, since their child layers are inaccessible while in-place layers operate in in-place mode. This prevents any other layer from trying to add to the child layer, thereby avoiding the possibility of layer interference. So the bug this change fixes is that, when not in in-place mode, the child layers were still accessible, but in-place layers were *still* overwriting child gradients.
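The rule enforced by the fix above can be sketched in plain C++ (the `relu_backward` function and its signature are illustrative, not dlib's actual layer interface): overwriting the child's gradient is only safe in in-place mode, when no other layer can reach that child; otherwise the layer must accumulate, or gradients arriving from skip/add_prev branches are lost.

```cpp
#include <vector>
#include <cstddef>

// Backward pass for an element-wise layer (ReLU here).  `out` is the
// forward-pass output, used to gate the gradient.  (Illustrative sketch.)
void relu_backward(bool in_place_mode,
                   const std::vector<float>& out,
                   const std::vector<float>& gradient_input,
                   std::vector<float>& child_gradient)
{
    for (std::size_t i = 0; i < out.size(); ++i)
    {
        const float g = out[i] > 0 ? gradient_input[i] : 0;
        if (in_place_mode)
            child_gradient[i] = g;    // safe: the child is inaccessible to others
        else
            child_gradient[i] += g;   // required: other layers may also contribute
    }
}
```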
- 28 Apr, 2016 2 commits
  Davis King authored
  Davis King authored
- 17 Apr, 2016 1 commit
  Davis King authored
- 01 Apr, 2016 1 commit
  Davis King authored
- 27 Mar, 2016 1 commit
  Davis King authored
- 24 Jan, 2016 1 commit
  Davis King authored
- 23 Jan, 2016 1 commit
  Davis King authored
  … implementation of assign_conv_bias_gradient().
- 03 Jan, 2016 2 commits
  Davis King authored
  Davis King authored
  … the number of threads and blocks rather than using the hard coded numbers I had in there. This makes some functions noticeably faster. Also added a dot() function that is fully asynchronous.
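The launch-configuration arithmetic mentioned above (computing the number of threads and blocks instead of hard coding them) can be sketched as follows. This is an illustrative host-side helper, not dlib's actual code; the real code would query the CUDA runtime, and the cap values here are made up:

```cpp
#include <algorithm>
#include <cstddef>

struct launch_config { std::size_t num_blocks; std::size_t num_threads; };

// Pick enough blocks to cover n elements, capped so the grid does not grow
// without bound; a grid-stride loop in the kernel then handles any remainder.
launch_config pick_launch_config(std::size_t n,
                                 std::size_t max_blocks = 1024,
                                 std::size_t threads_per_block = 512)
{
    // Round up so every element is covered, then clamp to [1, max_blocks].
    const std::size_t blocks_needed =
        (n + threads_per_block - 1) / threads_per_block;
    return { std::min(std::max<std::size_t>(blocks_needed, 1), max_blocks),
             threads_per_block };
}
```

Sizing the grid from the actual workload keeps small launches cheap and large launches fully occupied, which is consistent with the speedup the commit describes.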
- 24 Dec, 2015 2 commits
  Davis King authored
  … tensors with different sizes and it will zero pad them as needed.
  Davis King authored
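The zero-padding behavior described above (adding tensors of different sizes by treating missing elements of the shorter one as zeros) can be sketched in plain C++. The function name and flat-vector representation are illustrative; dlib's version operates on multi-dimensional tensors:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Element-wise a + b where the operands may have different lengths; the
// shorter operand is conceptually zero padded to the longer one's length.
std::vector<float> padded_add(const std::vector<float>& a,
                              const std::vector<float>& b)
{
    std::vector<float> dest(std::max(a.size(), b.size()), 0.0f);
    for (std::size_t i = 0; i < dest.size(); ++i)
        dest[i] = (i < a.size() ? a[i] : 0.0f)
                + (i < b.size() ? b[i] : 0.0f);
    return dest;
}
```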
- 23 Dec, 2015 1 commit
  Davis King authored
  … since that's a little different in cuDNN. I also removed my CUDA code for doing batch normalization and replaced it with cuDNN's new batch normalization methods. Finally, I forgot to add a convolutional option to the bn_ object. Now it has one, so you can set the mode however you like, either BATCH_NORM_FC or BATCH_NORM_CONV.
- 12 Dec, 2015 1 commit
  Davis King authored
- 09 Dec, 2015 1 commit
  Davis King authored
  … CUDA version of add_bias_gradient().
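For context on the add_bias_gradient() commit above: since the same bias vector is added to every sample in a batch during the forward pass, its gradient is the incoming gradient summed over the sample dimension. A CPU sketch of that computation (illustrative only; the actual routine is a CUDA kernel with a different interface):

```cpp
#include <vector>
#include <cstddef>

// gradient_input[s][k] is the gradient for bias element k in sample s.
// The bias gradient is the sum of those contributions over all samples.
std::vector<float> bias_gradient(
    const std::vector<std::vector<float>>& gradient_input)
{
    std::vector<float> grad(
        gradient_input.empty() ? 0 : gradient_input[0].size(), 0.0f);
    for (const auto& sample : gradient_input)
        for (std::size_t k = 0; k < grad.size(); ++k)
            grad[k] += sample[k];
    return grad;
}
```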