- 18 Jan, 2020 2 commits
  - Davis King authored
  - Adrià Arrufat authored
- 17 Jan, 2020 1 commit
  - Juha Reunanen authored
- 15 Jan, 2020 6 commits
  - Manjunath Bhat authored
    * Adding Mish activation function
    * Bug fixed
    * Added test for Mish
    * Removed unwanted comments
    * Simplified calculation and removed comments
    * Kernel added and gradient computation simplified
    * Gradient simplified
    * Corrected gradient calculations
    * Compute output when input greater than 8
    * Minor correction
    * Remove unnecessary pgrad for Mish
    * Removed CUDNN calls
    * Add standalone CUDA implementation of the Mish activation function
    * Fix in-place gradient in the CUDA version; refactor a little
    * Swap delta and omega
    * Need to have src (=x) (and not dest) available for Mish
    * Add test case that makes sure that cuda::mish and cpu::mish return the same results
    * Minor tweaking to keep the previous behaviour

    Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
  - Davis King authored
  - Davis King authored
  - thebhatman authored
  - Davis King authored
  - thebhatman authored
- 13 Jan, 2020 6 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
    It's a cheap check, and easy for someone to forget about otherwise.
- 12 Jan, 2020 1 commit
  - Davis King authored
- 10 Jan, 2020 2 commits
  - Davis King authored
  - Adrià Arrufat authored
- 08 Jan, 2020 2 commits
  - jeffeDurand authored
  - Juha Reunanen authored
- 05 Jan, 2020 3 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored
- 28 Dec, 2019 2 commits
  - Davis King authored
  - Davis King authored
- 22 Dec, 2019 1 commit
  - Davis King authored
- 21 Dec, 2019 1 commit
  - Davis King authored
- 14 Dec, 2019 3 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored
- 05 Dec, 2019 1 commit
  - Davis King authored
- 29 Nov, 2019 1 commit
  - Davis King authored
- 28 Nov, 2019 2 commits
  - Davis King authored
  - Davis King authored
    find_max_global() now times the user-provided objective function. If calling it is faster than updating the LIPO upper-bounding Monte Carlo model, we skip or limit the Monte Carlo work and just run the objective function more often. Previously, find_max_global() simply assumed the objective function was very expensive to invoke. TL;DR: this change makes find_max_global() run a lot faster on objective functions that are themselves fast to execute, since it skips the expensive Monte Carlo modeling and just calls the objective function more instead.
- 15 Nov, 2019 2 commits
  - Davis E. King authored
  - Juha Reunanen authored
    * Add instance segmentation example - first version of training code
    * Add MMOD options; get rid of the cache approach, and instead load all MMOD rects upfront
    * Improve console output
    * Set filter count
    * Minor tweaking
    * Inference - first version, at least compiles!
    * Ignore overlapped boxes
    * Ignore even small instances
    * Set overlaps_ignore
    * Add TODO remarks
    * Revert "Set overlaps_ignore" (this reverts commit 65adeff1f89af62b10c691e7aa86c04fc358d03e)
    * Set result size
    * Set label image size
    * Take ignore-color into account
    * Fix the cropping rect's aspect ratio; also slightly expand the rect
    * Draw the largest findings last
    * Improve masking of the current instance
    * Add some perturbation to the inputs
    * Simplify ground-truth reading; fix random cropping
    * Read even class labels
    * Tweak default minibatch size
    * Learn only one class
    * Really train only instances of the selected class
    * Remove outdated TODO remark
    * Automatically skip images with no detections
    * Print to console what was found
    * Fix class index problem
    * Fix indentation
    * Allow to choose multiple classes
    * Draw rect in the color of the corresponding class
    * Write detector window classes to ostream; also group detection windows by class (when ostreaming)
    * Train a separate instance segmentation network for each class label
    * Use separate synchronization file for each seg net of each class
    * Allow more overlap
    * Fix sorting criterion
    * Fix interpolating the predicted mask
    * Improve bilinear interpolation: if output type is an integer, round instead of truncating
    * Add helpful comments
    * Ignore large aspect ratios; refactor the code; tweak some network parameters
    * Simplify the segmentation network structure; make the object detection network more complex in turn
    * Problem: CUDA errors not reported properly to console. Solution: stop and join data loader threads even in case of exceptions
    * Minor parameters tweaking
    * Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value
    * Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing
    * Improve behaviour when loss actually increased after disk sync
    * Revert some of the earlier change
    * Disregard dumped loss values only when deciding if learning rate should be shrunk, but *not* when deciding if loss has been going up since last disk sync
    * Revert "Revert some of the earlier change" (this reverts commit 6c852124efe6473a5c962de0091709129d6fcde3)
    * Keep enough previous loss values, until the disk sync
    * Fix maintaining the dumped (now "effectively disregarded") loss values count
    * Detect cats instead of aeroplanes
    * Add helpful logging
    * Clarify the intention and the code
    * Review fixes
    * Add operator== for the other pixel types as well; remove the inline
    * If available, use constexpr if
    * Revert "If available, use constexpr if" (this reverts commit 503d4dd3355ff8ad613116e3ffcc0fa664674f69)
    * Simplify code as per review comments
    * Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh
    * Clarify console output
    * Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh" (this reverts commit 9191ebc7762d17d81cdfc334a80ca9a667365740)
    * To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)
    * Even empty out some of the previous test loss values
    * Minor review fixes
    * Can't use C++14 features here
    * Do not use the struct name as a variable name
- 14 Nov, 2019 1 commit
  - Davis King authored
- 01 Nov, 2019 3 commits
  - Davis King authored
    Fix find_max() going into an infinite loop in some cases when a non-differentiable function is given.
  - Davis King authored
  - Davis King authored