1. 31 Jul, 2019 3 commits
  2. 26 Jul, 2019 5 commits
  3. 24 Jul, 2019 3 commits
  4. 23 Jul, 2019 5 commits
  5. 22 Jul, 2019 1 commit
  6. 19 Jul, 2019 3 commits
  7. 16 Jul, 2019 1 commit
  8. 15 Jul, 2019 2 commits
  9. 12 Jul, 2019 3 commits
  10. 11 Jul, 2019 1 commit
  11. 10 Jul, 2019 1 commit
      Add checks to roi_heads in detection module (#1091) · 6693b2c6
      ekka authored
      * add float32 to keypoint_rcnn docs
      
      * add float32 to faster_rcnn docs
      
      * add float32 to mask_rcnn
      
      * Update faster_rcnn.py
      
      * Update keypoint_rcnn.py
      
      * Update mask_rcnn.py
      
      * Update faster_rcnn.py
      
      * make keypoints float
      
      * make masks uint8
      
      * Update keypoint_rcnn.py
      
      * make labels Int64
      
      * make labels Int64
      
      * make labels Int64
      
      * Add checks for boxes, labels, masks, keypoints
      
      * update mask dim
      
      * remove dtype
      
      * check only if targets is not None
      
      * account for targets being a list
      
      * update target to be list of dict
      
      * Update faster_rcnn.py
      
      * Update keypoint_rcnn.py
      
      * allow boxes to be of float16 type as well
      
      * remove checks on mask
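      A minimal sketch of the kind of target validation these commits describe is shown below. It assumes targets arrive as a list of dicts (one per image) and uses a hypothetical validate_targets helper; names, messages, and exact rules do not mirror torchvision's roi_heads code.

          import torch

          def validate_targets(targets):
              # Hypothetical helper illustrating the checks listed in the commits
              # above; it is a sketch, not torchvision's actual implementation.
              if targets is None:
                  return  # checks only run when targets are provided (training mode)
              # targets is expected to be a list with one dict per image
              for t in targets:
                  boxes = t["boxes"]
                  # boxes may be float32 or float16, per the final dtype commit
                  if boxes.dtype not in (torch.float32, torch.float16):
                      raise TypeError("boxes should be float32 or float16, got {}".format(boxes.dtype))
                  if t["labels"].dtype != torch.int64:
                      raise TypeError("labels should be int64, got {}".format(t["labels"].dtype))
                  if "keypoints" in t and not t["keypoints"].is_floating_point():
                      raise TypeError("keypoints should be a floating point tensor")
                  # per the last commit, no dtype check is applied to masks

      Running such checks before the forward pass surfaces dtype mistakes early instead of failing deep inside the loss computation.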
  12. 09 Jul, 2019 1 commit
  13. 08 Jul, 2019 1 commit
  14. 06 Jul, 2019 1 commit
      Fix bug in RandomErasing (#1095) · 34833427
      Zhun Zhong authored
      * Fix bug in RandomErasing
      
      1. Avoid an infinite loop when sampling the erase parameters.
      2. Rename 'img_b' to 'img_c', since the variable holds the channel count.
      3. Replace v = torch.rand([img_c, h, w]) with v = torch.empty([img_c, h, w], dtype=torch.float32).normal_(); normally distributed values achieve better performance. (A sketch of these changes follows this entry.)
      
      * add test
      
      * Update test_transforms.py
      
      * Update transforms.py
      
      * Update test_transforms.py
      
      * Update transforms.py
      
      * Update functional.py
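      A minimal sketch of the parameter sampling after these changes is shown below. It assumes a tensor image of shape (C, H, W) and uses a hypothetical get_erase_params function; argument names and defaults do not match torchvision's RandomErasing.

          import math
          import random
          import torch

          def get_erase_params(img, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=None, max_attempts=10):
              # Sketch of the sampling described above, not torchvision's actual code.
              img_c, img_h, img_w = img.shape  # img_c holds the channel count (was mislabeled img_b)
              area = img_h * img_w
              for _ in range(max_attempts):  # bounded attempts instead of a forever loop
                  erase_area = random.uniform(*scale) * area
                  aspect_ratio = random.uniform(*ratio)
                  h = int(round(math.sqrt(erase_area * aspect_ratio)))
                  w = int(round(math.sqrt(erase_area / aspect_ratio)))
                  if 0 < h < img_h and 0 < w < img_w:
                      if value is None:
                          # normally distributed fill instead of torch.rand, as the commit suggests
                          v = torch.empty([img_c, h, w], dtype=torch.float32).normal_()
                      else:
                          v = torch.full([img_c, h, w], value, dtype=torch.float32)
                      i = random.randint(0, img_h - h)
                      j = random.randint(0, img_w - w)
                      return i, j, h, w, v
              # give up after max_attempts and erase nothing (return the image itself as the fill)
              return 0, 0, img_h, img_w, img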
  15. 05 Jul, 2019 2 commits
  16. 04 Jul, 2019 5 commits
  17. 03 Jul, 2019 1 commit
  18. 02 Jul, 2019 1 commit
      Fixed width multiplier (#1005) · 8350645b
      yaysummeriscoming authored
      * Fixed width multiplier
      
      Layer channels are now rounded to a multiple of 8, as in the official TensorFlow implementation. I found this fix while looking through https://github.com/d-li14/mobilenetv2.pytorch
      
      * Channel multiple is now a user-configurable option
      
      The official TensorFlow Slim MobileNetV2 implementation rounds the number of channels in each layer to a multiple of 8. This is now user configurable; passing 1 turns off the rounding (a sketch of the rounding helper follows this entry).
      
      * Fixed whitespace error
      
      Fixed error: ./torchvision/models/mobilenet.py:152:1: W293 blank line contains whitespace
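      A sketch of the channel-rounding helper these commits describe is shown below. It follows the widely used _make_divisible recipe from the TensorFlow Slim code; torchvision's exact implementation may differ.

          def _make_divisible(v, divisor=8, min_value=None):
              # Round a channel count to the nearest multiple of `divisor`.
              if min_value is None:
                  min_value = divisor
              new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
              # ensure rounding down never removes more than 10% of the channels
              if new_v < 0.9 * v:
                  new_v += divisor
              return new_v

      With divisor=1 every integer is already a multiple, so rounding is effectively disabled, which matches the user-configurable option the commit describes.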