1. 31 Oct, 2019 1 commit
  2. 28 Oct, 2019 1 commit
  3. 26 Oct, 2019 1 commit
    • Quantizable resnet and mobilenet models (#1471) · b4cb5765
      raghuramank100 authored
      * add quantized models
      
      * Modify mobilenet.py documentation and clean up comments
      
      * Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
      
      * Restore relu settings to default in resnet.py
      
      * Fix missing return in forward
      
      * Fix missing return in forwards
      
      * Change pretrained -> pretrained_float_models
      Replace InvertedResidual with block
      
      * Update tests to follow similar structure to test_models.py, allowing for modular testing
      
      * Replace forward method with simple function assignment
      
      * Fix error in arguments for resnet18
      
      * pretrained_float_model argument missing for mobilenet
      
      * Add reference script for quantization-aware training and post-training quantization
      
      * set pretrained_float_model as False and explicitly provide float model
      
      * Address review comments:
      1. Replace forward with _forward
      2. Use pretrained models in reference train/eval script
      3. Modify test to skip if fbgemm is not supported
      
      * Fix lint errors.
      Use _forward for common code between float and quantized models
      Clean up linting for reference train scripts
      Test over all quantizable models
      
      * Update default values for args in quantization/train.py
      
      * Update models to conform to new API with quantize argument
      Remove apex in training script, add post training quant as an option
      Add support for separate calibration data set.
      
      * Fix minor errors in train_quantization.py
      
      * Remove duplicate file
      
      * Bugfix
      
      * Minor improvements on the models
      
      * Expose print_freq to evaluate
      
      * Minor improvements on train_quantization.py
      
      * Ensure that quantized models are created and run on the specified backends
      Fix errors in test only mode
      
      * Add model urls
      
      * Fix errors in quantized model tests.
      Speedup creation of random quantized model by removing histogram observers
      
      * Move setting qengine prior to convert.
      
      * Fix lint error
      
      * Add readme.md
      
      * Readme.md
      
      * Fix lint
      b4cb5765
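      A minimal usage sketch for the quantizable models added in this PR. The torchvision.models.quantization entry point, the quantize flag, and the fbgemm backend follow the commit messages above; the availability of pretrained int8 weights is an assumption.
      ```python
      import torch
      import torchvision

      # Select the x86 quantized backend referenced in the commits above.
      torch.backends.quantized.engine = "fbgemm"

      # quantize=True returns a model whose weights are already converted to int8;
      # pretrained quantized weights are assumed to be downloadable for this architecture.
      model = torchvision.models.quantization.resnet18(pretrained=True, quantize=True)
      model.eval()

      x = torch.rand(1, 3, 224, 224)
      with torch.no_grad():
          out = model(x)
      print(out.shape)  # torch.Size([1, 1000])
      ```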
  4. 18 Oct, 2019 1 commit
  5. 15 Oct, 2019 1 commit
    • Support Exporting RPN to ONNX (#1329) · 1d6145d1
      Lara Haidar authored
      * Support Exporting RPN to ONNX
      
      * address PR comments
      
      * fix cat
      
      * add flatten
      
      * replace cat by stack
      
      * update test to run only on rpn module
      
      * use tolerate_small_mismatch
      1d6145d1
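      For reference, a hedged sketch of exporting a torchvision detection model (whose RPN this PR touches) to ONNX; the opset version and dummy input shape are illustrative assumptions, not part of the commit.
      ```python
      import torch
      import torchvision

      # Faster R-CNN contains the RPN, so exporting it exercises the new support.
      model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False)
      model.eval()

      # Detection models take a list of 3xHxW image tensors.
      images = [torch.rand(3, 320, 320)]
      torch.onnx.export(model, (images,), "fasterrcnn.onnx", opset_version=11)
      ```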
  6. 02 Oct, 2019 1 commit
  7. 27 Sep, 2019 1 commit
    • Make GoogLeNet & InceptionNet scriptable (#1349) · b9cbc227
      eellison authored
      * make googlenet scriptable
      
      * Remove typing import in favor of torch.jit.annotations
      
      * add inceptionnet
      
      * flake fixes
      
      * fix assert true
      
      * add import division for torchscript
      
      * fix script compilation
      
      * fix flake, py2 division error
      
      * fix py2 division error
      b9cbc227
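      As an illustration of what scriptability buys here, a small sketch assuming a recent torch.jit; the save/load round trip and input size are illustrative.
      ```python
      import torch
      import torchvision

      # Scripting compiles the model's Python control flow to TorchScript so it can be
      # serialized and run without the Python source.
      model = torchvision.models.googlenet(pretrained=False, aux_logits=False)
      model.eval()
      scripted = torch.jit.script(model)
      scripted.save("googlenet_scripted.pt")

      reloaded = torch.jit.load("googlenet_scripted.pt")
      out = reloaded(torch.rand(1, 3, 224, 224))
      # Scripted GoogLeNet may return a (logits, aux2, aux1) tuple; unwrap defensively.
      logits = out[0] if isinstance(out, tuple) else out
      print(logits.shape)  # torch.Size([1, 1000])
      ```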
  8. 23 Sep, 2019 1 commit
    • Bugfix for MNASNet (#1224) · 367e8514
      Dmitry Belenko authored
      * Add initial mnasnet impl
      
      * Remove all type hints, comply with PyTorch overall style
      
      * Expose models
      
      * Remove avgpool from features() and add separately
      
      * Fix python3-only stuff, replace subclasses with functions
      
      * fix __all__
      
      * Fix typo
      
      * Remove conditional dropout
      
      * Make dropout functional
      
      * Addressing @fmassa's feedback, round 1
      
      * Replaced adaptive avgpool with mean on H and W to prevent collapsing the batch dimension
      
      * Partially address feedback
      
      * YAPF
      
      * Removed redundant class vars
      
      * Update urls to releases
      
      * Add information to models.rst
      
      * Replace init with kaiming_normal_ in fan-out mode
      
      * Use load_state_dict_from_url
      
      * Fix depth scaling on first 2 layers
      
      * Restore initialization
      
      * Match reference implementation initialization for dense layer
      
      * Meant to use Kaiming
      
      * Remove spurious relu
      
      * Point to the newest 0.5 checkpoint
      
      * Latest pretrained checkpoint
      
      * Restore 1.0 checkpoint
      
      * YAPF
      
      * Implement backwards compat as suggested by Soumith
      
      * Update checkpoint URL
      
      * Move warnings up
      
      * Record a couple more function parameters
      
      * Update comment
      
      * Set the correct version such that if the BC-patched model is saved, it could be reloaded with BC patching again
      
      * Set a member var, not class var
      
      * Update mnasnet.py
      
      Remove unused member var as per review.
      
      * Update the path to weights
      367e8514
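      The "mean on H and W" bullet above corresponds to a pattern like the following sketch; only the pooling swap is taken from the commit message, the feature-map size is made up.
      ```python
      import torch

      x = torch.rand(4, 1280, 7, 7)  # hypothetical N x C x H x W feature map

      # Instead of nn.AdaptiveAvgPool2d(1) followed by a view/flatten (easy to get wrong
      # and able to collapse the batch dimension), average directly over H and W.
      pooled = x.mean([2, 3])
      print(pooled.shape)  # torch.Size([4, 1280]) -- batch dimension preserved
      ```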
  9. 20 Sep, 2019 2 commits
    • Make fcn_resnet Scriptable (#1352) · a6a926bc
      eellison authored
      * script_fcn_resnet
      
      * Make old models load
      
      * DeepLabV3 also got torchscript-ready
      a6a926bc
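      A hedged sketch of scripting one of these segmentation models; the "out" key follows torchvision's segmentation API, while the input size and flags are illustrative.
      ```python
      import torch
      import torchvision

      model = torchvision.models.segmentation.fcn_resnet50(pretrained=False)
      model.eval()
      scripted = torch.jit.script(model)

      # Segmentation models return a dict; "out" holds the per-pixel class scores.
      with torch.no_grad():
          result = scripted(torch.rand(1, 3, 256, 256))
      print(result["out"].shape)  # torch.Size([1, 21, 256, 256]) with the default 21 classes
      ```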
    • Make Densenet Scriptable (#1342) · 21110d93
      eellison authored
      * make densenet scriptable
      
      * make py2 compat
      
      * use torch List polyfill
      
      * fix unpacking for checkpointing
      
      * fewer changes to _Denseblock
      
      * improve error message
      
      * print traceback
      
      * add typing dependency
      
      * add typing dependency to travis too
      
      * Make loading old checkpoints work
      21110d93
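      The "torch List polyfill" bullet refers to typing list-of-tensor helpers so TorchScript can compile them; a minimal sketch of the pattern (the function is illustrative, not the PR's code).
      ```python
      import torch
      from torch import Tensor
      from torch.jit.annotations import List  # TorchScript-friendly List annotation


      @torch.jit.script
      def concat_features(inputs: List[Tensor]) -> Tensor:
          # DenseNet-style blocks pass a growing list of feature maps around;
          # annotating the list type lets TorchScript compile the concatenation.
          return torch.cat(inputs, 1)


      x = [torch.rand(1, 8, 4, 4), torch.rand(1, 8, 4, 4)]
      print(concat_features(x).shape)  # torch.Size([1, 16, 4, 4])
      ```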
  10. 18 Sep, 2019 1 commit
  11. 17 Sep, 2019 2 commits
  12. 02 Sep, 2019 1 commit
    • make shufflenet and resnet scriptable (#1270) · 26c9630b
      eellison authored
      * make shufflenet scriptable
      
      * make resnet18 scriptable
      
      * set downsample to identity instead of __constants__ api
      
      * use __constants__ for downsample instead of identity
      
      * import tensor to fix flake
      
      * use torch.Tensor type annotation instead of import
      26c9630b
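      The back-and-forth over downsample above settles on __constants__; a condensed sketch of that pattern for an optional submodule (the block itself is hypothetical, not the PR's code).
      ```python
      import torch
      import torch.nn as nn


      class DownsampleBlock(nn.Module):
          # Listing the optional attribute in __constants__ lets TorchScript resolve the
          # "is not None" branch at compile time, the approach the commits settled on.
          __constants__ = ["downsample"]

          def __init__(self, channels, downsample=None):
              super(DownsampleBlock, self).__init__()
              self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
              self.downsample = downsample

          def forward(self, x):
              identity = x
              out = self.conv(x)
              if self.downsample is not None:
                  identity = self.downsample(x)
              return out + identity


      scripted = torch.jit.script(DownsampleBlock(8))
      print(scripted(torch.rand(1, 8, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])
      ```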
  13. 30 Aug, 2019 1 commit
  14. 07 Aug, 2019 1 commit
    • Correct wrong comments (#1211) · 5414faa0
      Myosaki authored
      `self.fc1(x)` converts the shape of `x` to "N x 1024", and `self.fc2(x)` then converts it to "N x num_classes".
      
      By adding `print(x.shape)` under each comment line, the console displays as follows (batch_size is 1):
      
      ```text
      torch.Size([1, 2048])
      torch.Size([1, 1024])
      torch.Size([1, 1024])
      torch.Size([1, 1024])
      torch.Size([1, 1000])
      ```
      5414faa0
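      The printed shapes above correspond to the two linear layers of the auxiliary classifier; a tiny sketch that reproduces them (layer sizes follow the quoted shapes, the rest is illustrative).
      ```python
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      fc1 = nn.Linear(2048, 1024)   # "N x 2048" -> "N x 1024"
      fc2 = nn.Linear(1024, 1000)   # "N x 1024" -> "N x num_classes"

      x = torch.rand(1, 2048)
      print(x.shape)                          # torch.Size([1, 2048])
      x = fc1(x)
      print(x.shape)                          # torch.Size([1, 1024])
      x = F.relu(x)
      print(x.shape)                          # torch.Size([1, 1024])
      x = F.dropout(x, 0.7, training=False)
      print(x.shape)                          # torch.Size([1, 1024])
      x = fc2(x)
      print(x.shape)                          # torch.Size([1, 1000])
      ```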
  15. 06 Aug, 2019 2 commits
  16. 05 Aug, 2019 1 commit
  17. 04 Aug, 2019 1 commit
  18. 01 Aug, 2019 1 commit
  19. 26 Jul, 2019 1 commit
    • Add VideoModelZoo models (#1130) · 7c95f97a
      Bruno Korbar authored
      * [0.4_video] models - initial commit
      
      * addressing fmassa's inline comments
      
      * pep8 and flake8
      
      * simplify "hacks"
      
      * sorting out latest comments
      
      * nitpick
      
      * Updated tests and constructors
      
      * Added docstrings - ready to merge
      7c95f97a
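      A usage sketch for the video models this PR adds, assuming the torchvision.models.video entry points (r3d_18 shown) and clips laid out as N x C x T x H x W.
      ```python
      import torch
      import torchvision

      model = torchvision.models.video.r3d_18(pretrained=False)
      model.eval()

      clip = torch.rand(1, 3, 16, 112, 112)  # batch, channels, frames, height, width
      with torch.no_grad():
          out = model(clip)
      print(out.shape)  # torch.Size([1, 400]) -- Kinetics-400 classes by default
      ```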
  20. 23 Jul, 2019 1 commit
  21. 19 Jul, 2019 1 commit
  22. 12 Jul, 2019 2 commits
  23. 10 Jul, 2019 1 commit
    • Add checks to roi_heads in detection module (#1091) · 6693b2c6
      ekka authored
      * add float32 to keypoint_rcnn docs
      
      * add float32 to faster_rcnn docs
      
      * add float32 to mask_rcnn
      
      * Update faster_rcnn.py
      
      * Update keypoint_rcnn.py
      
      * Update mask_rcnn.py
      
      * Update faster_rcnn.py
      
      * make keypoints float
      
      * make masks uint8
      
      * Update keypoint_rcnn.py
      
      * make labels Int64
      
      * make labels Int64
      
      * make labels Int64
      
      * Add checks for boxes, labels, masks, keypoints
      
      * update mask dim
      
      * remove dtype
      
      * check only if targets is not None
      
      * account for targets being a list
      
      * update target to be list of dict
      
      * Update faster_rcnn.py
      
      * Update keypoint_rcnn.py
      
      * allow boxes to be of float16 type as well
      
      * remove checks on mask
      6693b2c6
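      The dtype conventions these checks enforce (float boxes/keypoints, uint8 masks, int64 labels, targets as a list of dicts) look roughly like the sketch below; the tensor contents and sizes are dummy values.
      ```python
      import torch
      import torchvision

      model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, num_classes=2)
      model.train()

      images = [torch.rand(3, 256, 256)]
      targets = [{
          "boxes": torch.tensor([[20.0, 30.0, 120.0, 140.0]]),   # float [N, 4] in x1y1x2y2
          "labels": torch.tensor([1], dtype=torch.int64),         # int64 class indices
          "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),   # uint8 instance masks
      }]

      loss_dict = model(images, targets)  # training mode returns a dict of losses
      print(sorted(loss_dict.keys()))
      ```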
  24. 05 Jul, 2019 2 commits
  25. 04 Jul, 2019 3 commits
  26. 02 Jul, 2019 1 commit
    • Fixed width multiplier (#1005) · 8350645b
      yaysummeriscoming authored
      * Fixed width multiplier
      
      Layer channels are now rounded to a multiple of 8, as per the official TensorFlow implementation. I found this fix when looking through https://github.com/d-li14/mobilenetv2.pytorch
      
      * Channel multiple is now a user-configurable option
      
      The official TensorFlow Slim MobileNet v2 implementation rounds the number of channels in each layer to a multiple of 8. This is now user-configurable; setting the multiple to 1 turns off rounding.
      
      * Fixed whitespace error
      
      Fixed error: ./torchvision/models/mobilenet.py:152:1: W293 blank line contains whitespace
      8350645b
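      The rounding rule described above (channels snapped to a multiple of 8, never dropping more than about 10% below the requested width) is commonly implemented with a helper along these lines; this sketches the TF-slim convention rather than quoting the PR's exact code.
      ```python
      def _make_divisible(value, divisor=8, min_value=None):
          """Round `value` to the nearest multiple of `divisor`, TF-slim style."""
          if min_value is None:
              min_value = divisor
          new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
          # Never round down by more than 10% of the requested channel count.
          if new_value < 0.9 * value:
              new_value += divisor
          return new_value


      print(_make_divisible(32 * 1.0))   # 32
      print(_make_divisible(32 * 0.35))  # 16 (plain rounding would give 8, >10% below 11.2)
      ```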
  27. 26 Jun, 2019 1 commit
    • Add pretrained Wide ResNet (#912) · 2b6da28c
      Sergey Zagoruyko authored
      * add wide resnet
      
      * add docstring for wide resnet
      
      * update WRN-50-2 model
      
      * add docs
      
      * extend WRN docstring
      
      * use pytorch storage for WRN
      
      * fix rebase
      
      * fix typo in docs
      2b6da28c
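      Usage sketch for the pretrained Wide ResNet entry points (wide_resnet50_2 corresponds to the WRN-50-2 mentioned above; downloadable weights are assumed).
      ```python
      import torch
      import torchvision

      model = torchvision.models.wide_resnet50_2(pretrained=True)
      model.eval()
      with torch.no_grad():
          logits = model(torch.rand(1, 3, 224, 224))
      print(logits.shape)  # torch.Size([1, 1000])
      ```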
  28. 24 Jun, 2019 2 commits
    • Francisco Massa authored
      03e25734
    • Implementation of the MNASNet family of models (#829) · 69b28578
      Dmitry Belenko authored
      * Add initial mnasnet impl
      
      * Remove all type hints, comply with PyTorch overall style
      
      * Expose models
      
      * Remove avgpool from features() and add separately
      
      * Fix python3-only stuff, replace subclasses with functions
      
      * fix __all__
      
      * Fix typo
      
      * Remove conditional dropout
      
      * Make dropout functional
      
      * Addressing @fmassa's feedback, round 1
      
      * Replaced adaptive avgpool with mean on H and W to prevent collapsing the batch dimension
      
      * Partially address feedback
      
      * YAPF
      
      * Removed redundant class vars
      
      * Update urls to releases
      
      * Add information to models.rst
      
      * Replace init with kaiming_normal_ in fan-out mode
      
      * Use load_state_dict_from_url
      69b28578
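      The "kaiming_normal_ in fan-out mode" bullet maps onto an init loop roughly like this sketch; the BatchNorm and bias defaults are common choices, not taken from the PR.
      ```python
      import torch.nn as nn


      def init_weights(module: nn.Module) -> None:
          for m in module.modules():
              if isinstance(m, nn.Conv2d):
                  # Fan-out Kaiming init for convolutions, as referenced above.
                  nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
                  if m.bias is not None:
                      nn.init.zeros_(m.bias)
              elif isinstance(m, nn.BatchNorm2d):
                  nn.init.ones_(m.weight)
                  nn.init.zeros_(m.bias)
      ```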
  29. 18 Jun, 2019 1 commit
  30. 14 Jun, 2019 2 commits
  31. 11 Jun, 2019 1 commit