- 04 Oct, 2021 1 commit
Philip Meier authored
* add ufmt as code formatter
* cleanup
* quote ufmt requirement
* split imports into more groups
* regenerate circleci config
* fix CI
* clarify local testing utils section
* use ufmt pre-commit hook
* split relative imports into local category
* Revert "split relative imports into local category" (reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51)
* pin black and usort dependencies
* fix local test utils detection
* fix ufmt rev
* add reference utils to local category
* fix usort config
* remove custom categories sorting
* Run pre-commit without fixing flake8
* fix a double import introduced by the merge

Co-authored-by: Nicolas Hug <nicolashug@fb.com>
- 21 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Adding ExponentialLR and LinearLR.
* Fix arg type of --lr-warmup-decay.
* Adding support for zero-gamma BN and SGD with Nesterov momentum.
* Fix --lr-warmup-decay for video_classification.
* Update bn_reinit.
* Fix pre-existing bug in num_classes of model.
* Remove zero gamma.
* Use f-strings.
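A rough sketch of the two training knobs touched here, zero-initialised BN gamma in the residual blocks (which the same change later removed) and SGD with Nesterov momentum; the model choice and hyperparameter values below are illustrative, not the reference defaults:

```python
import torch
from torch import nn
import torchvision

model = torchvision.models.resnet50()

# "Zero gamma BN": start the last BatchNorm of every residual block at zero
# so each block initially behaves like an identity mapping.
for m in model.modules():
    if isinstance(m, torchvision.models.resnet.Bottleneck):
        nn.init.zeros_(m.bn3.weight)

# SGD with Nesterov momentum, as enabled by the reference script flags.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4, nesterov=True
)
```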
- 17 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Warmup on classification references.
* Adjust epochs for cosine.
* Warmup on segmentation references.
* Warmup on video classification references.
* Adding support for both types of warmup in segmentation.
* Use LinearLR in detection.
* Fix deprecation warning.
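One way to wire a linear warmup in front of a cosine schedule, in the spirit of these changes; a sketch assuming PyTorch 1.10+ (for LinearLR/SequentialLR), with illustrative epoch counts rather than the reference scripts' defaults:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.9)

epochs, lr_warmup_epochs = 90, 5
warmup_lr_scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.01, total_iters=lr_warmup_epochs
)
main_lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs - lr_warmup_epochs
)

# Linear warmup for the first few epochs, then hand over to cosine annealing.
lr_scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer,
    schedulers=[warmup_lr_scheduler, main_lr_scheduler],
    milestones=[lr_warmup_epochs],
)

for epoch in range(epochs):
    # train_one_epoch(model, optimizer, ...)  # training step omitted
    lr_scheduler.step()
```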
- 15 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Add RandomMixupCutmix.
* Add test with real data.
* Use dataloader and collate in the test.
* Making RandomMixupCutmix JIT scriptable.
* Move out label_smoothing and try roll instead of flip.
* Adding mixup/cutmix in references script.
* Handle one-hot encoded target in accuracy.
* Adding support for devices in tests.
* Separate Mixup from Cutmix.
* Add check for floats.
* Adding device on expected value.
* Remove hardcoded weights.
* One-hot only when necessary.
* Fix linter.
* Moving mixup and cutmix to references.
* Final code clean-up.
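A minimal, self-contained sketch of the mixup half of this transform, including the "roll instead of flip" pairing mentioned above; mixup_batch and alpha=0.2 are illustrative, and the real mixup/cutmix transforms in the references handle probabilities, cutmix boxes, and batch layouts that this sketch omits:

```python
import torch
import torch.nn.functional as F

def mixup_batch(images, targets, num_classes, alpha=0.2):
    # One-hot encode the integer labels so they can be blended.
    onehot = F.one_hot(targets, num_classes).float()
    # Sample the mixing coefficient from a Beta distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    # roll(1, dims=0) pairs sample i with sample i-1 in the batch.
    mixed_images = lam * images + (1.0 - lam) * images.roll(1, dims=0)
    mixed_targets = lam * onehot + (1.0 - lam) * onehot.roll(1, dims=0)
    return mixed_images, mixed_targets

images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 1000, (8,))
mixed_images, mixed_targets = mixup_batch(images, targets, num_classes=1000)
```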
- 14 Sep, 2021 2 commits
Vasilis Vryniotis authored
* Update log message.
* Update f-string.
Prabhat Roy authored
- 09 Sep, 2021 1 commit
Prabhat Roy authored
* Added Exponential Moving Average (EMA) support to the classification reference script.
* Addressed review comments.
* Updated model argument.
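A sketch of one way to maintain EMA weights on top of torch.optim.swa_utils; the reference script's own EMA helper is in this spirit but not identical, and the decay value and stand-in model below are illustrative:

```python
import torch
from torch.optim.swa_utils import AveragedModel

model = torch.nn.Linear(10, 2)  # stand-in for the classification model
decay = 0.999                   # illustrative EMA decay

def ema_avg(averaged_param, model_param, num_averaged):
    # Exponential moving average instead of the default running mean.
    return decay * averaged_param + (1.0 - decay) * model_param

ema_model = AveragedModel(model, avg_fn=ema_avg)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
ema_model.update_parameters(model)  # call after every optimizer step

# Evaluate with ema_model instead of model to use the averaged weights.
```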
- 02 Sep, 2021 1 commit
Vasilis Vryniotis authored
* Adding label smoothing to the classification reference script.
* Replace underscore with dash.
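Label smoothing in the classification script rides on the loss function; a minimal sketch, assuming a PyTorch version (1.10+) whose CrossEntropyLoss accepts label_smoothing, with an illustrative smoothing value:

```python
import torch
from torch import nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # smoothing value illustrative

logits = torch.randn(4, 10)           # fake model output
targets = torch.randint(0, 10, (4,))  # fake class indices
loss = criterion(logits, targets)
```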
- 26 Aug, 2021 1 commit
Vasilis Vryniotis authored
* Adding code skeleton.
* Adding MBConvConfig.
* Extend SqueezeExcitation to support custom min_value and activation.
* Implement MBConv.
* Replace stochastic_depth with operator.
* Adding the rest of the EfficientNet implementation.
* Update torchvision/models/efficientnet.py.
* Replacing 1st activation of SE with SiLU.
* Adding efficientnet_b3.
* Replace mobilenetv3 assets with custom.
* Switch to standard sigmoid and reconfigure BN.
* Reconfiguration of efficientnet.
* Add repr.
* Add weights.
* Update weights.
* Adding B5-B7 weights.
* Update docs and hubconf.
* Fix doc link.
* Fix typo in comment.
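For orientation, a compact Squeeze-and-Excitation block with a SiLU squeeze activation and a sigmoid scale, the shape of the SE layer these bullets refer to; this is a sketch, not the torchvision implementation, and the channel counts in the usage line are illustrative:

```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze with global average pooling, then rescale."""

    def __init__(self, input_channels: int, squeeze_channels: int):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(input_channels, squeeze_channels, kernel_size=1)
        self.fc2 = nn.Conv2d(squeeze_channels, input_channels, kernel_size=1)
        self.activation = nn.SiLU()          # EfficientNet uses SiLU here
        self.scale_activation = nn.Sigmoid() # standard sigmoid for the scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.avgpool(x)
        scale = self.activation(self.fc1(scale))
        scale = self.scale_activation(self.fc2(scale))
        return x * scale

se = SqueezeExcitation(input_channels=96, squeeze_channels=24)
out = se(torch.randn(1, 96, 56, 56))  # output has the same shape as the input
```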
- 06 May, 2021 1 commit
Vasilis Vryniotis authored
* Add submitit script, partition param, and parser in its own method.
* Fix method names, handle add_help correctly, and refactoring.
* Delete run_with_submitit.py file.
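The submitit script referred to here launches the training entry point on a SLURM cluster; a minimal sketch of that pattern, assuming submitit is installed, with a placeholder task and placeholder partition/resource values (the real run_with_submitit.py also sets up the distributed environment):

```python
import submitit

def train(lr: float) -> str:
    # Placeholder for the real training main(); returns something checkable.
    return f"trained with lr={lr}"

executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(
    timeout_min=60,          # job time limit
    slurm_partition="dev",   # placeholder for the --partition argument
    gpus_per_node=8,
    nodes=1,
)
job = executor.submit(train, 0.1)
print(job.job_id)
# job.result() blocks until the SLURM job finishes and returns train()'s value.
```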
- 02 Feb, 2021 1 commit
Vasilis Vryniotis authored
* Refactoring mobilenetv3 to make code reusable.
* Adding quantizable MobileNetV3 architecture.
* Fix bug in reference script.
* Moving documentation of quantized models to the right place.
* Update documentation.
* Workaround for loading correct weights of the quantized model.
* Update weight URL and readme.
* Adding eval.
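The quantizable architecture exists so the eager-mode post-training quantization flow can be applied to it; a rough sketch of that flow (random tensors stand in for calibration data, and the backend and model choice are illustrative), not the exact code in the reference scripts:

```python
import torch
import torchvision

model = torchvision.models.quantization.mobilenet_v3_large(quantize=False)
model.eval()
model.fuse_model()  # fuse Conv+BN(+ReLU) blocks before quantization

model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)  # insert observers

with torch.no_grad():
    # Calibrate on a few batches; random data stands in for real images here.
    model(torch.randn(8, 3, 224, 224))

torch.quantization.convert(model, inplace=True)  # produce the int8 model
```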
- 28 Jan, 2021 1 commit
Vasilis Vryniotis authored
* Adding presets in the classification reference scripts.
* Adding presets in the object detection reference scripts.
* Adding presets in the segmentation reference scripts.
* Adding presets in the video classification reference scripts.
* Moving flip to the end to align with the image classification signature.
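A preset in this sense is just a callable that bundles a task's default transforms; a minimal classification-style sketch (the class name, crop size, and normalization constants are illustrative, and the real presets expose many more options):

```python
from torchvision import transforms

class ClassificationPresetTrain:
    """Bundle the train-time transforms behind a single callable."""

    def __init__(self, crop_size=224,
                 mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
        self.transforms = transforms.Compose([
            transforms.RandomResizedCrop(crop_size),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize(mean=mean, std=std),
        ])

    def __call__(self, img):
        return self.transforms(img)

preset = ClassificationPresetTrain(crop_size=176)  # crop size illustrative
# dataset = torchvision.datasets.ImageFolder(train_dir, transform=preset)
```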
- 14 Jan, 2021 1 commit
Vasilis Vryniotis authored
* Add MobileNetV3 architecture in TorchVision (#3182).
* Adding implementation of network architecture.
* Adding rmsprop support in train.py.
* Adding auto-augment and random-erase in the training scripts.
* Adding support for reduced tail on MobileNetV3.
* Tagging blocks with comments.
* Adding documentation, pre-trained model URL, and a minor refactoring.
* Better handling of untrained supported models.
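The auto-augment and random-erase options amount to two extra transforms in the training pipeline; a sketch of what such a pipeline looks like, assuming a torchvision new enough to ship transforms.AutoAugment (the reference script's flag names and probabilities may differ):

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    # Policy-based augmentation corresponding to the auto-augment option.
    transforms.AutoAugment(policy=transforms.AutoAugmentPolicy.IMAGENET),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    # Tensor-level random erasing; the probability here is illustrative.
    transforms.RandomErasing(p=0.1),
])
```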
- 31 Mar, 2020 1 commit
Philip Meier authored
* remove sys.version_info == 2
* remove sys.version_info < 3
* remove from __future__ imports
- 26 Oct, 2019 1 commit
raghuramank100 authored
* add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing return in forward
* Fix missing return in forwards
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* Add missing pretrained_float_model argument for mobilenet
* Reference script for quantization aware training and post training quantization
* Set pretrained_float_model to False and explicitly provide float model
* Address review comments: replace forward with _forward; use pretrained models in reference train/eval script; modify test to skip if fbgemm is not supported
* Fix lint errors; use _forward for common code between float and quantized models; clean up linting for reference train scripts; test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to new API with quantize argument; remove apex from the training script; add post-training quant as an option; add support for a separate calibration data set
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test-only mode
* Add model urls
* Fix errors in quantized model tests; speed up creation of random quantized models by removing histogram observers
* Move setting qengine prior to convert
* Fix lint error
* Add readme.md
* Readme.md
* Fix lint
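The quantization-aware training reference these commits build up follows the eager-mode QAT recipe; a rough, abbreviated sketch of that recipe (fine-tuning loop and data loading omitted, model and backend choices illustrative), not the reference script itself:

```python
import torch
import torchvision

model = torchvision.models.quantization.mobilenet_v2(quantize=False)
model.train()
model.fuse_model()  # fuse Conv+BN(+ReLU) before inserting fake-quant modules

model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# Fine-tune for a few epochs with fake quantization in place:
# for images, targets in data_loader:
#     loss = torch.nn.functional.cross_entropy(model(images), targets)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

model.eval()
quantized_model = torch.quantization.convert(model)  # int8 model for inference
```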
- 19 Jul, 2019 1 commit
Vinh Nguyen authored
* Adding mixed precision training with Apex.
* Fix Apex default optimization level.
* Adding Python version check for Apex.
* Fix lint errors and raise exceptions if Apex is not available.
* Fixing Apex distributed training.
* Fix throughput calculation: include forward pass.
* Remove torch.cuda.set_device(args.gpu) as it's already called in init_distributed_mode.
* Fix linter: new line.
* Move Apex initialization code back to the beginning of main.
* Move Apex initialization to before lr_scheduler for peace of mind, though doing it after lr_scheduler seems to work fine as well.
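The Apex amp API this wires in works roughly as below; a sketch assuming NVIDIA Apex is installed and a GPU is available (opt_level and the tiny forward/backward step are illustrative, and Apex has since been superseded by the built-in torch.cuda.amp):

```python
import torch
import torchvision
from apex import amp  # requires NVIDIA Apex

model = torchvision.models.resnet18().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Wrap model and optimizer once, before DistributedDataParallel / lr_scheduler.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(8, 3, 224, 224, device="cuda")
targets = torch.randint(0, 1000, (8,), device="cuda")

loss = torch.nn.functional.cross_entropy(model(images), targets)
optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:  # dynamic loss scaling
    scaled_loss.backward()
optimizer.step()
```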
- 06 Jun, 2019 1 commit
Vinh Nguyen authored
* Adding mixed precision training with Apex.
* Fix Apex default optimization level.
* Adding Python version check for Apex.
* Fix lint errors and raise exceptions if Apex is not available.
- 21 May, 2019 1 commit
Francisco Massa authored
Allows for easily evaluating the pre-trained models in the modelzoo
- 19 May, 2019 1 commit
Francisco Massa authored
- 08 May, 2019 1 commit
Francisco Massa authored
* Miscellaneous improvements to the classification reference scripts.
* Fix lint.
- 02 Apr, 2019 2 commits
Francisco Massa authored
* Add groups support to ResNet.
* Kill BaseResNet.
* Make it support multi-machine training.
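Grouped convolution is the mechanism behind the new groups argument; a sketch of what it does in isolation (channel counts are illustrative; ResNeXt-style variants are built from this kind of layer inside the bottleneck block):

```python
import torch
from torch import nn

# With groups=32, each group of 128/32 = 4 input channels is convolved
# independently by its own set of filters.
conv = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32, bias=False)
out = conv(torch.randn(1, 128, 56, 56))
print(out.shape)  # torch.Size([1, 128, 56, 56])
```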
Surgan Jandial authored
Making references/classification/train.py and references/classification/utils.py compatible with python2 (#831)
* linter fixes
* linter fixes
- 28 Mar, 2019 1 commit
Francisco Massa authored
* Initial version of classification reference training script.
* Updates.
* Minor updates.
* Expose a few more options.
* Load optimizer and lr_scheduler when resuming; also log the learning rate.
* Evaluation-only mode and minor improvements. Identified a bug in the reporting of results: they need to be reduced across all processes.
* Address Soumith's comment.
* Fix some approximations in the evaluation metric.
* Flake8.
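The reduction bug mentioned above comes from each distributed worker only seeing its own shard of the validation set; a sketch of the usual fix, summing scalar metrics across processes before reporting (the helper name is made up for illustration):

```python
import torch
import torch.distributed as dist

def reduce_across_processes(value: float) -> float:
    """Sum a scalar metric over all workers so every rank reports the same number."""
    if not (dist.is_available() and dist.is_initialized()):
        return value  # single-process run: nothing to reduce
    t = torch.tensor(value, dtype=torch.float64)
    if dist.get_backend() == "nccl":
        t = t.cuda()  # NCCL only reduces GPU tensors
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return t.item()

# correct = reduce_across_processes(correct)
# total = reduce_across_processes(total)
# accuracy = correct / total
```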