- 20 May, 2020 1 commit
-
Erik authored
* Update README.md: added some clarity to make the examples runnable; waiting to hear back on whether the instructions should mention setting up the COCO dataset
* Several follow-up README.md revisions
-
- 18 May, 2020 2 commits
-
Vasiliy Kuznetsov authored
Summary: Redo of https://github.com/pytorch/vision/pull/2191. Makes the classification QAT tutorial not crash when used with DDP. There were two issues:

1. The model was moved to the GPU before the observers were added, and observers are created on the CPU. In the context of this repo, the fix is to finalize the model before moving it to the GPU. We can potentially follow up with a better error message in a separate PR.
2. The QAT conversion was running on the DDP-wrapped model, which had various problems. The fix is to unwrap the model from DDP before cloning it for evaluation.

There is still work to do to verify that BN behaves correctly under QAT + DDP, but that is saved for a separate PR.

Test Plan:
```
python -m torch.distributed.launch --use_env references/classification/train_quantization.py --data-path {path_to_imagenet_1k} --output_dir {output_dir}
```
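For context, a minimal sketch of the ordering the fix enforces, assuming a torch.distributed process group is already initialized; the model choice and device ids are illustrative, not the tutorial's exact code:

```python
import copy

import torch
import torchvision

# Prepare QAT while the model is still on CPU, so the observers are
# created before any device move (issue 1 above).
model = torchvision.models.quantization.mobilenet_v2(pretrained=False)
model.fuse_model()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)

# Only now move to GPU and wrap in DDP.
model = model.to("cuda")
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[0])

# ... QAT training loop runs here ...

# Unwrap from DDP (model.module) before cloning and converting for
# evaluation (issue 2 above).
eval_model = copy.deepcopy(model.module).cpu().eval()
quantized_model = torch.quantization.convert(eval_model)
```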
-
Francisco Massa authored
-
- 11 May, 2020 1 commit
-
Erik authored
Add a slight clarification to the evaluation logic area regarding images
-
- 29 Apr, 2020 1 commit
-
D. Khuê Lê-Huu authored
-
- 10 Apr, 2020 1 commit
-
moto authored
-
- 31 Mar, 2020 1 commit
-
Philip Meier authored
* Remove sys.version_info == 2 checks
* Remove sys.version_info < 3 checks
* Remove from __future__ imports (see the snippet below)
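Illustrative only, not a verbatim excerpt from torchvision: the kind of Python 2 shims this cleanup deletes, which are dead code on Python 3:

```python
# Representative Python 2 compatibility shims removed by the cleanup.
from __future__ import absolute_import, division, print_function  # no-op on Python 3

import sys

if sys.version_info[0] == 2:
    string_types = (str, unicode)  # noqa: F821 -- Python 2 only
else:
    string_types = (str,)
```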
-
- 30 Mar, 2020 1 commit
-
PatrickBue authored
-
- 20 Mar, 2020 1 commit
-
Philip Meier authored
* Add default parameters to README
* Fix vgg_*_bn
-
- 13 Mar, 2020 1 commit
-
hx89 authored
-
- 10 Mar, 2020 1 commit
-
Kentaro Yoshioka authored
Usage and performance numbers are taken from the torchvision 0.5 release notes.
-
- 10 Feb, 2020 1 commit
-
Francisco Massa authored
-
- 19 Dec, 2019 4 commits
-
Francisco Massa authored
-
Francisco Massa authored
-
MultiK authored
* Fix a small bug in resuming: when resuming from a checkpoint, training needs to start from the epoch after the last completed one, not from 0 (see the sketch below)
* Add a second way of resuming
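A minimal sketch of the fix; the checkpoint key names below follow the reference scripts' usual layout but are assumptions, not verified against this exact commit:

```python
import torch

def load_start_epoch(resume_path, model, optimizer, lr_scheduler):
    """Restore training state and return the epoch to resume from."""
    checkpoint = torch.load(resume_path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    lr_scheduler.load_state_dict(checkpoint["lr_scheduler"])
    # The bug: restarting at epoch 0 re-runs finished epochs; the fix is
    # to start from the epoch after the last completed one.
    return checkpoint["epoch"] + 1
```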
-
Francisco Massa authored
Bugfix on GroupedBatchSampler for the corner case where there are not enough examples in a category to form a batch (#1677)
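To illustrate the corner case rather than reproduce the exact patch: a grouped batch sampler buffers indices per group and emits a batch when a group fills up, so a group with fewer than batch_size examples never emits anything unless the leftovers are handled explicitly. A hedged sketch of one possible fix; the helper below is hypothetical:

```python
from collections import defaultdict

def grouped_batches(indices, group_ids, batch_size):
    """Yield batches whose elements all share a group id."""
    buffers = defaultdict(list)
    for idx in indices:
        buf = buffers[group_ids[idx]]
        buf.append(idx)
        if len(buf) == batch_size:
            yield list(buf)
            buf.clear()
    # Flush incomplete batches so small categories are not silently dropped.
    for buf in buffers.values():
        if buf:
            yield list(buf)
```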
-
- 26 Nov, 2019 2 commits
-
Rahul Somani authored
* Generalised for custom datasets
* Fix typo, remove redundant code, use a sensible default
* Add args for the names of the train and val directories (see the sketch below)
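A hedged sketch of what such arguments might look like; the flag names are assumptions for illustration, not necessarily the script's exact spelling:

```python
import argparse

parser = argparse.ArgumentParser(description="Classification training")
parser.add_argument("--data-path", default="/path/to/dataset", help="dataset root")
# Hypothetical flag names for the configurable split directories.
parser.add_argument("--train-dir", default="train", help="name of the training split directory")
parser.add_argument("--val-dir", default="val", help="name of the validation split directory")
args = parser.parse_args()
print(args.train_dir, args.val_dir)
```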
-
Yoshitomo Matsubara authored
-
- 25 Nov, 2019 2 commits
-
Yoshitomo Matsubara authored
-
Will Brennan authored
-
- 04 Nov, 2019 2 commits
-
Rahul Somani authored
-
hx89 authored
-
- 30 Oct, 2019 1 commit
-
Vinh Nguyen authored
-
- 29 Oct, 2019 1 commit
-
fsavard-eai authored
-
- 26 Oct, 2019 2 commits
-
raghuramank100 authored
* Add quantized models
* Modify mobilenet.py documentation and clean up comments
* Move the fuse_model method to QuantizableInvertedResidual and clean up args documentation
* Restore relu settings to default in resnet.py
* Fix missing returns in forward methods
* Change pretrained -> pretrained_float_models; replace InvertedResidual with block
* Update tests to follow a similar structure to test_models.py, allowing for modular testing
* Replace forward method with simple function assignment
* Fix error in arguments for resnet18
* Add the missing pretrained_float_model argument for mobilenet
* Add reference scripts for quantization-aware training and post-training quantization
* Set pretrained_float_model to False and explicitly provide the float model
* Address review comments: replace forward with _forward; use pretrained models in the reference train/eval script; modify tests to skip if fbgemm is not supported
* Fix lint errors; use _forward for common code between float and quantized models; clean up linting for reference train scripts; test over all quantizable models
* Update default values for args in quantization/train.py
* Update models to conform to the new API with a quantize argument (see the sketch below); remove apex from the training script; add post-training quantization as an option; add support for a separate calibration dataset
* Fix minor errors in train_quantization.py
* Remove duplicate file
* Bugfix
* Minor improvements on the models
* Expose print_freq to evaluate
* Minor improvements on train_quantization.py
* Ensure that quantized models are created and run on the specified backends; fix errors in test-only mode
* Add model URLs
* Fix errors in quantized model tests; speed up creation of random quantized models by removing histogram observers
* Move setting the qengine prior to convert
* Fix lint errors
* Add README.md
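A minimal usage sketch of the quantize argument this commit adds to the torchvision quantization models (downloads pretrained weights; inference runs on CPU with the fbgemm backend):

```python
import torch
import torchvision

torch.backends.quantized.engine = "fbgemm"
model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
model.eval()

x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```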
-
Francisco Massa authored
* Initial version of README for classification reference scripts
* More context
-
- 04 Oct, 2019 2 commits
-
Zhicheng Yan authored
* Move sampler into TV core; update UniformClipSampler
* Fix reference training script
* Skip test if PyAV is not available
* Change interpolation from round() to floor(), as round(0.5) behaves differently between Python 2 and Python 3 (see the snippet below)
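The rounding difference in question, shown directly; floor() is used because it agrees across interpreter versions:

```python
import math

# Python 3 uses banker's rounding: halves go to the nearest even integer.
print(round(0.5), round(1.5), round(2.5))  # 0 2 2 (Python 2 gave 1.0 2.0 3.0)
# floor() gives the same answer on both Python 2 and Python 3.
print(math.floor(0.5), math.floor(1.5))    # 0 1
```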
-
Koen van de Sande authored
Fix the reference training script for Mask R-CNN on PyTorch 1.2: during evaluation after each epoch, the mask datatype became bool, while pycocotools expects uint8 (#1413)
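An illustrative sketch of the dtype issue; variable names are made up for the example:

```python
import torch

masks = torch.zeros(2, 28, 28, dtype=torch.bool)  # what PyTorch 1.2 produces
masks = masks.to(torch.uint8)                      # what pycocotools expects
print(masks.dtype)  # torch.uint8
```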
-
- 29 Aug, 2019 1 commit
-
Joaquín Alori authored
-
- 12 Aug, 2019 1 commit
-
Gu Wang authored
* Explain lr and batch size in references/detection/train.py (see the sketch below)
* Fix typo
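The explanation concerns the linear scaling rule in the detection reference script, whose documented default lr of 0.02 assumes 8 GPUs with 2 images per GPU (effective batch size 16). A hedged illustration:

```python
def scaled_lr(base_lr=0.02, base_batch=16, ngpus=8, images_per_gpu=2):
    """Scale the learning rate linearly with the effective batch size."""
    effective_batch = ngpus * images_per_gpu
    return base_lr * effective_batch / base_batch

print(scaled_lr(ngpus=1))  # 0.0025 for a single GPU with 2 images per GPU
```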
-
- 05 Aug, 2019 1 commit
-
Gu Wang authored
-
- 04 Aug, 2019 1 commit
-
Francisco Massa authored
* [WIP] Minor cleanups on R3d
* Move all models to video/resnet.py
* Remove old files
* Make tests less memory intensive
* Lint
* Fix typo and add pretrained arg to training script
-
- 31 Jul, 2019 3 commits
-
Francisco Massa authored
* Move RandomClipSampler to references
* Lint and bugfix
-
Francisco Massa authored
Also add docs
-
Francisco Massa authored
* Copy classification scripts for video classification
* Initial version of video classification
* Add version
* Training of r2plus1d_18 on Kinetics works, giving even slightly better results than expected, with 57.336 top-1 clip accuracy; however, we count some clips twice in this evaluation
* Cleanups on training script
* Lint
* Minor improvements
* Remove some hacks
* Lint
-
- 19 Jul, 2019 1 commit
-
Vinh Nguyen authored
* Add mixed-precision training with Apex (see the sketch below)
* Fix Apex default optimization level
* Add Python version check for Apex
* Fix lint errors and raise exceptions if Apex is not available
* Fix Apex distributed training
* Fix throughput calculation: include the forward pass
* Remove torch.cuda.set_device(args.gpu), as it's already called in init_distributed_mode
* Fix linter: new line
* Move Apex initialization code back to the beginning of main
* Move Apex initialization to before the lr_scheduler, for peace of mind; doing it after the lr_scheduler seems to work fine as well
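A minimal sketch of the Apex amp pattern referenced above, using Apex's documented initialize/scale_loss API; the model, opt_level, and scheduler are illustrative, and running it requires a CUDA device with NVIDIA Apex installed:

```python
import torch
import torchvision
from apex import amp  # https://github.com/NVIDIA/apex

model = torchvision.models.resnet18().cuda()  # model must be on GPU first
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# amp.initialize runs early in main, before the lr_scheduler is created.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30)

inputs = torch.rand(2, 3, 224, 224, device="cuda")
loss = model(inputs).sum()
# Scale the loss so fp16 gradients don't underflow.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```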
-
- 17 Jul, 2019 1 commit
-
Daksh Jotwani authored
* Add loss, sampler, and train script
* Fix train script
* Add argparse
* Fix lint
* Change f-strings to .format()
* Remove unused imports
* Change TripletMarginLoss to extend nn.Module
* Load eye uint8 tensors directly on device
* Refactor model.py to backbone=None
* Add docstring for PKSampler
* Refactor evaluate() to take loader as arg instead
* Change eval method to cat embeddings all at once
* Add dataset comments
* Add README.md
* Add tests for sampler
* Refactor threshold finder to helper method
* Refactor targets comment
* Fix lint
* Rename embedding to similarity (more consistent with existing literature; see the sketch below)
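A hedged sketch of the training objective behind this script: a triplet margin loss over embeddings, where batches are drawn P classes at a time with K samples each so every anchor has in-batch positives and negatives. The tensors below are random placeholders:

```python
import torch
import torch.nn as nn

loss_fn = nn.TripletMarginLoss(margin=1.0)

embeddings = torch.rand(8, 128)  # stand-in for a P x K batch of embeddings
anchor, positive = embeddings[0:1], embeddings[1:2]  # same class
negative = embeddings[4:5]                           # different class
print(loss_fn(anchor, positive, negative).item())
```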
-
- 12 Jul, 2019 2 commits
-
Varun Agrawal authored
Updated all docstrings and code references for boxes to be consistent with the (x1, y1, x2, y2) scheme (#1110)
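The convention being standardized, in a tiny example: boxes are [x1, y1, x2, y2], the top-left and bottom-right corners in absolute coordinates:

```python
import torch

boxes = torch.tensor([[10.0, 20.0, 110.0, 220.0]])  # x1, y1, x2, y2
widths = boxes[:, 2] - boxes[:, 0]   # tensor([100.])
heights = boxes[:, 3] - boxes[:, 1]  # tensor([200.])
```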
-
flauted authored
* Document multi-GPU usage and propagate the data path
* Use a raw docstring because of a backslash
-
- 14 Jun, 2019 1 commit
-
LXYTSOS authored
Fix utils.py so it works with pytorch-cpu: it previously crashed on the line `memory=torch.cuda.max_memory_allocated()`, which requires CUDA (see the sketch below).
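An illustrative sketch of the guard such a fix needs; only query CUDA memory stats when CUDA is actually available:

```python
import torch

if torch.cuda.is_available():
    memory_mb = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
else:
    memory_mb = 0.0  # CPU-only builds have no CUDA allocator to query
print(f"max mem: {memory_mb:.0f} MB")
```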
-