"vscode:/vscode.git/clone" did not exist on "7eebd4404764dd778e18cc0fc4866d97504271f0"
- 04 Oct, 2021 1 commit
-
-
Philip Meier authored
* add ufmt as code formatter
* cleanup
* quote ufmt requirement
* split imports into more groups
* regenerate circleci config
* fix CI
* clarify local testing utils section
* use ufmt pre-commit hook
* split relative imports into local category
* Revert "split relative imports into local category". This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
* pin black and usort dependencies
* fix local test utils detection
* fix ufmt rev
* add reference utils to local category
* fix usort config
* remove custom categories sorting
* Run pre-commit without fixing flake8
* fix a double import introduced in the merge
Co-authored-by: Nicolas Hug <nicolashug@fb.com>
-
- 01 Oct, 2021 2 commits
-
-
Nicolas Hug authored
-
Alexander Soare authored
* draft commit
* Polish and add corresponding test
* Update docs
* Update torchvision/models/feature_extraction.py
* Update docs/source/feature_extraction.rst
Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
-
- 30 Sep, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Moving _make_divisible to utils.
* Replace the old ConvBNReLU and ConvBNActivation layers.
* Fix minor bug.
* Moving SE layer to ops.
* Adding deprecation warnings on old layers.
* Apply changes to regnets.
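The channel-rounding helper referenced above keeps layer widths divisible by 8 for hardware efficiency. A minimal standalone sketch of that behaviour (illustrative only, not the library's private utility):

```python
from typing import Optional


def make_divisible(v: float, divisor: int = 8, min_value: Optional[int] = None) -> int:
    """Round v to the nearest multiple of divisor, dropping at most ~10% of the value."""
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Guard against rounding down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


print(make_divisible(37.5))  # 40: channel counts stay friendly to vectorized kernels
```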
-
- 29 Sep, 2021 2 commits
-
-
Kai Zhang authored
* initial code
* add SqueezeExcitation
* initial code
* add SqueezeExcitation
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* add expected test pkl files
* allow custom activation in SqueezeExcitation
* use ReLU as the default activation
* initial code
* add SqueezeExcitation
* initial code
* add SqueezeExcitation
* add SqueezeExcitation
* regnet blocks, stems and model definition
* nit
* add fc layer
* use Callable instead of Enum for block, stem and activation
* add regnet_x and regnet_y model build functions, add docs
* remove unused depth
* use BN/activation constructor and ConvBNActivation
* reuse SqueezeExcitation from efficientnet
* refactor RegNetParams into BlockParams
* use nn.init, replace np with torch
* update README
* construct model with stem, block, classifier instances
* Revert "construct model with stem, block, classifier instances". This reverts commit 850f5f3ed01a2a9b36fcbf8405afd6e41d2e58ef.
* remove unused blocks
* support scaled model
* fuse into ConvBNActivation
* make reset_parameters private
* fix type errors
* fix for unit test
* add pretrained weights for 6 variant models, update docs
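For context, a minimal usage sketch of the resulting model builders, assuming one of the released variants (here regnet_y_400mf) and the pretrained weights mentioned above are available in the installed torchvision:

```python
import torch
from torchvision import models

# Build a small RegNet-Y variant; `pretrained=True` is the flag used at the
# time of this commit (newer releases use a `weights=` argument instead).
model = models.regnet_y_400mf(pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```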
-
Vasilis Vryniotis authored
* Reuse EfficientNet SE layer.
* Deprecating the mobilenetv3.SqueezeExcitation layer.
* Passing the right activation on quantization.
* Making strict named param.
* Set default params if missing.
* Fixing typos.
-
- 21 Sep, 2021 2 commits
-
-
Kai Zhang authored
* allow custom activation in SqueezeExcitation
* use ReLU as the default activation
* make scale activation parameterizable
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
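A sketch of the parameterizable activations described here, using the SqueezeExcitation block at its later public location in torchvision.ops (an assumption; at the time of this commit the layer still lived in the EfficientNet module):

```python
import torch
from torch import nn
from torchvision.ops import SqueezeExcitation

# ReLU for the squeeze step, Hardsigmoid to scale the channels (as MobileNetV3 does).
se = SqueezeExcitation(
    input_channels=64,
    squeeze_channels=16,
    activation=nn.ReLU,
    scale_activation=nn.Hardsigmoid,
)
x = torch.rand(1, 64, 32, 32)
print(se(x).shape)  # torch.Size([1, 64, 32, 32]); the input is rescaled channel-wise
```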
-
Beat Buesser authored
* Allow gradient backpropagation through GeneralizedRCNNTransform to inputs
* Add unit tests for gradient backpropagation to inputs
* Update torchvision/models/detection/transform.py
* Update _check_input_backprop
* Account for tests requiring cuda
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
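A rough sketch of what this change enables: gradients flowing back to the input images of a detection model. The target here is a hypothetical toy box, and weights are left random so the snippet stays self-contained:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=False)
model.train()

# An image that requires grad, plus a dummy box/label target.
image = torch.rand(3, 320, 320, requires_grad=True)
targets = [{"boxes": torch.tensor([[20.0, 20.0, 200.0, 200.0]]),
            "labels": torch.tensor([1])}]

losses = model([image], targets)
sum(losses.values()).backward()
print(image.grad is not None)  # True: GeneralizedRCNNTransform no longer blocks input grads
```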
-
- 16 Sep, 2021 1 commit
-
-
julienripoche authored
Co-authored-by: Julien RIPOCHE <ripoche@magic-lemp.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 08 Sep, 2021 1 commit
-
-
Prabhat Roy authored
* Added paper references to detection models * Ignore linter warning * Break long line into two
-
- 06 Sep, 2021 2 commits
-
-
Alexander Soare authored
* add fx feature extraction util
* Make it possible to use train and eval mode
* FX feature extraction - Tweaks and small bug fixes
* FX feature extraction - add tests
* move to feature_extraction.py, add LeafModuleAwareTracer, add docs
* Tweaks to docs
* addressing latest round of feedback
* undo line spacing changes
* change type hints in docstrings
* fix sphinx indentation
* expose feature_extraction
* add maskrcnn example
* add api reference subheading
* address latest review notes, refactor names, fix regex, cosmetics
* Add back efficientnet to models
* fix tests for effnet
* fix linting issue
* fix test tracer kwargs
Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
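A short usage sketch of the utility this adds, with names as exposed under torchvision.models.feature_extraction:

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor, get_graph_node_names

model = resnet50()
train_nodes, eval_nodes = get_graph_node_names(model)  # inspect the traceable node names

# Map graph node names to user-chosen output keys.
extractor = create_feature_extractor(model, return_nodes={"layer3": "c4", "layer4": "c5"})
features = extractor(torch.rand(1, 3, 224, 224))
print({k: v.shape for k, v in features.items()})
# e.g. {'c4': torch.Size([1, 1024, 14, 14]), 'c5': torch.Size([1, 2048, 7, 7])}
```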
-
Vasilis Vryniotis authored
* Add types in transform.
* Trace on eval mode.
Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
-
- 05 Sep, 2021 1 commit
-
-
Kai Zhang authored
-
- 04 Sep, 2021 1 commit
-
-
Camilo De La Torre authored
-
- 02 Sep, 2021 1 commit
-
-
Camilo De La Torre authored
* Explicitly store a distance value that is reused. I don't see a reason to calculate the value twice for each distance. Knowing that this code is going to be called at every epoch, and that there is probably no compiler that optimizes this away (plus code clarity), I think it is best to store the value for x and y.
* Update torchvision/models/detection/_utils.py
* removing spaces
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 31 Aug, 2021 1 commit
-
-
Aditya Oke authored
* fix
* add typings
* fixup some more types
* Type more
* remove mypy ignore
* add missing typings
* fix a few mypy errors
* fix mypy errors
* fix mypy
* ignore types
* fixup annotation
* fix remaining types
* cleanup #TODO comments
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 26 Aug, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Adding code skeleton
* Adding MBConvConfig.
* Extend SqueezeExcitation to support custom min_value and activation.
* Implement MBConv.
* Replace stochastic_depth with operator.
* Adding the rest of the EfficientNet implementation
* Update torchvision/models/efficientnet.py
* Replacing 1st activation of SE with SiLU.
* Adding efficientnet_b3.
* Replace mobilenetv3 assets with custom.
* Switch to standard sigmoid and reconfiguring BN.
* Reconfiguration of efficientnet.
* Add repr
* Add weights.
* Update weights.
* Adding B5-B7 weights.
* Update docs and hubconf.
* Fix doc link.
* Fix typo on comment.
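A brief usage sketch of the new builders, assuming the released weights mentioned above are available (the `pretrained=` flag reflects the API at the time):

```python
import torch
from torchvision.models import efficientnet_b3

model = efficientnet_b3(pretrained=True)
model.eval()

# EfficientNet-B3 was trained around a 300x300 resolution.
with torch.no_grad():
    logits = model(torch.rand(1, 3, 300, 300))
print(logits.shape)  # torch.Size([1, 1000])
```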
-
- 23 Aug, 2021 2 commits
-
-
F-G Fernandez authored
* style: Added typing to models/video
* style: Fixed typing
* style: Fixed typing
* style: Fixed typing
* refactor: Removed default value for stem
* docs: Fixed docstring of VideoResNet
* style: Refactored typing
* docs: Fixed docstring
* style: Fixed typing
* docs: Specified docstring
* typing: Fixed typing
* docs: Fixed docstring
* Undoing change.
-
F-G Fernandez authored
* style: Added typing annotations to segmentation/_utils
* style: Added typing annotations to segmentation/segmentation
* style: Added typing annotations to remaining segmentation models
* style: Fixed typing of DeepLab
* style: Fixed typing
* fix: Fixed typing annotations & default values
* Fixing python_type_check
-
- 17 Aug, 2021 1 commit
-
-
Vasilis Vryniotis authored
-
- 06 Aug, 2021 1 commit
-
-
Vincent Moens authored
Using nn.init.trunc_normal_ instead of scipy.stats.truncnorm
Co-authored-by: Vincent Moens <vmoens@fb.com>
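The replacement in question, sketched: drawing truncated-normal weights directly with torch instead of scipy.

```python
import torch
from torch import nn

w = torch.empty(256, 256)
# Values are drawn from N(0, 1) and constrained to the interval [a, b].
nn.init.trunc_normal_(w, mean=0.0, std=1.0, a=-2.0, b=2.0)
print(float(w.min()) >= -2.0, float(w.max()) <= 2.0)  # True True
```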
-
- 28 Jun, 2021 1 commit
-
-
Vasilis Vryniotis authored
-
- 22 Jun, 2021 1 commit
-
-
Nicolas Hug authored
-
- 16 Jun, 2021 1 commit
-
-
Nicolas Hug authored
-
- 01 Jun, 2021 1 commit
-
-
Jiawei Liu authored
* [doc] add minimum input size for alexnet builder
* [doc] add minimum input size for vgg builder
* [doc] add minimum input size for squeezenet builder
* [doc] add minimum input size for densenet builder
* [doc] add minimum input size for inception_v3 builder
* [doc] add minimum input size for googlenet builder
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
-
- 25 May, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Fix a bug when trainable_layers == 0 * Fix same issue on ssd.
-
- 21 May, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Moving tensors to the right device. * Switch to gpu.medium
-
- 20 May, 2021 1 commit
-
-
Zhiqiang Wang authored
-
- 18 May, 2021 2 commits
-
-
Vasilis Vryniotis authored
* Remove incorrect params from doc and add references to the paper. * Add paper links in doc.
-
Nicolas Hug authored
-
- 17 May, 2021 2 commits
-
-
Nicolas Hug authored
-
Vasilis Vryniotis authored
-
- 13 May, 2021 1 commit
-
-
Vasilis Vryniotis authored
* Converting private parameters to public.
* Add kwargs to handle extra params.
* Add another kwargs.
* Add arguments in _mobilenet_extractor.
-
- 12 May, 2021 1 commit
-
-
Vasilis Vryniotis authored
-
- 11 May, 2021 2 commits
-
-
Vasilis Vryniotis authored
* Partial implementation of SSDlite.
* Add normal init and BN hyperparams.
* Refactor to keep JIT happy
* Completed SSDlite.
* Fix lint
* Update todos
* Add expected file in repo.
* Use C4 expansion instead of C4 output.
* Change scales formula for Default Boxes.
* Add cosine annealing on trainer.
* Make T_max count epochs.
* Fix test and handle corner-case.
* Add support for width_mult
* Add ssdlite presets.
* Change ReLU6, [-1,1] rescaling, backbone init & no pretraining.
* Use _reduced_tail=True.
* Add sync BN support.
* Adding the best config along with its weights and documentation.
* Make mean/std configurable.
* Fix not implemented for half exception
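A usage sketch of the detector this lands, with the builder name as released; weights are left random so the snippet stays self-contained:

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(pretrained=False, pretrained_backbone=False)
model.eval()

with torch.no_grad():
    predictions = model([torch.rand(3, 320, 320)])
print(sorted(predictions[0].keys()))  # ['boxes', 'labels', 'scores']
```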
-
Zhiqiang Wang authored
* Refactor grid default boxes with torch.meshgrid
* Fix torch jit tracing
* Only doing the list multiplication once
* Make grid_default_box private as suggested
* Replace list multiplication with torch.repeat
* Move the clipping into _grid_default_boxes to accelerate
Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
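An illustrative sketch (not the library code) of the meshgrid idea: laying out normalized default-box centers over a feature-map grid.

```python
import torch

def grid_centers(feat_h: int, feat_w: int) -> torch.Tensor:
    """Return (feat_h * feat_w, 2) box centers (cx, cy) in [0, 1]."""
    shifts_x = (torch.arange(feat_w, dtype=torch.float32) + 0.5) / feat_w
    shifts_y = (torch.arange(feat_h, dtype=torch.float32) + 0.5) / feat_h
    # indexing="ij" requires a recent PyTorch; older versions default to this behaviour.
    cy, cx = torch.meshgrid(shifts_y, shifts_x, indexing="ij")
    return torch.stack((cx.reshape(-1), cy.reshape(-1)), dim=1)

print(grid_centers(5, 5).shape)  # torch.Size([25, 2])
```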
-
- 03 May, 2021 1 commit
-
-
Vasilis Vryniotis authored
-
- 30 Apr, 2021 2 commits
-
-
Vasilis Vryniotis authored
* Early skeleton of API.
* Adding MultiFeatureMap and vgg16 backbone.
* Making vgg16 backbone same as paper.
* Making code generic to support all vggs.
* Moving vgg's extra layers to a separate class + L2 scaling.
* Adding header vgg layers.
* Fix maxpool patching.
* Refactoring code to allow for support of different backbones & sizes:
  - Skeleton for Default Boxes generator class
  - Dynamic estimation of configuration when possible
  - Addition of types
* Complete the implementation of DefaultBox generator.
* Replace randn with empty.
* Minor refactoring
* Making clamping between 0 and 1 optional.
* Change xywh to xyxy encoding.
* Adding parameters and reusing objects in constructor.
* Temporarily inherit from Retina to avoid dup code.
* Implement forward methods + temp workarounds to inherit from retina.
* Inherit more methods from retinanet.
* Fix type error.
* Add Regression loss.
* Fixing JIT issues.
* Change JIT workaround to minimize new code.
* Fixing initialization bug.
* Add classification loss.
* Update todos.
* Add weight loading support.
* Support SSD512.
* Change kernel_size to get output size 1x1
* Add xavier init and refactoring.
* Adding unit-tests and fixing JIT issues.
* Add a test for dbox generator.
* Remove unnecessary import.
* Workaround on GeneralizedRCNNTransform to support fixed size input.
* Remove unnecessary random calls from the test.
* Remove more rand calls from the test.
* change mapping and handling of empty labels
* Fix JIT warnings.
* Speed up loss.
* Convert 0-1 dboxes to original size.
* Fix warning.
* Fix tests.
* Update comments.
* Fixing minor bugs.
* Introduce a custom DBoxMatcher.
* Minor refactoring
* Move extra layer definition inside feature extractor.
* handle no bias on init.
* Remove fixed image size limitation
* Change initialization values for bias of classification head.
* Refactoring and update test file.
* Adding ResNet backbone.
* Minor refactoring.
* Remove inheritance of retina and general refactoring.
* SSD should fix the input size.
* Fixing messages and comments.
* Silently ignoring exception if test-only.
* Update comments.
* Update regression loss.
* Restore Xavier init everywhere, update the negative sampling method, change the clipping approach.
* Fixing tests.
* Refactor to move the losses from the Head to the SSD.
* Removing resnet50 ssd version.
* Adding support for best performing backbone and its config.
* Refactor and clean up the API.
* Fix lint
* Update todos and comments.
* Adding RandomHorizontalFlip and RandomIoUCrop transforms.
* Adding necessary checks to our transforms.
* Adding RandomZoomOut.
* Adding RandomPhotometricDistort.
* Moving Detection transforms to references.
* Update presets
* fix lint
* leave compose and object
* Adding scaling for completeness.
* Adding params in the repr
* Remove unnecessary import.
* minor refactoring
* Remove unnecessary call.
* Give better names to DBox* classes
* Port num_anchors estimation in generator
* Remove rescaling and fix presets
* Add the ability to pass a custom head and refactoring.
* fix lint
* Fix unit-test
* Update todos.
* Change mean values.
* Change the default parameter of SSD to train the full VGG16 and remove the catch of exception for eval only.
* Adding documentation
* Adding weights and updating readmes.
* Update the model weights with a more performing model.
* Adding doc for head.
* Restore import.
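A matching usage sketch for the SSD300-VGG16 builder added here; weights are random for self-containment, and num_classes is shown to illustrate the public constructor parameter:

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Pass pretrained=True for the released COCO model; the internal transform
# resizes inputs to a fixed 300x300 as described in the commit.
model = ssd300_vgg16(pretrained=False, pretrained_backbone=False, num_classes=21)
model.eval()

with torch.no_grad():
    detections = model([torch.rand(3, 300, 300)])
print(sorted(detections[0].keys()))  # ['boxes', 'labels', 'scores']
```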
-
Prabhat Roy authored
* Refactored set_cell_anchors() in AnchorGenerator * Addressed review comment * Fixed test failure
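For reference, a small sketch of driving AnchorGenerator directly; the module path and ImageList usage follow the detection internals, so treat it as illustrative:

```python
import torch
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.models.detection.image_list import ImageList

# One feature map, 3 sizes x 3 aspect ratios = 9 anchors per spatial location.
anchor_gen = AnchorGenerator(sizes=((32, 64, 128),), aspect_ratios=((0.5, 1.0, 2.0),))

images = ImageList(torch.rand(1, 3, 320, 320), [(320, 320)])
feature_maps = [torch.rand(1, 256, 40, 40)]

anchors = anchor_gen(images, feature_maps)
print(anchors[0].shape)  # torch.Size([14400, 4]) -> 40 * 40 * 9 anchors
```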
-
- 29 Apr, 2021 1 commit
-
-
Prabhat Roy authored
-