"projects/DeepLab/deeplab/resnet.py" did not exist on "5b3792fc3ef9ab6a6f8f30634ab2e52fb0941af3"
- 22 Sep, 2020 1 commit
-
-
Philip Meier authored
* partially enable mypy for .models
* fix existing errors
* ignore error instead of using Union
- 05 May, 2020 1 commit
Bisakh Mondal authored
- 31 Mar, 2020 1 commit
Philip Meier authored
* remove sys.version_info == 2
* remove sys.version_info < 3
* remove from __future__ imports
- 12 Mar, 2020 1 commit
NVS Abhilash authored
- 10 Mar, 2020 1 commit
eellison authored
* fix googlenet no aux logits
* small fix
Co-authored-by: eellison <eellison@fb.com>
- 04 Mar, 2020 1 commit
Philip Meier authored
- 31 Oct, 2019 1 commit
hx89 authored
* quantizable googlenet
* Minor improvements
* Rename basic_conv2d to conv_block plus additional fixes
* More renamings and fixes
* Bugfix
* Fix missing import for mypy
* Add pretrained weights
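For reference, a minimal sketch of how the quantizable variant can be loaded from torchvision's quantization namespace; the keyword names shown here (`pretrained`, `quantize`) have changed across torchvision releases, so treat them as an assumption tied to older builds:

```python
import torch
from torchvision.models.quantization import googlenet

# quantize=True returns the int8 model with quantized pretrained weights;
# quantize=False would return the float "quantizable" model, ready for
# post-training quantization or QAT.
model = googlenet(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```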
- 27 Sep, 2019 1 commit
eellison authored
* make googlenet scriptable
* Remove typing import in favor of torch.jit.annotations
* add inceptionnet
* flake fixes
* fix assert true
* add import division for torchscript
* fix script compilation
* fix flake, py2 division error
* fix py2 division error
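A minimal sketch of what "scriptable" buys here, assuming a torchvision build in which this change has landed:

```python
import torch
from torchvision import models

# Once the model avoids Python constructs the TorchScript compiler rejects,
# it can be compiled ahead of time and saved as a standalone archive
# (loadable later via torch.jit.load, e.g. from C++).
model = models.googlenet(pretrained=True).eval()
scripted = torch.jit.script(model)
scripted.save("googlenet_scripted.pt")
```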
- 07 Aug, 2019 1 commit
Myosaki authored
`self.fc1(x)` converts the shape of `x` to "N x 1024", and `self.fc2(x)` converts it to "N x num_classes". Adding `print(x.shape)` under each comment line produces the following console output (batch size 1):
```text
torch.Size([1, 2048])
torch.Size([1, 1024])
torch.Size([1, 1024])
torch.Size([1, 1024])
torch.Size([1, 1000])
```
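For context, a condensed sketch of the fully connected tail being described, printing the shape after each step; the layer names and sizes here are illustrative assumptions, not the file's exact code:

```python
import torch
import torch.nn.functional as F
from torch import nn

class AuxHeadSketch(nn.Module):
    """Hypothetical stand-in for the auxiliary classifier's FC tail."""

    def __init__(self, num_classes: int = 1000) -> None:
        super().__init__()
        self.fc1 = nn.Linear(2048, 1024)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.flatten(x, 1)
        print(x.shape)  # torch.Size([1, 2048])
        x = F.relu(self.fc1(x), inplace=True)
        print(x.shape)  # torch.Size([1, 1024]) -- fc1 maps 2048 -> 1024
        x = F.dropout(x, 0.7, training=self.training)
        print(x.shape)  # torch.Size([1, 1024])
        x = self.fc2(x)
        print(x.shape)  # torch.Size([1, 1000]) -- fc2 maps 1024 -> num_classes
        return x

# Batch size 1; a 128 x 4 x 4 feature map flattens to 2048 values.
AuxHeadSketch()(torch.randn(1, 128, 4, 4))
```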
- 19 Jul, 2019 1 commit
apache2046 authored
Fix the old flatten method, which used size(0) to calculate the batch size; the old method introduced a Gather operation in the ONNX output, which failed to parse in TensorRT 5.0 (#1134)
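A minimal sketch contrasting the two flattening styles described (the exact lines in the file may differ):

```python
import torch

x = torch.randn(1, 1024, 1, 1)

# Old style: reads the batch size from a runtime tensor value, which the ONNX
# exporter records via shape ops (Shape/Gather) that TensorRT 5.0 failed on.
flat_old = x.view(x.size(0), -1)

# Replacement: flatten everything after the batch dimension directly, so the
# exported graph keeps a plain Flatten node.
flat_new = torch.flatten(x, 1)

assert flat_old.shape == flat_new.shape == (1, 1024)
```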
- 18 Jun, 2019 1 commit
taylanbil authored
I grepped the repo for "Ouputs" and these were the only occurrences
- 30 Apr, 2019 1 commit
Philip Meier authored
* added progress flag to model getters
* flake8
* bug fix
* backward compatibility
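For example, with the keyword names used by the torchvision releases that still accepted the `pretrained` flag (treat them as an assumption here):

```python
from torchvision import models

# progress=True shows a download bar while the pretrained weights are
# fetched; progress=False keeps the download silent.
model = models.googlenet(pretrained=True, progress=True)
```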
- 05 Apr, 2019 1 commit
Francisco Massa authored
- 04 Apr, 2019 1 commit
Sepehr Sameni authored
* add aux_logits support to inception (related to pytorch/pytorch#18668)
* instantiate InceptionAux only when requested (related to pytorch/pytorch#18668)
* revert googlenet
* support aux_logits in pretrained models
* return namedtuple when aux_logits is True
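A usage sketch of the namedtuple return path for the GoogLeNet variant, assuming torchvision's field order (logits, aux_logits2, aux_logits1); treat the exact names as an assumption:

```python
import torch
from torchvision import models

# With aux_logits=True the two auxiliary classifiers are kept, and in
# training mode the forward pass returns a namedtuple rather than a tensor.
model = models.googlenet(aux_logits=True)
model.train()

out = model(torch.randn(2, 3, 224, 224))
main_logits, aux2, aux1 = out  # unpacks like a regular tuple
print(main_logits.shape, aux2.shape, aux1.shape)

# In eval mode only the main logits come back.
model.eval()
with torch.no_grad():
    logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```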
- 29 Mar, 2019 1 commit
Michael Kösel authored
* Match TensorFlow's implementation of GoogLeNet
* just disable the branch when pretrained is true
* don't use legacy code
- 26 Mar, 2019 1 commit
ekka authored
- 09 Mar, 2019 1 commit
ekka authored
* Added dimensions in the comments: the update provides the dimensions of the processed data, following the style of the inceptionV3 implementation.
* Changed docs and comments: updated the doc with the argument `transform_input` and modified comments to match the inceptionV3 style.
- 07 Mar, 2019 1 commit
Michael Kösel authored
* Add GoogLeNet (Inception v1)
* Fix missing padding
* Add missing ReLU to aux classifier
* Add batch-normalized version of GoogLeNet
* Use ceil_mode instead of padding and initialize weights using "xavier"
* Match BVLC GoogLeNet zero initialization of classifier
* Small cleanup
* use adaptive avg pool
* adjust network to match TensorFlow
* Update url of pre-trained model and add classification results on ImageNet
* Bugfix that improves performance by 1 point