OpenDAS / vision · Commit bb261c5c
Add commands to run quantized model with pretrained weights (#1547)
Authored Nov 04, 2019 by hx89; committed by Francisco Massa, Nov 04, 2019
parent 9cdc8144
Showing 1 changed file with 21 additions and 0 deletions (+21 −0)
references/classification/README.md
@@ -67,3 +67,24 @@ For Mobilenet-v2, the model was trained with quantization aware training, the se
Training converges at about 10 epochs.
For post-training quantization, the device is set to CPU. For training, the device is set to CUDA.
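That device switch can be sketched as a small helper (a hypothetical `pick_device`, not part of `train_quantization.py`):

```python
import torch

def pick_device(post_training_quant: bool) -> torch.device:
    # Post-training quantization runs its int8 kernels on CPU (fbgemm backend),
    # so force CPU there; otherwise train on CUDA when a GPU is available.
    if post_training_quant or not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device("cuda")

print(pick_device(True))  # cpu
```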
### Command to evaluate quantized models using the pre-trained weights:
For all quantized models except inception_v3:
```
python references/classification/train_quantization.py --data-path='imagenet_full_size/' \
--device='cpu' --test-only --backend='fbgemm' --model='<model_name>'
```
For inception_v3, which expects input tensors of size N x 3 x 299 x 299, change the transforms of dataset_test in train.py to the following before running the command above:
```
dataset_test = torchvision.datasets.ImageFolder(
valdir,
transforms.Compose([
transforms.Resize(342),
transforms.CenterCrop(299),
transforms.ToTensor(),
normalize,
]))
```
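The geometry of that pipeline (shorter side resized to 342, then a 299 x 299 center crop; 342 ≈ 299 × 256/224, roughly the same resize-to-crop ratio the other models use) can be checked with plain arithmetic:

```python
def resize_then_center_crop(width, height, resize_short=342, crop=299):
    # Mimic transforms.Resize(342) + transforms.CenterCrop(299): scale so the
    # shorter side equals resize_short, then take a centered crop x crop box.
    scale = resize_short / min(width, height)
    rw, rh = round(width * scale), round(height * scale)
    left, top = (rw - crop) // 2, (rh - crop) // 2
    return (rw, rh), (left, top, left + crop, top + crop)

resized, box = resize_then_center_crop(500, 375)
print(resized, box)  # (456, 342) (78, 21, 377, 320)
```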