For MobilenetV2+ see [mobilenet/README.md](mobilenet/README.md).
# MobileNetV1
[MobileNets](https://arxiv.org/abs/1704.04861) are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings, and segmentation, similar to how other popular large-scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices with [TensorFlow Lite](https://www.tensorflow.org/lite).
MobileNets trade off between latency, size, and accuracy while comparing favorably with popular models from the literature.
...
The linked model tar files contain the following:
* Eval graph text protos (to be easily viewed)
* Frozen trained models
* Info file containing input and output information
* Converted [TensorFlow Lite](https://www.tensorflow.org/lite) flatbuffer model
Note that quantized model GraphDefs are still float models; they just have FakeQuantization
operations embedded to simulate quantization. These are converted by [TensorFlow Lite](https://www.tensorflow.org/lite)
to be fully quantized. The final effect of quantization can be seen by comparing the size of the frozen
fake-quantized graph to the size of the TFLite flatbuffer: the TFLite flatbuffer is about 1/4
the size.
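The roughly 4x reduction follows directly from the datatype change: a fully quantized TFLite flatbuffer stores weights as 8-bit integers, while the float GraphDef stores them as 32-bit floats. A minimal sketch of that size arithmetic (the weight count below is a made-up illustrative number, not MobileNet's actual parameter count):

```python
# Illustrative only: the weight count is hypothetical, not MobileNet's real shape.
FLOAT32_BYTES = 4  # float GraphDef stores each weight as a 32-bit float
INT8_BYTES = 1     # fully quantized flatbuffer stores each weight as an 8-bit int

def weight_storage_bytes(num_weights: int, bytes_per_weight: int) -> int:
    """Approximate serialized size from weight storage alone (ignores metadata)."""
    return num_weights * bytes_per_weight

num_weights = 4_200_000  # ballpark parameter count for a small mobile model

float_size = weight_storage_bytes(num_weights, FLOAT32_BYTES)
quant_size = weight_storage_bytes(num_weights, INT8_BYTES)

print(float_size // quant_size)  # prints 4: the quantized model is ~1/4 the size
```

In practice the measured ratio is close to, but not exactly, 4x, since both files also carry graph structure and metadata that do not shrink with the weights.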
For more information on the quantization techniques used here, see