ModelZoo / ResNet50_tensorflow / Commits

Commit 4e95c874, authored Dec 13, 2021 by A. Unique TensorFlower, committed by TF Object Detection Team on Dec 13, 2021

fixing documentation on how to use object detection model on Android to take into account metadata

PiperOrigin-RevId: 416012252
parent 642238de
Changes: 3 changed files with 56 additions and 49 deletions (+56 / -49)

- research/object_detection/g3doc/running_on_mobile_tensorflowlite.md (+27 / -19)
- research/object_detection/g3doc/running_on_mobile_tf2.md (+27 / -30)
- research/object_detection/g3doc/tf2_training_and_evaluation.md (+2 / -0)
research/object_detection/g3doc/running_on_mobile_tensorflowlite.md

...
@@ -10,12 +10,12 @@ devices. It enables on-device machine learning inference with low latency and a
small binary size. TensorFlow Lite uses many techniques for this such as
quantized kernels that allow smaller and faster (fixed-point math) models.

For this section, you will need to build
[TensorFlow from source](https://www.tensorflow.org/install/install_sources)
to get the TensorFlow Lite support for the SSD model. At this time, only SSD
models are supported; models like faster_rcnn are not. You will also need to
install the
[bazel build tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android#bazel).
To make these commands easier to run, let’s set up some environment variables:
...
@@ -96,7 +96,17 @@ bazel run -c opt tensorflow/lite/python:tflite_convert -- \
  --allow_custom_ops
```

-# Running our model on Android
+## Adding Metadata to the model
+
+To make it easier to use tflite models on mobile, you will need to add
+[metadata](https://www.tensorflow.org/lite/convert/metadata) to your model and
+also [pack](https://www.tensorflow.org/lite/convert/metadata#pack_metadata_and_associated_files_into_the_model)
+the associated labels file to it. If you need more information, this process is
+also explained in the
+[Metadata writer Object detectors documentation](https://www.tensorflow.org/lite/convert/metadata_writer_tutorial#object_detectors).
+
+## Running our model on Android
To run our TensorFlow Lite model on device, we will use Android Studio to build
and run the TensorFlow Lite detection example with the new model. The example is
...
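For reference, the metadata-and-label packing that the new section links to can be scripted with the TFLite Support library. Below is a minimal sketch, assuming `tflite-support` is installed; the file names and normalization values are placeholders, not taken from this commit:

```python
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

# Attach object-detection metadata to the converted model and pack the
# label file into it, so the Android example can consume it directly.
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file("detect.tflite"),    # converted model (placeholder path)
    input_norm_mean=[0], input_norm_std=[255],  # suits a uint8 [0, 255] input
    label_file_paths=["labelmap.txt"])          # labels get packed into the model
writer_utils.save_file(writer.populate(), "detect_with_metadata.tflite")
```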
@@ -119,8 +129,8 @@ cp /tmp/tflite/detect.tflite \
  $TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
```

-You will also need to copy your new labelmap `labelmap.txt` to the assets
-directory.
+It's important to notice that the labels file should be packed in the model (as
+mentioned previously).

We will now edit the gradle build file to use these assets. First, open the
`build.gradle` file
...
@@ -128,17 +138,15 @@ We will now edit the gradle build file to use these assets. First, open the
out the model download script to avoid your assets being overwritten:
`// apply from:'download_model.gradle'`

-If your model is named `detect.tflite`, and your labels file `labelmap.txt`, the
-example will use them automatically as long as they've been properly copied into
-the base assets directory. If you need to use a custom path or filename, open up
-the
-$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java
-file in a text editor and find the definition of TF_OD_API_LABELS_FILE. Update
-this path to point to your new label map file: "labels_list.txt". Note that if
-your model is quantized, the flag TF_OD_API_IS_QUANTIZED is set to true, and if
-your model is floating point, the flag TF_OD_API_IS_QUANTIZED is set to false.
-This new section of DetectorActivity.java should now look as follows for a
-quantized model:
+If your model is named `detect.tflite`, the example will use it automatically as
+long as it has been properly copied into the base assets directory. If you need
+to use a custom path or filename, open up the
+$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java
+file in a text editor and find the definition of TF_OD_API_MODEL_FILE. Note that
+if your model is quantized, the flag TF_OD_API_IS_QUANTIZED is set to true, and
+if your model is floating point, the flag TF_OD_API_IS_QUANTIZED is set to
+false. This new section of DetectorActivity.java should now look as follows for
+a quantized model:
```shell
private static final boolean TF_OD_API_IS_QUANTIZED = true;
```

...
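A side note on the TF_OD_API_IS_QUANTIZED flag used above: if you are unsure whether your converted model is quantized, one quick check is to inspect its input tensor dtype. A sketch, assuming TensorFlow is installed; `detect.tflite` is a placeholder path:

```python
import numpy as np
import tensorflow as tf

# Quantized detection models take uint8/int8 input; float models take float32.
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
input_dtype = interpreter.get_input_details()[0]["dtype"]
print("quantized" if input_dtype in (np.uint8, np.int8) else "float")
```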
research/object_detection/g3doc/running_on_mobile_tf2.md

...
@@ -92,27 +92,15 @@ converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
converter.representative_dataset = <...>
```
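The `<...>` placeholder above is left as it appears in the source. For illustration only: a representative dataset is a generator the converter calls to calibrate quantization ranges. A minimal sketch, where the input shape (1, 320, 320, 3) is an assumption rather than something this doc specifies:

```python
import numpy as np

def representative_dataset():
    # Use a few hundred real preprocessed training images in practice;
    # shape and dtype must match the model's input signature.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
```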
-### Step 3: Add Metadata
-
-The model needs to be packed with
-[TFLite Metadata](https://www.tensorflow.org/lite/convert/metadata) to enable
-easy integration into mobile apps using the
-[TFLite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/object_detector).
-This metadata helps the inference code perform the correct pre & post processing
-as required by the model. Use the following code to create the metadata.
-
-```python
-from tflite_support.metadata_writers import object_detector
-from tflite_support.metadata_writers import writer_utils
-
-writer = object_detector.MetadataWriter.create_for_inference(
-    writer_utils.load_file(_TFLITE_MODEL_PATH), input_norm_mean=[0],
-    input_norm_std=[255], label_file_paths=[_TFLITE_LABEL_PATH])
-writer_utils.save_file(writer.populate(), _TFLITE_MODEL_WITH_METADATA_PATH)
-```
-
-See the TFLite Metadata Writer API
-[documentation](https://www.tensorflow.org/lite/convert/metadata_writer_tutorial#object_detectors)
-for more details.
+### Step 3: add Metadata to the model
+
+To make it easier to use tflite models on mobile, you will need to add
+[metadata](https://www.tensorflow.org/lite/convert/metadata) to your model and
+also [pack](https://www.tensorflow.org/lite/convert/metadata#pack_metadata_and_associated_files_into_the_model)
+the associated labels file to it. If you need more information, this process is
+also explained in the
+[Image classification sample](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/metadata).
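Whichever of the two approaches above is followed, the packing can be verified by reading the metadata back. A minimal sketch using the TFLite Support metadata reader; the model filename is a placeholder:

```python
from tflite_support import metadata

# Display the packed metadata and the list of associated files;
# the labels file should appear in the packed-file list.
displayer = metadata.MetadataDisplayer.with_model_file("detect_with_metadata.tflite")
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())
```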
## Running our model on Android
...
@@ -142,9 +130,9 @@ the
that support API >= 21. Additional details are available on the
[TensorFlow Lite example page](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android).

-Next we need to point the app to our new detect.tflite file and give it the
-names of our new labels. Specifically, we will copy our TensorFlow Lite model
-with metadata to the app assets directory with the following command:
+Next we need to point the app to our new detect.tflite file. Specifically, we
+will copy our TensorFlow Lite flatbuffer to the app assets directory with the
+following command:

```shell
mkdir $TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
```
...
@@ -152,21 +140,30 @@ cp /tmp/tflite/detect.tflite \
  $TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
```

+It's important to notice that the labels file should be packed in the model (as
+mentioned on Step 3).

We will now edit the gradle build file to use these assets. First, open the
`build.gradle` file
`$TF_EXAMPLES/lite/examples/object_detection/android/app/build.gradle`. Comment
out the model download script to avoid your assets being overwritten:

-```shell
-// apply from: 'download_model.gradle'
-```
+`// apply from:'download_model.gradle'`

If your model is named `detect.tflite`, the example will use it automatically as
long as it has been properly copied into the base assets directory. If you need
to use a custom path or filename, open up the
$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java
file in a text editor and find the definition of TF_OD_API_MODEL_FILE.
-Update this path to point to your new model file.
+Note that if your model is quantized, the flag TF_OD_API_IS_QUANTIZED is set to
+true, and if your model is floating point, the flag TF_OD_API_IS_QUANTIZED is
+set to false. This new section of DetectorActivity.java should now look as
+follows for a quantized model:
+
+```shell
+private static final boolean TF_OD_API_IS_QUANTIZED = true;
+private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
+private static final String TF_OD_API_LABELS_FILE = "labels_list.txt";
+```

Once you’ve copied the TensorFlow Lite model and edited the gradle build script
to not use the downloaded assets, you can build and deploy the app using the
...
research/object_detection/g3doc/tf2_training_and_evaluation.md

...
@@ -84,6 +84,7 @@ A local evaluation job can be run with the following command:
PIPELINE_CONFIG_PATH={path to pipeline config file}
MODEL_DIR={path to model directory}
CHECKPOINT_DIR=${MODEL_DIR}
+MODEL_DIR={path to model directory}
python object_detection/model_main_tf2.py \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --model_dir=${MODEL_DIR} \
...
@@ -151,6 +152,7 @@ launched using the following command:
PIPELINE_CONFIG_PATH={path to pipeline config file}
MODEL_DIR={path to model directory}
CHECKPOINT_DIR=${MODEL_DIR}
+MODEL_DIR={path to model directory}
python object_detection/model_main_tf2.py \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --model_dir=${MODEL_DIR} \
...