Commit 4e95c874 authored by A. Unique TensorFlower, committed by TF Object Detection Team

Fixing documentation on how to use an object detection model on Android to take metadata into account

PiperOrigin-RevId: 416012252
parent 642238de
@@ -10,12 +10,12 @@
devices. It enables on-device machine learning inference with low latency and a
small binary size. TensorFlow Lite uses many techniques for this such as
quantized kernels that allow smaller and faster (fixed-point math) models.
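To make the fixed-point idea concrete, here is a small sketch of the affine quantization scheme that quantized kernels rely on. The scale and zero point below (1/128 and 128) are made-up illustration values chosen so the arithmetic is exact, not parameters read from any real model:

```python
import numpy as np

# Affine quantization: real_value ~= scale * (quantized_value - zero_point).
# SCALE and ZERO_POINT are illustrative, chosen for exact arithmetic.
SCALE = 1.0 / 128
ZERO_POINT = 128

def quantize(x):
    """Map float32 values into a uint8 fixed-point representation."""
    q = np.round(x / SCALE) + ZERO_POINT
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q):
    """Recover approximate float values from the uint8 representation."""
    return SCALE * (q.astype(np.float32) - ZERO_POINT)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(x)
print(q)              # [  0 128 192 255]
print(dequantize(q))  # approximately [-1.0, 0.0, 0.5, 0.9921875]
```

Note that 1.0 saturates at 255 and round-trips to 0.9921875: quantization trades a little accuracy for the smaller, faster integer kernels mentioned above.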
For this section, you will need to build
[TensorFlow from source](https://www.tensorflow.org/install/install_sources) to
get the TensorFlow Lite support for the SSD model. At this time only SSD models
are supported; models like faster_rcnn are not. You will also need to install
the
[bazel build tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android#bazel).
To make these commands easier to run, let’s set up some environment variables:
@@ -96,7 +96,17 @@
bazel run -c opt tensorflow/lite/python:tflite_convert -- \
--allow_custom_ops
```
## Adding Metadata to the model
To make it easier to use tflite models on mobile, you will need to add
[metadata](https://www.tensorflow.org/lite/convert/metadata) to your model and
also
[pack](https://www.tensorflow.org/lite/convert/metadata#pack_metadata_and_associated_files_into_the_model)
the associated labels file into it.
If you need more information, this process is also explained in the
[Metadata writer Object detectors documentation](https://www.tensorflow.org/lite/convert/metadata_writer_tutorial#object_detectors).
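Since packing stores the labels file inside the .tflite file itself as an appended zip archive, you can sanity-check a packed model with ordinary zip tooling. The sketch below simulates that layout with stand-in model bytes (an assumption for illustration, not a real flatbuffer):

```python
import io
import zipfile

# Stand-in bytes for a real .tflite flatbuffer (assumption: any opaque
# content works for this illustration).
packed = io.BytesIO(b"\x00" * 64)

# Packing appends the associated files to the model as a zip archive,
# which is what the metadata "pack" step does with the labels file.
with zipfile.ZipFile(packed, "a") as z:
    z.writestr("labelmap.txt", "person\ncar\ndog\n")

# The packed model is still one file, but its associated files can be
# listed and read back with standard zip tooling.
names = zipfile.ZipFile(packed).namelist()
labels = zipfile.ZipFile(packed).read("labelmap.txt").decode().split()
print(names)   # ['labelmap.txt']
print(labels)  # ['person', 'car', 'dog']
```

This is why, in the steps below, no separate labelmap.txt needs to be copied to the app's assets: the labels travel inside the model file.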
## Running our model on Android
To run our TensorFlow Lite model on device, we will use Android Studio to build
and run the TensorFlow Lite detection example with the new model. The example is
@@ -119,8 +129,8 @@
cp /tmp/tflite/detect.tflite \
$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
```
It's important to note that the labels file should be packed into the model (as
mentioned previously).
We will now edit the gradle build file to use these assets. First, open the
`build.gradle` file
@@ -128,17 +138,15 @@
out the model download script to avoid your assets being overwritten: `// apply
from:'download_model.gradle'`
If your model is named `detect.tflite`, the example will use it automatically as
long as it has been properly copied into the base assets directory. If you need
to use a custom path or filename, open up the
$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java
file in a text editor and find the definition of TF_OD_API_MODEL_FILE. Note that
if your model is quantized, the flag TF_OD_API_IS_QUANTIZED should be set to
true, and if your model is floating point, it should be set to false. This new
section of DetectorActivity.java should now look as follows for a quantized
model:
```java
private static final boolean TF_OD_API_IS_QUANTIZED = true;
...
@@ -92,27 +92,15 @@
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
converter.representative_dataset = <...>
```
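The `representative_dataset` above is elided; its expected shape is a callable or generator yielding lists of sample input tensors for calibration. A minimal sketch, assuming a 300x300 RGB input and using random arrays purely for illustration (real calibration should iterate over genuine, preprocessed training images):

```python
import numpy as np

# Hypothetical calibration generator: yields a few batches shaped like
# the SSD model's input (1, 300, 300, 3) -- the shape is an assumption.
def representative_dataset(num_samples=10, size=300):
    rng = np.random.default_rng(0)
    for _ in range(num_samples):
        # Replace the random array with real, preprocessed images so the
        # quantizer observes realistic activation ranges.
        yield [rng.random((1, size, size, 3), dtype=np.float32)]

samples = list(representative_dataset(num_samples=2))
print(len(samples), samples[0][0].shape)  # 2 (1, 300, 300, 3)
```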
### Step 3: Add Metadata to the model
To make it easier to use tflite models on mobile, you will need to add
[metadata](https://www.tensorflow.org/lite/convert/metadata) to your model and
also
[pack](https://www.tensorflow.org/lite/convert/metadata#pack_metadata_and_associated_files_into_the_model)
the associated labels file into it. If you need more information, this process
is also explained in the
[Image classification sample](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/metadata).
Use the following code to create the metadata:
```python
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH), input_norm_mean=[0],
    input_norm_std=[255], label_file_paths=[_TFLITE_LABEL_PATH])
writer_utils.save_file(writer.populate(), _TFLITE_MODEL_WITH_METADATA_PATH)
```
See the TFLite Metadata Writer API
[documentation](https://www.tensorflow.org/lite/convert/metadata_writer_tutorial#object_detectors)
for more details.
## Running our model on Android
@@ -142,9 +130,9 @@
that support API >= 21. Additional details are available on the
[TensorFlow Lite example page](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android).
Next we need to point the app to our new detect.tflite file. Specifically, we
will copy our TensorFlow Lite flatbuffer to the app assets directory with the
following command:
```shell
mkdir $TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
@@ -152,21 +140,30 @@
cp /tmp/tflite/detect.tflite \
$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/assets
```
It's important to note that the labels file should be packed into the model (as
mentioned in Step 3).
We will now edit the gradle build file to use these assets. First, open the
`build.gradle` file
`$TF_EXAMPLES/lite/examples/object_detection/android/app/build.gradle`. Comment
out the model download script to avoid your assets being overwritten: `// apply
from:'download_model.gradle'`
If your model is named `detect.tflite`, the example will use it automatically as
long as it has been properly copied into the base assets directory. If you need
to use a custom path or filename, open up the
$TF_EXAMPLES/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java
file in a text editor and find the definition of TF_OD_API_MODEL_FILE. Note that
if your model is quantized, the flag TF_OD_API_IS_QUANTIZED should be set to
true, and if your model is floating point, it should be set to false. This new
section of DetectorActivity.java should now look as follows for a quantized
model:
```shell
private static final boolean TF_OD_API_IS_QUANTIZED = true;
private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
private static final String TF_OD_API_LABELS_FILE = "labels_list.txt";
```
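As an illustration of what the TF_OD_API_IS_QUANTIZED flag controls: quantized models are fed raw uint8 pixels, while float models are fed mean/std-normalized float values. A sketch of that switch, where the 127.5 mean and std are illustrative assumptions rather than values read from the example app:

```python
import numpy as np

# Hypothetical input preprocessing switch: quantized models consume raw
# uint8 pixels; float models consume normalized float32 values. The
# 127.5 mean/std defaults are illustrative assumptions.
def preprocess(pixels, is_quantized, mean=127.5, std=127.5):
    if is_quantized:
        return pixels.astype(np.uint8)
    return (pixels.astype(np.float32) - mean) / std

pixels = np.array([0, 127, 255])
print(preprocess(pixels, is_quantized=True))   # [  0 127 255]
print(preprocess(pixels, is_quantized=False))  # roughly [-1.0, -0.004, 1.0]
```

Setting the flag wrong therefore hands the interpreter tensors of the wrong type and range, which is why the example crashes or produces garbage detections when it is mismatched.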
Once you’ve copied the TensorFlow Lite model and edited the gradle build script
to not use the downloaded assets, you can build and deploy the app using the
...
@@ -84,6 +84,7 @@
A local evaluation job can be run with the following command:
PIPELINE_CONFIG_PATH={path to pipeline config file}
MODEL_DIR={path to model directory}
CHECKPOINT_DIR=${MODEL_DIR}
python object_detection/model_main_tf2.py \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --model_dir=${MODEL_DIR} \
@@ -151,6 +152,7 @@
launched using the following command:
PIPELINE_CONFIG_PATH={path to pipeline config file}
MODEL_DIR={path to model directory}
CHECKPOINT_DIR=${MODEL_DIR}
python object_detection/model_main_tf2.py \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --model_dir=${MODEL_DIR} \
...