Create your own video plot like the one above with this [Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/official/projects/movinet/plot_movinet_video_stream_predictions.ipynb).
## Description
Mobile Video Networks (MoViNets) are efficient video classification models
...
approach that performs redundant computation and limits temporal scope.
...
## History
- **2022-03-14** Support quantized TF Lite models and add/update Colab
  notebooks.
- **2021-07-12** Add TF Lite support and replace 3D stream models with
  mobile-friendly (2+1)D stream.
- **2021-05-30** Add streaming MoViNet checkpoints and examples.
...
different architecture. To download the old checkpoints, insert `_legacy` before
...
For convenience, we provide converted TF Lite models for inference on mobile
devices. See the [TF Lite Example](#tf-lite-example) to export and run your own
models. We also provide [quantized TF Lite binaries via TF Hub](https://tfhub.dev/s?deployment-format=lite&q=movinet).
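The on-device workflow with an exported `.tflite` file can be sketched with the standard `tf.lite.Interpreter` API. In the sketch below, a tiny stand-in Keras model replaces MoViNet so the example is self-contained; a downloaded MoViNet TF Lite binary would be loaded the same way via `tf.lite.Interpreter(model_path=...)`.

```python
import numpy as np
import tensorflow as tf

# Stand-in "video classifier" (NOT MoViNet): flattens a clip and
# scores 10 classes, just to demonstrate the TF Lite round trip.
stand_in = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 8, 8, 3)),  # (frames, height, width, channels)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(stand_in).convert()

# For a real MoViNet binary: tf.lite.Interpreter(model_path='model.tflite')
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

clip = np.random.rand(1, 4, 8, 8, 3).astype(np.float32)
interpreter.set_tensor(inp['index'], clip)
interpreter.invoke()
probs = interpreter.get_tensor(out['index'])  # per-class scores
```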
For reference, MoViNet-A0-Stream runs with a similar latency to
[MobileNetV3-Large]
...
backbone = movinet.Movinet(
    ...
    use_external_states=False,
)
model = movinet_model.MovinetClassifier(
    backbone, num_classes=600, output_states=False)

# Create your example input here.
# Refer to the paper for recommended input shapes.
This notebook uses [MoViNets (Mobile Video Networks)](https://github.com/tensorflow/models/tree/master/official/projects/movinet) to predict a human action in a streaming video and outputs a visualization of predictions on each frame.

Provide a video URL or upload your own to see how predictions change over time. All models can be run on CPU.

Pretrained models are provided by [TensorFlow Hub](https://tfhub.dev/google/collections/movinet/) and the [TensorFlow Model Garden](https://github.com/tensorflow/models/tree/master/official/projects/movinet), trained on [Kinetics 600](https://deepmind.com/research/open-source/kinetics) for video action classification. All models use TensorFlow 2 with Keras for inference and training. See the [research paper](https://arxiv.org/pdf/2103.11511.pdf) for more details.
Example output using [this gif](https://github.com/tensorflow/models/raw/f8af2291cced43fc9f1d9b41ddbf772ae7b0d7d2/official/projects/movinet/files/jumpingjack.gif) as input: