Unverified Commit 965cc3ee authored by Ayushman Kumar, committed by GitHub

Merge pull request #7 from tensorflow/master

updated
parents 1f3247f4 1f685c54
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Compression with Neural Networks
This is a [TensorFlow](http://www.tensorflow.org/) model repo containing
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# DeepSpeech2 Model
## Overview
This is an implementation of the [DeepSpeech2](https://arxiv.org/pdf/1512.02595.pdf) model. The current implementation is based on the authors' [DeepSpeech code](https://github.com/PaddlePaddle/DeepSpeech) and the implementation in the [MLPerf Repo](https://github.com/mlperf/reference/tree/master/speech_recognition).
......
......@@ -209,6 +209,35 @@ def generate_dataset(data_dir):
speech_dataset = dataset.DeepSpeechDataset(train_data_conf)
return speech_dataset
def per_device_batch_size(batch_size, num_gpus):
"""For multi-gpu, batch-size must be a multiple of the number of GPUs.
Note that distribution strategy handles this automatically when used with
Keras. When using Estimator, we need to compute the per-GPU batch size ourselves.
Args:
batch_size: Global batch size to be divided among devices. This should be
equal to num_gpus times the single-GPU batch_size for multi-gpu training.
num_gpus: How many GPUs are used with DistributionStrategies.
Returns:
Batch size per device.
Raises:
ValueError: If batch_size is not divisible by the number of devices.
"""
if num_gpus <= 1:
return batch_size
remainder = batch_size % num_gpus
if remainder:
err = ('When running with multiple GPUs, batch size '
'must be a multiple of the number of available GPUs. Found {} '
'GPUs with a batch size of {}; try --batch_size={} instead.'
).format(num_gpus, batch_size, batch_size - remainder)
raise ValueError(err)
return int(batch_size / num_gpus)
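# Example: per_device_batch_size(128, num_gpus=4) returns 32, while a
# non-divisible global batch size such as 130 with 4 GPUs raises ValueError
# suggesting --batch_size=128 instead.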
def run_deep_speech(_):
"""Run deep speech training and eval loop."""
......@@ -257,8 +286,7 @@ def run_deep_speech(_):
model_dir=flags_obj.model_dir,
batch_size=flags_obj.batch_size)
per_replica_batch_size = distribution_utils.per_replica_batch_size(
flags_obj.batch_size, num_gpus)
per_replica_batch_size = per_device_batch_size(flags_obj.batch_size, num_gpus)
def input_fn_train():
return dataset.input_fn(
......
......@@ -169,6 +169,10 @@ under tensorflow/models. Please refer to the LICENSE for details.
## Change Logs
### March 26, 2020
* Supported EdgeTPU-DeepLab and EdgeTPU-DeepLab-slim on Cityscapes.
**Contributor**: Yun Long.
### November 20, 2019
* Supported MobileNetV3 large and small model variants on Cityscapes.
**Contributor**: Yukun Zhu.
......@@ -312,6 +316,6 @@ and Cityscapes.
Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, Kevin Murphy. <br />
[[link]](https://arxiv.org/abs/1712.00559). In ECCV, 2018.
16 **Searching for MobileNetV3**<br />
16. **Searching for MobileNetV3**<br />
Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. <br />
[[link]](https://arxiv.org/abs/1905.02244). In ICCV, 2019.
......@@ -18,7 +18,7 @@
import copy
import functools
import tensorflow as tf
import tensorflow.compat.v1 as tf
from tensorflow.contrib import slim as contrib_slim
from deeplab.core import nas_network
......@@ -31,10 +31,13 @@ from nets.mobilenet import mobilenet_v3
slim = contrib_slim
# Default end point for MobileNetv2.
# Default end point for MobileNetv2 (one-based indexing).
_MOBILENET_V2_FINAL_ENDPOINT = 'layer_18'
# Default end point for MobileNetv3.
_MOBILENET_V3_LARGE_FINAL_ENDPOINT = 'layer_17'
_MOBILENET_V3_SMALL_FINAL_ENDPOINT = 'layer_13'
# Default end point for EdgeTPU Mobilenet.
_MOBILENET_EDGETPU = 'layer_24'
def _mobilenet_v2(net,
......@@ -170,6 +173,29 @@ def mobilenet_v3_large_seg(net,
final_endpoint=_MOBILENET_V3_LARGE_FINAL_ENDPOINT)
def mobilenet_edgetpu(net,
depth_multiplier,
output_stride,
divisible_by=None,
reuse=None,
scope=None,
final_endpoint=None):
"""EdgeTPU version of mobilenet model for segmentation task."""
del divisible_by
del final_endpoint
conv_defs = copy.deepcopy(mobilenet_v3.V3_EDGETPU)
return _mobilenet_v3(
net,
depth_multiplier=depth_multiplier,
output_stride=output_stride,
divisible_by=8,
conv_defs=conv_defs,
reuse=reuse,
scope=scope, # the scope is 'MobilenetEdgeTPU'
final_endpoint=_MOBILENET_EDGETPU)
def mobilenet_v3_small_seg(net,
depth_multiplier,
output_stride,
......@@ -205,6 +231,7 @@ def mobilenet_v3_small_seg(net,
# A map from network name to network function.
networks_map = {
'mobilenet_v2': _mobilenet_v2,
'mobilenet_edgetpu': mobilenet_edgetpu,
'mobilenet_v3_large_seg': mobilenet_v3_large_seg,
'mobilenet_v3_small_seg': mobilenet_v3_small_seg,
'resnet_v1_18': resnet_v1_beta.resnet_v1_18,
......@@ -294,6 +321,7 @@ def mobilenet_v2_arg_scope(is_training=True,
# A map from network name to network arg scope.
arg_scopes_map = {
'mobilenet_v2': mobilenet_v2.training_scope,
'mobilenet_edgetpu': mobilenet_v2_arg_scope,
'mobilenet_v3_large_seg': mobilenet_v2_arg_scope,
'mobilenet_v3_small_seg': mobilenet_v2_arg_scope,
'resnet_v1_18': resnet_v1_beta.resnet_arg_scope,
......@@ -427,6 +455,7 @@ networks_to_feature_maps = {
# ImageNet pretrained versions of these models.
name_scope = {
'mobilenet_v2': 'MobilenetV2',
'mobilenet_edgetpu': 'MobilenetEdgeTPU',
'mobilenet_v3_large_seg': 'MobilenetV3',
'mobilenet_v3_small_seg': 'MobilenetV3',
'resnet_v1_18': 'resnet_v1_18',
......@@ -464,6 +493,7 @@ def _preprocess_zero_mean_unit_range(inputs, dtype=tf.float32):
_PREPROCESS_FN = {
'mobilenet_v2': _preprocess_zero_mean_unit_range,
'mobilenet_edgetpu': _preprocess_zero_mean_unit_range,
'mobilenet_v3_large_seg': _preprocess_zero_mean_unit_range,
'mobilenet_v3_small_seg': _preprocess_zero_mean_unit_range,
'resnet_v1_18': _preprocess_subtract_imagenet_mean,
......
......@@ -360,8 +360,8 @@
"version": "0.3.2"
},
"kernelspec": {
"display_name": "Python 2",
"name": "python2"
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
......
......@@ -73,6 +73,24 @@ xception71_dpc_cityscapes_trainval | Xception_71 | ImageNet <br> MS
In the table, **OS** denotes output stride.
Note that for MobileNet-v3 models, we use the following additional command-line flags:
```
--model_variant={ mobilenet_v3_large_seg | mobilenet_v3_small_seg }
--image_pooling_crop_size=769,769
--image_pooling_stride=4,5
--add_image_level_feature=1
--aspp_convs_filters=128
--aspp_with_concat_projection=0
--aspp_with_squeeze_and_excitation=1
--decoder_use_sum_merge=1
--decoder_filters=19
--decoder_output_is_logits=1
--image_se_uses_qsigmoid=1
--decoder_output_stride=8
--output_stride=32
```
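For concreteness, the sketch below shows how these flags might be attached to DeepLab's training script, here using the large variant. The `deeplab/train.py` entry point, the `--dataset`, `--train_logdir`, and `--dataset_dir` flags, and all paths are assumptions for illustration, not settings taken from this page.

```bash
# Hypothetical invocation; paths and the non-listed flags are placeholders.
python deeplab/train.py \
  --model_variant=mobilenet_v3_large_seg \
  --image_pooling_crop_size=769,769 \
  --image_pooling_stride=4,5 \
  --add_image_level_feature=1 \
  --aspp_convs_filters=128 \
  --aspp_with_concat_projection=0 \
  --aspp_with_squeeze_and_excitation=1 \
  --decoder_use_sum_merge=1 \
  --decoder_filters=19 \
  --decoder_output_is_logits=1 \
  --image_se_uses_qsigmoid=1 \
  --decoder_output_stride=8 \
  --output_stride=32 \
  --dataset=cityscapes \
  --train_logdir=/tmp/deeplab_mnv3/train \
  --dataset_dir=/path/to/cityscapes/tfrecord
```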
Checkpoint name | Eval OS | Eval scales | Left-right Flip | Multiply-Adds | Runtime (sec) | Cityscapes mIOU | File Size
-------------------------------------------------------------------------------------------------------------------------------- | :-------: | :-------------------------: | :-------------: | :-------------------: | :------------: | :----------------------------: | :-------:
[mobilenetv2_coco_cityscapes_trainfine](http://download.tensorflow.org/models/deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz) | 16 <br> 8 | [1.0] <br> [0.75:0.25:1.25] | No <br> Yes | 21.27B <br> 433.24B | 0.8 <br> 51.12 | 70.71% (val) <br> 73.57% (val) | 23MB
......@@ -82,7 +100,45 @@ Checkpoint name
[xception71_dpc_cityscapes_trainfine](http://download.tensorflow.org/models/deeplab_cityscapes_xception71_trainfine_2018_09_08.tar.gz) | 16 | [1.0] | No | 502.07B | - | 80.31% (val) | 445MB
[xception71_dpc_cityscapes_trainval](http://download.tensorflow.org/models/deeplab_cityscapes_xception71_trainvalfine_2018_09_08.tar.gz) | 8 | [0.75:0.25:2] | Yes | - | - | 82.66% (**test**) | 446MB
### EdgeTPU-DeepLab models on Cityscapes
EdgeTPU is Google's machine learning accelerator architecture for edge devices
(found in Coral devices and the Pixel 4's Neural Core). Leveraging neural
architecture search (NAS, also known as AutoML) algorithms,
[EdgeTPU-Mobilenet](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)
has been released, yielding higher hardware utilization, lower latency, and
better accuracy than MobileNet-v2/v3. We use EdgeTPU-Mobilenet as the backbone
and provide checkpoints that have been pretrained on the Cityscapes train_fine
set. We name these EdgeTPU-DeepLab models.
Checkpoint name | Network backbone | Pretrained dataset | ASPP | Decoder
-------------------- | :----------------: | :----------------: | :--: | :-----:
EdgeTPU-DeepLab | EdgeMobilenet-1.0 | ImageNet | N/A | N/A
EdgeTPU-DeepLab-slim | EdgeMobilenet-0.75 | ImageNet | N/A | N/A
For EdgeTPU-DeepLab-slim, the backbone feature extractor has depth multiplier =
0.75 and aspp_convs_filters = 128. We employ neither ASPP nor decoder modules,
which further reduces latency. We use the same train/eval flags as the
MobileNet-v2 DeepLab model; the flags changed for the EdgeTPU-DeepLab model are
listed below.
```
--decoder_output_stride=''
--aspp_convs_filters=256
--model_variant=mobilenet_edgetpu
```
For EdgeTPU-DeepLab-slim, also include the following flags.
```
--depth_multiplier=0.75
--aspp_convs_filters=128
```
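Putting the two blocks together, a sketch of the complete set of changed flags for EdgeTPU-DeepLab-slim would be the following (this combination is inferred from the lists above; all other flags are assumed to follow the MobileNet-v2 DeepLab setup):

```
--model_variant=mobilenet_edgetpu
--decoder_output_stride=''
--depth_multiplier=0.75
--aspp_convs_filters=128
```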
Checkpoint name | Eval OS | Eval scales | Cityscapes mIOU | Multiply-Adds | Simulator latency on Pixel 4 EdgeTPU
---------------------------------------------------------------------------------------------------- | :--------: | :---------: | :--------------------------: | :------------: | :----------------------------------:
[EdgeTPU-DeepLab](http://download.tensorflow.org/models/edgetpu-deeplab_2020_03_09.tar.gz) | 32 <br> 16 | [1.0] | 70.6% (val) <br> 74.1% (val) | 5.6B <br> 7.1B | 13.8 ms <br> 17.5 ms
[EdgeTPU-DeepLab-slim](http://download.tensorflow.org/models/edgetpu-deeplab-slim_2020_03_09.tar.gz) | 32 <br> 16 | [1.0] | 70.0% (val) <br> 73.2% (val) | 3.5B <br> 4.3B | 9.9 ms <br> 13.2 ms
## DeepLab models trained on ADE20K
......
......@@ -8,9 +8,9 @@ Tensorflow using one of the following commands:
```bash
# For CPU:
pip install tensorflow
pip install 'tensorflow==1.14'
# For GPU:
pip install tensorflow-gpu
pip install 'tensorflow-gpu==1.14'
```
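As an optional sanity check, you can confirm that a 1.x build is the one being imported:

```bash
# Should print a 1.x version, e.g. 1.14.0
python -c "import tensorflow as tf; print(tf.__version__)"
```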
### Protobuf
......
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# DELF: DEep Local Features
This project presents code for extracting DELF features, which were introduced
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
## Introduction
This is the code used for two domain adaptation papers.
......
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
Code for performing Hierarchical RL based on the following publications:
"Data-Efficient Hierarchical Reinforcement Learning" by
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Filtering Variational Objectives
This folder contains a TensorFlow implementation of the algorithms from
......@@ -208,4 +212,4 @@ This codebase comes with a number of tests to verify correctness, runnable via `
### Contact
This codebase is maintained by Dieterich Lawson, reachable via email at dieterichl@google.com. For questions and issues please open an issue on the tensorflow/models issues tracker and assign it to @dieterichlawson.
This codebase is maintained by Dieterich Lawson. For questions and issues please open an issue on the tensorflow/models issues tracker and assign it to @dieterichlawson.
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Global Objectives
The Global Objectives library provides TensorFlow loss functions that optimize
directly for a variety of objectives including AUC, recall at precision, and
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Show and Tell: A Neural Image Caption Generator
A TensorFlow implementation of the image-to-text model described in the paper:
......
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
**NOTE: For the most part, you will find a newer version of this code at [models/research/slim](https://github.com/tensorflow/models/tree/master/research/slim).** In particular:
* `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Learned Optimizer
Code for [Learned Optimizers that Scale and Generalize](https://arxiv.org/abs/1703.04813).
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
---
Code for the Memory Module as described
in "Learning to Remember Rare Events" by
Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# Learning Unsupervised Learning Rules
This repository contains code and weights for the learned update rule
presented in "Learning Unsupervised Learning Rules." At this time, this
code cannot meta-train the update rule.
### Structure
`run_eval.py` contains the main training loop. This constructs an op
that runs one iteration of the learned update rule and assigns the
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
# LexNET for Noun Compound Relation Classification
This is a [Tensorflow](http://www.tensorflow.org/) implementation of the LexNET
......
![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg)
![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen)
![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg)
<font size=4><b>Language Model on One Billion Word Benchmark</b></font>
<b>Authors:</b>
......