- 27 Sep, 2020 1 commit
-
-
Shixin authored
* factor out the make_divisible function and move round_filters to nn_layers
* modify SqueezeExcitation to add two additional parameters: divisible_by and gating_activation
* modify InvertedBottleneckBlock to add three boolean flags: use_depthwise, use_residual, and regularize_depthwise; add control for the depthwise activation and regularizer; remove expand_ratio from SqueezeExcitation
* add Conv2DBNBlock definition
* add MobileNet v2 and v3 implementations
* add MobileNet v1
* put mobilenet_base into the class body
* fix a type hint error
* InvertedBottleneckBlock differs between MobileNet and EfficientNet; make the necessary changes to handle both
* add target_backbone when calling InvertedBottleneckBlock
* add relu6 and hard_sigmoid
* add tests for MobileNet
* add MobileNet to the factory
* fix some typos; link the references to the architectures
* remove future imports

Co-authored-by: Shixin Luo <luoshixin@google.com>
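The make_divisible helper factored out above is the standard MobileNet-family utility for snapping channel counts to a hardware-friendly multiple, and hard_sigmoid is the cheap sigmoid approximation used by MobileNetV3 gating. A minimal pure-Python sketch of both (signatures assumed from the commit message, not copied from the repo):

```python
def make_divisible(value, divisor=8, min_value=None):
    """Round `value` to the nearest multiple of `divisor`.

    Channel counts in MobileNet-style models are kept divisible by 8
    for hardware efficiency; the result is never allowed to drop more
    than 10% below the original value.
    """
    if min_value is None:
        min_value = divisor
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    # Make sure rounding down does not reduce the value by more than 10%.
    if new_value < 0.9 * value:
        new_value += divisor
    return new_value


def hard_sigmoid(x):
    # hard_sigmoid(x) = relu6(x + 3) / 6, a piecewise-linear
    # approximation of sigmoid used as a SqueezeExcitation gate.
    return min(max(x + 3.0, 0.0), 6.0) / 6.0
```

In the actual layers these operate on tensors via tf.nn.relu6; the scalar version here just illustrates the arithmetic.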
-
- 08 Sep, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 330540132
-
- 04 Sep, 2020 1 commit
-
-
Abdullah Rashwan authored
PiperOrigin-RevId: 330022489
-
- 03 Sep, 2020 1 commit
-
-
Abdullah Rashwan authored
PiperOrigin-RevId: 329988482
-
- 02 Sep, 2020 2 commits
-
-
Zhenyu Tan authored
PiperOrigin-RevId: 329763594
-
Abdullah Rashwan authored
PiperOrigin-RevId: 329754787
-
- 27 Aug, 2020 1 commit
-
-
Abdullah Rashwan authored
PiperOrigin-RevId: 328803102
-
- 26 Aug, 2020 4 commits
-
-
Ruomei Yan authored
-
Ruomei Yan authored
-
Ruomei Yan authored
1. the dataset_num_private_threads flag; 2. clustering does not support fp16 or mixed-precision training
-
Ruomei Yan authored
-
- 13 Aug, 2020 1 commit
-
-
Allen Wang authored
PiperOrigin-RevId: 326534159
-
- 12 Aug, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 326286926
-
- 05 Aug, 2020 3 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 325115012
-
Allen Wang authored
PiperOrigin-RevId: 325093611
-
Dan Holtmann-Rice authored
PiperOrigin-RevId: 325088513
-
- 04 Aug, 2020 1 commit
-
-
A. Unique TensorFlower authored
For object detection models, remove rpn_head.anchors_per_location from the config and use anchor.num_scales * len(anchor.aspect_ratios) to compute anchors_per_location.
PiperOrigin-RevId: 324878153
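The derivation this commit switches to is simple arithmetic: each feature-map location gets one anchor per (scale, aspect ratio) pair. A minimal sketch, with the function name chosen here for illustration (the config fields num_scales and aspect_ratios come from the commit message):

```python
def anchors_per_location(num_scales, aspect_ratios):
    """Number of anchors generated at each feature-map location.

    Replaces the old explicit rpn_head.anchors_per_location config
    value: one anchor per (octave scale, aspect ratio) combination.
    """
    return num_scales * len(aspect_ratios)


# Example: 3 octave scales and aspect ratios {0.5, 1.0, 2.0}
# yield 9 anchors per location, the common RetinaNet default.
```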
-
- 03 Aug, 2020 1 commit
-
-
Francois Chollet authored
PiperOrigin-RevId: 324625967
-
- 31 Jul, 2020 2 commits
-
-
Hongkun Yu authored
PiperOrigin-RevId: 324258617
-
Srihari Humbarwadi authored
-
- 30 Jul, 2020 1 commit
-
-
Tomer Kaftan authored
Pre-emptively disable the KerasTensors refactoring for the detection models in tensorflow models/official/vision/detection, because they rely on several unsupported things that will stop working entirely when the refactoring goes live. Specifically:
* The custom layers implement `__call__` instead of `call` and rely on manually entering the Keras backend graph.
* The vision models try to use `tf.while_loop` as Keras op layers during functional API construction, which is unsupported.
Updating the models to avoid this would subtly change the variable names and break the pre-existing tf1-style name-based checkpoints, so for now we will just disable the KerasTensors refactoring for these models.
PiperOrigin-RevId: 323937426
-
- 24 Jul, 2020 1 commit
-
-
Srihari Humbarwadi authored
- Fixed `intermediate_scale` in `Anchor`
- Fixed "ratio" and "divisible" in the docstring
-
- 20 Jul, 2020 1 commit
-
-
Tomer Kaftan authored
Make the hack in official/vision/detection models that enters the backend Keras graph stop applying once we enable the Functional API KerasTensors refactoring. As a workaround for the tf op layer conversion being fragile, the detection models have to explicitly enter the Keras backend graph. When we enable the KerasTensors refactoring of the Functional API internals, the op layer conversion will be much more reliable and this hack will not be necessary. In addition, the hack actually causes the models to break when we enable the refactoring (because it causes tensors to leak out of a graph). So, this CL changes the existing hack to stop applying once we have enabled the KerasTensors refactoring.
PiperOrigin-RevId: 322229802
-
- 17 Jul, 2020 1 commit
-
-
Allen Wang authored
PiperOrigin-RevId: 321855387
-
- 15 Jul, 2020 1 commit
-
-
Jin Young Sohn authored
PiperOrigin-RevId: 321398549
-
- 13 Jul, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 321025013
-
- 01 Jul, 2020 1 commit
-
-
Darien Schettler authored
supportted --> supported
-