- 22 Jan, 2020 1 commit
Mark Sandler authored
Internal cleanup (py2->py3) plus the following changes:
* 285513318 by Sergio Guadarrama: Adds a script for post-training quantization.
* 284222305 by Sergio Guadarrama: Modified squeeze-excite operation to accommodate tensors of undefined (NoneType) H/W.
* 282028343 by Sergio Guadarrama: Add MobilenetV3 and MobilenetEdgeTPU to the slim/nets_factory.
PiperOrigin-RevId: 289455329
Co-authored-by: Sergio Guadarrama <sguada@gmail.com>
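The squeeze-excite change (284222305) hinges on averaging over the spatial axes rather than pooling with a fixed window, so it works even when H/W are not known ahead of time. A minimal NumPy sketch of the idea; the function name, toy weights, and `se_ratio` are illustrative, not the actual slim implementation:

```python
import numpy as np

def squeeze_excite(x, se_ratio=0.25):
    """Sketch of squeeze-excite on an NHWC tensor.

    Averaging over axes (1, 2) needs no fixed pooling window, which is
    why it accommodates inputs whose H/W are undefined at build time.
    """
    c = x.shape[-1]
    hidden_dim = max(1, int(c * se_ratio))
    # Squeeze: global average over the spatial dimensions -> (N, 1, 1, C).
    squeezed = x.mean(axis=(1, 2), keepdims=True)
    # Excite: two toy fully connected layers (placeholder weights).
    w1 = np.full((c, hidden_dim), 0.1)
    w2 = np.full((hidden_dim, c), 0.1)
    hidden = np.maximum(squeezed @ w1, 0.0)        # ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid
    # Rescale channels; broadcasting handles any H/W.
    return x * gate
```

The same function handles inputs of different spatial sizes without modification, which is the property the commit restores for graphs with `None` H/W dimensions.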
- 12 Nov, 2019 1 commit
Mark Sandler authored
* 279978375 by Sergio Guadarrama: Pass s=2 to the expanded_conv block so it can apply the residual correctly in the case of fused convolutions. (Before, it relied on channel mismatch only.)
* 279788358 by Sergio Guadarrama: Update README to add mobilenet-edgetpu details.
* 279774392 by Sergio Guadarrama: Adds MobilenetV3-EdgeTpu definition.
* 278917344 by Sergio Guadarrama: Create visualwakewords dataset using slim scripts instead of custom scripts.
* 277940048 by Sergio Guadarrama: Internal changes to tf.contrib symbols.
PiperOrigin-RevId: 279978375
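The residual condition referenced in 279978375 can be sketched as follows (a hypothetical helper, not the actual expanded_conv code): a skip connection is only valid when the block preserves both spatial resolution and channel count, so checking channels alone misses the stride-2 case.

```python
def use_residual(stride, in_channels, out_channels):
    # A residual connection adds the block input to its output, so both
    # must have identical shapes: stride == 1 keeps H/W, and matching
    # channel counts keep C. Relying on a channel match alone would
    # wrongly enable the residual for stride-2 blocks whose channel
    # counts happen to coincide.
    return stride == 1 and in_channels == out_channels
```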
- 21 Oct, 2019 1 commit
pkulzc authored
* 275538818 by Sergio Guadarrama: Support grayscale input images in Slim model training.
* 275355841 by Sergio Guadarrama: Fixed cases where tf.TensorShape was constructed with float dimensions. This is a prerequisite for making TensorShape and Dimension stricter about the types of their arguments.
* 275131829 by Sergio Guadarrama: Updates mobilenet/README.md to be GitHub compatible, adds a V2+ reference to the mobilenet_v1.md file, and fixes invalid markdown.
PiperOrigin-RevId: 275538818
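The TensorShape fix in 275355841 amounts to coercing computed dimensions to ints before building shapes, since arithmetic like `height * scale` yields a float. A sketch with a hypothetical helper (not the actual slim code):

```python
def scaled_dims(height, width, scale):
    # Shape dimensions must be integers; passing height * scale (a float)
    # to a shape constructor fails once TensorShape/Dimension are strict
    # about argument types, so coerce explicitly.
    return int(height * scale), int(width * scale)
```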
- 25 Sep, 2018 1 commit
pkulzc authored
* Merged commit includes the following changes:
  * 213899768 by Sergio Guadarrama: Fixes #3819.
  * 213493831 by Sergio Guadarrama: Internal change.
  * 212057654 by Sergio Guadarrama: Internal change.
  * 210747685 by Sergio Guadarrama: For FPN, when use_depthwise is set to true, use a slightly modified MobileNet v1 config.
  * 210128931 by Sergio Guadarrama: Allow user-defined current_step in NASNet.
  * 209092664 by Sergio Guadarrama: Add quantized fine-tuning / training / eval and export to slim image classifier binaries.
  * 207651347 by Sergio Guadarrama: Update MobileNet v1 docs to include revised TFLite models.
  * 207165245 by Sergio Guadarrama: Internal change.
  * 207095064 by Sergio Guadarrama: Internal change.
  PiperOrigin-RevId: 213899768
* Update model_lib.py to fix the eval_spec name issue.
- 19 Jun, 2018 1 commit
Mark Sandler authored
2. Flag that allows preventing imagenet.py from downloading label_to_names from GitHub and/or dumping it into the training directory (which might be read-only).
3. Adds some comments about how decay steps are computed, since they are computed differently when there are clones vs. sync replicas.
4. Updates mobilenet.md to describe the training process using train_image_classifier.
5. Add citation for the TF-Slim model library.
PiperOrigin-RevId: 191955231
PiperOrigin-RevId: 193254125
PiperOrigin-RevId: 193371562
PiperOrigin-RevId: 194085628
PiperOrigin-RevId: 194857067
PiperOrigin-RevId: 196125653
PiperOrigin-RevId: 196589070
PiperOrigin-RevId: 199522873
PiperOrigin-RevId: 200351305
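The decay-steps comment from item 3 can be illustrated with a sketch (parameter names are assumptions, not the exact train_image_classifier code): decay steps are derived from epochs, but under sync replicas each global step aggregates several worker batches, so the step count shrinks accordingly.

```python
def decay_steps(num_samples_per_epoch, batch_size, num_epochs_per_decay,
                sync_replicas=False, replicas_to_aggregate=1):
    # Steps needed to cover num_epochs_per_decay epochs of data.
    steps = int(num_samples_per_epoch / batch_size * num_epochs_per_decay)
    if sync_replicas:
        # With synchronous replicas, one global step consumes
        # replicas_to_aggregate batches, so fewer steps span the
        # same number of epochs. (With clones, batches per step are
        # accounted for in batch_size instead.)
        steps = int(steps / replicas_to_aggregate)
    return steps
```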
- 15 May, 2018 1 commit
Haiyang Kong authored
* Make code more pythonic.
* Restore the indents.
- 01 May, 2018 1 commit
pkulzc authored
* Adding option for one_box_for_all_classes to the box_predictor. PiperOrigin-RevId: 192813444
* Extend to accept different ratios of conv channels. PiperOrigin-RevId: 192837477
* Remove inaccurate caveat from proto file. PiperOrigin-RevId: 192850747
* Add option to set dropout for the classification net in the weight shared box predictor. PiperOrigin-RevId: 192922089
* Fix flakiness in testSSDRandomCropWithMultiClassScores due to randomness. PiperOrigin-RevId: 193067658
* Post-process now works again in train mode. PiperOrigin-RevId: 193087707
* Adding support for reading in logits as groundtruth labels and applying an optional temperature (scaling) before softmax in support of distillation. PiperOrigin-RevId: 193119411
* Add a util function to visualize a value histogram as a tf.summary.image. PiperOrigin-RevId: 193137342
* Do not add batch norm parameters to the final conv2d ops that predict box encodings and class scores in the weight shared conv box predictor. This allows us to set a proper bias and force initial predictions to be background when using focal loss. PiperOrigin-RevId: 193204364
* Make sure the final layers are also resized proportionally to conv_depth_ratio. PiperOrigin-RevId: 193228972
* Remove the deprecated batch_norm_trainable field from the SSD MobileNet v2 config. PiperOrigin-RevId: 193244778
* Updating COCO evaluation metrics to allow for a batch of image info rather than a single image. PiperOrigin-RevId: 193382651
* Update protobuf requirements to 3+ in the installation docs. PiperOrigin-RevId: 193409179
* Add support for training keypoints. PiperOrigin-RevId: 193576336
* Fix data augmentation functions. PiperOrigin-RevId: 193737238
* Read the default batch size from the config file. PiperOrigin-RevId: 193959861
* Fix a bug in the COCO evaluator. PiperOrigin-RevId: 193974479
* Fix incorrect num_gt_boxes_per_image and num_det_boxes_per_image values; they should not use the expanded dim. PiperOrigin-RevId: 194122420
* Add option to evaluate any checkpoint (without requiring write access to that directory or overwriting any existing logs there). PiperOrigin-RevId: 194292198
* PiperOrigin-RevId: 190346687
* Expose the slim arg_scope function that computes keys, to enable testing. Add is_training=None option to mobilenet arg_scopes; this allows users to set is_training from an outer scope. PiperOrigin-RevId: 190997959
* Add an option to not set the slim arg_scope batch_norm is_training parameter. This enables users to set the is_training parameter from an outer scope. PiperOrigin-RevId: 191611934
* PiperOrigin-RevId: 191955231
* PiperOrigin-RevId: 193254125
* PiperOrigin-RevId: 193371562
* PiperOrigin-RevId: 194085628
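The distillation change (193119411) scales logits by a temperature before the softmax; a higher temperature flattens the teacher distribution. A minimal NumPy sketch of temperature-scaled softmax (not the Object Detection API implementation):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before normalizing; T > 1 softens
    # the distribution, which is useful when groundtruth logits are read
    # in as soft labels for distillation.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```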
- 16 Apr, 2018 1 commit
Shaoning Zeng authored
* Fix the 'could not satisfy explicit device' issue.
* Remove the line unrelated to the fix.
- 21 Sep, 2017 1 commit
Neal Wu authored
- 31 Aug, 2017 1 commit
derekjchow authored
- 14 Jun, 2017 1 commit
g21589 authored
This patch assigns the dequeue node to inputs_device, and it no longer shows the "Ignoring device specification /device:GPU:X for node 'clone_X/fifo_queue_Dequeue'" message.
- 23 May, 2017 1 commit
Neal Wu authored
- 18 May, 2017 1 commit
Neal Wu authored
- 22 Apr, 2017 1 commit
Matt Rickard authored
Variable summaries and the learning rate are added elsewhere in the code. A quick search also shows that this function is never called.
- 20 Apr, 2017 1 commit
Matt Rickard authored
The flag description for the momentum flag states that it is `The momentum for the MomentumOptimizer and RMSPropOptimizer`; however, it is not actually used in the RMSPropOptimizer. Instead, a separate `rmsprop_momentum` flag was used. This deletes that flag for simplicity; it was not referenced anywhere else in the repo.
- 14 Mar, 2017 6 commits
- 30 Aug, 2016 1 commit
Nathan Silberman authored
- 27 Aug, 2016 1 commit
nathansilberman authored