- 25 Sep, 2018 1 commit
pkulzc authored
* Merged commit includes the following changes:
  - 213899768 by Sergio Guadarrama: Fixes #3819.
  - 213493831 by Sergio Guadarrama: Internal change.
  - 212057654 by Sergio Guadarrama: Internal change.
  - 210747685 by Sergio Guadarrama: For FPN, when use_depthwise is set to true, use a slightly modified mobilenet v1 config.
  - 210128931 by Sergio Guadarrama: Allow user-defined current_step in NASNet.
  - 209092664 by Sergio Guadarrama: Add quantized fine-tuning / training / eval and export to slim image classifier binaries.
  - 207651347 by Sergio Guadarrama: Update mobilenet v1 docs to include revised tflite models.
  - 207165245 by Sergio Guadarrama: Internal change.
  - 207095064 by Sergio Guadarrama: Internal change.
  PiperOrigin-RevId: 213899768
* Update model_lib.py to fix the eval_spec name issue.
- 19 Jun, 2018 1 commit
Mark Sandler authored
2. Flag that allows preventing imagenet.py from downloading label_to_names from GitHub and/or dumping it into the training directory (which might be read-only).
3. Adds some comments about how decay steps are computed, since they are computed differently when there are clones vs. sync replicas.
4. Updates mobilenet.md to describe the training process using train_image_classifier.
5. Add citation for the TF-Slim model library.
PiperOrigin-RevId: 191955231
PiperOrigin-RevId: 193254125
PiperOrigin-RevId: 193371562
PiperOrigin-RevId: 194085628
PiperOrigin-RevId: 194857067
PiperOrigin-RevId: 196125653
PiperOrigin-RevId: 196589070
PiperOrigin-RevId: 199522873
PiperOrigin-RevId: 200351305
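The decay-step computation mentioned in item 3 can be sketched as follows. This is a hedged sketch, not code from the commit: the function name is hypothetical, and the halving of steps under sync replicas (where one global step aggregates gradients from `replicas_to_aggregate` workers) follows the usual slim `train_image_classifier` logic.

```python
def configure_decay_steps(num_samples_per_epoch, batch_size,
                          num_epochs_per_decay,
                          sync_replicas=False, replicas_to_aggregate=1):
    """Number of training steps between learning-rate decays.

    With clones, batch_size here is assumed to be the aggregate batch size
    across clones; with sync replicas, one global step consumes the batches
    of replicas_to_aggregate workers, so fewer steps cover the same epochs.
    """
    decay_steps = int(num_samples_per_epoch * num_epochs_per_decay / batch_size)
    if sync_replicas:
        decay_steps //= replicas_to_aggregate
    return decay_steps
```

For example, with the ImageNet train split (1,281,167 samples), batch size 32, and 2 epochs per decay, this yields 80072 steps, or 10009 when 8 synchronized replicas each contribute a batch per global step.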
- 15 May, 2018 1 commit
Haiyang Kong authored
* Make code more Pythonic. * Restore the indents.
- 01 May, 2018 1 commit
pkulzc authored
* Adding option for one_box_for_all_classes to the box_predictor. PiperOrigin-RevId: 192813444
* Extend to accept different ratios of conv channels. PiperOrigin-RevId: 192837477
* Remove inaccurate caveat from proto file. PiperOrigin-RevId: 192850747
* Add option to set dropout for the classification net in the weight-shared box predictor. PiperOrigin-RevId: 192922089
* Fix flakiness in testSSDRandomCropWithMultiClassScores due to randomness. PiperOrigin-RevId: 193067658
* Post-process now works again in train mode. PiperOrigin-RevId: 193087707
* Adding support for reading in logits as groundtruth labels and applying an optional temperature (scaling) before softmax in support of distillation. PiperOrigin-RevId: 193119411
* Add a util function to visualize a value histogram as a tf.summary.image. PiperOrigin-RevId: 193137342
* Do not add batch norm parameters to the final conv2d ops that predict box encodings and class scores in the weight-shared conv box predictor. This allows us to set a proper bias and force initial predictions to be background when using focal loss. PiperOrigin-RevId: 193204364
* Make sure the final layers are also resized proportional to conv_depth_ratio. PiperOrigin-RevId: 193228972
* Remove the deprecated batch_norm_trainable field from the SSD mobilenet v2 config. PiperOrigin-RevId: 193244778
* Updating coco evaluation metrics to allow for a batch of image info, rather than a single image. PiperOrigin-RevId: 193382651
* Update protobuf requirements to 3+ in installation docs. PiperOrigin-RevId: 193409179
* Add support for training keypoints. PiperOrigin-RevId: 193576336
* Fix data augmentation functions. PiperOrigin-RevId: 193737238
* Read the default batch size from the config file. PiperOrigin-RevId: 193959861
* Fixing a bug in the coco evaluator. PiperOrigin-RevId: 193974479
* Fix incorrect num_gt_boxes_per_image and num_det_boxes_per_image values; they should not use the expanded dim. PiperOrigin-RevId: 194122420
* Add option to evaluate any checkpoint (without requiring write access to that directory or overwriting any existing logs there). PiperOrigin-RevId: 194292198
* PiperOrigin-RevId: 190346687
* Expose the slim arg_scope function that computes keys, to enable testing. Add an is_training=None option to mobilenet arg_scopes; this allows users to set is_training from an outer scope. PiperOrigin-RevId: 190997959
* Add an option to not set slim arg_scope for the batch_norm is_training parameter. This enables users to set the is_training parameter from an outer scope. PiperOrigin-RevId: 191611934
* PiperOrigin-RevId: 191955231
* PiperOrigin-RevId: 193254125
* PiperOrigin-RevId: 193371562
* PiperOrigin-RevId: 194085628
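The background-bias trick behind the batch-norm removal above (RevId 193204364) can be sketched as follows. This is a minimal illustration, not the repo's code: the function name and the 0.01 prior are assumptions; the idea is that once batch norm no longer cancels the bias of the final class-prediction conv, that bias can be initialized so the sigmoid output equals a small background prior, which is the standard stabilization for focal loss.

```python
import math

def background_bias(prior_prob=0.01):
    """Bias b for the class-prediction conv such that sigmoid(b) == prior_prob.

    Initializing the final conv's bias this way forces every anchor's initial
    class prediction toward background, preventing the flood of easy negatives
    from dominating the focal loss in early training.
    """
    return -math.log((1.0 - prior_prob) / prior_prob)
```

For prior_prob = 0.01 this gives roughly -4.6, so all anchors start as near-certain background.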
- 16 Apr, 2018 1 commit
Shaoning Zeng authored
* Fix the 'could not satisfy explicit device' issue. * Remove a line unrelated to the fix.
- 21 Sep, 2017 1 commit
Neal Wu authored
- 31 Aug, 2017 1 commit
derekjchow authored
- 14 Jun, 2017 1 commit
g21589 authored
This patch assigns the dequeue node to inputs_device, so the "Ignoring device specification /device:GPU:X for node 'clone_X/fifo_queue_Dequeue'" message is no longer shown.
- 23 May, 2017 1 commit
Neal Wu authored
- 18 May, 2017 1 commit
Neal Wu authored
- 22 Apr, 2017 1 commit
Matt Rickard authored
Variable summaries and the learning rate are added elsewhere in the code. A quick search also shows that this function is never called.
- 20 Apr, 2017 1 commit
Matt Rickard authored
The flag description for the momentum flag states that it is `The momentum for the MomentumOptimizer and RMSPropOptimizer`; however, it is not actually used in the RMSPropOptimizer. Instead, a separate `rmsprop_momentum` flag was used. This deletes that flag for simplicity; it was not referenced anywhere else in the repo.
- 14 Mar, 2017 6 commits
- 30 Aug, 2016 1 commit
Nathan Silberman authored
- 27 Aug, 2016 1 commit
nathansilberman authored