- 22 Mar, 2018 2 commits
pkulzc authored
* Force cast of num_classes to integer. PiperOrigin-RevId: 188335318
* Updating config util to allow overwriting of cosine decay learning rates. PiperOrigin-RevId: 188338852
* Make box_list_ops.py and box_list_ops_test.py work with C API enabled. The C API has improved shape inference over the original Python code, which causes some previously working conds to fail; switching to smart_cond fixes this. Another effect of the improved shape inference is that one of the tested failures gets caught earlier, so the test was modified to reflect this. PiperOrigin-RevId: 188409792
* Fix parallel event file writing issue. Without this change, the event files might get corrupted when multiple evaluations are run in parallel. PiperOrigin-RevId: 188502560
* Deprecating the boolean flag from_detection_checkpoint. Replace with a string field fine_tune_checkpoint_type in train_config to provide extensibility. fine_tune_checkpoint_type can currently take the value `detection`, `classification`, or others when restore_map is overwritten. PiperOrigin-RevId: 188518685
* Automated g4 rollback of changelist 188502560. PiperOrigin-RevId: 188519969
* Introducing eval metrics specs for COCO mask metrics. This allows metrics to be computed in TensorFlow using the tf.learn Estimator. PiperOrigin-RevId: 188528485
* Minor fix to make object_detection/metrics/coco_evaluation.py Python 3 compatible. PiperOrigin-RevId: 188550683
* Updating eval_util to handle eval_metric_ops from multiple `DetectionEvaluator`s. PiperOrigin-RevId: 188560474
* Allow tensor input for new_height and new_width in resize_image. PiperOrigin-RevId: 188561908
* Fix typo in fine_tune_checkpoint_type name in trainer. PiperOrigin-RevId: 188799033
* Adding mobilenet feature extractor to object detection. PiperOrigin-RevId: 188916897
* Allow label maps to optionally contain an explicit background class with id zero. PiperOrigin-RevId: 188951089
* Fix boundary conditions in random_pad_to_aspect_ratio to ensure that min_scale is always less than max_scale. PiperOrigin-RevId: 189026868
* Fall back on the from_detection_checkpoint option if fine_tune_checkpoint_type isn't set. PiperOrigin-RevId: 189052833
* Add proper names for learning rate schedules so we don't see cryptic names on TensorBoard. PiperOrigin-RevId: 189069837
* Enforcing that all datasets are batched (and then unbatched in the model) with batch_size >= 1. PiperOrigin-RevId: 189117178
* Adding regularization to the total loss returned from DetectionModel.loss(). PiperOrigin-RevId: 189189123
* Standardize the names of loss scalars (for SSD, Faster R-CNN and R-FCN) in both training and eval so they can be compared on TensorBoard. Log localization and classification losses in evaluation. PiperOrigin-RevId: 189189940
* Remove negative test from box list ops test. PiperOrigin-RevId: 189229327
* Add an option to warm up the learning rate in the manual stepping schedule. PiperOrigin-RevId: 189361039
* Replace tf.contrib.slim.tfexample_decoder.LookupTensor with object_detection.data_decoders.tf_example_decoder.LookupTensor. PiperOrigin-RevId: 189388556
* Force regularization summary variables under specific family names. PiperOrigin-RevId: 189393190
* Automated g4 rollback of changelist 188619139. PiperOrigin-RevId: 189396001
* Remove the step 0 schedule since we do a hard check for it after cl/189361039. PiperOrigin-RevId: 189396697
* PiperOrigin-RevId: 189040463
* PiperOrigin-RevId: 189059229
* PiperOrigin-RevId: 189214402
* Make slim Python 3 compatible.
* Minor fixes.
* Add TargetAssignment summaries in a separate family. PiperOrigin-RevId: 189407487
* 1. Setting the `family` keyword arg prepends the summary names with the same prefix twice; adding the family suffix directly to the name gets rid of this problem. 2. Make sure the eval losses have the same name. PiperOrigin-RevId: 189434618
* Minor fixes to make object detection TF 1.4 compatible. PiperOrigin-RevId: 189437519
* Call the base of the mobilenet_v1 feature extractor under the right arg scope and set batchnorm is_training based on the value passed in the constructor. PiperOrigin-RevId: 189460890
* Automated g4 rollback of changelist 188409792. PiperOrigin-RevId: 189463882
* Update object detection syncing. PiperOrigin-RevId: 189601955
* Add an option to warm up the learning rate, hold it constant for a certain number of steps, and then cosine-decay it (a schedule of this shape is sketched below). PiperOrigin-RevId: 189606169
* Let the proposal feature extractor function in the faster_rcnn meta architectures return the activations (end_points). PiperOrigin-RevId: 189619301
* Fixed a bug which caused masks to be mostly zeros (caused by detection_boxes being in absolute coordinates if scale_to_absolute=True). PiperOrigin-RevId: 189641294
* Open sourcing MobileNetV2 + SSDLite. PiperOrigin-RevId: 189654520
* Remove unused files.
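The warmup-hold-cosine-decay option above (PiperOrigin-RevId: 189606169) is easiest to picture as a function of the global step. Below is a minimal Python sketch of a schedule with that shape; the constants and the function name are illustrative only, not the actual config fields or defaults.

```python
import math

def warmup_hold_cosine_lr(step, base_lr=0.04, warmup_steps=2000,
                          hold_steps=8000, total_steps=50000):
    """Illustrative warmup -> hold -> cosine-decay learning rate curve."""
    if step < warmup_steps:
        # Linear warmup from 0 up to the base learning rate.
        return base_lr * step / float(warmup_steps)
    if step < warmup_steps + hold_steps:
        # Hold the base rate constant for a fixed number of steps.
        return base_lr
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps - hold_steps) / float(
        total_steps - warmup_steps - hold_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * min(progress, 1.0)))

# Example: halfway through warmup the rate is half of base_lr.
assert abs(warmup_hold_cosine_lr(1000) - 0.02) < 1e-9
```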
cclauss authored
- 13 Mar, 2018 1 commit
Mark Sandler authored
* Internal change. PiperOrigin-RevId: 187042423
* Internal change. PiperOrigin-RevId: 187072380
* Open-source float and eight-bit fixed-point mobilenet_v1 training and eval scripts. PiperOrigin-RevId: 187106140
* Initial check-in for MobileNet V2. PiperOrigin-RevId: 187213595
* Allow configuring batch normalization decay and epsilon in MobileNet v1. PiperOrigin-RevId: 187425294
* Allow overriding NASNet model HParams. This API change lets users pass their own configs to the building functions, which makes these APIs much more customizable for end users. It also removes the use_aux_head argument from the model construction functions, which is no longer necessary now that the use_aux_head option is configurable in the model config. For example, for the mobile ImageNet model, the auxiliary head can be disabled using: config = nasnet.mobile_imagenet_config(); config.set_hparam('use_aux_head', 0); logits, endpoints = nasnet.build_nasnet_mobile(inputs, num_classes, config=config) (laid out as a runnable snippet after this list). PiperOrigin-RevId: 188617685
* Automated g4 rollback of changelist 188617685. PiperOrigin-RevId: 188619139
* Remove spurious comment.
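The HParams override quoted in the NASNet entry above, laid out as a runnable TF 1.x snippet. The `nets.nasnet` import path, the placeholder input shape, and the num_classes value are assumptions based on the slim layout, not part of the commit message.

```python
import tensorflow as tf
from nets.nasnet import nasnet  # research/slim NASNet module (assumed path)

# Disable the auxiliary head via the model config instead of a constructor arg.
config = nasnet.mobile_imagenet_config()
config.set_hparam('use_aux_head', 0)

# Example inputs for the mobile ImageNet model (224x224 RGB, assumed shape).
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
num_classes = 1001  # ImageNet classes plus background (assumed value)
logits, endpoints = nasnet.build_nasnet_mobile(
    inputs, num_classes, config=config)
```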
- 05 Mar, 2018 1 commit
cclauss authored
- 27 Feb, 2018 1 commit
pkulzc authored
* Merged commit includes the following changes:
  * 186565198 by Sergio Guadarrama: Applied random_hsv_in_yiq in inception_preprocessing.
  * 186501039 by Sergio Guadarrama: Applied random_hsv_in_yiq in inception_preprocessing.
  * 186013907 by Sergio Guadarrama: Internal change.
  * 185715309 by Sergio Guadarrama: Obviates the need for prepadding on mobilenet v1 and v2 for fully convolutional models.
  * 184266252 by Sergio Guadarrama: Give the build_nasnet_*() functions an optional flag use_aux_head, and add an internal-only arg scope to NasNetA*Cell._apply_drop_path().
  * 183865228 by Sergio Guadarrama: Internal change.
  * 179580924 by Sergio Guadarrama: Internal change.
  * 177320302 by Sergio Guadarrama: Internal change.
  * 177130184 by Sergio Guadarrama: Make slim nets tests faster by using smaller examples of oversized inputs.
  * 176965289 by Sergio Guadarrama: Internal change.
  * 176585260 by Sergio Guadarrama: Internal change.
  * 176534973 by Sergio Guadarrama: Internal change.
  * 175526881 by Sergio Guadarrama: Internal change.
  * 174967704 by Sergio Guadarrama: Treat num_classes=0 the same as None in a few slim nets overlooked by the recent change (see the sketch after this list).
  * 174443227 by Sergio Guadarrama: Internal change.
  * 174281864 by Sergio Guadarrama: Internal change.
  * 174249903 by Sergio Guadarrama: Fix nasnet image classification and object detection by moving the option to turn batch norm training on or off into its own arg_scope used only by detection.
  * 173954505 by Sergio Guadarrama: Merge pull request #2651 from sguada/tmp1 (fixes imports; closes #2636). ORIGINAL_AUTHOR=Jon Shlens <shlens@users.noreply.github.com> COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/models/pull/2636 from tensorflow:sguada-patch-1 19ff570f52df5ab655c00fb439129b201c5f2dce
  * 173928094 by Sergio Guadarrama: Remove pending imports.
  PiperOrigin-RevId: 186565198
* Remove internal links.
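As a concrete illustration of the num_classes=0/None behaviour mentioned in change 174967704: in the slim nets that support it, passing num_classes=None (and, after that change, 0) skips the final logits layer and returns pooled features instead of class scores. A minimal sketch using ResNet-50; whether ResNet-50 is among the specific nets touched by that change is an assumption.

```python
import tensorflow as tf
from nets import resnet_v1  # research/slim import path (assumed)

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    # num_classes=None skips the logits layer, so the net returns globally
    # pooled features ([batch, 1, 1, 2048]) rather than class scores.
    net, end_points = resnet_v1.resnet_v1_50(
        images, num_classes=None, is_training=False)
```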
- 20 Jan, 2018 1 commit
cclauss authored
- 21 Sep, 2017 1 commit
Neal Wu authored
- 10 Aug, 2017 1 commit
derekjchow authored
- 01 Jun, 2017 1 commit
Frank Chen authored
Add executable flag to models so that they can be run from the download_and_preprocess_imagenet.sh script automatically
- 16 Mar, 2017 1 commit
Neal Wu authored
- 14 Mar, 2017 1 commit
Neal Wu authored
- 15 Dec, 2016 1 commit
Shintaro Takemura authored
- 17 Nov, 2016 1 commit
Xionghc authored
- 10 Mar, 2016 1 commit
Manjunath Kudlur authored
Updated README
- 09 Mar, 2016 1 commit
Martin Wicke authored