Commit 7a9934df authored by Zhichao Lu, committed by lzc5123016

Merged commit includes the following changes:

184048729  by Zhichao Lu:

    Modify target_assigner so that it creates regression targets taking keypoints into account.

--
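    A minimal sketch of the idea above (illustrative only, not the repository's
    target_assigner code): keypoints matched to an anchor can be encoded
    relative to the anchor's center and size, so the regression target simply
    grows by two values per keypoint alongside the usual box offsets.

    import numpy as np

    def encode_box_and_keypoints(anchor, box, keypoints,
                                 scale=(10.0, 10.0, 5.0, 5.0)):
        """Encode one box plus its keypoints against one anchor.

        anchor, box: [ymin, xmin, ymax, xmax]; keypoints: [[y, x], ...].
        Hypothetical helper; the real target assigner works on batches.
        """
        ya, xa = (anchor[0] + anchor[2]) / 2.0, (anchor[1] + anchor[3]) / 2.0
        ha, wa = anchor[2] - anchor[0], anchor[3] - anchor[1]
        yc, xc = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        h, w = box[2] - box[0], box[3] - box[1]
        box_target = [scale[0] * (yc - ya) / ha,
                      scale[1] * (xc - xa) / wa,
                      scale[2] * np.log(h / ha),
                      scale[3] * np.log(w / wa)]
        # Keypoints are encoded like the box center and appended to the target.
        kp_target = [v for ky, kx in keypoints
                     for v in (scale[0] * (ky - ya) / ha,
                               scale[1] * (kx - xa) / wa)]
        return np.array(box_target + kp_target)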
184027183  by Zhichao Lu:

    Resnet V1 FPN based feature extractors for SSD meta architecture in Object Detection V2 API.

--
184004730  by Zhichao Lu:

    Expose a lever to override the configured mask_type.

--
183933113  by Zhichao Lu:

    Weight-shared convolutional box predictor, as described in https://arxiv.org/abs/1708.02002.

--
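    A minimal sketch of the weight-sharing idea above, assuming TensorFlow 2.x
    Keras (not the repository's WeightSharedConvolutionalBoxPredictor): one
    convolutional head is built once and then applied to every feature map, so
    all pyramid levels share the same prediction weights.

    import tensorflow as tf

    num_anchors_per_location = 9
    # One head; its variables are created once and reused at every level.
    box_head = tf.keras.Sequential([
        tf.keras.layers.Conv2D(256, 3, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(num_anchors_per_location * 4, 3, padding='same'),
    ])

    # Three hypothetical FPN levels with the same channel depth.
    feature_maps = [tf.random.normal([1, size, size, 256])
                    for size in (64, 32, 16)]
    box_encodings = [box_head(fmap) for fmap in feature_maps]  # shared weights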
183929669  by Zhichao Lu:

    Expanding box list operations for future data augmentations.

--
183916792  by Zhichao Lu:

    Fix unrecognized assertion function in tests.

--
183906851  by Zhichao Lu:

    Change the SSD meta architecture to use regression weights to compute the loss normalizer.

--
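    A hedged sketch of what that normalization amounts to (not the actual
    ssd_meta_arch code): normalize by the sum of the per-anchor regression
    weights rather than by a raw count of matched anchors, so down-weighted or
    ignored anchors are handled consistently.

    import tensorflow as tf

    def normalized_localization_loss(per_anchor_loss, regression_weights):
        # per_anchor_loss, regression_weights: float tensors of shape
        # [batch, num_anchors]; weights are zero for unmatched anchors.
        normalizer = tf.maximum(tf.reduce_sum(regression_weights), 1.0)
        return tf.reduce_sum(per_anchor_loss * regression_weights) / normalizer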
183871003  by Zhichao Lu:

    Fix an incorrect dependency in config_util_test.

--
183782120  by Zhichao Lu:

    Add __init__ file to third_party directories.

--
183779109  by Zhichao Lu:

    Set up regular version sync.

--
183768772  by Zhichao Lu:

    Make test compatible with numpy 1.12 and higher

--
183767893  by Zhichao Lu:

    Make test compatible with numpy 1.12 and higher

--
183719318  by Zhichao Lu:

    Use the new test interface in ssd feature extractor.

--
183714671  by Zhichao Lu:

    Use the new test_case interface for all anchor generators.

--
183708155  by Zhichao Lu:

    Change variable scopes in ConvolutionalBoxPredictor so that previously trained checkpoints remain compatible after the change in the BoxPredictor interface.

--
183705798  by Zhichao Lu:

    Internal change.

--
183636023  by Zhichao Lu:

    Fixing argument name for np_box_list_ops.concatenate() function.

--
183490404  by Zhichao Lu:

    Make sure code that relies on the older SSD code still works.

--
183426762  by Zhichao Lu:

    Internal change

183412315  by Zhichao Lu:

    Internal change

183337814  by Zhichao Lu:

    Internal change

183303933  by Zhichao Lu:

    Internal change

183257349  by Zhichao Lu:

    Internal change

183254447  by Zhichao Lu:

    Internal change

183251200  by Zhichao Lu:

    Internal change

183135002  by Zhichao Lu:

    Internal change

182851500  by Zhichao Lu:

    Internal change

182839607  by Zhichao Lu:

    Internal change

182830719  by Zhichao Lu:

    Internal change

182533923  by Zhichao Lu:

    Internal change

182391090  by Zhichao Lu:

    Internal change

182262339  by Zhichao Lu:

    Internal change

182244645  by Zhichao Lu:

    Internal change

182241613  by Zhichao Lu:

    Internal change

182133027  by Zhichao Lu:

    Internal change

182058807  by Zhichao Lu:

    Internal change

181812028  by Zhichao Lu:

    Internal change

181788857  by Zhichao Lu:

    Internal change

181656761  by Zhichao Lu:

    Internal change

181541125  by Zhichao Lu:

    Internal change

181538702  by Zhichao Lu:

    Internal change

181125385  by Zhichao Lu:

    Internal change

180957758  by Zhichao Lu:

    Internal change

180941434  by Zhichao Lu:

    Internal change

180852569  by Zhichao Lu:

    Internal change

180846001  by Zhichao Lu:

    Internal change

180832145  by Zhichao Lu:

    Internal change

180740495  by Zhichao Lu:

    Internal change

180729150  by Zhichao Lu:

    Internal change

180589008  by Zhichao Lu:

    Internal change

180585408  by Zhichao Lu:

    Internal change

180581039  by Zhichao Lu:

    Internal change

180286388  by Zhichao Lu:

    Internal change

179934081  by Zhichao Lu:

    Internal change

179841242  by Zhichao Lu:

    Internal change

179831694  by Zhichao Lu:

    Internal change

179761005  by Zhichao Lu:

    Internal change

179610632  by Zhichao Lu:

    Internal change

179605363  by Zhichao Lu:

    Internal change

179603774  by Zhichao Lu:

    Internal change

179598614  by Zhichao Lu:

    Internal change

179597809  by Zhichao Lu:

    Internal change

179494630  by Zhichao Lu:

    Internal change

179367492  by Zhichao Lu:

    Internal change

179250050  by Zhichao Lu:

    Internal change

179247385  by Zhichao Lu:

    Internal change

179207897  by Zhichao Lu:

    Internal change

179076230  by Zhichao Lu:

    Internal change

178862066  by Zhichao Lu:

    Internal change

178854216  by Zhichao Lu:

    Internal change

178853109  by Zhichao Lu:

    Internal change

178709753  by Zhichao Lu:

    Internal change

178640707  by Zhichao Lu:

    Internal change

178421534  by Zhichao Lu:

    Internal change

178287174  by Zhichao Lu:

    Internal change

178257399  by Zhichao Lu:

    Internal change

177681867  by Zhichao Lu:

    Internal change

177654820  by Zhichao Lu:

    Internal change

177654052  by Zhichao Lu:

    Internal change

177638787  by Zhichao Lu:

    Internal change

177598305  by Zhichao Lu:

    Internal change

177538488  by Zhichao Lu:

    Internal change

177474197  by Zhichao Lu:

    Internal change

177271928  by Zhichao Lu:

    Internal change

177250285  by Zhichao Lu:

    Internal change

177210762  by Zhichao Lu:

    Internal change

177197135  by Zhichao Lu:

    Internal change

177037781  by Zhichao Lu:

    Internal change

176917394  by Zhichao Lu:

    Internal change

176683171  by Zhichao Lu:

    Internal change

176450793  by Zhichao Lu:

    Internal change

176388133  by Zhichao Lu:

    Internal change

176197721  by Zhichao Lu:

    Internal change

176195315  by Zhichao Lu:

    Internal change

176128748  by Zhichao Lu:

    Internal change

175743440  by Zhichao Lu:

    Use Toggle instead of bool to make the layout optimizer name and usage consistent with other optimizers.

--
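    For context, a sketch of what switching to a Toggle looks like on the
    caller side (TF 1.x-era grappler options; illustrative, not code from this
    commit):

    import tensorflow.compat.v1 as tf
    from tensorflow.core.protobuf import rewriter_config_pb2

    config = tf.ConfigProto()
    # The layout optimizer is now switched with a Toggle enum, not a bool.
    config.graph_options.rewrite_options.layout_optimizer = (
        rewriter_config_pb2.RewriterConfig.ON)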
175578178  by Zhichao Lu:

    Internal change

175463518  by Zhichao Lu:

    Internal change

175316616  by Zhichao Lu:

    Internal change

175302470  by Zhichao Lu:

    Internal change

175300323  by Zhichao Lu:

    Internal change

175269680  by Zhichao Lu:

    Internal change

175260574  by Zhichao Lu:

    Internal change

175122281  by Zhichao Lu:

    Internal change

175111708  by Zhichao Lu:

    Internal change

175110183  by Zhichao Lu:

    Internal change

174877166  by Zhichao Lu:

    Internal change

174868399  by Zhichao Lu:

    Internal change

174754200  by Zhichao Lu:

    Internal change

174544534  by Zhichao Lu:

    Internal change

174536143  by Zhichao Lu:

    Internal change

174513795  by Zhichao Lu:

    Internal change

174463713  by Zhichao Lu:

    Internal change

174403525  by Zhichao Lu:

    Internal change

174385170  by Zhichao Lu:

    Internal change

174358498  by Zhichao Lu:

    Internal change

174249903  by Zhichao Lu:

    Fix nasnet image classification and object detection by moving the option to turn batch norm training ON or OFF into its own arg_scope used only by detection.

--
174216508  by Zhichao Lu:

    Internal change

174065370  by Zhichao Lu:

    Internal change

174048035  by Zhichao Lu:

    Fix the pointer for downloading the NAS Faster-RCNN model.

--
174042677  by Zhichao Lu:

    Internal change

173964116  by Zhichao Lu:

    Internal change

173790182  by Zhichao Lu:

    Internal change

173779919  by Zhichao Lu:

    Internal change

173753775  by Zhichao Lu:

    Internal change

173753160  by Zhichao Lu:

    Internal change

173737519  by Zhichao Lu:

    Internal change

173696066  by Zhichao Lu:

    Internal change

173611554  by Zhichao Lu:

    Internal change

173475124  by Zhichao Lu:

    Internal change

173412497  by Zhichao Lu:

    Internal change

173404010  by Zhichao Lu:

    Internal change

173375014  by Zhichao Lu:

    Internal change

173345107  by Zhichao Lu:

    Internal change

173298413  by Zhichao Lu:

    Internal change

173289754  by Zhichao Lu:

    Internal change

173275544  by Zhichao Lu:

    Internal change

173273275  by Zhichao Lu:

    Internal change

173271885  by Zhichao Lu:

    Internal change

173264856  by Zhichao Lu:

    Internal change

173263791  by Zhichao Lu:

    Internal change

173261215  by Zhichao Lu:

    Internal change

173175740  by Zhichao Lu:

    Internal change

173010193  by Zhichao Lu:

    Internal change

172815204  by Zhichao Lu:

    Allow for label maps in tf.Example decoding.

--
172696028  by Zhichao Lu:

    Internal change

172509113  by Zhichao Lu:

    Allow for label maps in tf.Example decoding.

--
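    A minimal sketch of the feature above, under assumed names (illustrative,
    not the decoder's real API): when a tf.Example stores class text rather
    than class ids, a label map can back a lookup table that converts the text
    to integer labels during decoding.

    import tensorflow as tf

    # Hypothetical label map, e.g. parsed from mscoco_label_map.pbtxt.
    label_map = {'person': 1, 'bicycle': 2, 'car': 3}

    table = tf.lookup.StaticHashTable(
        tf.lookup.KeyValueTensorInitializer(
            keys=list(label_map.keys()),
            values=list(label_map.values()),
            value_dtype=tf.int64),
        default_value=-1)

    class_texts = tf.constant(['car', 'person'])
    class_ids = table.lookup(class_texts)  # -> [3, 1]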
172475999  by Zhichao Lu:

    Internal change

172166621  by Zhichao Lu:

    Internal change

172151758  by Zhichao Lu:

    Minor updates to some README files.

    As a result of these friendly issues:
    https://github.com/tensorflow/models/issues/2530
    https://github.com/tensorflow/models/issues/2534

--
172147420  by Zhichao Lu:

    Fix an illegal summary name and migrate from slim's deprecated get_or_create_global_step (tf.contrib.framework.*) to tf.train.*.

--
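    For reference, the replacement under the TF 1.x-style API (illustrative;
    the actual call sites in the codebase differ):

    import tensorflow.compat.v1 as tf

    # Before (deprecated): tf.contrib.framework.get_or_create_global_step()
    global_step = tf.train.get_or_create_global_step()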
172111377  by Zhichao Lu:

    Internal change

172004247  by Zhichao Lu:

    Internal change

171996881  by Zhichao Lu:

    Internal change

171835204  by Zhichao Lu:

    Internal change

171826090  by Zhichao Lu:

    Internal change

171784016  by Zhichao Lu:

    Internal change

171699876  by Zhichao Lu:

    Internal change

171053425  by Zhichao Lu:

    Internal change

170905734  by Zhichao Lu:

    Internal change

170889179  by Zhichao Lu:

    Internal change

170734389  by Zhichao Lu:

    Internal change

170705852  by Zhichao Lu:

    Internal change

170401574  by Zhichao Lu:

    Internal change

170352571  by Zhichao Lu:

    Internal change

170215443  by Zhichao Lu:

    Internal change

170184288  by Zhichao Lu:

    Internal change

169936898  by Zhichao Lu:

    Internal change

169763373  by Zhichao Lu:

    Fix broken GitHub links in tensorflow and tensorflow_models resulting from The Great Models Move (a.k.a. the research subfolder)

--
169744825  by Zhichao Lu:

    Internal change

169638135  by Zhichao Lu:

    Internal change

169561814  by Zhichao Lu:

    Internal change

169444091  by Zhichao Lu:

    Internal change

169292330  by Zhichao Lu:

    Internal change

169145185  by Zhichao Lu:

    Internal change

168906035  by Zhichao Lu:

    Internal change

168790411  by Zhichao Lu:

    Internal change

168708911  by Zhichao Lu:

    Internal change

168611969  by Zhichao Lu:

    Internal change

168535975  by Zhichao Lu:

    Internal change

168381815  by Zhichao Lu:

    Internal change

168244740  by Zhichao Lu:

    Internal change

168240024  by Zhichao Lu:

    Internal change

168168016  by Zhichao Lu:

    Internal change

168071571  by Zhichao Lu:

    Move display strings to below the bounding box if they would otherwise be outside the image.

--
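    A sketch of the layout rule being described (hypothetical helper, not the
    visualization_utils code): labels normally sit above the box; if that
    would push them outside the top of the image, they are drawn below the box
    instead.

    def label_anchor_y(box_top, box_bottom, total_text_height):
        """Pick the y coordinate for a box's label strings (y grows downward)."""
        if box_top - total_text_height >= 0:
            return box_top - total_text_height  # above the box, inside image
        return box_bottom                       # otherwise below the box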
168067771  by Zhichao Lu:

    Internal change

167970950  by Zhichao Lu:

    Internal change

167884533  by Zhichao Lu:

    Internal change

167626173  by Zhichao Lu:

    Internal change

167277422  by Zhichao Lu:

    Internal change

167249393  by Zhichao Lu:

    Internal change

167248954  by Zhichao Lu:

    Internal change

167189395  by Zhichao Lu:

    Internal change

167107797  by Zhichao Lu:

    Internal change

167061250  by Zhichao Lu:

    Internal change

166871147  by Zhichao Lu:

    Internal change

166867617  by Zhichao Lu:

    Internal change

166862112  by Zhichao Lu:

    Internal change

166715648  by Zhichao Lu:

    Internal change

166635615  by Zhichao Lu:

    Internal change

166383182  by Zhichao Lu:

    Internal change

166371326  by Zhichao Lu:

    Internal change

166254711  by Zhichao Lu:

    Internal change

166106294  by Zhichao Lu:

    Internal change

166081204  by Zhichao Lu:

    Internal change

165972262  by Zhichao Lu:

    Internal change

165816702  by Zhichao Lu:

    Internal change

165764471  by Zhichao Lu:

    Internal change

165724134  by Zhichao Lu:

    Internal change

165655829  by Zhichao Lu:

    Internal change

165587904  by Zhichao Lu:

    Internal change

165534540  by Zhichao Lu:

    Internal change

165177692  by Zhichao Lu:

    Internal change

165091822  by Zhichao Lu:

    Internal change

165019730  by Zhichao Lu:

    Internal change

165002942  by Zhichao Lu:

    Internal change

164897728  by Zhichao Lu:

    Internal change

164782618  by Zhichao Lu:

    Internal change

164710379  by Zhichao Lu:

    Internal change

164639237  by Zhichao Lu:

    Internal change

164069251  by Zhichao Lu:

    Internal change

164058169  by Zhichao Lu:

    Internal change

163913796  by Zhichao Lu:

    Internal change

163756696  by Zhichao Lu:

    Internal change

163524665  by Zhichao Lu:

    Internal change

163393399  by Zhichao Lu:

    Internal change

163385733  by Zhichao Lu:

    Internal change

162525065  by Zhichao Lu:

    Internal change

162376984  by Zhichao Lu:

    Internal change

162026661  by Zhichao Lu:

    Internal change

161956004  by Zhichao Lu:

    Internal change

161817520  by Zhichao Lu:

    Internal change

161718688  by Zhichao Lu:

    Internal change

161624398  by Zhichao Lu:

    Internal change

161575120  by Zhichao Lu:

    Internal change

161483997  by Zhichao Lu:

    Internal change

161462189  by Zhichao Lu:

    Internal change

161452968  by Zhichao Lu:

    Internal change

161443992  by Zhichao Lu:

    Internal change

161408607  by Zhichao Lu:

    Internal change

161262084  by Zhichao Lu:

    Internal change

161214023  by Zhichao Lu:

    Internal change

161025667  by Zhichao Lu:

    Internal change

160982216  by Zhichao Lu:

    Internal change

160666760  by Zhichao Lu:

    Internal change

160570489  by Zhichao Lu:

    Internal change

160553112  by Zhichao Lu:

    Internal change

160458261  by Zhichao Lu:

    Internal change

160349302  by Zhichao Lu:

    Internal change

160296092  by Zhichao Lu:

    Internal change

160287348  by Zhichao Lu:

    Internal change

160199279  by Zhichao Lu:

    Internal change

160160156  by Zhichao Lu:

    Internal change

160151954  by Zhichao Lu:

    Internal change

160005404  by Zhichao Lu:

    Internal change

159983265  by Zhichao Lu:

    Internal change

159819896  by Zhichao Lu:

    Internal change

159749419  by Zhichao Lu:

    Internal change

159596448  by Zhichao Lu:

    Internal change

159587801  by Zhichao Lu:

    Internal change

159587342  by Zhichao Lu:

    Internal change

159476256  by Zhichao Lu:

    Internal change

159463992  by Zhichao Lu:

    Internal change

159455585  by Zhichao Lu:

    Internal change

159270798  by Zhichao Lu:

    Internal change

159256633  by Zhichao Lu:

    Internal change

159141989  by Zhichao Lu:

    Internal change

159079098  by Zhichao Lu:

    Internal change

159078559  by Zhichao Lu:

    Internal change

159077055  by Zhichao Lu:

    Internal change

159072046  by Zhichao Lu:

    Internal change

159071092  by Zhichao Lu:

    Internal change

159069262  by Zhichao Lu:

    Internal change

159037430  by Zhichao Lu:

    Internal change

159035747  by Zhichao Lu:

    Internal change

159023868  by Zhichao Lu:

    Internal change

158939092  by Zhichao Lu:

    Internal change

158912561  by Zhichao Lu:

    Internal change

158903825  by Zhichao Lu:

    Internal change

158894348  by Zhichao Lu:

    Internal change

158884934  by Zhichao Lu:

    Internal change

158878010  by Zhichao Lu:

    Internal change

158874620  by Zhichao Lu:

    Internal change

158869501  by Zhichao Lu:

    Internal change

158842623  by Zhichao Lu:

    Internal change

158801298  by Zhichao Lu:

    Internal change

158775487  by Zhichao Lu:

    Internal change

158773668  by Zhichao Lu:

    Internal change

158771394  by Zhichao Lu:

    Internal change

158668928  by Zhichao Lu:

    Internal change

158596865  by Zhichao Lu:

    Internal change

158587317  by Zhichao Lu:

    Internal change

158586348  by Zhichao Lu:

    Internal change

158585707  by Zhichao Lu:

    Internal change

158577134  by Zhichao Lu:

    Internal change

158459749  by Zhichao Lu:

    Internal change

158459678  by Zhichao Lu:

    Internal change

158328972  by Zhichao Lu:

    Internal change

158324255  by Zhichao Lu:

    Internal change

158319576  by Zhichao Lu:

    Internal change

158290802  by Zhichao Lu:

    Internal change

158273041  by Zhichao Lu:

    Internal change

158240477  by Zhichao Lu:

    Internal change

158204316  by Zhichao Lu:

    Internal change

158154161  by Zhichao Lu:

    Internal change

158077203  by Zhichao Lu:

    Internal change

158041397  by Zhichao Lu:

    Internal change

158029233  by Zhichao Lu:

    Internal change

157976306  by Zhichao Lu:

    Internal change

157966896  by Zhichao Lu:

    Internal change

157945642  by Zhichao Lu:

    Internal change

157943135  by Zhichao Lu:

    Internal change

157942158  by Zhichao Lu:

    Internal change

157897866  by Zhichao Lu:

    Internal change

157866667  by Zhichao Lu:

    Internal change

157845915  by Zhichao Lu:

    Internal change

157842592  by Zhichao Lu:

    Internal change

157832761  by Zhichao Lu:

    Internal change

157824451  by Zhichao Lu:

    Internal change

157816531  by Zhichao Lu:

    Internal change

157782130  by Zhichao Lu:

    Internal change

157733752  by Zhichao Lu:

    Internal change

157654577  by Zhichao Lu:

    Internal change

157639285  by Zhichao Lu:

    Internal change

157530694  by Zhichao Lu:

    Internal change

157518469  by Zhichao Lu:

    Internal change

157514626  by Zhichao Lu:

    Internal change

157481413  by Zhichao Lu:

    Internal change

157267863  by Zhichao Lu:

    Internal change

157263616  by Zhichao Lu:

    Internal change

157234554  by Zhichao Lu:

    Internal change

157174595  by Zhichao Lu:

    Internal change

157169681  by Zhichao Lu:

    Internal change

157156425  by Zhichao Lu:

    Internal change

157024436  by Zhichao Lu:

    Internal change

157016195  by Zhichao Lu:

    Internal change

156941658  by Zhichao Lu:

    Internal change

156880859  by Zhichao Lu:

    Internal change

156790636  by Zhichao Lu:

    Internal change

156565969  by Zhichao Lu:

    Internal change

156522345  by Zhichao Lu:

    Internal change

156518570  by Zhichao Lu:

    Internal change

156509878  by Zhichao Lu:

    Internal change

156509134  by Zhichao Lu:

    Internal change

156472497  by Zhichao Lu:

    Internal change

156471429  by Zhichao Lu:

    Internal change

156470865  by Zhichao Lu:

    Internal change

156461563  by Zhichao Lu:

    Internal change

156437521  by Zhichao Lu:

    Internal change

156334994  by Zhichao Lu:

    Internal change

156319604  by Zhichao Lu:

    Internal change

156234305  by Zhichao Lu:

    Internal change

156226207  by Zhichao Lu:

    Internal change

156215347  by Zhichao Lu:

    Internal change

156127227  by Zhichao Lu:

    Internal change

156120405  by Zhichao Lu:

    Internal change

156113752  by Zhichao Lu:

    Internal change

156098936  by Zhichao Lu:

    Internal change

155924066  by Zhichao Lu:

    Internal change

155883241  by Zhichao Lu:

    Internal change

155806887  by Zhichao Lu:

    Internal change

155641849  by Zhichao Lu:

    Internal change

155593034  by Zhichao Lu:

    Internal change

155570702  by Zhichao Lu:

    Internal change

155515306  by Zhichao Lu:

    Internal change

155514787  by Zhichao Lu:

    Internal change

155445237  by Zhichao Lu:

    Internal change

155438672  by Zhichao Lu:

    Internal change

155264448  by Zhichao Lu:

    Internal change

155222148  by Zhichao Lu:

    Internal change

155106590  by Zhichao Lu:

    Internal change

155090562  by Zhichao Lu:

    Internal change

154973775  by Zhichao Lu:

    Internal change

154972880  by Zhichao Lu:

    Internal change

154871596  by Zhichao Lu:

    Internal change

154835007  by Zhichao Lu:

    Internal change

154788175  by Zhichao Lu:

    Internal change

154731169  by Zhichao Lu:

    Internal change

154721261  by Zhichao Lu:

    Internal change

154594626  by Zhichao Lu:

    Internal change

154588305  by Zhichao Lu:

    Internal change

154578994  by Zhichao Lu:

    Internal change

154571515  by Zhichao Lu:

    Internal change

154552873  by Zhichao Lu:

    Internal change

154549672  by Zhichao Lu:

    Internal change

154463631  by Zhichao Lu:

    Internal change

154437690  by Zhichao Lu:

    Internal change

154412359  by Zhichao Lu:

    Internal change

154374026  by Zhichao Lu:

    Internal change

154361648  by Zhichao Lu:

    Internal change

154310164  by Zhichao Lu:

    Internal change

154220862  by Zhichao Lu:

    Internal change

154187281  by Zhichao Lu:

    Internal change

154186651  by Zhichao Lu:

    Internal change

154119783  by Zhichao Lu:

    Internal change

154114285  by Zhichao Lu:

    Internal change

154095717  by Zhichao Lu:

    Internal change

154057972  by Zhichao Lu:

    Internal change

154055285  by Zhichao Lu:

    Internal change

153659288  by Zhichao Lu:

    Internal change

153637797  by Zhichao Lu:

    Internal change

153561771  by Zhichao Lu:

    Internal change

153540765  by Zhichao Lu:

    Internal change

153496128  by Zhichao Lu:

    Internal change

153473323  by Zhichao Lu:

    Internal change

153368812  by Zhichao Lu:

    Internal change

153367292  by Zhichao Lu:

    Internal change

153201890  by Zhichao Lu:

    Internal change

153074177  by Zhichao Lu:

    Internal change

152980017  by Zhichao Lu:

    Internal change

152978434  by Zhichao Lu:

    Internal change

152951821  by Zhichao Lu:

    Internal change

152904076  by Zhichao Lu:

    Internal change

152883703  by Zhichao Lu:

    Internal change

152869747  by Zhichao Lu:

    Internal change

152827463  by Zhichao Lu:

    Internal change

152756886  by Zhichao Lu:

    Internal change

152752840  by Zhichao Lu:

    Internal change

152736347  by Zhichao Lu:

    Internal change

152728184  by Zhichao Lu:

    Internal change

152720120  by Zhichao Lu:

    Internal change

152710964  by Zhichao Lu:

    Internal change

152706735  by Zhichao Lu:

    Internal change

152681133  by Zhichao Lu:

    Internal change

152517758  by Zhichao Lu:

    Internal change

152516381  by Zhichao Lu:

    Internal change

152511258  by Zhichao Lu:

    Internal change

152319164  by Zhichao Lu:

    Internal change

152316404  by Zhichao Lu:

    Internal change

152309261  by Zhichao Lu:

    Internal change

152308007  by Zhichao Lu:

    Internal change

152296551  by Zhichao Lu:

    Internal change

152188069  by Zhichao Lu:

    Internal change

152158644  by Zhichao Lu:

    Internal change

152153578  by Zhichao Lu:

    Internal change

152152285  by Zhichao Lu:

    Internal change

152055035  by Zhichao Lu:

    Internal change

152036778  by Zhichao Lu:

    Internal change

152020728  by Zhichao Lu:

    Internal change

152014842  by Zhichao Lu:

    Internal change

151848225  by Zhichao Lu:

    Internal change

151741308  by Zhichao Lu:

    Internal change

151740499  by Zhichao Lu:

    Internal change

151736189  by Zhichao Lu:

    Internal change

151612892  by Zhichao Lu:

    Internal change

151599502  by Zhichao Lu:

    Internal change

151538547  by Zhichao Lu:

    Internal change

151496530  by Zhichao Lu:

    Internal change

151476070  by Zhichao Lu:

    Internal change

151448662  by Zhichao Lu:

    Internal change

151411627  by Zhichao Lu:

    Internal change

151397737  by Zhichao Lu:

    Internal change

151169523  by Zhichao Lu:

    Internal change

151148956  by Zhichao Lu:

    Internal change

150944227  by Zhichao Lu:

    Internal change

150276683  by Zhichao Lu:

    Internal change

149986687  by Zhichao Lu:

    Internal change

149218749  by Zhichao Lu:

    Internal change

PiperOrigin-RevId: 184048729
parent 7ef602be
@@ -143,5 +143,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
# Faster R-CNN with Inception Resnet v2, Atrous version, with Cosine
# Learning Rate schedule.
# Trained on COCO, initialized from Imagenet classification checkpoint
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 600
max_dimension: 1024
}
}
feature_extractor {
type: 'faster_rcnn_inception_resnet_v2'
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 8
width_stride: 8
}
}
first_stage_atrous_rate: 2
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 17
maxpool_kernel_size: 1
maxpool_stride: 1
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
cosine_decay_learning_rate {
learning_rate_base: 0.0006
total_steps: 1200000
warmup_learning_rate: 0.00006
warmup_steps: 20000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
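The warmup-plus-cosine schedule configured above (learning_rate_base 0.0006,
warmup_learning_rate 0.00006 for the first 20,000 steps, 1,200,000 total steps)
follows the standard cosine-decay-with-warmup formula; a small sketch
(illustrative, not the repository's learning-rate code):

    import math

    def cosine_decay_with_warmup(step, base_lr=6e-4, total_steps=1200000,
                                 warmup_lr=6e-5, warmup_steps=20000):
        if step < warmup_steps:
            # Linear ramp from warmup_lr up to base_lr.
            return warmup_lr + (base_lr - warmup_lr) * step / warmup_steps
        progress = (step - warmup_steps) / float(total_steps - warmup_steps)
        return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

    print(cosine_decay_with_warmup(0))        # 6e-05
    print(cosine_decay_with_warmup(20000))    # 0.0006
    print(cosine_decay_with_warmup(1200000))  # ~0.0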
@@ -8,7 +8,7 @@ model {
faster_rcnn {
num_classes: 90
image_resizer {
# TODO: Only fixed_shape_resizer is currently supported for NASNet
# TODO(shlens): Only fixed_shape_resizer is currently supported for NASNet
# featurization. The reason for this is that nasnet.py only supports
# inputs with fully known shapes. We need to update nasnet.py to handle
# shapes not known at compile time.
@@ -43,7 +43,7 @@ model {
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 50
first_stage_max_proposals: 500
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 17
@@ -131,7 +131,6 @@ train_input_reader: {
}
eval_config: {
metrics_set: "pascal_voc_metrics"
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
@@ -144,5 +143,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
# Faster R-CNN with Resnet-101 (v1), Atrous version
# Trained on COCO, initialized from Imagenet classification checkpoint
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 600
max_dimension: 1024
}
}
feature_extractor {
type: 'faster_rcnn_resnet101'
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 8
width_stride: 8
}
}
first_stage_atrous_rate: 2
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0003
schedule {
step: 0
learning_rate: .0003
}
schedule {
step: 900000
learning_rate: .00003
}
schedule {
step: 1200000
learning_rate: .000003
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
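The manual_step_learning_rate blocks used throughout these configs are a step
function of the global step; a small sketch of the schedule above (0.0003 until
step 900,000, 0.00003 until 1,200,000, then 0.000003) as plain Python
(illustrative only):

    def manual_step_lr(step, initial_lr=3e-4,
                       schedule=((900000, 3e-5), (1200000, 3e-6))):
        lr = initial_lr
        for boundary, rate in schedule:
            if step >= boundary:
                lr = rate
        return lr

    assert manual_step_lr(0) == 3e-4
    assert manual_step_lr(1000000) == 3e-5
    assert manual_step_lr(1500000) == 3e-6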
# Faster R-CNN with Resnet-101 (v1) configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
# Faster R-CNN with Resnet-101 (v1)
# Trained on COCO, initialized from Imagenet classification checkpoint
model {
faster_rcnn {
......@@ -107,13 +104,7 @@ train_config: {
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
fine_tune_checkpoint: "/namespace/vale-project/models/classification/imagenet/resnet_v1_101/msra_model.ckpt"
data_augmentation_options {
random_horizontal_flip {
}
......@@ -121,25 +112,32 @@ train_config: {
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
label_map_path: "/placer/prod/home/vale-project-placer/datasets/mscoco/raw/mscoco_label_map.pbtxt"
external_input_reader {
[object_detection.protos.GoogleInputReader.google_input_reader] {
ss_table_input_reader: {
input_path: "/placer/prod/home/vale-project-placer/datasets/mscoco/example_sstables/mscoco_alltasks_trainvalminusminival2014-?????-of-00101"
data_type: TF_EXAMPLE
}
}
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
label_map_path: "/placer/prod/home/vale-project-placer/datasets/mscoco/raw/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
external_input_reader {
[object_detection.protos.GoogleInputReader.google_input_reader] {
ss_table_input_reader: {
input_path: "/placer/prod/home/vale-project-placer/datasets/mscoco/example_sstables/mscoco_alltasks_minival2014-?????-of-00020"
data_type: TF_EXAMPLE
}
}
}
}
@@ -141,5 +141,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
@@ -141,5 +141,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
# Mask R-CNN with Inception Resnet v2, Atrous version
# Configured for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 800
max_dimension: 1365
}
}
number_of_stages: 3
feature_extractor {
type: 'faster_rcnn_inception_resnet_v2'
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 8
width_stride: 8
}
}
first_stage_atrous_rate: 2
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 17
maxpool_kernel_size: 1
maxpool_stride: 1
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
predict_instance_masks: true
mask_height: 33
mask_width: 33
mask_prediction_conv_depth: 0
mask_prediction_num_conv_layers: 4
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
second_stage_mask_prediction_loss_weight: 4.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0003
schedule {
step: 0
learning_rate: .0003
}
schedule {
step: 900000
learning_rate: .00003
}
schedule {
step: 1200000
learning_rate: .000003
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
# Mask R-CNN with Inception V2
# Configured for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 800
max_dimension: 1365
}
}
number_of_stages: 3
feature_extractor {
type: 'faster_rcnn_inception_v2'
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 16
width_stride: 16
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
predict_instance_masks: true
mask_height: 15
mask_width: 15
mask_prediction_conv_depth: 0
mask_prediction_num_conv_layers: 2
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
second_stage_mask_prediction_loss_weight: 4.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0002
schedule {
step: 0
learning_rate: .0002
}
schedule {
step: 900000
learning_rate: .00002
}
schedule {
step: 1200000
learning_rate: .000002
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
# Mask R-CNN with Resnet-101 (v1), Atrous version
# Configured for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 800
max_dimension: 1365
}
}
number_of_stages: 3
feature_extractor {
type: 'faster_rcnn_resnet101'
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 8
width_stride: 8
}
}
first_stage_atrous_rate: 2
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
predict_instance_masks: true
mask_height: 33
mask_width: 33
mask_prediction_conv_depth: 0
mask_prediction_num_conv_layers: 4
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
second_stage_mask_prediction_loss_weight: 4.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0003
schedule {
step: 0
learning_rate: .0003
}
schedule {
step: 900000
learning_rate: .00003
}
schedule {
step: 1200000
learning_rate: .000003
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
# Mask R-CNN with Resnet-101 (v1) configured for the Oxford-IIIT Pet Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 37
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 600
max_dimension: 1024
}
}
number_of_stages: 3
feature_extractor {
type: 'faster_rcnn_resnet101'
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 16
width_stride: 16
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
predict_instance_masks: true
conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0007
schedule {
step: 0
learning_rate: 0.0007
}
schedule {
step: 15000
learning_rate: 0.00007
}
schedule {
step: 30000
learning_rate: 0.000007
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
}
eval_config: {
num_examples: 2000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
shuffle: false
num_readers: 1
}
# Mask R-CNN with Resnet-50 (v1), Atrous version
# Configured for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 90
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 800
max_dimension: 1365
}
}
number_of_stages: 3
feature_extractor {
type: 'faster_rcnn_resnet50'
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 8
width_stride: 8
}
}
first_stage_atrous_rate: 2
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
predict_instance_masks: true
mask_height: 33
mask_width: 33
mask_prediction_conv_depth: 0
mask_prediction_num_conv_layers: 4
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
second_stage_mask_prediction_loss_weight: 4.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0003
schedule {
step: 0
learning_rate: .0003
}
schedule {
step: 900000
learning_rate: .00003
}
schedule {
step: 1200000
learning_rate: .000003
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
num_examples: 8000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
}
@@ -138,5 +138,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
@@ -102,12 +102,10 @@ model {
loss {
classification_loss {
weighted_sigmoid {
anchorwise_output: true
}
}
localization_loss {
weighted_smooth_l1 {
anchorwise_output: true
}
}
hard_example_miner {
@@ -187,5 +185,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
@@ -102,12 +102,10 @@ model {
loss {
classification_loss {
weighted_sigmoid {
anchorwise_output: true
}
}
localization_loss {
weighted_smooth_l1 {
anchorwise_output: true
}
}
hard_example_miner {
......
# SSD with Inception v2 configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
ssd {
num_classes: 37
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
}
}
similarity_calculator {
iou_similarity {
}
}
anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333
reduce_boxes_in_lowest_layer: true
}
}
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
box_predictor {
convolutional_box_predictor {
min_depth: 0
max_depth: 0
num_layers_before_predictor: 0
use_dropout: false
dropout_keep_probability: 0.8
kernel_size: 3
box_code_size: 4
apply_sigmoid_to_scores: false
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
}
}
}
feature_extractor {
type: 'ssd_inception_v3'
min_depth: 16
depth_multiplier: 1.0
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.1
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.01,
}
}
}
loss {
classification_loss {
weighted_sigmoid {
}
}
localization_loss {
weighted_smooth_l1 {
}
}
hard_example_miner {
num_hard_examples: 3000
iou_threshold: 0.99
loss_type: CLASSIFICATION
max_negatives_per_positive: 3
min_negatives_per_image: 0
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}
train_config: {
batch_size: 24
optimizer {
rms_prop_optimizer: {
learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 800720
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
decay: 0.9
epsilon: 1.0
}
}
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
from_detection_checkpoint: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
ssd_random_crop {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
}
eval_config: {
num_examples: 2000
# Note: The below line limits the evaluation process to 10 evaluations.
# Remove the below line to evaluate indefinitely.
max_evals: 10
}
eval_input_reader: {
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
}
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
shuffle: false
num_readers: 1
}
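The ssd_anchor_generator block above (num_layers: 6, min_scale: 0.2,
max_scale: 0.95) determines one base anchor scale per feature map by linear
interpolation, per the usual SSD formula; a quick sketch of that arithmetic
(illustrative, ignoring the reduce_boxes_in_lowest_layer special case for the
first layer):

    num_layers, min_scale, max_scale = 6, 0.2, 0.95
    scales = [min_scale + (max_scale - min_scale) * k / (num_layers - 1)
              for k in range(num_layers)]
    print([round(s, 2) for s in scales])  # [0.2, 0.35, 0.5, 0.65, 0.8, 0.95]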
@@ -108,12 +108,10 @@ model {
loss {
classification_loss {
weighted_sigmoid {
anchorwise_output: true
}
}
localization_loss {
weighted_smooth_l1 {
anchorwise_output: true
}
}
hard_example_miner {
@@ -193,5 +191,4 @@ eval_input_reader: {
label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
shuffle: false
num_readers: 1
num_epochs: 1
}
@@ -108,12 +108,10 @@ model {
loss {
classification_loss {
weighted_sigmoid {
anchorwise_output: true
}
}
localization_loss {
weighted_smooth_l1 {
anchorwise_output: true
}
}
hard_example_miner {
......