ModelZoo / ResNet50_tensorflow · Commit 10df8b1d

Authored Sep 21, 2021 by Frederick Liu; committed by A. Unique TensorFlower, Sep 21, 2021

[docs] Update config doc strings to help users understand how it will be used.

PiperOrigin-RevId: 398154690
Parent: 0d67f42f
Showing 1 changed file with 6 additions and 4 deletions.
official/core/config_definitions.py (+6, −4)
@@ -14,9 +14,8 @@
 """Common configuration settings."""
-from typing import Optional, Sequence, Union
-
 import dataclasses
+from typing import Optional, Sequence, Union
 from official.modeling.hyperparams import base_config
 from official.modeling.optimization.configs import optimization_config
@@ -41,7 +40,9 @@ class DataConfig(base_config.Config):
     tfds_split: A str indicating which split of the data to load from TFDS. It
       is required when above `tfds_name` is specified.
     global_batch_size: The global batch size across all replicas.
-    is_training: Whether this data is used for training or not.
+    is_training: Whether this data is used for training or not. This flag is
+      useful for consumers of this object to determine whether the data should
+      be repeated or shuffled.
     drop_remainder: Whether the last batch should be dropped in the case it has
       fewer than `global_batch_size` elements.
     shuffle_buffer_size: The buffer size used for shuffling training data.
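The new `is_training` docstring says consumers of the config decide whether to repeat or shuffle the data. A minimal sketch of such a consumer, using a simplified stand-in for `DataConfig` and a hypothetical `plan_pipeline` helper (neither is part of the Model Garden API):

```python
import dataclasses

@dataclasses.dataclass
class DataConfig:
    # Simplified mirror of the fields documented in the diff.
    global_batch_size: int = 32
    is_training: bool = True
    drop_remainder: bool = True
    shuffle_buffer_size: int = 100

def plan_pipeline(cfg: DataConfig) -> list:
    """Hypothetical consumer: derives dataset transformations from the config.

    In a real tf.data pipeline these names would correspond to
    Dataset.shuffle / Dataset.repeat / Dataset.batch calls.
    """
    ops = []
    if cfg.is_training:
        # Training data is typically shuffled and repeated indefinitely;
        # evaluation data is read once, in order.
        ops += ["shuffle", "repeat"]
    ops.append("batch")
    if cfg.drop_remainder:
        # Drop the final partial batch (fewer than global_batch_size elements).
        ops.append("drop_remainder")
    return ops
```

This illustrates why the flag lives on the config rather than the pipeline: the same builder code can serve both training and evaluation by branching on `cfg.is_training`.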
@@ -178,7 +179,8 @@ class TrainerConfig(base_config.Config):
     eval_tf_function: whether or not to use tf_function for eval.
     allow_tpu_summary: Whether to allow summary happen inside the XLA program
       runs on TPU through automatic outside compilation.
-    steps_per_loop: number of steps per loop.
+    steps_per_loop: number of steps per loop to report training metrics. This
+      can also be used to reduce host worker communication in a TPU setup.
     summary_interval: number of steps between each summary.
     checkpoint_interval: number of steps between checkpoints.
     max_to_keep: max checkpoints to keep.
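The updated `steps_per_loop` docstring notes that the setting reduces host-to-worker communication on TPU: the host only synchronizes with the device once per inner loop of `steps_per_loop` steps. A minimal sketch of that outer/inner loop structure, with a hypothetical `run_training` function (not the actual Model Garden trainer):

```python
def run_training(total_steps: int, steps_per_loop: int) -> int:
    """Hypothetical outer training loop.

    Each outer iteration would, in a real trainer, be a single tf.function
    call that executes `steps_per_loop` train steps on the accelerator, so
    the host pays one round trip (and reports metrics once) per outer
    iteration rather than per step. Returns the number of host syncs.
    """
    host_syncs = 0
    step = 0
    while step < total_steps:
        # Clamp the final inner loop so we never run past total_steps.
        num_steps = min(steps_per_loop, total_steps - step)
        step += num_steps  # device executes num_steps steps here
        host_syncs += 1    # host reads metrics / writes summaries here
    return host_syncs
```

With `total_steps=100`, raising `steps_per_loop` from 1 to 10 cuts host synchronizations from 100 to 10, at the cost of coarser-grained metric reporting.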