dcuai / dlexamples · Commits · 441c8f40

Commit 441c8f40, authored Aug 01, 2022 by qianyj

update TF code

parent ec90ad8e
Showing 20 changed files with 0 additions and 2513 deletions (+0 -2513)
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/hooks_helper.py  +0 -163
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/hooks_test.py  +0 -157
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/logger.py  +0 -441
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/logger_test.py  +0 -364
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/metric_hook.py  +0 -97
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/metric_hook_test.py  +0 -217
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/mlperf_helper.py  +0 -192
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/__init__.py  +0 -0
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/distribution_utils.py  +0 -98
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/distribution_utils_test.py  +0 -65
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/model_helpers.py  +0 -93
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/model_helpers_test.py  +0 -121
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/__init__.py  +0 -0
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/pylint.rcfile  +0 -169
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data.py  +0 -334
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/expected_graph  +0 -0
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/model.ckpt.data-00000-of-00001  +0 -0
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/model.ckpt.index  +0 -0
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/results.json  +0 -1
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/tf_version.json  +0 -1
Too many changes to show. To preserve performance only 325 of 325+ files are displayed.
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/hooks_helper.py deleted 100644 → 0
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Hooks helper to return a list of TensorFlow hooks for training by name.

More hooks can be added to this set. To add a new hook, 1) add the new hook to
the registry in HOOKS, 2) add a corresponding function that parses out necessary
parameters.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf  # pylint: disable=g-bad-import-order

from official.utils.logs import hooks
from official.utils.logs import logger
from official.utils.logs import metric_hook

_TENSORS_TO_LOG = dict((x, x) for x in ['learning_rate',
                                        'cross_entropy',
                                        'train_accuracy'])


def get_train_hooks(name_list, use_tpu=False, **kwargs):
  """Factory for getting a list of TensorFlow hooks for training by name.

  Args:
    name_list: a list of strings to name desired hook classes. Allowed:
      LoggingTensorHook, ProfilerHook, ExamplesPerSecondHook, which are defined
      as keys in HOOKS
    use_tpu: Boolean of whether computation occurs on a TPU. This will disable
      hooks altogether.
    **kwargs: a dictionary of arguments to the hooks.

  Returns:
    list of instantiated hooks, ready to be used in a classifier.train call.

  Raises:
    ValueError: if an unrecognized name is passed.
  """
  if not name_list:
    return []

  if use_tpu:
    tf.logging.warning("hooks_helper received name_list `{}`, but a TPU is "
                       "specified. No hooks will be used.".format(name_list))
    return []

  train_hooks = []
  for name in name_list:
    hook_name = HOOKS.get(name.strip().lower())
    if hook_name is None:
      raise ValueError('Unrecognized training hook requested: {}'.format(name))
    else:
      train_hooks.append(hook_name(**kwargs))

  return train_hooks


def get_logging_tensor_hook(every_n_iter=100, tensors_to_log=None, **kwargs):  # pylint: disable=unused-argument
  """Function to get LoggingTensorHook.

  Args:
    every_n_iter: `int`, print the values of `tensors` once every N local
      steps taken on the current worker.
    tensors_to_log: List of tensor names or dictionary mapping labels to tensor
      names. If not set, log _TENSORS_TO_LOG by default.
    **kwargs: a dictionary of arguments to LoggingTensorHook.

  Returns:
    Returns a LoggingTensorHook with a standard set of tensors that will be
    printed to stdout.
  """
  if tensors_to_log is None:
    tensors_to_log = _TENSORS_TO_LOG

  return tf.train.LoggingTensorHook(
      tensors=tensors_to_log,
      every_n_iter=every_n_iter)


def get_profiler_hook(model_dir, save_steps=1000, **kwargs):  # pylint: disable=unused-argument
  """Function to get ProfilerHook.

  Args:
    model_dir: The directory to save the profile traces to.
    save_steps: `int`, print profile traces every N steps.
    **kwargs: a dictionary of arguments to ProfilerHook.

  Returns:
    Returns a ProfilerHook that writes out timelines that can be loaded into
    profiling tools like chrome://tracing.
  """
  return tf.train.ProfilerHook(save_steps=save_steps, output_dir=model_dir)


def get_examples_per_second_hook(every_n_steps=100,
                                 batch_size=128,
                                 warm_steps=5,
                                 **kwargs):  # pylint: disable=unused-argument
  """Function to get ExamplesPerSecondHook.

  Args:
    every_n_steps: `int`, print current and average examples per second every
      N steps.
    batch_size: `int`, total batch size used to calculate examples/second from
      global time.
    warm_steps: skip this number of steps before logging and running average.
    **kwargs: a dictionary of arguments to ExamplesPerSecondHook.

  Returns:
    Returns an ExamplesPerSecondHook that logs the current and average number
    of examples processed per second.
  """
  return hooks.ExamplesPerSecondHook(
      batch_size=batch_size, every_n_steps=every_n_steps,
      warm_steps=warm_steps, metric_logger=logger.get_benchmark_logger())


def get_logging_metric_hook(tensors_to_log=None,
                            every_n_secs=600,
                            **kwargs):  # pylint: disable=unused-argument
  """Function to get LoggingMetricHook.

  Args:
    tensors_to_log: List of tensor names or dictionary mapping labels to tensor
      names. If not set, log _TENSORS_TO_LOG by default.
    every_n_secs: `int`, the frequency for logging the metric. Default to every
      10 mins.

  Returns:
    Returns a LoggingMetricHook that saves tensor values in a JSON format.
  """
  if tensors_to_log is None:
    tensors_to_log = _TENSORS_TO_LOG
  return metric_hook.LoggingMetricHook(
      tensors=tensors_to_log,
      metric_logger=logger.get_benchmark_logger(),
      every_n_secs=every_n_secs)


# A dictionary to map one hook name and its corresponding function
HOOKS = {
    'loggingtensorhook': get_logging_tensor_hook,
    'profilerhook': get_profiler_hook,
    'examplespersecondhook': get_examples_per_second_hook,
    'loggingmetrichook': get_logging_metric_hook,
}
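
Note on the deleted hooks_helper module: callers normally went through get_train_hooks and the HOOKS registry rather than the individual get_* functions. The snippet below is a minimal usage sketch, not part of this commit; the hook names chosen, the extra keyword values, and the classifier object are illustrative assumptions.

    # Hypothetical caller of the (now deleted) hooks_helper module.
    from official.utils.logs import hooks_helper

    train_hooks = hooks_helper.get_train_hooks(
        ['LoggingTensorHook', 'ExamplesPerSecondHook'],  # case-insensitive keys of HOOKS
        use_tpu=False,
        batch_size=128,       # consumed by get_examples_per_second_hook
        every_n_steps=100)    # unknown kwargs are absorbed by each factory's **kwargs

    # `classifier` is assumed to be a tf.estimator.Estimator built elsewhere:
    # classifier.train(input_fn=train_input_fn, hooks=train_hooks, max_steps=1000)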

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/hooks_test.py deleted 100644 → 0
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for hooks."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import time

import tensorflow as tf  # pylint: disable=g-bad-import-order

from official.utils.logs import hooks
from official.utils.testing import mock_lib

tf.logging.set_verbosity(tf.logging.DEBUG)


class ExamplesPerSecondHookTest(tf.test.TestCase):
  """Tests for the ExamplesPerSecondHook.

  In the test, we explicitly run the global_step tensor after train_op in order
  to keep the global_step value and the train_op (which increases the
  global_step by 1) consistent. This is to correct the discrepancies in
  reported global_step value when running on GPUs.
  """

  def setUp(self):
    """Mock out logging calls to verify if correct info is being monitored."""
    self._logger = mock_lib.MockBenchmarkLogger()

    self.graph = tf.Graph()
    with self.graph.as_default():
      tf.train.create_global_step()
      self.train_op = tf.assign_add(tf.train.get_global_step(), 1)
      self.global_step = tf.train.get_global_step()

  def test_raise_in_both_secs_and_steps(self):
    with self.assertRaises(ValueError):
      hooks.ExamplesPerSecondHook(
          batch_size=256,
          every_n_steps=10,
          every_n_secs=20,
          metric_logger=self._logger)

  def test_raise_in_none_secs_and_steps(self):
    with self.assertRaises(ValueError):
      hooks.ExamplesPerSecondHook(
          batch_size=256,
          every_n_steps=None,
          every_n_secs=None,
          metric_logger=self._logger)

  def _validate_log_every_n_steps(self, every_n_steps, warm_steps):
    hook = hooks.ExamplesPerSecondHook(
        batch_size=256,
        every_n_steps=every_n_steps,
        warm_steps=warm_steps,
        metric_logger=self._logger)

    with tf.train.MonitoredSession(
        tf.train.ChiefSessionCreator(), [hook]) as mon_sess:
      for _ in range(every_n_steps):
        # Explicitly run global_step after train_op to get the accurate
        # global_step value
        mon_sess.run(self.train_op)
        mon_sess.run(self.global_step)
        # Nothing should be in the list yet
        self.assertFalse(self._logger.logged_metric)

      mon_sess.run(self.train_op)
      global_step_val = mon_sess.run(self.global_step)

      if global_step_val > warm_steps:
        self._assert_metrics()
      else:
        # Nothing should be in the list yet
        self.assertFalse(self._logger.logged_metric)

      # Add additional run to verify proper reset when called multiple times.
      prev_log_len = len(self._logger.logged_metric)
      mon_sess.run(self.train_op)
      global_step_val = mon_sess.run(self.global_step)

      if every_n_steps == 1 and global_step_val > warm_steps:
        # Each time, we log two additional metrics. Did exactly 2 get added?
        self.assertEqual(len(self._logger.logged_metric), prev_log_len + 2)
      else:
        # No change in the size of the metric list.
        self.assertEqual(len(self._logger.logged_metric), prev_log_len)

  def test_examples_per_sec_every_1_steps(self):
    with self.graph.as_default():
      self._validate_log_every_n_steps(1, 0)

  def test_examples_per_sec_every_5_steps(self):
    with self.graph.as_default():
      self._validate_log_every_n_steps(5, 0)

  def test_examples_per_sec_every_1_steps_with_warm_steps(self):
    with self.graph.as_default():
      self._validate_log_every_n_steps(1, 10)

  def test_examples_per_sec_every_5_steps_with_warm_steps(self):
    with self.graph.as_default():
      self._validate_log_every_n_steps(5, 10)

  def _validate_log_every_n_secs(self, every_n_secs):
    hook = hooks.ExamplesPerSecondHook(
        batch_size=256,
        every_n_steps=None,
        every_n_secs=every_n_secs,
        metric_logger=self._logger)

    with tf.train.MonitoredSession(
        tf.train.ChiefSessionCreator(), [hook]) as mon_sess:
      # Explicitly run global_step after train_op to get the accurate
      # global_step value
      mon_sess.run(self.train_op)
      mon_sess.run(self.global_step)
      # Nothing should be in the list yet
      self.assertFalse(self._logger.logged_metric)
      time.sleep(every_n_secs)

      mon_sess.run(self.train_op)
      mon_sess.run(self.global_step)
      self._assert_metrics()

  def test_examples_per_sec_every_1_secs(self):
    with self.graph.as_default():
      self._validate_log_every_n_secs(1)

  def test_examples_per_sec_every_5_secs(self):
    with self.graph.as_default():
      self._validate_log_every_n_secs(5)

  def _assert_metrics(self):
    metrics = self._logger.logged_metric
    self.assertEqual(metrics[-2]["name"], "average_examples_per_sec")
    self.assertEqual(metrics[-1]["name"], "current_examples_per_sec")


if __name__ == "__main__":
  tf.test.main()

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/logger.py deleted 100644 → 0
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Logging utilities for benchmark.

For collecting local environment metrics like CPU and memory, certain python
packages need to be installed. See README for details.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import contextlib
import datetime
import json
import multiprocessing
import numbers
import os
import threading
import uuid

from six.moves import _thread as thread
from absl import flags
import tensorflow as tf
from tensorflow.python.client import device_lib

from official.utils.logs import cloud_lib

METRIC_LOG_FILE_NAME = "metric.log"
BENCHMARK_RUN_LOG_FILE_NAME = "benchmark_run.log"
_DATE_TIME_FORMAT_PATTERN = "%Y-%m-%dT%H:%M:%S.%fZ"
GCP_TEST_ENV = "GCP"
RUN_STATUS_SUCCESS = "success"
RUN_STATUS_FAILURE = "failure"
RUN_STATUS_RUNNING = "running"

FLAGS = flags.FLAGS

# Don't use it directly. Use get_benchmark_logger to access a logger.
_benchmark_logger = None
_logger_lock = threading.Lock()


def config_benchmark_logger(flag_obj=None):
  """Config the global benchmark logger."""
  _logger_lock.acquire()
  try:
    global _benchmark_logger
    if not flag_obj:
      flag_obj = FLAGS

    if (not hasattr(flag_obj, "benchmark_logger_type") or
        flag_obj.benchmark_logger_type == "BaseBenchmarkLogger"):
      _benchmark_logger = BaseBenchmarkLogger()
    elif flag_obj.benchmark_logger_type == "BenchmarkFileLogger":
      _benchmark_logger = BenchmarkFileLogger(flag_obj.benchmark_log_dir)
    elif flag_obj.benchmark_logger_type == "BenchmarkBigQueryLogger":
      from official.benchmark import benchmark_uploader as bu  # pylint: disable=g-import-not-at-top
      bq_uploader = bu.BigQueryUploader(gcp_project=flag_obj.gcp_project)
      _benchmark_logger = BenchmarkBigQueryLogger(
          bigquery_uploader=bq_uploader,
          bigquery_data_set=flag_obj.bigquery_data_set,
          bigquery_run_table=flag_obj.bigquery_run_table,
          bigquery_run_status_table=flag_obj.bigquery_run_status_table,
          bigquery_metric_table=flag_obj.bigquery_metric_table,
          run_id=str(uuid.uuid4()))
    else:
      raise ValueError("Unrecognized benchmark_logger_type: %s"
                       % flag_obj.benchmark_logger_type)
  finally:
    _logger_lock.release()
  return _benchmark_logger


def get_benchmark_logger():
  if not _benchmark_logger:
    config_benchmark_logger()
  return _benchmark_logger


@contextlib.contextmanager
def benchmark_context(flag_obj):
  """Context of benchmark, which will update status of the run accordingly."""
  benchmark_logger = config_benchmark_logger(flag_obj)
  try:
    yield
    benchmark_logger.on_finish(RUN_STATUS_SUCCESS)
  except Exception:  # pylint: disable=broad-except
    # Catch all the exception, update the run status to be failure, and re-raise
    benchmark_logger.on_finish(RUN_STATUS_FAILURE)
    raise


class BaseBenchmarkLogger(object):
  """Class to log the benchmark information to STDOUT."""

  def log_evaluation_result(self, eval_results):
    """Log the evaluation result.

    The evaluate result is a dictionary that contains metrics defined in
    model_fn. It also contains an entry for global_step which contains the
    value of the global step when evaluation was performed.

    Args:
      eval_results: dict, the result of evaluate.
    """
    if not isinstance(eval_results, dict):
      tf.logging.warning("eval_results should be dictionary for logging. "
                         "Got %s", type(eval_results))
      return
    global_step = eval_results[tf.GraphKeys.GLOBAL_STEP]
    for key in sorted(eval_results):
      if key != tf.GraphKeys.GLOBAL_STEP:
        self.log_metric(key, eval_results[key], global_step=global_step)

  def log_metric(self, name, value, unit=None, global_step=None, extras=None):
    """Log the benchmark metric information to local file.

    Currently the logging is done in a synchronized way. This should be updated
    to log asynchronously.

    Args:
      name: string, the name of the metric to log.
      value: number, the value of the metric. The value will not be logged if
        it is not a number type.
      unit: string, the unit of the metric, E.g "image per second".
      global_step: int, the global_step when the metric is logged.
      extras: map of string:string, the extra information about the metric.
    """
    metric = _process_metric_to_json(name, value, unit, global_step, extras)
    if metric:
      tf.logging.info("Benchmark metric: %s", metric)

  def log_run_info(self, model_name, dataset_name, run_params, test_id=None):
    tf.logging.info("Benchmark run: %s",
                    _gather_run_info(model_name, dataset_name, run_params,
                                     test_id))

  def on_finish(self, status):
    pass


class BenchmarkFileLogger(BaseBenchmarkLogger):
  """Class to log the benchmark information to local disk."""

  def __init__(self, logging_dir):
    super(BenchmarkFileLogger, self).__init__()
    self._logging_dir = logging_dir
    if not tf.gfile.IsDirectory(self._logging_dir):
      tf.gfile.MakeDirs(self._logging_dir)
    self._metric_file_handler = tf.gfile.GFile(
        os.path.join(self._logging_dir, METRIC_LOG_FILE_NAME), "a")

  def log_metric(self, name, value, unit=None, global_step=None, extras=None):
    """Log the benchmark metric information to local file.

    Currently the logging is done in a synchronized way. This should be updated
    to log asynchronously.

    Args:
      name: string, the name of the metric to log.
      value: number, the value of the metric. The value will not be logged if
        it is not a number type.
      unit: string, the unit of the metric, E.g "image per second".
      global_step: int, the global_step when the metric is logged.
      extras: map of string:string, the extra information about the metric.
    """
    metric = _process_metric_to_json(name, value, unit, global_step, extras)
    if metric:
      try:
        json.dump(metric, self._metric_file_handler)
        self._metric_file_handler.write("\n")
        self._metric_file_handler.flush()
      except (TypeError, ValueError) as e:
        tf.logging.warning("Failed to dump metric to log file: "
                           "name %s, value %s, error %s", name, value, e)

  def log_run_info(self, model_name, dataset_name, run_params, test_id=None):
    """Collect most of the TF runtime information for the local env.

    The schema of the run info follows official/benchmark/datastore/schema.

    Args:
      model_name: string, the name of the model.
      dataset_name: string, the name of dataset for training and evaluation.
      run_params: dict, the dictionary of parameters for the run, it could
        include hyperparameters or other params that are important for the run.
      test_id: string, the unique name of the test run by the combination of
        key parameters, eg batch size, num of GPU. It is hardware independent.
    """
    run_info = _gather_run_info(model_name, dataset_name, run_params, test_id)

    with tf.gfile.GFile(os.path.join(
        self._logging_dir, BENCHMARK_RUN_LOG_FILE_NAME), "w") as f:
      try:
        json.dump(run_info, f)
        f.write("\n")
      except (TypeError, ValueError) as e:
        tf.logging.warning("Failed to dump benchmark run info to log file: %s",
                           e)

  def on_finish(self, status):
    self._metric_file_handler.flush()
    self._metric_file_handler.close()


class BenchmarkBigQueryLogger(BaseBenchmarkLogger):
  """Class to log the benchmark information to BigQuery data store."""

  def __init__(self,
               bigquery_uploader,
               bigquery_data_set,
               bigquery_run_table,
               bigquery_run_status_table,
               bigquery_metric_table,
               run_id):
    super(BenchmarkBigQueryLogger, self).__init__()
    self._bigquery_uploader = bigquery_uploader
    self._bigquery_data_set = bigquery_data_set
    self._bigquery_run_table = bigquery_run_table
    self._bigquery_run_status_table = bigquery_run_status_table
    self._bigquery_metric_table = bigquery_metric_table
    self._run_id = run_id

  def log_metric(self, name, value, unit=None, global_step=None, extras=None):
    """Log the benchmark metric information to bigquery.

    Args:
      name: string, the name of the metric to log.
      value: number, the value of the metric. The value will not be logged if
        it is not a number type.
      unit: string, the unit of the metric, E.g "image per second".
      global_step: int, the global_step when the metric is logged.
      extras: map of string:string, the extra information about the metric.
    """
    metric = _process_metric_to_json(name, value, unit, global_step, extras)
    if metric:
      # Starting new thread for bigquery upload in case it might take long time
      # and impact the benchmark and performance measurement. Starting a new
      # thread might have potential performance impact for models that run on
      # CPU.
      thread.start_new_thread(
          self._bigquery_uploader.upload_benchmark_metric_json,
          (self._bigquery_data_set, self._bigquery_metric_table, self._run_id,
           [metric]))

  def log_run_info(self, model_name, dataset_name, run_params, test_id=None):
    """Collect most of the TF runtime information for the local env.

    The schema of the run info follows official/benchmark/datastore/schema.

    Args:
      model_name: string, the name of the model.
      dataset_name: string, the name of dataset for training and evaluation.
      run_params: dict, the dictionary of parameters for the run, it could
        include hyperparameters or other params that are important for the run.
      test_id: string, the unique name of the test run by the combination of
        key parameters, eg batch size, num of GPU. It is hardware independent.
    """
    run_info = _gather_run_info(model_name, dataset_name, run_params, test_id)
    # Starting new thread for bigquery upload in case it might take long time
    # and impact the benchmark and performance measurement. Starting a new
    # thread might have potential performance impact for models that run on
    # CPU.
    thread.start_new_thread(
        self._bigquery_uploader.upload_benchmark_run_json,
        (self._bigquery_data_set, self._bigquery_run_table, self._run_id,
         run_info))
    thread.start_new_thread(
        self._bigquery_uploader.insert_run_status,
        (self._bigquery_data_set, self._bigquery_run_status_table,
         self._run_id, RUN_STATUS_RUNNING))

  def on_finish(self, status):
    self._bigquery_uploader.update_run_status(
        self._bigquery_data_set,
        self._bigquery_run_status_table,
        self._run_id,
        status)


def _gather_run_info(model_name, dataset_name, run_params, test_id):
  """Collect the benchmark run information for the local environment."""
  run_info = {
      "model_name": model_name,
      "dataset": {"name": dataset_name},
      "machine_config": {},
      "test_id": test_id,
      "run_date": datetime.datetime.utcnow().strftime(
          _DATE_TIME_FORMAT_PATTERN)}
  session_config = None
  if "session_config" in run_params:
    session_config = run_params["session_config"]
  _collect_tensorflow_info(run_info)
  _collect_tensorflow_environment_variables(run_info)
  _collect_run_params(run_info, run_params)
  #_collect_cpu_info(run_info)
  _collect_gpu_info(run_info, session_config)
  _collect_memory_info(run_info)
  _collect_test_environment(run_info)
  return run_info


def _process_metric_to_json(
    name, value, unit=None, global_step=None, extras=None):
  """Validate the metric data and generate JSON for insert."""
  if not isinstance(value, numbers.Number):
    tf.logging.warning(
        "Metric value to log should be a number. Got %s", type(value))
    return None

  extras = _convert_to_json_dict(extras)
  return {
      "name": name,
      "value": float(value),
      "unit": unit,
      "global_step": global_step,
      "timestamp": datetime.datetime.utcnow().strftime(
          _DATE_TIME_FORMAT_PATTERN),
      "extras": extras}


def _collect_tensorflow_info(run_info):
  run_info["tensorflow_version"] = {
      "version": tf.VERSION, "git_hash": tf.GIT_VERSION}


def _collect_run_params(run_info, run_params):
  """Log the parameter information for the benchmark run."""
  def process_param(name, value):
    type_check = {
        str: {"name": name, "string_value": value},
        int: {"name": name, "long_value": value},
        bool: {"name": name, "bool_value": str(value)},
        float: {"name": name, "float_value": value},
    }
    return type_check.get(type(value),
                          {"name": name, "string_value": str(value)})
  if run_params:
    run_info["run_parameters"] = [
        process_param(k, v) for k, v in sorted(run_params.items())]


def _collect_tensorflow_environment_variables(run_info):
  run_info["tensorflow_environment_variables"] = [
      {"name": k, "value": v}
      for k, v in sorted(os.environ.items()) if k.startswith("TF_")]


# The following code is mirrored from tensorflow/tools/test/system_info_lib
# which is not exposed for import.
def _collect_cpu_info(run_info):
  """Collect the CPU information for the local environment."""
  cpu_info = {}

  cpu_info["num_cores"] = multiprocessing.cpu_count()

  try:
    # Note: cpuinfo is not installed in the TensorFlow OSS tree.
    # It is installable via pip.
    import cpuinfo  # pylint: disable=g-import-not-at-top

    info = cpuinfo.get_cpu_info()
    cpu_info["cpu_info"] = info["brand"]
    cpu_info["mhz_per_cpu"] = info["hz_advertised_raw"][0] / 1.0e6

    run_info["machine_config"]["cpu_info"] = cpu_info
  except ImportError:
    tf.logging.warn("'cpuinfo' not imported. CPU info will not be logged.")


def _collect_gpu_info(run_info, session_config=None):
  """Collect local GPU information by TF device library."""
  gpu_info = {}
  local_device_protos = device_lib.list_local_devices(session_config)

  gpu_info["count"] = len([d for d in local_device_protos
                           if d.device_type == "GPU"])
  # The device description usually is a JSON string, which contains the GPU
  # model info, eg:
  # "device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0"
  for d in local_device_protos:
    if d.device_type == "GPU":
      gpu_info["model"] = _parse_gpu_model(d.physical_device_desc)
      # Assume all the GPU connected are same model
      break
  run_info["machine_config"]["gpu_info"] = gpu_info


def _collect_memory_info(run_info):
  try:
    # Note: psutil is not installed in the TensorFlow OSS tree.
    # It is installable via pip.
    import psutil  # pylint: disable=g-import-not-at-top
    vmem = psutil.virtual_memory()
    run_info["machine_config"]["memory_total"] = vmem.total
    run_info["machine_config"]["memory_available"] = vmem.available
  except ImportError:
    tf.logging.warn("'psutil' not imported. Memory info will not be logged.")


def _collect_test_environment(run_info):
  """Detect the local environment, eg GCE, AWS or DGX, etc."""
  if cloud_lib.on_gcp():
    run_info["test_environment"] = GCP_TEST_ENV
  # TODO(scottzhu): Add more testing env detection for other platform


def _parse_gpu_model(physical_device_desc):
  # Assume all the GPU connected are same model
  for kv in physical_device_desc.split(","):
    k, _, v = kv.partition(":")
    if k.strip() == "name":
      return v.strip()
  return None


def _convert_to_json_dict(input_dict):
  if input_dict:
    return [{"name": k, "value": v} for k, v in sorted(input_dict.items())]
  else:
    return []
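
Note on the deleted logger module: it exposed a process-wide singleton configured from absl flags, but the file-backed logger could also be constructed directly. The snippet below is a minimal sketch of such direct use, not part of this commit; the log directory, model/dataset names, and metric values are illustrative assumptions.

    # Direct use of the (now deleted) BenchmarkFileLogger, bypassing the flag machinery.
    from official.utils.logs import logger

    benchmark_logger = logger.BenchmarkFileLogger("/tmp/benchmark_logs")  # assumed directory
    benchmark_logger.log_run_info("resnet50", "imagenet", {"batch_size": 128})
    benchmark_logger.log_metric("train_accuracy", 0.76, global_step=1000,
                                extras={"phase": "train"})
    benchmark_logger.on_finish(logger.RUN_STATUS_SUCCESS)
    # Each metric is appended as one JSON object per line to metric.log in the
    # logging directory; the run info is written once to benchmark_run.log.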

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/logger_test.py deleted 100644 → 0
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for benchmark logger."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import json
import os
import tempfile
import time
import unittest

import mock
from absl.testing import flagsaver
import tensorflow as tf  # pylint: disable=g-bad-import-order

try:
  from google.cloud import bigquery
except ImportError:
  bigquery = None

from official.utils.flags import core as flags_core
from official.utils.logs import logger


class BenchmarkLoggerTest(tf.test.TestCase):

  @classmethod
  def setUpClass(cls):  # pylint: disable=invalid-name
    super(BenchmarkLoggerTest, cls).setUpClass()
    flags_core.define_benchmark()

  def test_get_default_benchmark_logger(self):
    with flagsaver.flagsaver(benchmark_logger_type='foo'):
      self.assertIsInstance(logger.get_benchmark_logger(),
                            logger.BaseBenchmarkLogger)

  def test_config_base_benchmark_logger(self):
    with flagsaver.flagsaver(benchmark_logger_type='BaseBenchmarkLogger'):
      logger.config_benchmark_logger()
      self.assertIsInstance(logger.get_benchmark_logger(),
                            logger.BaseBenchmarkLogger)

  def test_config_benchmark_file_logger(self):
    # Set the benchmark_log_dir first since the benchmark_logger_type will need
    # the value to be set when it does the validation.
    with flagsaver.flagsaver(benchmark_log_dir='/tmp'):
      with flagsaver.flagsaver(benchmark_logger_type='BenchmarkFileLogger'):
        logger.config_benchmark_logger()
        self.assertIsInstance(logger.get_benchmark_logger(),
                              logger.BenchmarkFileLogger)

  @unittest.skipIf(bigquery is None, 'Bigquery dependency is not installed.')
  @mock.patch.object(bigquery, "Client")
  def test_config_benchmark_bigquery_logger(self, mock_bigquery_client):
    with flagsaver.flagsaver(benchmark_logger_type='BenchmarkBigQueryLogger'):
      logger.config_benchmark_logger()
      self.assertIsInstance(logger.get_benchmark_logger(),
                            logger.BenchmarkBigQueryLogger)

  @mock.patch("official.utils.logs.logger.config_benchmark_logger")
  def test_benchmark_context(self, mock_config_benchmark_logger):
    mock_logger = mock.MagicMock()
    mock_config_benchmark_logger.return_value = mock_logger
    with logger.benchmark_context(None):
      tf.logging.info("start benchmarking")
    mock_logger.on_finish.assert_called_once_with(logger.RUN_STATUS_SUCCESS)

  @mock.patch("official.utils.logs.logger.config_benchmark_logger")
  def test_benchmark_context_failure(self, mock_config_benchmark_logger):
    mock_logger = mock.MagicMock()
    mock_config_benchmark_logger.return_value = mock_logger
    with self.assertRaises(RuntimeError):
      with logger.benchmark_context(None):
        raise RuntimeError("training error")
    mock_logger.on_finish.assert_called_once_with(logger.RUN_STATUS_FAILURE)


class BaseBenchmarkLoggerTest(tf.test.TestCase):

  def setUp(self):
    super(BaseBenchmarkLoggerTest, self).setUp()
    self._actual_log = tf.logging.info
    self.logged_message = None

    def mock_log(*args, **kwargs):
      self.logged_message = args
      self._actual_log(*args, **kwargs)

    tf.logging.info = mock_log

  def tearDown(self):
    super(BaseBenchmarkLoggerTest, self).tearDown()
    tf.logging.info = self._actual_log

  def test_log_metric(self):
    log = logger.BaseBenchmarkLogger()
    log.log_metric("accuracy", 0.999, global_step=1e4,
                   extras={"name": "value"})

    expected_log_prefix = "Benchmark metric:"
    self.assertRegexpMatches(str(self.logged_message), expected_log_prefix)


class BenchmarkFileLoggerTest(tf.test.TestCase):

  def setUp(self):
    super(BenchmarkFileLoggerTest, self).setUp()
    # Avoid pulling extra env vars from the test environment which affect the
    # test result, eg. Kokoro test has a TF_PKG env which affects the test case
    # test_collect_tensorflow_environment_variables()
    self.original_environ = dict(os.environ)
    os.environ.clear()

  def tearDown(self):
    super(BenchmarkFileLoggerTest, self).tearDown()
    tf.gfile.DeleteRecursively(self.get_temp_dir())
    os.environ.clear()
    os.environ.update(self.original_environ)

  def test_create_logging_dir(self):
    non_exist_temp_dir = os.path.join(self.get_temp_dir(), "unknown_dir")
    self.assertFalse(tf.gfile.IsDirectory(non_exist_temp_dir))

    logger.BenchmarkFileLogger(non_exist_temp_dir)
    self.assertTrue(tf.gfile.IsDirectory(non_exist_temp_dir))

  def test_log_metric(self):
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    log.log_metric("accuracy", 0.999, global_step=1e4,
                   extras={"name": "value"})

    metric_log = os.path.join(log_dir, "metric.log")
    self.assertTrue(tf.gfile.Exists(metric_log))
    with tf.gfile.GFile(metric_log) as f:
      metric = json.loads(f.readline())
      self.assertEqual(metric["name"], "accuracy")
      self.assertEqual(metric["value"], 0.999)
      self.assertEqual(metric["unit"], None)
      self.assertEqual(metric["global_step"], 1e4)
      self.assertEqual(metric["extras"], [{"name": "name", "value": "value"}])

  def test_log_multiple_metrics(self):
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    log.log_metric("accuracy", 0.999, global_step=1e4,
                   extras={"name": "value"})
    log.log_metric("loss", 0.02, global_step=1e4)

    metric_log = os.path.join(log_dir, "metric.log")
    self.assertTrue(tf.gfile.Exists(metric_log))
    with tf.gfile.GFile(metric_log) as f:
      accuracy = json.loads(f.readline())
      self.assertEqual(accuracy["name"], "accuracy")
      self.assertEqual(accuracy["value"], 0.999)
      self.assertEqual(accuracy["unit"], None)
      self.assertEqual(accuracy["global_step"], 1e4)
      self.assertEqual(accuracy["extras"],
                       [{"name": "name", "value": "value"}])

      loss = json.loads(f.readline())
      self.assertEqual(loss["name"], "loss")
      self.assertEqual(loss["value"], 0.02)
      self.assertEqual(loss["unit"], None)
      self.assertEqual(loss["global_step"], 1e4)
      self.assertEqual(loss["extras"], [])

  def test_log_non_number_value(self):
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    const = tf.constant(1)
    log.log_metric("accuracy", const)

    metric_log = os.path.join(log_dir, "metric.log")
    self.assertFalse(tf.gfile.Exists(metric_log))

  def test_log_evaluation_result(self):
    eval_result = {"loss": 0.46237424,
                   "global_step": 207082,
                   "accuracy": 0.9285}
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    log.log_evaluation_result(eval_result)

    metric_log = os.path.join(log_dir, "metric.log")
    self.assertTrue(tf.gfile.Exists(metric_log))
    with tf.gfile.GFile(metric_log) as f:
      accuracy = json.loads(f.readline())
      self.assertEqual(accuracy["name"], "accuracy")
      self.assertEqual(accuracy["value"], 0.9285)
      self.assertEqual(accuracy["unit"], None)
      self.assertEqual(accuracy["global_step"], 207082)

      loss = json.loads(f.readline())
      self.assertEqual(loss["name"], "loss")
      self.assertEqual(loss["value"], 0.46237424)
      self.assertEqual(loss["unit"], None)
      self.assertEqual(loss["global_step"], 207082)

  def test_log_evaluation_result_with_invalid_type(self):
    eval_result = "{'loss': 0.46237424, 'global_step': 207082}"
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    log.log_evaluation_result(eval_result)

    metric_log = os.path.join(log_dir, "metric.log")
    self.assertFalse(tf.gfile.Exists(metric_log))

  @mock.patch("official.utils.logs.logger._gather_run_info")
  def test_log_run_info(self, mock_gather_run_info):
    log_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
    log = logger.BenchmarkFileLogger(log_dir)
    run_info = {"model_name": "model_name",
                "dataset": "dataset_name",
                "run_info": "run_value"}
    mock_gather_run_info.return_value = run_info
    log.log_run_info("model_name", "dataset_name", {})

    run_log = os.path.join(log_dir, "benchmark_run.log")
    self.assertTrue(tf.gfile.Exists(run_log))
    with tf.gfile.GFile(run_log) as f:
      run_info = json.loads(f.readline())
      self.assertEqual(run_info["model_name"], "model_name")
      self.assertEqual(run_info["dataset"], "dataset_name")
      self.assertEqual(run_info["run_info"], "run_value")

  def test_collect_tensorflow_info(self):
    run_info = {}
    logger._collect_tensorflow_info(run_info)
    self.assertNotEqual(run_info["tensorflow_version"], {})
    self.assertEqual(run_info["tensorflow_version"]["version"], tf.VERSION)
    self.assertEqual(run_info["tensorflow_version"]["git_hash"],
                     tf.GIT_VERSION)

  def test_collect_run_params(self):
    run_info = {}
    run_parameters = {
        "batch_size": 32,
        "synthetic_data": True,
        "train_epochs": 100.00,
        "dtype": "fp16",
        "resnet_size": 50,
        "random_tensor": tf.constant(2.0)
    }
    logger._collect_run_params(run_info, run_parameters)
    self.assertEqual(len(run_info["run_parameters"]), 6)
    self.assertEqual(run_info["run_parameters"][0],
                     {"name": "batch_size", "long_value": 32})
    self.assertEqual(run_info["run_parameters"][1],
                     {"name": "dtype", "string_value": "fp16"})
    self.assertEqual(run_info["run_parameters"][2],
                     {"name": "random_tensor", "string_value":
                      "Tensor(\"Const:0\", shape=(), dtype=float32)"})
    self.assertEqual(run_info["run_parameters"][3],
                     {"name": "resnet_size", "long_value": 50})
    self.assertEqual(run_info["run_parameters"][4],
                     {"name": "synthetic_data", "bool_value": "True"})
    self.assertEqual(run_info["run_parameters"][5],
                     {"name": "train_epochs", "float_value": 100.00})

  def test_collect_tensorflow_environment_variables(self):
    os.environ["TF_ENABLE_WINOGRAD_NONFUSED"] = "1"
    os.environ["TF_OTHER"] = "2"
    os.environ["OTHER"] = "3"

    run_info = {}
    logger._collect_tensorflow_environment_variables(run_info)
    self.assertIsNotNone(run_info["tensorflow_environment_variables"])
    expected_tf_envs = [
        {"name": "TF_ENABLE_WINOGRAD_NONFUSED", "value": "1"},
        {"name": "TF_OTHER", "value": "2"},
    ]
    self.assertEqual(run_info["tensorflow_environment_variables"],
                     expected_tf_envs)

  @unittest.skipUnless(tf.test.is_built_with_cuda(), "requires GPU")
  def test_collect_gpu_info(self):
    run_info = {"machine_config": {}}
    logger._collect_gpu_info(run_info)
    self.assertNotEqual(run_info["machine_config"]["gpu_info"], {})

  def test_collect_memory_info(self):
    run_info = {"machine_config": {}}
    logger._collect_memory_info(run_info)
    self.assertIsNotNone(run_info["machine_config"]["memory_total"])
    self.assertIsNotNone(run_info["machine_config"]["memory_available"])


@unittest.skipIf(bigquery is None, 'Bigquery dependency is not installed.')
class BenchmarkBigQueryLoggerTest(tf.test.TestCase):

  def setUp(self):
    super(BenchmarkBigQueryLoggerTest, self).setUp()
    # Avoid pulling extra env vars from the test environment which affect the
    # test result, eg. Kokoro test has a TF_PKG env which affects the test case
    # test_collect_tensorflow_environment_variables()
    self.original_environ = dict(os.environ)
    os.environ.clear()

    self.mock_bq_uploader = mock.MagicMock()
    self.logger = logger.BenchmarkBigQueryLogger(
        self.mock_bq_uploader, "dataset", "run_table", "run_status_table",
        "metric_table", "run_id")

  def tearDown(self):
    super(BenchmarkBigQueryLoggerTest, self).tearDown()
    tf.gfile.DeleteRecursively(self.get_temp_dir())
    os.environ.clear()
    os.environ.update(self.original_environ)

  def test_log_metric(self):
    self.logger.log_metric(
        "accuracy", 0.999, global_step=1e4, extras={"name": "value"})
    expected_metric_json = [{
        "name": "accuracy",
        "value": 0.999,
        "unit": None,
        "global_step": 1e4,
        "timestamp": mock.ANY,
        "extras": [{"name": "name", "value": "value"}]
    }]
    # log_metric will call upload_benchmark_metric_json in a separate thread.
    # Give it some grace period for the new thread before assert.
    time.sleep(1)
    self.mock_bq_uploader.upload_benchmark_metric_json.assert_called_once_with(
        "dataset", "metric_table", "run_id", expected_metric_json)

  @mock.patch("official.utils.logs.logger._gather_run_info")
  def test_log_run_info(self, mock_gather_run_info):
    run_info = {"model_name": "model_name",
                "dataset": "dataset_name",
                "run_info": "run_value"}
    mock_gather_run_info.return_value = run_info
    self.logger.log_run_info("model_name", "dataset_name", {})
    # log_metric will call upload_benchmark_metric_json in a separate thread.
    # Give it some grace period for the new thread before assert.
    time.sleep(1)
    self.mock_bq_uploader.upload_benchmark_run_json.assert_called_once_with(
        "dataset", "run_table", "run_id", run_info)
    self.mock_bq_uploader.insert_run_status.assert_called_once_with(
        "dataset", "run_status_table", "run_id", "running")

  def test_on_finish(self):
    self.logger.on_finish(logger.RUN_STATUS_SUCCESS)
    # log_metric will call upload_benchmark_metric_json in a separate thread.
    # Give it some grace period for the new thread before assert.
    time.sleep(1)
    self.mock_bq_uploader.update_run_status.assert_called_once_with(
        "dataset", "run_status_table", "run_id", logger.RUN_STATUS_SUCCESS)


if __name__ == "__main__":
  tf.test.main()

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/metric_hook.py deleted 100644 → 0
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Session hook for logging benchmark metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf  # pylint: disable=g-bad-import-order


class LoggingMetricHook(tf.compat.v1.train.LoggingTensorHook):
  """Hook to log benchmark metric information.

  This hook is very similar to tf.train.LoggingTensorHook, which logs given
  tensors every N local steps, every N seconds, or at the end. The metric
  information will be logged to the given log_dir or via metric_logger in JSON
  format, which can be consumed by a data analysis pipeline later.

  Note that if `at_end` is True, `tensors` should not include any tensor
  whose evaluation produces a side effect such as consuming additional inputs.
  """

  def __init__(self, tensors, metric_logger=None,
               every_n_iter=None, every_n_secs=None, at_end=False):
    """Initializer for LoggingMetricHook.

    Args:
      tensors: `dict` that maps string-valued tags to tensors/tensor names,
          or `iterable` of tensors/tensor names.
      metric_logger: instance of `BenchmarkLogger`, the benchmark logger that
          hook should use to write the log.
      every_n_iter: `int`, print the values of `tensors` once every N local
          steps taken on the current worker.
      every_n_secs: `int` or `float`, print the values of `tensors` once every
          N seconds. Exactly one of `every_n_iter` and `every_n_secs` should be
          provided.
      at_end: `bool` specifying whether to print the values of `tensors` at the
          end of the run.

    Raises:
      ValueError:
        1. `every_n_iter` is non-positive, or
        2. Exactly one of every_n_iter and every_n_secs should be provided.
        3. Exactly one of log_dir and metric_logger should be provided.
    """
    super(LoggingMetricHook, self).__init__(
        tensors=tensors,
        every_n_iter=every_n_iter,
        every_n_secs=every_n_secs,
        at_end=at_end)

    if metric_logger is None:
      raise ValueError("metric_logger should be provided.")
    self._logger = metric_logger

  def begin(self):
    super(LoggingMetricHook, self).begin()
    self._global_step_tensor = tf.train.get_global_step()
    if self._global_step_tensor is None:
      raise RuntimeError(
          "Global step should be created to use LoggingMetricHook.")
    if self._global_step_tensor.name not in self._current_tensors:
      self._current_tensors[self._global_step_tensor.name] = (
          self._global_step_tensor)

  def after_run(self, unused_run_context, run_values):
    # _should_trigger is an internal state that is populated in before_run,
    # and it uses self._timer to determine whether it should trigger.
    if self._should_trigger:
      self._log_metric(run_values.results)

    self._iter_count += 1

  def end(self, session):
    if self._log_at_end:
      values = session.run(self._current_tensors)
      self._log_metric(values)

  def _log_metric(self, tensor_values):
    self._timer.update_last_triggered_step(self._iter_count)
    global_step = tensor_values[self._global_step_tensor.name]
    # self._tag_order is populated during the init of LoggingTensorHook
    for tag in self._tag_order:
      self._logger.log_metric(tag, tensor_values[tag], global_step=global_step)
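
Note on the deleted LoggingMetricHook: it only differs from LoggingTensorHook in where the values go (a benchmark logger instead of stdout). The snippet below is a minimal sketch of attaching it to a monitored training session, not part of this commit; the tensor names, log directory, and the training loop are illustrative assumptions.

    # Sketch of wiring the (now deleted) LoggingMetricHook into training.
    import tensorflow as tf
    from official.utils.logs import logger, metric_hook

    metric_logger = logger.BenchmarkFileLogger("/tmp/benchmark_logs")  # assumed dir
    hook = metric_hook.LoggingMetricHook(
        tensors={"cross_entropy": "cross_entropy",
                 "learning_rate": "learning_rate"},  # assumed tensor names
        metric_logger=metric_logger,
        every_n_secs=600)  # exactly one of every_n_iter / every_n_secs is required

    # The graph is expected to define a global step and tensors with the names above:
    # with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
    #   while not sess.should_stop():
    #     sess.run(train_op)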

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/metric_hook_test.py deleted 100644 → 0
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for metric_hook."""
from
__future__
import
absolute_import
from
__future__
import
division
from
__future__
import
print_function
import
tempfile
import
time
import
tensorflow
as
tf
# pylint: disable=g-bad-import-order
from
tensorflow.python.training
import
monitored_session
# pylint: disable=g-bad-import-order
from
official.utils.logs
import
metric_hook
from
official.utils.testing
import
mock_lib
class
LoggingMetricHookTest
(
tf
.
test
.
TestCase
):
"""Tests for LoggingMetricHook."""
def
setUp
(
self
):
super
(
LoggingMetricHookTest
,
self
).
setUp
()
self
.
_log_dir
=
tempfile
.
mkdtemp
(
dir
=
self
.
get_temp_dir
())
self
.
_logger
=
mock_lib
.
MockBenchmarkLogger
()
def
tearDown
(
self
):
super
(
LoggingMetricHookTest
,
self
).
tearDown
()
tf
.
gfile
.
DeleteRecursively
(
self
.
get_temp_dir
())
def
test_illegal_args
(
self
):
with
self
.
assertRaisesRegexp
(
ValueError
,
"nvalid every_n_iter"
):
metric_hook
.
LoggingMetricHook
(
tensors
=
[
"t"
],
every_n_iter
=
0
)
with
self
.
assertRaisesRegexp
(
ValueError
,
"nvalid every_n_iter"
):
metric_hook
.
LoggingMetricHook
(
tensors
=
[
"t"
],
every_n_iter
=-
10
)
with
self
.
assertRaisesRegexp
(
ValueError
,
"xactly one of"
):
metric_hook
.
LoggingMetricHook
(
tensors
=
[
"t"
],
every_n_iter
=
5
,
every_n_secs
=
5
)
with
self
.
assertRaisesRegexp
(
ValueError
,
"xactly one of"
):
metric_hook
.
LoggingMetricHook
(
tensors
=
[
"t"
])
with
self
.
assertRaisesRegexp
(
ValueError
,
"metric_logger"
):
metric_hook
.
LoggingMetricHook
(
tensors
=
[
"t"
],
every_n_iter
=
5
)
def
test_print_at_end_only
(
self
):
with
tf
.
Graph
().
as_default
(),
tf
.
Session
()
as
sess
:
tf
.
train
.
get_or_create_global_step
()
t
=
tf
.
constant
(
42.0
,
name
=
"foo"
)
train_op
=
tf
.
constant
(
3
)
hook
=
metric_hook
.
LoggingMetricHook
(
tensors
=
[
t
.
name
],
at_end
=
True
,
metric_logger
=
self
.
_logger
)
hook
.
begin
()
mon_sess
=
monitored_session
.
_HookedSession
(
sess
,
[
hook
])
# pylint: disable=protected-access
sess
.
run
(
tf
.
global_variables_initializer
())
for
_
in
range
(
3
):
mon_sess
.
run
(
train_op
)
self
.
assertEqual
(
self
.
_logger
.
logged_metric
,
[])
hook
.
end
(
sess
)
self
.
assertEqual
(
len
(
self
.
_logger
.
logged_metric
),
1
)
metric
=
self
.
_logger
.
logged_metric
[
0
]
self
.
assertRegexpMatches
(
metric
[
"name"
],
"foo"
)
self
.
assertEqual
(
metric
[
"value"
],
42.0
)
self
.
assertEqual
(
metric
[
"unit"
],
None
)
self
.
assertEqual
(
metric
[
"global_step"
],
0
)
def
test_global_step_not_found
(
self
):
with
tf
.
Graph
().
as_default
():
t
=
tf
.
constant
(
42.0
,
name
=
"foo"
)
hook
=
metric_hook
.
LoggingMetricHook
(
tensors
=
[
t
.
name
],
at_end
=
True
,
metric_logger
=
self
.
_logger
)
with
self
.
assertRaisesRegexp
(
RuntimeError
,
"should be created to use LoggingMetricHook."
):
hook
.
begin
()
def
test_log_tensors
(
self
):
with
tf
.
Graph
().
as_default
(),
tf
.
Session
()
as
sess
:
tf
.
train
.
get_or_create_global_step()
      t1 = tf.constant(42.0, name="foo")
      t2 = tf.constant(43.0, name="bar")
      train_op = tf.constant(3)
      hook = metric_hook.LoggingMetricHook(
          tensors=[t1, t2], at_end=True, metric_logger=self._logger)
      hook.begin()
      mon_sess = monitored_session._HookedSession(
          sess, [hook])  # pylint: disable=protected-access
      sess.run(tf.global_variables_initializer())

      for _ in range(3):
        mon_sess.run(train_op)
        self.assertEqual(self._logger.logged_metric, [])

      hook.end(sess)
      self.assertEqual(len(self._logger.logged_metric), 2)

      metric1 = self._logger.logged_metric[0]
      self.assertRegexpMatches(str(metric1["name"]), "foo")
      self.assertEqual(metric1["value"], 42.0)
      self.assertEqual(metric1["unit"], None)
      self.assertEqual(metric1["global_step"], 0)

      metric2 = self._logger.logged_metric[1]
      self.assertRegexpMatches(str(metric2["name"]), "bar")
      self.assertEqual(metric2["value"], 43.0)
      self.assertEqual(metric2["unit"], None)
      self.assertEqual(metric2["global_step"], 0)

  def _validate_print_every_n_steps(self, sess, at_end):
    t = tf.constant(42.0, name="foo")
    train_op = tf.constant(3)
    hook = metric_hook.LoggingMetricHook(
        tensors=[t.name], every_n_iter=10, at_end=at_end,
        metric_logger=self._logger)
    hook.begin()
    mon_sess = monitored_session._HookedSession(
        sess, [hook])  # pylint: disable=protected-access
    sess.run(tf.global_variables_initializer())
    mon_sess.run(train_op)
    self.assertRegexpMatches(str(self._logger.logged_metric), t.name)
    for _ in range(3):
      self._logger.logged_metric = []
      for _ in range(9):
        mon_sess.run(train_op)
        # assertNotRegexpMatches is not supported by python 3.1 and later
        self.assertEqual(str(self._logger.logged_metric).find(t.name), -1)
      mon_sess.run(train_op)
      self.assertRegexpMatches(str(self._logger.logged_metric), t.name)

    # Add additional run to verify proper reset when called multiple times.
    self._logger.logged_metric = []
    mon_sess.run(train_op)
    # assertNotRegexpMatches is not supported by python 3.1 and later
    self.assertEqual(str(self._logger.logged_metric).find(t.name), -1)

    self._logger.logged_metric = []
    hook.end(sess)
    if at_end:
      self.assertRegexpMatches(str(self._logger.logged_metric), t.name)
    else:
      # assertNotRegexpMatches is not supported by python 3.1 and later
      self.assertEqual(str(self._logger.logged_metric).find(t.name), -1)

  def test_print_every_n_steps(self):
    with tf.Graph().as_default(), tf.Session() as sess:
      tf.train.get_or_create_global_step()
      self._validate_print_every_n_steps(sess, at_end=False)
      # Verify proper reset.
      self._validate_print_every_n_steps(sess, at_end=False)

  def test_print_every_n_steps_and_end(self):
    with tf.Graph().as_default(), tf.Session() as sess:
      tf.train.get_or_create_global_step()
      self._validate_print_every_n_steps(sess, at_end=True)
      # Verify proper reset.
      self._validate_print_every_n_steps(sess, at_end=True)

  def _validate_print_every_n_secs(self, sess, at_end):
    t = tf.constant(42.0, name="foo")
    train_op = tf.constant(3)
    hook = metric_hook.LoggingMetricHook(
        tensors=[t.name], every_n_secs=1.0, at_end=at_end,
        metric_logger=self._logger)
    hook.begin()
    mon_sess = monitored_session._HookedSession(
        sess, [hook])  # pylint: disable=protected-access
    sess.run(tf.global_variables_initializer())

    mon_sess.run(train_op)
    self.assertRegexpMatches(str(self._logger.logged_metric), t.name)

    # assertNotRegexpMatches is not supported by python 3.1 and later
    self._logger.logged_metric = []
    mon_sess.run(train_op)
    self.assertEqual(str(self._logger.logged_metric).find(t.name), -1)
    time.sleep(1.0)

    self._logger.logged_metric = []
    mon_sess.run(train_op)
    self.assertRegexpMatches(str(self._logger.logged_metric), t.name)

    self._logger.logged_metric = []
    hook.end(sess)
    if at_end:
      self.assertRegexpMatches(str(self._logger.logged_metric), t.name)
    else:
      # assertNotRegexpMatches is not supported by python 3.1 and later
      self.assertEqual(str(self._logger.logged_metric).find(t.name), -1)

  def test_print_every_n_secs(self):
    with tf.Graph().as_default(), tf.Session() as sess:
      tf.train.get_or_create_global_step()
      self._validate_print_every_n_secs(sess, at_end=False)
      # Verify proper reset.
      self._validate_print_every_n_secs(sess, at_end=False)

  def test_print_every_n_secs_and_end(self):
    with tf.Graph().as_default(), tf.Session() as sess:
      tf.train.get_or_create_global_step()
      self._validate_print_every_n_secs(sess, at_end=True)
      # Verify proper reset.
      self._validate_print_every_n_secs(sess, at_end=True)


if __name__ == "__main__":
  tf.test.main()
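The deleted test above drives LoggingMetricHook through TensorFlow's private _HookedSession. As a rough illustration of how the same hook is attached in ordinary training code, here is a minimal sketch that is not part of the original repo: FakeMetricLogger is a hypothetical stand-in for the benchmark loggers in official/utils/logs/logger.py, and its log_metric signature simply mirrors the fields the tests above check (name, value, unit, global_step).

import tensorflow as tf

from official.utils.logs import metric_hook


class FakeMetricLogger(object):
  """Hypothetical in-memory logger; collects whatever the hook reports."""

  def __init__(self):
    self.logged_metric = []

  def log_metric(self, name, value, unit=None, global_step=None, extras=None):
    self.logged_metric.append(
        {"name": name, "value": value, "unit": unit,
         "global_step": global_step})


def run_with_metric_logging():
  """Runs a trivial graph with the hook attached; values are illustrative."""
  logger = FakeMetricLogger()
  with tf.Graph().as_default():
    tf.train.get_or_create_global_step()
    loss = tf.constant(42.0, name="loss")
    hook = metric_hook.LoggingMetricHook(
        tensors=[loss], every_n_iter=10, metric_logger=logger)
    with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
      for _ in range(30):
        sess.run(loss)
  return logger.logged_metric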
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/logs/mlperf_helper.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Wrapper for the mlperf logging utils.

MLPerf compliance logging is only desired under a limited set of circumstances.
This module is intended to keep users from needing to consider logging (or
install the module) unless they are performing mlperf runs.
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from collections import namedtuple
import json
import os
import re
import subprocess
import sys
import typing

import tensorflow as tf

_MIN_VERSION = (0, 0, 10)
_STACK_OFFSET = 2

SUDO = "sudo" if os.geteuid() else ""

# This indirection is used in docker.
DROP_CACHE_LOC = os.getenv("DROP_CACHE_LOC", "/proc/sys/vm/drop_caches")

_NCF_PREFIX = "NCF_RAW_"

# TODO(robieta): move line parsing to mlperf util
_PREFIX = r"(?:{})?:::MLPv([0-9]+).([0-9]+).([0-9]+)".format(_NCF_PREFIX)
_BENCHMARK = r"([a-zA-Z0-9_]+)"
_TIMESTAMP = r"([0-9]+\.[0-9]+)"
_CALLSITE = r"\((.+):([0-9]+)\)"
_TAG = r"([a-zA-Z0-9_]+)"
_VALUE = r"(.*)"

ParsedLine = namedtuple("ParsedLine", ["version", "benchmark", "timestamp",
                                       "callsite", "tag", "value"])

LINE_PATTERN = re.compile(
    "^{prefix} {benchmark} {timestamp} {callsite} {tag}(: |$){value}?$".format(
        prefix=_PREFIX, benchmark=_BENCHMARK, timestamp=_TIMESTAMP,
        callsite=_CALLSITE, tag=_TAG, value=_VALUE))


def parse_line(line):  # type: (str) -> typing.Optional[ParsedLine]
  match = LINE_PATTERN.match(line.strip())
  if not match:
    return

  major, minor, micro, benchmark, timestamp = match.groups()[:5]
  call_file, call_line, tag, _, value = match.groups()[5:]

  return ParsedLine(version=(int(major), int(minor), int(micro)),
                    benchmark=benchmark, timestamp=timestamp,
                    callsite=(call_file, call_line), tag=tag, value=value)


def unparse_line(parsed_line):  # type: (ParsedLine) -> str
  version_str = "{}.{}.{}".format(*parsed_line.version)
  callsite_str = "({}:{})".format(*parsed_line.callsite)
  value_str = ": {}".format(parsed_line.value) if parsed_line.value else ""
  return ":::MLPv{} {} {} {} {} {}".format(
      version_str, parsed_line.benchmark, parsed_line.timestamp,
      callsite_str, parsed_line.tag, value_str)


def get_mlperf_log():
  """Shielded import of mlperf_log module."""
  try:
    import mlperf_compliance

    def test_mlperf_log_pip_version():
      """Check that mlperf_compliance is up to date."""
      import pkg_resources
      version = pkg_resources.get_distribution("mlperf_compliance")
      version = tuple(int(i) for i in version.version.split("."))
      if version < _MIN_VERSION:
        tf.logging.warning("mlperf_compliance is version {}, must be >= {}".format(
            ".".join([str(i) for i in version]),
            ".".join([str(i) for i in _MIN_VERSION])))
        raise ImportError
      return mlperf_compliance.mlperf_log

    mlperf_log = test_mlperf_log_pip_version()

  except ImportError:
    mlperf_log = None

  return mlperf_log


class Logger(object):
  """MLPerf logger indirection class.

  This logger only logs for MLPerf runs, and prevents various errors associated
  with not having the mlperf_compliance package installed.
  """

  class Tags(object):
    def __init__(self, mlperf_log):
      self._enabled = False
      self._mlperf_log = mlperf_log

    def __getattr__(self, item):
      if self._mlperf_log is None or not self._enabled:
        return
      return getattr(self._mlperf_log, item)

  def __init__(self):
    self._enabled = False
    self._mlperf_log = get_mlperf_log()
    self.tags = self.Tags(self._mlperf_log)

  def __call__(self, enable=False):
    if enable and self._mlperf_log is None:
      raise ImportError("MLPerf logging was requested, but mlperf_compliance "
                        "module could not be loaded.")

    self._enabled = enable
    self.tags._enabled = enable
    return self

  def __enter__(self):
    pass

  def __exit__(self, exc_type, exc_val, exc_tb):
    self._enabled = False
    self.tags._enabled = False

  @property
  def log_file(self):
    if self._mlperf_log is None:
      return
    return self._mlperf_log.LOG_FILE

  @property
  def enabled(self):
    return self._enabled

  def ncf_print(self, key, value=None, stack_offset=_STACK_OFFSET,
                deferred=False, extra_print=False, prefix=_NCF_PREFIX):
    if self._mlperf_log is None or not self.enabled:
      return
    self._mlperf_log.ncf_print(key=key, value=value, stack_offset=stack_offset,
                               deferred=deferred, extra_print=extra_print,
                               prefix=prefix)

  def set_ncf_root(self, path):
    if self._mlperf_log is None:
      return
    self._mlperf_log.ROOT_DIR_NCF = path


LOGGER = Logger()
ncf_print, set_ncf_root = LOGGER.ncf_print, LOGGER.set_ncf_root
TAGS = LOGGER.tags


def clear_system_caches():
  if not LOGGER.enabled:
    return
  ret_code = subprocess.call(
      ["sync && echo 3 | {} tee {}".format(SUDO, DROP_CACHE_LOC)],
      shell=True)

  if ret_code:
    raise ValueError("Failed to clear caches")


if __name__ == "__main__":
  tf.logging.set_verbosity(tf.logging.INFO)
  with LOGGER(True):
    ncf_print(key=TAGS.RUN_START)
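Since mlperf_helper.py is mostly regex plumbing, a short usage sketch may help. It is not part of the original file, and the benchmark name, timestamp, and callsite in the sample line are invented for illustration.

from official.utils.logs import mlperf_helper

sample = ":::MLPv0.5.0 ncf 1538678136.1 (train.py:42) preproc_batch_size: 256"
parsed = mlperf_helper.parse_line(sample)

print(parsed.version)   # (0, 5, 0)
print(parsed.tag)       # preproc_batch_size
print(parsed.value)     # 256
print(mlperf_helper.unparse_line(parsed))  # re-assembles a :::MLPv line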
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/__init__.py deleted 100644 → 0 View file @ ec90ad8e
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/distribution_utils.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Helper functions for running models in a distributed setting."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf


def get_distribution_strategy(num_gpus, all_reduce_alg=None,
                              turn_off_distribution_strategy=False):
  """Return a DistributionStrategy for running the model.

  Args:
    num_gpus: Number of GPUs to run this model.
    all_reduce_alg: Specify which algorithm to use when performing all-reduce.
      See tf.contrib.distribute.AllReduceCrossDeviceOps for available
      algorithms. If None, DistributionStrategy will choose based on device
      topology.
    turn_off_distribution_strategy: when set to True, do not use any
      distribution strategy. Note that when it is True, and num_gpus is
      larger than 1, it will raise a ValueError.

  Returns:
    tf.contrib.distribute.DistributionStrategy object.

  Raises:
    ValueError: if turn_off_distribution_strategy is True and num_gpus is
      larger than 1.
  """
  if num_gpus == 0:
    if turn_off_distribution_strategy:
      return None
    else:
      return tf.contrib.distribute.OneDeviceStrategy("device:CPU:0")
  elif num_gpus == 1:
    if turn_off_distribution_strategy:
      return None
    else:
      return tf.contrib.distribute.OneDeviceStrategy("device:GPU:0")
  elif turn_off_distribution_strategy:
    raise ValueError("When {} GPUs are specified, "
                     "turn_off_distribution_strategy flag cannot be set to "
                     "True.".format(num_gpus))
  else:  # num_gpus > 1 and not turn_off_distribution_strategy
    devices = ["device:GPU:%d" % i for i in range(num_gpus)]
    if all_reduce_alg:
      return tf.distribute.MirroredStrategy(
          devices=devices,
          cross_device_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
              all_reduce_alg, num_packs=2))
    else:
      return tf.distribute.MirroredStrategy(devices=devices)


def per_device_batch_size(batch_size, num_gpus):
  """For multi-gpu, batch-size must be a multiple of the number of GPUs.

  Note that distribution strategy handles this automatically when used with
  Keras. For using with Estimator, we need to get per GPU batch.

  Args:
    batch_size: Global batch size to be divided among devices. This should be
      equal to num_gpus times the single-GPU batch_size for multi-gpu training.
    num_gpus: How many GPUs are used with DistributionStrategies.

  Returns:
    Batch size per device.

  Raises:
    ValueError: if batch_size is not divisible by number of devices.
  """
  if num_gpus <= 1:
    return batch_size

  remainder = batch_size % num_gpus
  if remainder:
    err = ("When running with multiple GPUs, batch size "
           "must be a multiple of the number of available GPUs. Found {} "
           "GPUs with a batch size of {}; try --batch_size={} instead."
          ).format(num_gpus, batch_size, batch_size - remainder)
    raise ValueError(err)
  return int(batch_size / num_gpus)
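A minimal sketch (not part of the original file) of how these two helpers are typically combined when configuring an Estimator run; the GPU count and global batch size below are arbitrary example values.

import tensorflow as tf

from official.utils.misc import distribution_utils

num_gpus = 4
global_batch_size = 1024

strategy = distribution_utils.get_distribution_strategy(num_gpus)
per_gpu_batch = distribution_utils.per_device_batch_size(
    global_batch_size, num_gpus)  # 256 examples per GPU

# The strategy is then handed to the Estimator via its run config.
run_config = tf.estimator.RunConfig(train_distribute=strategy)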
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/distribution_utils_test.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for distribution util functions."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

# pylint: disable=g-bad-import-order
from official.utils.misc import distribution_utils


class GetDistributionStrategyTest(tf.test.TestCase):
  """Tests for get_distribution_strategy."""

  def test_one_device_strategy_cpu(self):
    ds = distribution_utils.get_distribution_strategy(0)
    self.assertEquals(ds.num_replicas_in_sync, 1)
    self.assertEquals(len(ds.extended.worker_devices), 1)
    self.assertIn('CPU', ds.extended.worker_devices[0])

  def test_one_device_strategy_gpu(self):
    ds = distribution_utils.get_distribution_strategy(1)
    self.assertEquals(ds.num_replicas_in_sync, 1)
    self.assertEquals(len(ds.extended.worker_devices), 1)
    self.assertIn('GPU', ds.extended.worker_devices[0])

  def test_mirrored_strategy(self):
    ds = distribution_utils.get_distribution_strategy(5)
    self.assertEquals(ds.num_replicas_in_sync, 5)
    self.assertEquals(len(ds.extended.worker_devices), 5)
    for device in ds.extended.worker_devices:
      self.assertIn('GPU', device)


class PerDeviceBatchSizeTest(tf.test.TestCase):
  """Tests for per_device_batch_size."""

  def test_batch_size(self):
    self.assertEquals(
        distribution_utils.per_device_batch_size(147, num_gpus=0), 147)
    self.assertEquals(
        distribution_utils.per_device_batch_size(147, num_gpus=1), 147)
    self.assertEquals(
        distribution_utils.per_device_batch_size(147, num_gpus=7), 21)

  def test_batch_size_with_remainder(self):
    with self.assertRaises(ValueError):
      distribution_utils.per_device_batch_size(147, num_gpus=5)


if __name__ == "__main__":
  tf.test.main()
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/model_helpers.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Miscellaneous functions that can be called by models."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numbers

import tensorflow as tf
from tensorflow.python.util import nest


def past_stop_threshold(stop_threshold, eval_metric):
  """Return a boolean representing whether a model should be stopped.

  Args:
    stop_threshold: float, the threshold above which a model should stop
      training.
    eval_metric: float, the current value of the relevant metric to check.

  Returns:
    True if training should stop, False otherwise.

  Raises:
    ValueError: if either stop_threshold or eval_metric is not a number
  """
  if stop_threshold is None:
    return False

  if not isinstance(stop_threshold, numbers.Number):
    raise ValueError("Threshold for checking stop conditions must be a number.")
  if not isinstance(eval_metric, numbers.Number):
    raise ValueError("Eval metric being checked against stop conditions "
                     "must be a number.")

  if eval_metric >= stop_threshold:
    tf.logging.info(
        "Stop threshold of {} was passed with metric value {}.".format(
            stop_threshold, eval_metric))
    return True

  return False


def generate_synthetic_data(
    input_shape, input_value=0, input_dtype=None, label_shape=None,
    label_value=0, label_dtype=None):
  """Create a repeating dataset with constant values.

  Args:
    input_shape: a tf.TensorShape object or nested tf.TensorShapes. The shape of
      the input data.
    input_value: Value of each input element.
    input_dtype: Input dtype. If None, will be inferred by the input value.
    label_shape: a tf.TensorShape object or nested tf.TensorShapes. The shape of
      the label data.
    label_value: Value of each label element.
    label_dtype: Label dtype. If None, will be inferred by the target value.

  Returns:
    Dataset of tensors or tuples of tensors (if label_shape is set).
  """
  # TODO(kathywu): Replace with SyntheticDataset once it is in contrib.
  element = input_element = nest.map_structure(
      lambda s: tf.constant(input_value, input_dtype, s), input_shape)

  if label_shape:
    label_element = nest.map_structure(
        lambda s: tf.constant(label_value, label_dtype, s), label_shape)
    element = (input_element, label_element)

  return tf.data.Dataset.from_tensors(element).repeat()


def apply_clean(flags_obj):
  if flags_obj.clean and tf.gfile.Exists(flags_obj.model_dir):
    tf.logging.info("--clean flag set. Removing existing model dir: {}".format(
        flags_obj.model_dir))
    tf.gfile.DeleteRecursively(flags_obj.model_dir)
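A minimal sketch (not part of the original file) showing the two helpers above in a toy evaluation loop; the shapes, metric values, and stop threshold are invented for illustration.

import tensorflow as tf

from official.utils.misc import model_helpers

# Constant-valued (features, label) dataset, e.g. for dry-run benchmarking.
dataset = model_helpers.generate_synthetic_data(
    input_shape=tf.TensorShape([224, 224, 3]), input_value=0.5,
    input_dtype=tf.float32, label_shape=tf.TensorShape([]),
    label_value=1, label_dtype=tf.int32)

for epoch in range(10):
  eval_accuracy = 0.7 + 0.03 * epoch  # stand-in for a real eval metric
  if model_helpers.past_stop_threshold(stop_threshold=0.9,
                                       eval_metric=eval_accuracy):
    break  # stops at epoch 7, when 0.91 >= 0.9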
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/misc/model_helpers_test.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Model Helper functions."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

# pylint: disable=g-bad-import-order
from official.utils.misc import model_helpers


class PastStopThresholdTest(tf.test.TestCase):
  """Tests for past_stop_threshold."""

  def test_past_stop_threshold(self):
    """Tests for normal operating conditions."""
    self.assertTrue(model_helpers.past_stop_threshold(0.54, 1))
    self.assertTrue(model_helpers.past_stop_threshold(54, 100))
    self.assertFalse(model_helpers.past_stop_threshold(0.54, 0.1))
    self.assertFalse(model_helpers.past_stop_threshold(-0.54, -1.5))
    self.assertTrue(model_helpers.past_stop_threshold(-0.54, 0))
    self.assertTrue(model_helpers.past_stop_threshold(0, 0))
    self.assertTrue(model_helpers.past_stop_threshold(0.54, 0.54))

  def test_past_stop_threshold_none_false(self):
    """Tests that check None returns false."""
    self.assertFalse(model_helpers.past_stop_threshold(None, -1.5))
    self.assertFalse(model_helpers.past_stop_threshold(None, None))
    self.assertFalse(model_helpers.past_stop_threshold(None, 1.5))
    # Zero should be okay, though.
    self.assertTrue(model_helpers.past_stop_threshold(0, 1.5))

  def test_past_stop_threshold_not_number(self):
    """Tests for error conditions."""
    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold("str", 1)

    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold("str", tf.constant(5))

    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold("str", "another")

    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold(0, None)

    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold(0.7, "str")

    with self.assertRaises(ValueError):
      model_helpers.past_stop_threshold(tf.constant(4), None)


class SyntheticDataTest(tf.test.TestCase):
  """Tests for generate_synthetic_data."""

  def test_generate_synethetic_data(self):
    input_element, label_element = model_helpers.generate_synthetic_data(
        input_shape=tf.TensorShape([5]),
        input_value=123,
        input_dtype=tf.float32,
        label_shape=tf.TensorShape([]),
        label_value=456,
        label_dtype=tf.int32).make_one_shot_iterator().get_next()

    with self.test_session() as sess:
      for n in range(5):
        inp, lab = sess.run((input_element, label_element))
        self.assertAllClose(inp, [123., 123., 123., 123., 123.])
        self.assertEquals(lab, 456)

  def test_generate_only_input_data(self):
    d = model_helpers.generate_synthetic_data(
        input_shape=tf.TensorShape([4]),
        input_value=43.5,
        input_dtype=tf.float32)

    element = d.make_one_shot_iterator().get_next()
    self.assertFalse(isinstance(element, tuple))

    with self.test_session() as sess:
      inp = sess.run(element)
      self.assertAllClose(inp, [43.5, 43.5, 43.5, 43.5])

  def test_generate_nested_data(self):
    d = model_helpers.generate_synthetic_data(
        input_shape={'a': tf.TensorShape([2]),
                     'b': {'c': tf.TensorShape([3]), 'd': tf.TensorShape([])}},
        input_value=1.1)

    element = d.make_one_shot_iterator().get_next()
    self.assertIn('a', element)
    self.assertIn('b', element)
    self.assertEquals(len(element['b']), 2)
    self.assertIn('c', element['b'])
    self.assertIn('d', element['b'])
    self.assertNotIn('c', element)

    with self.test_session() as sess:
      inp = sess.run(element)
      self.assertAllClose(inp['a'], [1.1, 1.1])
      self.assertAllClose(inp['b']['c'], [1.1, 1.1, 1.1])
      self.assertAllClose(inp['b']['d'], 1.1)


if __name__ == "__main__":
  tf.test.main()
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/__init__.py deleted 100644 → 0 View file @ ec90ad8e
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/pylint.rcfile deleted 100644 → 0 View file @ ec90ad8e

[MESSAGES CONTROL]
disable=R,W,
        bad-option-value
[REPORTS]
# Tells whether to display a full report or only the messages
reports=no
# Activate the evaluation score.
score=no
[BASIC]
# Regular expression matching correct argument names
argument-rgx=^[a-z][a-z0-9_]*$
# Regular expression matching correct attribute names
attr-rgx=^_{0,2}[a-z][a-z0-9_]*$
# Regular expression matching correct class attribute names
class-attribute-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$
# Regular expression matching correct class names
class-rgx=^_?[A-Z][a-zA-Z0-9]*$
# Regular expression matching correct constant names
const-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=10
# Regular expression matching correct function names
function-rgx=^(?:(?P<camel_case>_?[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_?[a-z][a-z0-9_]*))$
# Good variable names which should always be accepted, separated by a comma
good-names=main,_
# Regular expression matching correct inline iteration names
inlinevar-rgx=^[a-z][a-z0-9_]*$
# Regular expression matching correct method names
method-rgx=^(?:(?P<exempt>__[a-z0-9_]+__|next)|(?P<camel_case>_{0,2}[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_{0,2}[a-z][a-z0-9_]*)|(setUp|tearDown))$
# Regular expression matching correct module names
module-rgx=^(_?[a-z][a-z0-9_]*)|__init__|PRESUBMIT|PRESUBMIT_unittest$
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=(__.*__|main|.*ArgParser)
# Naming hint for variable names
variable-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression matching correct variable names
variable-rgx=^[a-z][a-z0-9_]*$
[TYPECHECK]
# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis. It
# supports qualified module names, as well as Unix pattern matching.
ignored-modules=absl, absl.*, official, official.*, tensorflow, tensorflow.*, LazyLoader, google, google.cloud.*
[CLASSES]
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make
# This is deprecated, because it is not used anymore.
#ignore-iface-methods=
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls,class_
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of arguments for function / method
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of branch for function / method body
max-branches=12
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of statements in function / method body
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=StandardError,Exception,BaseException
[FORMAT]
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=80
# Maximum number of lines in a module
max-module-lines=99999
# List of optional constructs for which whitespace checking is disabled
no-space-check=
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=yes
# Allow URLs and comment type annotations to exceed the max line length as neither can be easily
# split across lines.
ignore-long-lines=^\s*(?:(# )?<?https?://\S+>?$|# type:)
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,_cb
# A regular expression matching the name of dummy variables (i.e. expectedly
# not used).
dummy-variables-rgx=^\*{0,2}(_$|unused_|dummy_)
# Tells whether we should check for unused import in __init__ files.
init-import=no
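The rcfile above is only consumed by the lint tooling. As a rough sketch (not part of the original repo), it could be applied through pylint's standard --rcfile flag, for example from a small Python wrapper; the target module below is illustrative.

import subprocess

subprocess.check_call([
    "pylint",
    "--rcfile=official/utils/testing/pylint.rcfile",  # config shown above
    "official/utils/misc/model_helpers.py",           # illustrative target
])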
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data.py deleted 100644 → 0 View file @ ec90ad8e

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""TensorFlow testing subclass to automate numerical testing.

Reference tests determine when behavior deviates from some "gold standard," and
are useful for determining when layer definitions have changed without
performing full regression testing, which is generally prohibitive. This class
handles the symbolic graph comparison as well as loading weights to avoid
relying on random number generation, which can change.

The tests performed by this class are:

1) Compare a generated graph against a reference graph. Differences are not
   necessarily fatal.
2) Attempt to load known weights for the graph. If this step succeeds but
   changes are present in the graph, a warning is issued but does not raise
   an exception.
3) Perform a calculation and compare the result to a reference value.

This class also provides a method to generate reference data.

Note:
  The test class is responsible for fixing the random seed during graph
  definition. A convenience method name_to_seed() is provided to make this
  process easier.

The test class should also define a .regenerate() class method which (usually)
just calls the op definition function with test=False for all relevant tests.

A concise example of this class in action is provided in:
  official/utils/testing/reference_data_test.py
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import hashlib
import json
import os
import shutil
import sys

import numpy as np
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow


class BaseTest(tf.test.TestCase):
  """TestCase subclass for performing reference data tests."""

  def regenerate(self):
    """Subclasses should override this function to generate a new reference."""
    raise NotImplementedError

  @property
  def test_name(self):
    """Subclass should define its own name."""
    raise NotImplementedError

  @property
  def data_root(self):
    """Use the subclass directory rather than the parent directory.

    Returns:
      The path prefix for reference data.
    """
    return os.path.join(os.path.split(
        os.path.abspath(__file__))[0], "reference_data", self.test_name)

  ckpt_prefix = "model.ckpt"

  @staticmethod
  def name_to_seed(name):
    """Convert a string into a 32 bit integer.

    This function allows test cases to easily generate random fixed seeds by
    hashing the name of the test. The hash string is in hex rather than base 10
    which is why there is a 16 in the int call, and the modulo projects the
    seed from a 128 bit int to 32 bits for readability.

    Args:
      name: A string containing the name of a test.

    Returns:
      A pseudo-random 32 bit integer derived from name.
    """
    seed = hashlib.md5(name.encode("utf-8")).hexdigest()
    return int(seed, 16) % (2 ** 32 - 1)

  @staticmethod
  def common_tensor_properties(input_array):
    """Convenience function for matrix testing.

    In tests we wish to determine whether a result has changed. However storing
    an entire n-dimensional array is impractical. A better approach is to
    calculate several values from that array and test that those derived values
    are unchanged. The properties themselves are arbitrary and should be chosen
    to be good proxies for a full equality test.

    Args:
      input_array: A numpy array from which key values are extracted.

    Returns:
      A list of values derived from the input_array for equality tests.
    """
    output = list(input_array.shape)
    flat_array = input_array.flatten()
    output.extend([float(i) for i in
                   [flat_array[0], flat_array[-1], np.sum(flat_array)]])
    return output

  def default_correctness_function(self, *args):
    """Returns a vector with the concatenation of common properties.

    This function simply calls common_tensor_properties() for every element.
    It is useful as it allows one to easily construct tests of layers without
    having to worry about the details of result checking.

    Args:
      *args: A list of numpy arrays corresponding to tensors which have been
        evaluated.

    Returns:
      A list of values containing properties for every element in args.
    """
    output = []
    for arg in args:
      output.extend(self.common_tensor_properties(arg))
    return output

  def _construct_and_save_reference_files(
      self, name, graph, ops_to_eval, correctness_function):
    """Save reference data files.

    Constructs a serialized graph_def, layer weights, and computation results.
    It then saves them to files which are read at test time.

    Args:
      name: String defining the run. This will be used to define folder names
        and will be used for random seed construction.
      graph: The graph in which the test is conducted.
      ops_to_eval: Ops which the user wishes to be evaluated under a controlled
        session.
      correctness_function: This function accepts the evaluated results of
        ops_to_eval, and returns a list of values. This list must be JSON
        serializable; in particular it is up to the user to convert numpy
        dtypes into builtin dtypes.
    """
    data_dir = os.path.join(self.data_root, name)

    # Make sure there is a clean space for results.
    if os.path.exists(data_dir):
      shutil.rmtree(data_dir)
    os.makedirs(data_dir)

    # Serialize graph for comparison.
    graph_bytes = graph.as_graph_def().SerializeToString()
    expected_file = os.path.join(data_dir, "expected_graph")
    with tf.gfile.Open(expected_file, "wb") as f:
      f.write(graph_bytes)

    with graph.as_default():
      init = tf.global_variables_initializer()
      saver = tf.train.Saver()

    with self.test_session(graph=graph) as sess:
      sess.run(init)
      saver.save(sess=sess, save_path=os.path.join(data_dir, self.ckpt_prefix))

      # These files are not needed for this test.
      os.remove(os.path.join(data_dir, "checkpoint"))
      os.remove(os.path.join(data_dir, self.ckpt_prefix + ".meta"))

      # ops are evaluated even if there is no correctness function to ensure
      # that they can be evaluated.
      eval_results = [op.eval() for op in ops_to_eval]

      if correctness_function is not None:
        results = correctness_function(*eval_results)
        with tf.gfile.Open(os.path.join(data_dir, "results.json"), "w") as f:
          json.dump(results, f)

      with tf.gfile.Open(os.path.join(data_dir, "tf_version.json"), "w") as f:
        json.dump([tf.VERSION, tf.GIT_VERSION], f)

  def _evaluate_test_case(self, name, graph, ops_to_eval, correctness_function):
    """Determine if a graph agrees with the reference data.

    Args:
      name: String defining the run. This will be used to define folder names
        and will be used for random seed construction.
      graph: The graph in which the test is conducted.
      ops_to_eval: Ops which the user wishes to be evaluated under a controlled
        session.
      correctness_function: This function accepts the evaluated results of
        ops_to_eval, and returns a list of values. This list must be JSON
        serializable; in particular it is up to the user to convert numpy
        dtypes into builtin dtypes.
    """
    data_dir = os.path.join(self.data_root, name)

    # Serialize graph for comparison.
    graph_bytes = graph.as_graph_def().SerializeToString()
    expected_file = os.path.join(data_dir, "expected_graph")
    with tf.gfile.Open(expected_file, "rb") as f:
      expected_graph_bytes = f.read()
      # The serialization is non-deterministic byte-for-byte. Instead there is
      # a utility which evaluates the semantics of the two graphs to test for
      # equality. This has the added benefit of providing some information on
      # what changed.
      #   Note: The summary only shows the first difference detected. It is not
      #         an exhaustive summary of differences.
      differences = pywrap_tensorflow.EqualGraphDefWrapper(
          graph_bytes, expected_graph_bytes).decode("utf-8")

    with graph.as_default():
      init = tf.global_variables_initializer()
      saver = tf.train.Saver()

    with tf.gfile.Open(os.path.join(data_dir, "tf_version.json"), "r") as f:
      tf_version_reference, tf_git_version_reference = json.load(f)  # pylint: disable=unpacking-non-sequence

    tf_version_comparison = ""
    if tf.GIT_VERSION != tf_git_version_reference:
      tf_version_comparison = (
          "Test was built using: {} (git = {})\n"
          "Local TensorFlow version: {} (git = {})".format(
              tf_version_reference, tf_git_version_reference,
              tf.VERSION, tf.GIT_VERSION))

    with self.test_session(graph=graph) as sess:
      sess.run(init)
      try:
        saver.restore(sess=sess, save_path=os.path.join(
            data_dir, self.ckpt_prefix))
        if differences:
          tf.logging.warn(
              "The provided graph is different than expected:\n{}\n"
              "However the weights were still able to be loaded.\n{}".format(
                  differences, tf_version_comparison))
      except:  # pylint: disable=bare-except
        raise self.failureException(
            "Weight load failed. Graph comparison:\n{}{}".format(
                differences, tf_version_comparison))

      eval_results = [op.eval() for op in ops_to_eval]
      if correctness_function is not None:
        results = correctness_function(*eval_results)
        with tf.gfile.Open(os.path.join(data_dir, "results.json"), "r") as f:
          expected_results = json.load(f)
        self.assertAllClose(results, expected_results)

  def _save_or_test_ops(self, name, graph, ops_to_eval=None, test=True,
                        correctness_function=None):
    """Utility function to automate repeated work of graph checking and saving.

    The philosophy of this function is that the user need only define ops on
    a graph and specify which results should be validated. The actual work of
    managing snapshots and calculating results should be automated away.

    Args:
      name: String defining the run. This will be used to define folder names
        and will be used for random seed construction.
      graph: The graph in which the test is conducted.
      ops_to_eval: Ops which the user wishes to be evaluated under a controlled
        session.
      test: Boolean. If True this function will test graph correctness, load
        weights, and compute numerical values. If False the necessary test data
        will be generated and saved.
      correctness_function: This function accepts the evaluated results of
        ops_to_eval, and returns a list of values. This list must be JSON
        serializable; in particular it is up to the user to convert numpy
        dtypes into builtin dtypes.
    """
    ops_to_eval = ops_to_eval or []

    if test:
      try:
        self._evaluate_test_case(
            name=name, graph=graph, ops_to_eval=ops_to_eval,
            correctness_function=correctness_function)
      except:
        tf.logging.error("Failed unittest {}".format(name))
        raise
    else:
      self._construct_and_save_reference_files(
          name=name, graph=graph, ops_to_eval=ops_to_eval,
          correctness_function=correctness_function)


class ReferenceDataActionParser(argparse.ArgumentParser):
  """Minimal arg parser so that test regeneration can be called from the CLI."""

  def __init__(self):
    super(ReferenceDataActionParser, self).__init__()
    self.add_argument(
        "--regenerate", "-regen", action="store_true",
        help="Enable this flag to regenerate test data. If not set unit tests "
             "will be run.")


def main(argv, test_class):
  """Simple switch function to allow test regeneration from the CLI."""
  flags = ReferenceDataActionParser().parse_args(argv[1:])
  if flags.regenerate:
    if sys.version_info[0] == 2:
      raise NameError("\nPython2 unittest does not support being run as a "
                      "standalone class.\nAs a result tests must be "
                      "regenerated using Python3.\n"
                      "Tests can be run under 2 or 3.")
    test_class().regenerate()
  else:
    tf.test.main()
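The docstring points to official/utils/testing/reference_data_test.py for a concrete example, and that file is not part of this hunk. The following is therefore only a hypothetical sketch of a BaseTest subclass, loosely modeled on the dense-layer reference files deleted below; the layer shape and seed handling are illustrative.

import tensorflow as tf

from official.utils.testing import reference_data


class DenseReferenceTest(reference_data.BaseTest):
  """Hypothetical reference test for a single dense layer."""

  @property
  def test_name(self):
    return "reference_data_test"

  def _dense_ops(self, test=True):
    name = "dense"
    g = tf.Graph()
    with g.as_default():
      tf.set_random_seed(self.name_to_seed(name))
      x = tf.random_uniform((1, 1))
      y = tf.layers.dense(x, units=1)
    # With test=True this compares against the saved graph, checkpoint, and
    # results.json; with test=False it regenerates those reference files.
    self._save_or_test_ops(
        name=name, graph=g, ops_to_eval=[y], test=test,
        correctness_function=self.default_correctness_function)

  def test_dense(self):
    self._dense_ops(test=True)

  def regenerate(self):
    self._dense_ops(test=False)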
TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/expected_graph deleted 100644 → 0 View file @ ec90ad8e
File deleted

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/model.ckpt.data-00000-of-00001 deleted 100644 → 0 View file @ ec90ad8e
File deleted

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/model.ckpt.index deleted 100644 → 0 View file @ ec90ad8e
File deleted

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/results.json deleted 100644 → 0 View file @ ec90ad8e
[1, 1, 0.4701630473136902, 0.4701630473136902, 0.4701630473136902]
\ No newline at end of file

TensorFlow/ComputeVision/Accuracy_Validation/ResNet50_Official/official/utils/testing/reference_data/reference_data_test/dense/tf_version.json deleted 100644 → 0 View file @ ec90ad8e
["1.8.0-dev20180325", "v1.7.0-rc1-750-g6c1737e6c8"]
\ No newline at end of file