# Tools to check Submissions

Please follow the [official submission automation page](https://docs.mlcommons.org/inference/submission/) for doing a submission. It wraps all the submission-related files listed below.

## `truncate_accuracy_log.py` (Mandatory)

### Inputs

**input**: Path to the directory containing your submission. The `closed` and `open` directories must be inside this directory.
**output**: Path to the directory where the submission with truncated files will be written
**submitter**: Organization name
**backup**: Path to the directory to store an unmodified copy of the truncated files

### Summary

Takes a directory containing a submission and truncates the `mlperf_log_accuracy.json` files. There are two ways to use this script. First, we can create a new submission directory with the truncated files by running:

```
python truncate_accuracy_log.py --input --submitter --output
```

Second, we can truncate the files in place and store a copy of the unmodified files in the backup directory:

```
python tools/submission/truncate_accuracy_log.py --input --submitter --backup
```

### Outputs

Output directory containing the submission with truncated `mlperf_log_accuracy.json` files.

## `preprocess_submission.py` (Optional)

### Inputs

**input**: Path to the directory containing your submission
**submitter**: Organization name
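The document does not show a command line for this tool, so the sketch below uses only the two flags documented above; the submission path and organization name are hypothetical placeholders:

```shell
# Sketch only: SUBMISSION_DIR and MyOrg are hypothetical placeholders.
SUBMISSION_DIR=./my_submission_root
python3 preprocess_submission.py \
    --input "$SUBMISSION_DIR" \
    --submitter MyOrg
```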
### Summary

The input submission directory is modified in place: empty directories are removed and low-accuracy results are inferred. MultiStream and Offline scenario results are also inferred wherever possible. The original input directory is saved in a timestamped directory.

## `submission_checker.py` (Mandatory)

### Inputs

**input**: Path to the directory containing one or several submissions.
**version**: Checker version, e.g. v1.1, v2.0, v2.1, v3.0, v3.1.
**submitter**: Run the checks only for a specific submitter.
**csv**: Output path where the CSV with the results will be stored, e.g. `results/summary.csv`.
**skip_compliance**: Flag to skip compliance checks.
**extra-model-benchmark-map**: Extra mapping from model name to benchmark, e.g. `retinanet:ssd-large;efficientnet:ssd-small`
**submission-exceptions**: Flag to ignore errors in submissions.
The flags below are off by default since v3.1 (the corresponding checks are mandatory), but they can be turned on for debugging purposes.
**skip-power-check**: Flag to skip the extra power checks. This flag has no effect on non-power submission results.
**skip-meaningful-fields-emptiness-check**: Flag to avoid checking whether mandatory system description fields are empty.
**skip-empty-files-check**: Flag to avoid checking whether mandatory measurement files are empty.
**skip-check-power-measure-files**: Flag to avoid checking whether the required power measurement files are present.

### Summary

Checks a directory that contains one or several submissions. This script can be used by running the following command:

```
python3 submission_checker.py --input [--version ] [--submitter ] [--csv ] [--skip_compliance] [--extra-model-benchmark-map ] [--submission-exceptions]
```

### Outputs

- CSV file containing all the valid results in the directory.
- Several errors are raised and invalid results are logged.

## `generate_final_report.py` (Optional)

### Inputs

**input**: Path to the .csv output file of the [submission checker](#submissioncheckerpy)

### Summary

Generates the spreadsheet in the format in which the final results will be published. This script can be used by running the following command:

```
python3 generate_final_report.py --input
```

### Outputs

Spreadsheet with the results.

## `log_parser.py`

### Summary

Helper module for the submission checker. It parses the logs containing the results of the benchmark.

## `filter_errors.py` (Deprecated)

### Summary

Tool to remove manually verified ERRORs from the log file in the v0.7 submission.

## `pack_submission.sh` (Deprecated)

### Summary

Creates an encrypted tarball and generates the SHA1 of the tarball. Currently submissions do not need to be encrypted.

## `repository_checks.sh` (Deprecated)

### Inputs

Takes as input the path of the directory to run the checks on.

### Summary

Checks that a directory containing one or several submissions can be uploaded to GitHub.
This script can be used by running the following command:

```
./repository_checks.sh
```

### Outputs

Logs to the console the errors that could cause problems uploading the submission to GitHub.
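Taken together, the mandatory tools form a small pipeline: truncate the accuracy logs, check the resulting directory, and generate the final report from the checker's CSV. The sketch below chains them using only the flags documented above; the paths, organization name, and version are hypothetical placeholders:

```shell
# Sketch only: all paths, MyOrg, and the version are hypothetical placeholders.
INPUT=./my_submission_root          # must contain closed/ and/or open/
TRUNCATED=./truncated_submission

# 1. Create a copy of the submission with truncated mlperf_log_accuracy.json files.
python truncate_accuracy_log.py --input "$INPUT" --submitter MyOrg --output "$TRUNCATED"

# 2. Run the checker on the truncated copy and write the results CSV.
python3 submission_checker.py --input "$TRUNCATED" --version v3.1 --csv results/summary.csv

# 3. Turn the checker's CSV into the publication spreadsheet.
python3 generate_final_report.py --input results/summary.csv
```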