1. 18 Apr, 2026 1 commit
    • Benchmark: Model benchmark - deterministic training support (#731) (#2) · 47d4a79d
      one authored
      
      
      Adds an opt-in deterministic training mode to SuperBench's PyTorch model
      benchmarks. When enabled via --enable-determinism, PyTorch deterministic
      algorithms are enforced and per-step numerical fingerprints (loss,
      activation means) are recorded as metrics. These can be compared across
      runs with the existing sb result diagnosis pipeline to verify bit-exact
      reproducibility, which is useful for hardware validation and platform
      comparison.
       
      Flags added -
      
      --enable-determinism: enable deterministic training mode
      --check-frequency: number of steps between metric recordings
      --deterministic-seed: seed used for the deterministic run
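      The flags above follow the usual seed-and-fingerprint pattern. Below is a
      minimal stdlib sketch of that idea; it is an illustration only, with
      `random` standing in for the PyTorch/CUDA seeding and deterministic-algorithm
      calls the real benchmark would make, and `run_training` is a hypothetical
      name, not SuperBench code:

      ```python
      import random

      def run_training(seed: int, steps: int, check_frequency: int) -> list:
          """Simulate a training run, recording a 'loss' fingerprint
          every check_frequency steps (stand-in for the real metrics)."""
          rng = random.Random(seed)              # --deterministic-seed
          fingerprints = []
          loss = 1.0
          for step in range(1, steps + 1):
              loss *= 1.0 - 0.1 * rng.random()   # stand-in for one training step
              if step % check_frequency == 0:    # --check-frequency
                  fingerprints.append(loss)
          return fingerprints

      # Two runs with the same seed produce bit-identical fingerprints.
      run_a = run_training(seed=42, steps=100, check_frequency=10)
      run_b = run_training(seed=42, steps=100, check_frequency=10)
      assert run_a == run_b
      ```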
      
      Changes -
      
      Updated pytorch_base.py to handle deterministic settings and logging.
      Added a new example script: pytorch_deterministic_example.py
      Added a test file, test_pytorch_determinism_all.py, to verify everything
      works as expected.
      
      Usage -
      
      Step 1 (Run 1): Run with --enable-determinism; the metrics are recorded
      in the results-summary.jsonl file.
      Step 2: Generate the baseline file from the Run 1 results using sb
      result generate-baseline.
      Step 3 (Run 2): Run with --enable-determinism on a different machine (or
      the same machine); the metrics are again recorded in
      results-summary.jsonl.
      Step 4: Run diagnosis on the results from the two runs using the sb
      result diagnosis command.
      
      Note -
      1. Make sure all parameters are kept constant between the two runs.
      2. Running the diagnosis command requires the rules.yaml file.
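      The diagnosis step amounts to comparing the recorded fingerprints field
      by field across the two runs. A hedged sketch of such a bit-exactness
      check, not the actual pipeline; the record field names and helper names
      here are illustrative, only the results-summary.jsonl style comes from
      the description above:

      ```python
      import json, os, tempfile

      def write_results(path, records):
          """Write per-step fingerprint records, one JSON object per line."""
          with open(path, "w") as f:
              for rec in records:
                  f.write(json.dumps(rec) + "\n")

      def load_results(path):
          """Read per-step metrics back from a jsonl-style file."""
          with open(path) as f:
              return [json.loads(line) for line in f if line.strip()]

      def bit_exact(run1, run2):
          """True iff every recorded fingerprint matches exactly across runs."""
          return len(run1) == len(run2) and all(a == b for a, b in zip(run1, run2))

      # Field names below are illustrative, not the benchmark's actual schema.
      records = [{"step": 10, "loss": 0.693147, "act_mean": 0.0125}]
      tmp = tempfile.mkdtemp()
      run1_path = os.path.join(tmp, "run1.jsonl")
      run2_path = os.path.join(tmp, "run2.jsonl")
      write_results(run1_path, records)
      write_results(run2_path, records)
      assert bit_exact(load_results(run1_path), load_results(run2_path))
      ```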
      
      ---------
      Co-authored-by: Aishwarya Tonpe <aishwarya.tonpe25@gmail.com>
      Co-authored-by: Ubuntu <rdadmin@HPCPLTNODE0.n3kgq4m0lhoednrx3hxtad2nha.cdmx.internal.cloudapp.net>
  2. 28 Jan, 2026 1 commit
  3. 28 Nov, 2024 1 commit
    • Benchmarks - Add LLaMA-2 Models (#668) · 249e21c1
      pdr authored
      Added a LLaMA benchmark (training and inference) in line with the
      existing PyTorch model implementations such as gpt2, lstm, etc.
      
      - added a llama fp8 unit test for better code coverage and to reduce the
      memory required
      - updated the transformers version to >= 4.28.0 for LlamaConfig
      - set the tokenizers version to <= 0.20.3 to avoid
      [issues](https://github.com/huggingface/tokenizers/issues/1691) with the
      0.20.4 version on py3.8
      - added llama2 to tensorrt
      - llama2 tests not added to test_tensorrt_inference_performance.py due
      to the large memory requirement on the worker GPU; the tests were
      validated separately on GH200
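      The two version pins above can be read as simple predicates on version
      numbers. A small illustrative sketch (SuperBench expresses these as
      package requirements, not code; the function names are hypothetical):

      ```python
      def version_tuple(v: str):
          """Parse 'X.Y.Z' into a comparable tuple of ints."""
          return tuple(int(p) for p in v.split("."))

      # transformers >= 4.28.0 (needed for LlamaConfig)
      def transformers_ok(v):
          return version_tuple(v) >= (4, 28, 0)

      # tokenizers <= 0.20.3 (0.20.4 has issues on py3.8)
      def tokenizers_ok(v):
          return version_tuple(v) <= (0, 20, 3)

      assert transformers_ok("4.28.0")
      assert tokenizers_ok("0.20.3") and not tokenizers_ok("0.20.4")
      ```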
      
      ---------
      Co-authored-by: dpatlolla <dpatlolla@microsoft.com>
  4. 27 Nov, 2024 1 commit
  5. 30 Dec, 2022 1 commit
    • Executor - Add stdout logging util module and enable real-time logging flushing in executor (#445) · 9dfefce3
      Yuting Jiang authored
      **Description**
      Add stdout logging util module and enable real-time logging flushing in executor
      
      **Major Revision**
      - Add stdout logging util module to redirect stdout into the file log
      - Enable stdout logging in executor to write benchmark output into both stdout and the file `sb-bench.log`
      - Enable real-time log flushing in run_command of microbenchmarks through the config `log_flushing`
      
      **Minor Revision**
      - Add log_n_step args to enable regular step-time logging in model benchmarks
      - Update related docs
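      The redirection idea can be sketched as a small tee-style writer; this is
      a hedged illustration of the technique, not SuperBench's actual util
      module. Every write goes to both the original stream and the log file,
      and the log is flushed immediately so log lines appear in real time:

      ```python
      import io, sys

      class StdoutTee:
          """Duplicate writes to the original stream and a log file (illustrative)."""
          def __init__(self, stream, logfile):
              self.stream, self.logfile = stream, logfile

          def write(self, data):
              self.stream.write(data)
              self.logfile.write(data)
              self.logfile.flush()   # real-time flushing: the log keeps up with stdout
              return len(data)

          def flush(self):
              self.stream.flush()
              self.logfile.flush()

      captured, log = io.StringIO(), io.StringIO()
      old = sys.stdout
      sys.stdout = StdoutTee(captured, log)
      try:
          print("benchmark output line")
      finally:
          sys.stdout = old
      assert captured.getvalue() == log.getvalue() == "benchmark output line\n"
      ```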
  6. 06 Sep, 2022 1 commit
    • Release - SuperBench v0.6.0 (#409) · 63e9b2d1
      Yifan Xiong authored
      
      
      **Description**
      
      Cherry-pick bug fixes from v0.6.0 to main.
      
      **Major Revisions**
      
      * Enable latency test in ib traffic validation distributed benchmark (#396)
      * Enhance parameter parsing to allow spaces in value (#397)
      * Update apt packages in dockerfile (#398)
      * Upgrade colorlog for NO_COLOR support (#404)
      * Analyzer - Update error handling to support exit code of sb result diagnosis (#403)
      * Analyzer - Make baseline file optional in data diagnosis and fix bugs (#399)
      * Enhance timeout cleanup to avoid possible hanging (#405)
      * Auto generate ibstat file by pssh (#402)
      * Analyzer - Format int type and unify empty value to N/A in diagnosis output file (#406)
      * Docs - Upgrade version and release note (#407)
      * Docs - Fix issues in document (#408)
      Co-authored-by: Yang Wang <yangwang1@microsoft.com>
      Co-authored-by: Yuting Jiang <yutingjiang@microsoft.com>
  7. 04 Aug, 2022 1 commit
  8. 01 Apr, 2022 1 commit
  9. 19 Jan, 2022 1 commit
  10. 18 Jan, 2022 1 commit
    • CLI - Add command sb benchmark [list,list-parameters] (#279) · f7ffc545
      Yifan Xiong authored
      __Description__
      
      Add command `sb benchmark list` and `sb benchmark list-parameters` to support listing available benchmarks and all their optional parameters.
      
      <details>
      <summary>Examples</summary>
      <pre>
      $ sb benchmark list -n [a-z]+-bw -o table
      Result
      --------
      mem-bw
      nccl-bw
      rccl-bw
      </pre>
      <pre>
      $ sb benchmark list-parameters -n mem-bw
      === mem-bw ===
      optional arguments:
        --bin_dir str         Specify the directory of the benchmark binary.
        --duration int        The elapsed time of benchmark in seconds.
        --mem_type str [str ...]
                              Memory types to benchmark. E.g. htod dtoh dtod.
        --memory str          Memory argument for bandwidthtest. E.g. pinned unpinned.
        --run_count int       The run count of benchmark.
        --shmoo_mode          Enable shmoo mode for bandwidthtest.
      default values:
      {'bin_dir': None,
       'duration': 0,
       'mem_type': ['htod', 'dtoh'],
       'memory': 'pinned',
       'run_count': 1}
      </pre>
      </details>
      
      __Major Revisions__
      * Add `sb benchmark list` to list benchmarks matching given name.
      * Add `sb benchmark list-parameters` to list parameters for benchmarks which match given name.
      
      __Minor Revisions__
      * Sort and format help text for argparse.
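      The name filter in the example above (`[a-z]+-bw`) behaves like a regular
      expression matched against registered benchmark names. One plausible
      sketch of that lookup, not the actual CLI implementation; the registry
      contents are taken from the example output:

      ```python
      import re

      # Names taken from the example output above; the registry itself is hypothetical.
      registry = ["mem-bw", "nccl-bw", "rccl-bw", "gpt2", "lstm"]

      def list_benchmarks(pattern: str):
          """Return registered benchmark names fully matching the given regex."""
          return sorted(name for name in registry if re.fullmatch(pattern, name))

      assert list_benchmarks(r"[a-z]+-bw") == ["mem-bw", "nccl-bw", "rccl-bw"]
      ```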
  11. 07 Dec, 2021 1 commit
  12. 02 Dec, 2021 1 commit
  13. 07 Jun, 2021 1 commit
  14. 14 Apr, 2021 1 commit
  15. 12 Apr, 2021 1 commit
  16. 09 Apr, 2021 1 commit
  17. 08 Apr, 2021 1 commit
  18. 24 Feb, 2021 1 commit