1. 25 Mar, 2026 1 commit
    • Benchmark: Model benchmark - deterministic training support (#731) · 036c4712
      Aishwarya Tonpe authored
      
      
      Adds an opt-in deterministic training mode to SuperBench's PyTorch
      model benchmarks. When enabled via --enable-determinism, PyTorch
      deterministic algorithms are enforced and per-step numerical
      fingerprints (loss, activation means) are recorded as metrics. These
      can be compared across runs using the existing sb result diagnosis
      pipeline to verify bit-exact reproducibility, which is useful for
      hardware validation and platform comparison.
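      For reference, a minimal sketch of what PyTorch's deterministic mode
      typically involves; the exact settings applied by pytorch_base.py in
      this PR are an assumption, not shown in this log:
      ```
      import os
      import torch

      # Sketch of typical PyTorch determinism settings (assumed, hedged):
      # cuBLAS needs a fixed workspace config for deterministic GEMMs.
      os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
      torch.manual_seed(42)                     # fixed seed (cf. --deterministic-seed)
      torch.use_deterministic_algorithms(True)  # error out on nondeterministic ops
      torch.backends.cudnn.benchmark = False    # disable nondeterministic autotuning
      ```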
       
      Flags added:
      
      --enable-determinism: enable deterministic training mode
      --check-frequency: number of steps between metric recordings
      --deterministic-seed: seed to use for the deterministic run
      
      Changes:
      
      Updated pytorch_base.py to handle deterministic settings and logging.
      Added a new example script: pytorch_deterministic_example.py
      Added a test file: test_pytorch_determinism_all.py to verify everything
      works as expected.
      
      Usage:
      
      Step 1: Run with --enable-determinism; the necessary metrics will be
      recorded in the results-summary.jsonl file.
      Step 2: Generate the baseline file from the Run 1 results using sb
      result generate-baseline.
      Step 3: Run again with --enable-determinism on a different machine (or
      the same machine); the metrics are again recorded in that run's
      results-summary.jsonl file.
      Step 4: Run diagnosis on the results generated from the 2 runs using
      the sb result diagnosis command.
      
      Note:
      1. Make sure all parameters are identical between the 2 runs.
      2. Running the diagnosis command requires the rules.yaml file.
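      
      For a single deterministic run driven directly from Python, here is a
      minimal sketch mirroring the inline test driver shown later in this
      log; the model name and the exact flag spellings are assumptions based
      on this commit's description:
      ```
      from superbench.benchmarks import BenchmarkRegistry, Framework, Platform

      # Model name and flag spellings are assumptions based on this PR's text.
      parameters = ('--num_steps 128 --enable-determinism '
                    '--deterministic-seed 42 --check-frequency 16')
      context = BenchmarkRegistry.create_benchmark_context(
          'resnet101', platform=Platform.CUDA, parameters=parameters,
          framework=Framework.PYTORCH
      )
      benchmark = BenchmarkRegistry.launch_benchmark(context)
      ```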
      
      ---------
      Co-authored-by: Ubuntu <rdadmin@HPCPLTNODE0.n3kgq4m0lhoednrx3hxtad2nha.cdmx.internal.cloudapp.net>
  2. 28 Jan, 2026 1 commit
  3. 04 Dec, 2025 1 commit
  4. 17 Nov, 2025 1 commit
    • Benchmarks: micro benchmarks - add --set_ib_devices option to auto-select IB device by MPI local rank in ib validation (#733) · c65ae567
      Yuting Jiang authored
      
      **Description**
      Add a --set_ib_devices option to auto-select the IB device by MPI local rank.
      
      
      **Major Revision**
      - Add a new CLI flag --set_ib_devices to automatically select irregular
      IB devices based on the MPI local rank (see the sketch below).
      - When enabled, the benchmark queries available IB devices via
      network.get_ib_devices() and selects the device corresponding to
      OMPI_COMM_WORLD_LOCAL_RANK.
      - Fall back to the existing --ib_dev behavior when the flag is not
      provided.
      
      **Minor Revision**
      - Add an environment variable in network.get_ib_devices() to allow
      users to set the device name
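      
      A minimal sketch of the selection logic described above (illustrative
      only, not the benchmark's actual code):
      ```
      import os

      def select_ib_device(ib_devices):
          """Pick one IB device per process by MPI local rank."""
          local_rank = int(os.environ.get('OMPI_COMM_WORLD_LOCAL_RANK', '0'))
          # Wrap around in case there are more local ranks than devices.
          return ib_devices[local_rank % len(ib_devices)]

      # e.g. local rank 0 -> 'mlx5_0', local rank 1 -> 'mlx5_1'
      print(select_ib_device(['mlx5_0', 'mlx5_1']))
      ```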
  5. 23 Oct, 2025 1 commit
    • Benchmarks: Micro benchmark - add ncu profile support in cublaslt-gemm (#740) · f6e65a98
      Yuting Jiang authored
      **Description**
      This PR adds NCU (NVIDIA Nsight Compute) profiling support to the
      cublaslt-gemm micro benchmark, enabling detailed kernel analysis
      including DRAM throughput, compute throughput, and launch arguments.
      
      **Major Revision**
      - Add --enable_ncu_profiling and --profiling_metrics flags for NCU
      profiling
      - Modify command execution to use NCU when profiling is enabled
      - Update result parsing to handle both standard and NCU-profiled output
      formats
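      
      Conceptually, the command wrapping might look like the sketch below;
      `ncu --metrics` is standard Nsight Compute CLI, but the wrapping
      details and the default metric list in the PR are assumptions:
      ```
      def build_command(base_cmd, enable_ncu_profiling=False, profiling_metrics=None):
          """Sketch: prefix the benchmark command with ncu when profiling is on."""
          if not enable_ncu_profiling:
              return base_cmd
          # dram__throughput.* is a real Nsight Compute metric; using it as
          # the default here is an assumption.
          metrics = ','.join(profiling_metrics
                             or ['dram__throughput.avg.pct_of_peak_sustained_elapsed'])
          return 'ncu --metrics {} {}'.format(metrics, base_cmd)

      print(build_command('cublaslt_gemm -m 2048 -n 12288 -k 1536', True))
      ```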
  6. 22 Oct, 2025 1 commit
  7. 08 Oct, 2025 1 commit
    • Enhancement: Add nsys and pytorch profiler debug trace support (#744) · d804dbb6
      Hongtao Zhang authored
      
      
      To improve benchmark debugging, the following debug methods were added:
      
      pytorch profiler in model benchmark
      
      - SB_ENABLE_PYTORCH_PROFILER: switch to enable/disable
      - SB_TORCH_PROFILER_TRACE_DIR: log path
      These two runtime variables need to be configured in the SB config file.
      
      nsys in SB runner
      
      - SB_ENABLE_NSYS: switch to enable/disable 
      - SB_NSYS_TRACE_DIR: log path
      These two runtime variables need to be configured in the runner's ENV.
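      
      A minimal sketch of how an env-var-gated PyTorch profiler hook could
      work; the variable names are from this commit, but the wiring inside
      the model benchmark is assumed:
      ```
      import os
      import torch
      from torch.profiler import ProfilerActivity, profile

      def run_training_steps():
          # Stand-in for the benchmark's actual training loop.
          x = torch.randn(1024, 1024)
          for _ in range(10):
              x = x @ x

      if os.environ.get('SB_ENABLE_PYTORCH_PROFILER'):
          trace_dir = os.environ.get('SB_TORCH_PROFILER_TRACE_DIR', '/tmp/sb_traces')
          with profile(
              activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
              on_trace_ready=torch.profiler.tensorboard_trace_handler(trace_dir),
          ):
              run_training_steps()
      ```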
      
      ---------
      Co-authored-by: Hongtao Zhang <hongtaozhang@microsoft.com>
  8. 01 Oct, 2025 1 commit
  9. 29 Sep, 2025 2 commits
  10. 19 Sep, 2025 1 commit
  11. 12 Aug, 2025 1 commit
  12. 30 Jun, 2025 1 commit
  13. 26 Jun, 2025 1 commit
  14. 25 Jun, 2025 1 commit
  15. 24 Jun, 2025 1 commit
  16. 20 Jun, 2025 2 commits
    • Benchmark - Support autotuning in cublaslt gemm (#706) · 60b13256
      Babak Hejazi authored
      **Description**
      Enable autotuning as an opt-in mode when benchmarking cublasLt via
      `cublaslt_gemm`
      
      The implementation is based on
      https://github.com/NVIDIA/CUDALibrarySamples/blob/master/cuBLASLt/LtSgemmSimpleAutoTuning/sample_cublasLt_LtSgemmSimpleAutoTuning.cu
      
      The behavior of the original benchmark command remains unchanged, e.g.:
      - `cublaslt_gemm -m 2048 -n 12288 -k 1536 -w 10000 -i 1000 -t fp8e4m3`
      
      The new opt-in options are `-a` (autotune), `-I` (autotune iterations,
      default 50, same as the default for `-i`), and `-W` (autotune warmups,
      default 20, same as the default for `-w`), e.g.:
      - `cublaslt_gemm -m 2048 -n 12288 -k 1536 -w 10000 -i 1000 -t fp8e4m3
      -a`
      - `cublaslt_gemm -m 2048 -n 12288 -k 1536 -w 10000 -i 1000 -t fp8e4m3 -a
      -I 10 -W 10`
      
      **Note:** This PR also changes the default `gemm_compute_type` for BF16
      and FP16 to `CUBLAS_COMPUTE_32F`.
      
      **Further observations:** 
      1. The support matrix of `cublaslt_gemm` could be further extended in
      the future to support non-FP16 output for FP8 inputs as well.
      2. Currently, the input matrices are initialized with values of 1.0 and
      2.0, which makes them less demanding in terms of power. Another future
      extension could be to enable a different fill mode with, say, uniform
      random numbers between -1 and 1.
      3. cuBLAS workspace recommendations are listed under
      https://docs.nvidia.com/cuda/cublas/#cublassetworkspace
      
      
      
      Update (June 10, 2025): verified using a higher-level test driver with
      these commands:
      
      1. inline:
      ```
      python3 -c "                                                                            
      from superbench.benchmarks import BenchmarkRegistry, Platform
      from superbench.common.utils import logger
      
      parameters = (
          '--num_warmup 10 --num_steps 50 '
          '--shapes 512,512,512 1024,1024,1024 --in_types fp16 fp32 '
          '--enable_autotune --num_warmup_autotune 20 --num_steps_autotune 50'
      )
      context = BenchmarkRegistry.create_benchmark_context(
          'cublaslt-gemm', platform=Platform.CUDA, parameters=parameters
      )
      benchmark = BenchmarkRegistry.launch_benchmark(context)
      logger.info('Result: {}'.format(benchmark.result))
      "
      ```
      
      2. newly added script: 
      `python3 examples/benchmarks/cublaslt_function.py`
      
      ---------
      Co-authored-by: Babak Hejazi <babakh@nvidia.com>
    • Benchmark - Add Grace CPU support for CPU Stream (#719) · 0b8d1fd4
      WenqingLan1 authored
      
      
      **Description**
      Added support for the Grace CPU neo2 architecture in CPU Stream. CPU
      Stream now supports dual-socket benchmarking.
      
      Example config for this arch support:
      ```yaml
          cpu-stream:numa0:
            timeout: *default_timeout
            modes:
            - name: local
              parallel: no
            parameters:
              cpu_arch: neo2
              numa_mem_nodes: 0
              cores: 0 1 2 3 4 5 6 7 8
          cpu-stream:numa1:
            timeout: *default_timeout
            modes:
            - name: local
              parallel: no
            parameters:
              cpu_arch: neo2
              numa_mem_nodes: 1
              cores: 64 65 66 67 68 69 70 71 72
          cpu-stream:numa-spread:
            timeout: *default_timeout
            modes:
            - name: local
              parallel: no
            parameters:
              cpu_arch: neo2
              numa_mem_nodes: 0 1
              cores: 0 1 2 3 4 5 6 7 8 64 65 66 67 68 69 70 71 72
      ```
      
      ---------
      Co-authored-by: dpower4 <dilipreddi@gmail.com>
  17. 18 Jun, 2025 1 commit
    • Benchmarks - Add GPU Stream Micro Benchmark (#697) · 4eddd50a
      WenqingLan1 authored
      Added GPU Stream benchmark - measures GPU memory bandwidth and
      efficiency for the double datatype through various memory operations,
      including copy, scale, add, and triad (see the sketch after this list).
      - added documentation for `gpu-stream` detailing its introduction,
      metrics, and descriptions.
      - added unit tests for `gpu-stream`. Example output is in
      `superbenchmark/tests/data/gpu_stream.log`.
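      
      For reference, the four STREAM operations the benchmark times, as a
      conceptual NumPy illustration; the actual kernels run on the GPU and
      are not shown in this log:
      ```
      import numpy as np

      n, scalar = 1 << 20, 3.0
      a = np.ones(n, dtype=np.float64)
      b = np.ones(n, dtype=np.float64)
      c = np.ones(n, dtype=np.float64)

      c[:] = a               # copy:  c = a
      b[:] = scalar * c      # scale: b = q * c
      c[:] = a + b           # add:   c = a + b
      a[:] = b + scalar * c  # triad: a = b + q * c
      ```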
  18. 14 Jun, 2025 1 commit
    • microbenchmark - CPU Stream Benchmark Revise (#712) · 991c0051
      Hongtao Zhang authored
      
      
      In the current implementation, the CPU Stream benchmark code renames
      the binary before the microbench base class can verify its existence,
      causing the default-binary check to fail.
      
      This PR adds a "default" binary, built with the standard compile
      parameters, so that the base class can always find and validate it.
      Once the default binary is in place, the CPU Stream code renames it as
      needed and re-checks its presence before running the benchmark.
      
      The PR also enables CPU Stream in the default settings.
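      
      A sketch of the flow described above; the file names and the exact
      rename mechanics are hypothetical:
      ```
      import os
      import shutil

      def prepare_cpu_stream_binary(bin_dir, arch):
          """Sketch: the base class validates a default binary, then the
          CPU Stream code renames it per-arch and re-checks before running."""
          default_bin = os.path.join(bin_dir, 'cpu_stream')       # hypothetical name
          arch_bin = os.path.join(bin_dir, 'cpu_stream_' + arch)  # hypothetical name
          if not os.path.isfile(default_bin):       # base-class existence check
              raise FileNotFoundError(default_bin)
          shutil.move(default_bin, arch_bin)        # rename as needed
          return os.path.isfile(arch_bin)           # re-check before the run
      ```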
      
      ---------
      Co-authored-by: Hongtao Zhang <hongtaozhang@microsoft.com>
  19. 01 May, 2025 1 commit
  20. 21 Mar, 2025 1 commit
  21. 04 Mar, 2025 1 commit
  22. 25 Feb, 2025 1 commit
  23. 15 Feb, 2025 1 commit
  24. 05 Feb, 2025 2 commits
    • Bugfix - nvbandwidth benchmark need to handle N/A value (#675) · 45d06647
      Hongtao Zhang authored
      
      
      **Description**
      
      1. Fixed a bug where the nvbandwidth benchmark did not handle 'N/A'
      values in the nvbandwidth command output.
      2. Replaced the input format of test cases with a list.
      3. Added an nvbandwidth configuration example to the default config files.
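      
      For item 1, the gist is to treat 'N/A' cells as missing rather than
      failing the parse; a minimal sketch, since the PR's actual handling is
      not shown in this log:
      ```
      def parse_bandwidth(token):
          """Return a float for numeric cells and None for 'N/A' cells."""
          return None if token.strip() == 'N/A' else float(token)

      print([parse_bandwidth(t) for t in ['123.45', 'N/A', '67.8']])
      # -> [123.45, None, 67.8]
      ```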
      
      ---------
      Co-authored-by: hongtaozhang <hongtaozhang@microsoft.com>
      Co-authored-by: Yifan Xiong <yifan.xiong@microsoft.com>
    • Bug - Fix tensorrt-inference parsing (#674) · 7af7c0b7
      Kirill Prosvirov authored
      **Description**
      Today I was running a benchmark on my machine and encountered a fancy
      issue with tensorrt-inference.
      I got code 33, which according to the source code is:
      ```
      MICROBENCHMARK_RESULT_PARSING_FAILURE = 33
      ```
      I dived into the code and found the following problem. The parser
      stumbled when it reached the following line:
      ```
      [11/28/2024-17:03:11] [I] Latency: min = 7.2793 ms, max = 10.1606 ms, mean = 7.41642 ms, median = 7.39551 ms, percentile(99%) = 8 ms
      ```
      I ran the regular expression separately and found that it did not
      handle cases like this, where a value is reported as an integer number
      of milliseconds. That is why this pull request was created.
      I came up with the closest possible regular expression that fixes this
      issue without introducing any other bugs.
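      
      A minimal reproduction of the problem and one possible fix; this is a
      sketch, and the exact pattern adopted in the PR may differ:
      ```
      import re

      line = ('[11/28/2024-17:03:11] [I] Latency: min = 7.2793 ms, max = 10.1606 ms, '
              'mean = 7.41642 ms, median = 7.39551 ms, percentile(99%) = 8 ms')

      # A pattern that requires a fractional part misses the integer '8 ms':
      print(re.findall(r'= (\d+\.\d+) ms', line))
      # -> ['7.2793', '10.1606', '7.41642', '7.39551']

      # Making the fraction optional handles both forms:
      print(re.findall(r'= (\d+(?:\.\d+)?) ms', line))
      # -> ['7.2793', '10.1606', '7.41642', '7.39551', '8']
      ```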
      
      **Major Revision**
      - 0.11.0
  25. 04 Feb, 2025 1 commit
  26. 28 Nov, 2024 2 commits
  27. 27 Nov, 2024 1 commit
  28. 22 Nov, 2024 1 commit
  29. 20 Nov, 2024 1 commit
  30. 06 Nov, 2024 1 commit
    • Dockerfile - Add support for arm64 build (#660) · 47949127
      pdr authored
      Add support for arm64 build:
      
      - Updated the Dockerfile for arm64 build
      - Extended CPU Stream compilation for Neoverse
      - Handled onnxruntime-gpu installation
      - Filtered third-party builds based on arch
      - Disabled the CUDA decode perf build for non-x86
  31. 05 Nov, 2024 1 commit
    • Bug Fix - Fix numa error on grace cpu in gpu-copy (#658) · 59d36f7f
      pdr authored
      The current GPU Copy BW benchmark fails on NVIDIA Grace systems. This
      is due to memory-only NUMA nodes: numa_run_on_node fails for such
      nodes and halts completely.
      
      This fix checks for the presence of assigned CPU cores on each NUMA
      node; if a node has no CPU cores assigned, it skips that specific node
      during args creation and continues.
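      
      A minimal sketch of the skip logic; the data layout is hypothetical:
      ```
      def runnable_numa_nodes(node_cpus):
          """node_cpus maps NUMA node id -> list of CPU cores assigned to it.
          Memory-only nodes (no cores) are skipped during args creation."""
          return [node for node, cpus in node_cpus.items() if cpus]

      # Node 1 is memory-only (as on Grace), so only node 0 is kept.
      print(runnable_numa_nodes({0: [0, 1, 2, 3], 1: []}))  # -> [0]
      ```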
  32. 10 Oct, 2024 1 commit
  33. 20 Aug, 2024 1 commit
  34. 16 Aug, 2024 1 commit
  35. 13 Aug, 2024 1 commit
  36. 26 Jul, 2024 1 commit