I1023 02:23:35.217113 4724 params.h:114] Reading onnx model
2023-10-23 02:23:36.156021162 [W:onnxruntime:, session_state.cc:1030 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers, which may or may not have a negative impact on performance. e.g. ORT explicitly assigns shape-related ops to CPU to improve perf.
2023-10-23 02:23:36.156074381 [W:onnxruntime:, session_state.cc:1032 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2023-10-23 02:23:40.334596254 [W:onnxruntime:, session_state.cc:1030 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers, which may or may not have a negative impact on performance. e.g. ORT explicitly assigns shape-related ops to CPU to improve perf.
2023-10-23 02:23:40.334637713 [W:onnxruntime:, session_state.cc:1032 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I1023 02:23:40.419196 4724 onnx_asr_model.cc:126] Onnx Model Info:
HIP Warning: kernel (_ZN11onnxruntime4rocm37_BinaryElementWiseRhsPerChannelBatchNIfffNS0_6OP_SubIfffEELi512ELi2EEEvPKT0_PKT1_NS0_11fast_divmodESA_PT_T2_i) launched with 512 threads, out of the compiled range (256); add __launch_bounds__ to the kernel definition or recompile the program with --gpu-max-threads-per-block!