W1021 17:44:40.018000 23116516677440 torch/distributed/run.py:779]
W1021 17:44:40.018000 23116516677440 torch/distributed/run.py:779] *****************************************
W1021 17:44:40.018000 23116516677440 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1021 17:44:40.018000 23116516677440 torch/distributed/run.py:779] *****************************************
[2025-10-21 17:44:44,366] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,402] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,412] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,429] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,452] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,462] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,479] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 17:44:44,588] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
INFO 10-21 17:44:45 __init__.py:193] Automatically detected platform rocm.   (printed once per rank, 8 times in total)
Could not load Sliding Tile Attention.   (printed once per rank, 8 times in total)
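The torchrun banner above notes that every worker defaults to OMP_NUM_THREADS=1 and asks the user to tune it. A minimal sketch of one way to do that from inside the training entrypoint, assuming 8 ranks per node as in this run; torch.set_num_threads is the public knob, and the core-split heuristic is only an illustration, not this repo's launch script:

    import os
    import torch

    # Assumption taken from this 8-GPU log: 8 worker ranks per node. Give each
    # rank an equal share of the host's CPU cores instead of torchrun's default
    # of a single intra-op thread per process.
    ranks_per_node = 8
    threads_per_rank = max(1, (os.cpu_count() or ranks_per_node) // ranks_per_node)
    torch.set_num_threads(threads_per_rank)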
WARNING: Logging before InitGoogleLogging() is written to STDERR   (printed once per rank)
I1021 17:44:47.873466 22255 ProcessGroupNCCL.cpp:881] [PG 0 Rank 6] ProcessGroupNCCL initialization options: size: 8, global rank: 6, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:47.874271 22255 ProcessGroupNCCL.cpp:881] [PG 2 Rank 2] ProcessGroupNCCL initialization options: size: 4, global rank: 6, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 2
I1021 17:44:48.093997 22256 ProcessGroupNCCL.cpp:881] [PG 0 Rank 7] ProcessGroupNCCL initialization options: size: 8, global rank: 7, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.094836 22256 ProcessGroupNCCL.cpp:881] [PG 2 Rank 3] ProcessGroupNCCL initialization options: size: 4, global rank: 7, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 2
I1021 17:44:48.416230 22249 ProcessGroupNCCL.cpp:881] [PG 0 Rank 0] ProcessGroupNCCL initialization options: size: 8, global rank: 0, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.417071 22249 ProcessGroupNCCL.cpp:881] [PG 1 Rank 0] ProcessGroupNCCL initialization options: size: 4, global rank: 0, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 1
--> loading model from /public/tengcent-hy/model/HunyuanVideo/hunyuan-video-t2v-720p
I1021 17:44:48.530696 22250 ProcessGroupNCCL.cpp:881] [PG 0 Rank 1] ProcessGroupNCCL initialization options: size: 8, global rank: 1, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.531555 22250 ProcessGroupNCCL.cpp:881] [PG 1 Rank 1] ProcessGroupNCCL initialization options: size: 4, global rank: 1, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 1
I1021 17:44:48.566972 22254 ProcessGroupNCCL.cpp:881] [PG 0 Rank 5] ProcessGroupNCCL initialization options: size: 8, global rank: 5, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.567896 22254 ProcessGroupNCCL.cpp:881] [PG 2 Rank 1] ProcessGroupNCCL initialization options: size: 4, global rank: 5, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 2
I1021 17:44:48.571375 22253 ProcessGroupNCCL.cpp:881] [PG 0 Rank 4] ProcessGroupNCCL initialization options: size: 8, global rank: 4, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.572191 22253 ProcessGroupNCCL.cpp:881] [PG 2 Rank 0] ProcessGroupNCCL initialization options: size: 4, global rank: 4, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 2
I1021 17:44:48.607012 22251 ProcessGroupNCCL.cpp:881] [PG 0 Rank 2] ProcessGroupNCCL initialization options: size: 8, global rank: 2, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.607848 22251 ProcessGroupNCCL.cpp:881] [PG 1 Rank 2] ProcessGroupNCCL initialization options: size: 4, global rank: 2, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 1
I1021 17:44:48.706211 22252 ProcessGroupNCCL.cpp:881] [PG 0 Rank 3] ProcessGroupNCCL initialization options: size: 8, global rank: 3, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 0
I1021 17:44:48.706854 22252 ProcessGroupNCCL.cpp:881] [PG 1 Rank 3] ProcessGroupNCCL initialization options: size: 4, global rank: 3, TIMEOUT(ms): 600000, USE_HIGH_PRIORITY_STREAM: 0, SPLIT_FROM: 0, SPLIT_COLOR: 0, PG Name: 1
(Each of the records above is followed by a ProcessGroupNCCL environments record that is identical on every rank: NCCL version: 2.18.3, TORCH_NCCL_ASYNC_ERROR_HANDLING: 1, TORCH_NCCL_DUMP_ON_TIMEOUT: 0, TORCH_NCCL_WAIT_TIMEOUT_DUMP_MILSEC: 60000, TORCH_NCCL_DESYNC_DEBUG: 0, TORCH_NCCL_ENABLE_TIMING: 0, TORCH_NCCL_BLOCKING_WAIT: 0, TORCH_DISTRIBUTED_DEBUG: OFF, TORCH_NCCL_ENABLE_MONITORING: 1, TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC: 600, TORCH_NCCL_TRACE_BUFFER_SIZE: 0, TORCH_NCCL_COORD_CHECK_MILSEC: 1000, TORCH_NCCL_NAN_CHECK: 0)
Total training parameters = 12821.012544 M
--> Initializing FSDP with sharding strategy: full
--> applying fdsp activation checkpointing...
--> applying fdsp activation checkpointing...
--> applying fdsp activation checkpointing...
I1021 17:46:17.736408 22253 ProcessGroupNCCL.cpp:2086] [PG 2 Rank 0] ProcessGroupNCCL broadcast unique ID through store took 0.123121 ms
--> model loaded
--> applying fdsp activation checkpointing...
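The two messages above, "Initializing FSDP with sharding strategy: full" and "applying fdsp activation checkpointing", correspond to the FullyShardedDataParallel and CheckpointWrapper layers visible in the model printout below. A minimal sketch of that combination using the public PyTorch APIs; the generic block_cls argument stands in for this repo's actual block classes (the printout shows MMDoubleStreamBlock and MMSingleStreamBlock), and the call assumes torch.distributed is already initialized:

    import functools
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy
    from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
    from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
        apply_activation_checkpointing,
        checkpoint_wrapper,
    )

    def shard_and_checkpoint(model: nn.Module, block_cls: type) -> nn.Module:
        # "full" sharding: parameters, gradients and optimizer state are sharded
        # across all ranks, and each transformer block becomes its own FSDP unit.
        model = FSDP(
            model,
            sharding_strategy=ShardingStrategy.FULL_SHARD,
            auto_wrap_policy=functools.partial(
                transformer_auto_wrap_policy, transformer_layer_cls={block_cls}
            ),
        )
        # Recompute each block's activations during backward instead of storing
        # them; this is what produces the CheckpointWrapper layers printed below.
        apply_activation_checkpointing(
            model,
            checkpoint_wrapper_fn=checkpoint_wrapper,
            check_fn=lambda m: isinstance(m, block_cls),
        )
        return model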
FullyShardedDataParallel(
  (_fsdp_wrapped_module): HYVideoDiffusionTransformer(
    (img_in): PatchEmbed(
      (proj): Conv3d(16, 3072, kernel_size=(1, 2, 2), stride=(1, 2, 2))
      (norm): Identity()
    )
    (txt_in): SingleTokenRefiner(
      (input_embedder): Linear(in_features=4096, out_features=3072, bias=True)
      (t_embedder): TimestepEmbedder(
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=3072, bias=True)
          (1): SiLU()
          (2): Linear(in_features=3072, out_features=3072, bias=True)
        )
      )
      (c_embedder): TextProjection(
        (linear_1): Linear(in_features=4096, out_features=3072, bias=True)
        (act_1): SiLU()
        (linear_2): Linear(in_features=3072, out_features=3072, bias=True)
      )
      (individual_token_refiner): IndividualTokenRefiner(
        (blocks): ModuleList(
          (0-1): 2 x IndividualTokenRefinerBlock(
            (norm1): LayerNorm((3072,), eps=1e-06, elementwise_affine=True)
            (self_attn_qkv): Linear(in_features=3072, out_features=9216, bias=True)
            (self_attn_q_norm): Identity()
            (self_attn_k_norm): Identity()
            (self_attn_proj): Linear(in_features=3072, out_features=3072, bias=True)
            (norm2): LayerNorm((3072,), eps=1e-06, elementwise_affine=True)
            (mlp): MLP(
              (fc1): Linear(in_features=3072, out_features=12288, bias=True)
              (act): SiLU()
              (drop1): Dropout(p=0.0, inplace=False)
              (norm): Identity()
              (fc2): Linear(in_features=12288, out_features=3072, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
            (adaLN_modulation): Sequential(
              (0): SiLU()
              (1): Linear(in_features=3072, out_features=6144, bias=True)
            )
          )
        )
      )
    )
    (time_in): TimestepEmbedder(
      (mlp): Sequential(
        (0): Linear(in_features=256, out_features=3072, bias=True)
        (1): SiLU()
        (2): Linear(in_features=3072, out_features=3072, bias=True)
      )
    )
    (vector_in): MLPEmbedder(
      (in_layer): Linear(in_features=768, out_features=3072, bias=True)
      (silu): SiLU()
      (out_layer): Linear(in_features=3072, out_features=3072, bias=True)
    )
    (guidance_in): TimestepEmbedder(
      (mlp): Sequential(
        (0): Linear(in_features=256, out_features=3072, bias=True)
        (1): SiLU()
        (2): Linear(in_features=3072, out_features=3072, bias=True)
      )
    )
    (double_blocks): ModuleList(
      (0-19): 20 x FullyShardedDataParallel(
        (_fsdp_wrapped_module): CheckpointWrapper(
          (_checkpoint_wrapped_module): MMDoubleStreamBlock(
            (img_mod): ModulateDiT(
              (act): SiLU()
              (linear): Linear(in_features=3072, out_features=18432, bias=True)
            )
            (img_norm1): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
            (img_attn_qkv): Linear(in_features=3072, out_features=9216, bias=True)
            (img_attn_q_norm): RMSNorm()
            (img_attn_k_norm): RMSNorm()
            (img_attn_proj): Linear(in_features=3072, out_features=3072, bias=True)
            (img_norm2): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
            (img_mlp): MLP(
              (fc1): Linear(in_features=3072, out_features=12288, bias=True)
              (act): GELU(approximate='tanh')
              (drop1): Dropout(p=0.0, inplace=False)
              (norm): Identity()
              (fc2): Linear(in_features=12288, out_features=3072, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
            (txt_mod): ModulateDiT(
              (act): SiLU()
              (linear): Linear(in_features=3072, out_features=18432, bias=True)
            )
            (txt_norm1): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
            (txt_attn_qkv): Linear(in_features=3072, out_features=9216, bias=True)
            (txt_attn_q_norm): RMSNorm()
            (txt_attn_k_norm): RMSNorm()
            (txt_attn_proj): Linear(in_features=3072, out_features=3072, bias=True)
            (txt_norm2): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
            (txt_mlp): MLP(
              (fc1): Linear(in_features=3072, out_features=12288, bias=True)
              (act): GELU(approximate='tanh')
              (drop1): Dropout(p=0.0, inplace=False)
              (norm): Identity()
              (fc2): Linear(in_features=12288, out_features=3072, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
        )
      )
    )
    (single_blocks): ModuleList(
      (0-39): 40 x FullyShardedDataParallel(
        (_fsdp_wrapped_module): CheckpointWrapper(
          (_checkpoint_wrapped_module): MMSingleStreamBlock(
            (linear1): Linear(in_features=3072, out_features=21504, bias=True)
            (linear2): Linear(in_features=15360, out_features=3072, bias=True)
            (q_norm): RMSNorm()
            (k_norm): RMSNorm()
            (pre_norm): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
            (mlp_act): GELU(approximate='tanh')
            (modulation): ModulateDiT(
              (act): SiLU()
              (linear): Linear(in_features=3072, out_features=9216, bias=True)
            )
          )
        )
      )
    )
    (final_layer): FinalLayer(
      (norm_final): LayerNorm((3072,), eps=1e-06, elementwise_affine=False)
      (linear): Linear(in_features=3072, out_features=64, bias=True)
      (adaLN_modulation): Sequential(
        (0): SiLU()
        (1): Linear(in_features=3072, out_features=6144, bias=True)
      )
    )
  )
)
optimizer: AdamW (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    capturable: False
    differentiable: False
    eps: 1e-08
    foreach: None
    fused: None
    lr: 1e-05
    maximize: False
    weight_decay: 0.01
)
***** Running training *****
  Num examples = 101
  Dataloader size = 13
  Num Epochs = 1
  Resume training from step 0
  Instantaneous batch size per device = 1
  Total train batch size (w. data & sequence parallel, accumulation) = 2.0
  Gradient Accumulation steps = 1
  Total optimization steps = 20
  Total training parameters per FSDP shard = 1.602626568 B
  Master weight dtype: torch.float32
Steps: 0%| | 0/20 [00:00
--> applying fdsp activation checkpointing...
I1021 17:46:19.245332 22249 ProcessGroupNCCL.cpp:2086] [PG 1 Rank 0] ProcessGroupNCCL broadcast unique ID through store took 0.087221 ms
I1021 17:46:19.245529 22250 ProcessGroupNCCL.cpp:2086] [PG 1 Rank 1] ProcessGroupNCCL broadcast unique ID through store took 1727.46 ms
I1021 17:46:19.245554 22251 ProcessGroupNCCL.cpp:2086] [PG 1 Rank 2] ProcessGroupNCCL broadcast unique ID through store took 1140.69 ms
I1021 17:46:19.261360 22254 ProcessGroupNCCL.cpp:2086] [PG 2 Rank 1] ProcessGroupNCCL broadcast unique ID through store took 0.230022 ms
--> applying fdsp activation checkpointing...
I1021 17:46:19.609362 22255 ProcessGroupNCCL.cpp:2086] [PG 2 Rank 2] ProcessGroupNCCL broadcast unique ID through store took 0.297193 ms
--> applying fdsp activation checkpointing...
I1021 17:46:21.452391 22256 ProcessGroupNCCL.cpp:2086] [PG 2 Rank 3] ProcessGroupNCCL broadcast unique ID through store took 0.323033 ms
I1021 17:46:22.152138 22253 ProcessGroupNCCL.cpp:2195] [PG 2 Rank 0] ProcessGroupNCCL created ncclComm_ 0x563ea1c12c70 on CUDA device:
I1021 17:46:22.152235 22255 ProcessGroupNCCL.cpp:2195] [PG 2 Rank 2] ProcessGroupNCCL created ncclComm_ 0x56425453b310 on CUDA device:
I1021 17:46:22.152259 22253 ProcessGroupNCCL.cpp:2200] [PG 2 Rank 0] NCCL_DEBUG: N/A
I1021 17:46:22.152341 22255 ProcessGroupNCCL.cpp:2200] [PG 2 Rank 2] NCCL_DEBUG: N/A
I1021 17:46:22.152369 22254 ProcessGroupNCCL.cpp:2195] [PG 2 Rank 1] ProcessGroupNCCL created ncclComm_ 0x564cb48b2410 on CUDA device:
I1021 17:46:22.152446 22256 ProcessGroupNCCL.cpp:2195] [PG 2 Rank 3] ProcessGroupNCCL created ncclComm_ 0x55a0af0c90f0 on CUDA device:
I1021 17:46:22.152556 22254 ProcessGroupNCCL.cpp:2200] [PG 2 Rank 1] NCCL_DEBUG: N/A
I1021 17:46:22.152618 22256 ProcessGroupNCCL.cpp:2200] [PG 2 Rank 3] NCCL_DEBUG: N/A
--> applying fdsp activation checkpointing...
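For reference, the "Total train batch size (w. data & sequence parallel, accumulation) = 2.0" figure in the banner above is consistent with the process-group layout logged earlier: a world of 8 ranks split into two size-4 groups, which suggests a sequence-parallel degree of 4. A small sanity check under that assumption; sp_size is inferred from the log, not read from the run's config:

    # Inferred values: world_size from PG 0 (size 8), sp_size from the size-4 groups.
    world_size = 8
    sp_size = 4
    per_device_batch = 1      # "Instantaneous batch size per device = 1"
    grad_accum_steps = 1      # "Gradient Accumulation steps = 1"

    data_parallel_degree = world_size / sp_size
    total_train_batch = data_parallel_degree * per_device_batch * grad_accum_steps
    print(total_train_batch)  # 2.0, matching the banner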
I1021 17:46:56.935648 22252 ProcessGroupNCCL.cpp:2086] [PG 1 Rank 3] ProcessGroupNCCL broadcast unique ID through store took 0.285722 ms
I1021 17:46:57.674961 22249 ProcessGroupNCCL.cpp:2195] [PG 1 Rank 0] ProcessGroupNCCL created ncclComm_ 0x56523a496d40 on CUDA device:
I1021 17:46:57.674990 22251 ProcessGroupNCCL.cpp:2195] [PG 1 Rank 2] ProcessGroupNCCL created ncclComm_ 0x564552606740 on CUDA device:
I1021 17:46:57.675118 22249 ProcessGroupNCCL.cpp:2200] [PG 1 Rank 0] NCCL_DEBUG: N/A
I1021 17:46:57.675202 22251 ProcessGroupNCCL.cpp:2200] [PG 1 Rank 2] NCCL_DEBUG: N/A
I1021 17:46:57.675406 22252 ProcessGroupNCCL.cpp:2195] [PG 1 Rank 3] ProcessGroupNCCL created ncclComm_ 0x55ba21077060 on CUDA device:
I1021 17:46:57.675484 22252 ProcessGroupNCCL.cpp:2200] [PG 1 Rank 3] NCCL_DEBUG: N/A
I1021 17:46:57.675513 22250 ProcessGroupNCCL.cpp:2195] [PG 1 Rank 1] ProcessGroupNCCL created ncclComm_ 0x55f0b5601150 on CUDA device:
I1021 17:46:57.675675 22250 ProcessGroupNCCL.cpp:2200] [PG 1 Rank 1] NCCL_DEBUG: N/A
I1021 17:46:57.929488 22249 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 0] ProcessGroupNCCL broadcast unique ID through store took 0.090321 ms
I1021 17:46:57.929977 22256 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 7] ProcessGroupNCCL broadcast unique ID through store took 35333.7 ms
I1021 17:46:57.929921 22254 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 5] ProcessGroupNCCL broadcast unique ID through store took 35333.9 ms
I1021 17:46:57.929981 22253 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 4] ProcessGroupNCCL broadcast unique ID through store took 35350.2 ms
I1021 17:46:57.930998 22255 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 6] ProcessGroupNCCL broadcast unique ID through store took 35331 ms
I1021 17:46:57.931465 22250 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 1] ProcessGroupNCCL broadcast unique ID through store took 0.230182 ms
I1021 17:46:57.937568 22251 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 2] ProcessGroupNCCL broadcast unique ID through store took 0.06595 ms
I1021 17:46:57.941761 22252 ProcessGroupNCCL.cpp:2086] [PG 0 (default_pg) Rank 3] ProcessGroupNCCL broadcast unique ID through store took 0.177721 ms
I1021 17:46:58.191497 22249 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 0] ProcessGroupNCCL created ncclComm_ 0x565239e5e480 on CUDA device:
I1021 17:46:58.191538 22249 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 0] NCCL_DEBUG: N/A
I1021 17:46:58.191735 22255 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 6] ProcessGroupNCCL created ncclComm_ 0x564254eabca0 on CUDA device:
I1021 17:46:58.191730 22256 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 7] ProcessGroupNCCL created ncclComm_ 0x55a0af866d30 on CUDA device:
I1021 17:46:58.191771 22253 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 4] ProcessGroupNCCL created ncclComm_ 0x563ea23dc030 on CUDA device:
I1021 17:46:58.191802 22255 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 6] NCCL_DEBUG: N/A
I1021 17:46:58.191815 22251 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 2] ProcessGroupNCCL created ncclComm_ 0x564551de3db0 on CUDA device:
I1021 17:46:58.191991 22256 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 7] NCCL_DEBUG: N/A
I1021 17:46:58.192021 22253 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 4] NCCL_DEBUG: N/A
I1021 17:46:58.192076 22251 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 2] NCCL_DEBUG: N/A
I1021 17:46:58.192227 22252 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 3] ProcessGroupNCCL created ncclComm_ 0x55ba21b62140 on CUDA device:
I1021 17:46:58.192262 22252 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 3] NCCL_DEBUG: N/A
I1021 17:46:58.192451 22250 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 1] ProcessGroupNCCL created ncclComm_ 0x55f0b61792b0 on CUDA device:
I1021 17:46:58.192467 22254 ProcessGroupNCCL.cpp:2195] [PG 0 (default_pg) Rank 5] ProcessGroupNCCL created ncclComm_ 0x564cb5278580 on CUDA device:
I1021 17:46:58.192487 22250 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 1] NCCL_DEBUG: N/A
I1021 17:46:58.192500 22254 ProcessGroupNCCL.cpp:2200] [PG 0 (default_pg) Rank 5] NCCL_DEBUG: N/A
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:663: UserWarning: Graph break due to unsupported builtin flash_attn_2_cuda.PyCapsule.varlen_fwd. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
  torch._dynamo.utils.warn_once(msg)
(the same warning is printed once per rank, 8 times in total)
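The graph-break warning above itself names the workaround for a traceable third-party extension: torch.compiler.allow_in_graph. A hedged sketch of applying it to the usual flash-attn v2 Python wrapper; which wrapper this training code actually calls is an assumption, so substitute the real one if it differs:

    import torch
    # flash_attn_varlen_func is the standard flash-attn v2 Python wrapper around
    # the flash_attn_2_cuda varlen_fwd kernel named in the warning.
    from flash_attn import flash_attn_varlen_func

    # Tell torch.compile/Dynamo to keep this call inside the graph instead of
    # breaking the graph at the C-extension boundary.
    torch.compiler.allow_in_graph(flash_attn_varlen_func)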
/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
  with device_autocast_ctx, torch.cpu.amp.autocast(**cpu_autocast_kwargs), recompute_context:  # type: ignore[attr-defined]
(the same warning is printed once per rank, 8 times in total)
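The FutureWarning above is raised from inside torch/utils/checkpoint.py rather than from this training code, but the replacement spelling it asks for looks like the following minimal sketch (not taken from the repo):

    import torch

    x = torch.randn(4, 4)

    # Deprecated spelling flagged above:
    #   with torch.cpu.amp.autocast(dtype=torch.bfloat16): ...
    # Replacement requested by the warning:
    with torch.amp.autocast("cpu", dtype=torch.bfloat16):
        y = x @ x
    print(y.dtype)  # torch.bfloat16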
Steps: 0%| | 0/20 [04:45
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 901, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 255, in launch_agent
    result = agent.run()
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/metrics/api.py", line 124, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/agent/server/api.py", line 694, in run
    self._shutdown()
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/agent/server/local_elastic_agent.py", line 347, in _shutdown
    self._pcontext.close(death_sig)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/api.py", line 544, in close
    self._close(death_sig=death_sig, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/api.py", line 868, in _close
    handler.proc.wait(time_to_wait)
  File "/usr/lib/python3.10/subprocess.py", line 1209, in wait
    return self._wait(timeout=timeout)
  File "/usr/lib/python3.10/subprocess.py", line 1953, in _wait
    time.sleep(delay)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/api.py", line 79, in _terminate_process_handler
    raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 22119 got signal: 2