START TIME: Fri Mar 15 10:51:07 CST 2024
[2024-03-15 10:51:28,256] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-03-15 10:51:45,441] [INFO] [runner.py:463:main] Using IP address of 10.3.6.47 for node c06r3n06
[2024-03-15 10:51:45,449] [INFO] [multinode_runner.py:72:get_cmd] Running on the following workers: c06r3n06,c06r3n07,c06r3n08,c06r3n09
[2024-03-15 10:51:45,449] [INFO] [runner.py:570:main] cmd = pdsh -S -f 1024 -w c06r3n06,c06r3n07,c06r3n08,c06r3n09 export UCX_MAX_EAGER_LANES=4; export UCX_MAX_RNDV_LANES=4; export UCX_ZCOPY_THRESH=auto; export UCX_WARN_UNUSED_ENV_VARS=n; export UCX_RNDV_THRESH=auto; export NCCL_IB_TIMEOUT=22; export UCX_IB_PCI_BW=mlx5_0:50Gbs,mlx5_1:50Gbs,mlx5_2:50Gbs,mlx5_3:50Gbs; export UCX_NET_DEVICES=mlx5_0:1,mlx5_1:1,mlx5_2:1,mlx5_3:1; export PYTHONPATH=/work/home/liangjing/LLM/LLaMA-Factory-main; cd /work/home/liangjing/LLM/LLaMA-Factory-main; /work/home/liangjing/anaconda3/envs/torch2.1/bin/python -u -m deepspeed.launcher.launch --world_info=eyJjMDZyM24wNiI6IFswLCAxLCAyLCAzXSwgImMwNnIzbjA3IjogWzAsIDEsIDIsIDNdLCAiYzA2cjNuMDgiOiBbMCwgMSwgMiwgM10sICJjMDZyM24wOSI6IFswLCAxLCAyLCAzXX0= --node_rank=%n --master_addr=10.3.6.47 --master_port=29500 src/train_bash.py --stage 'sft' --do_train --template 'llama2' --dataset 'alpaca_gpt4_en,alpaca_gpt4_zh' --finetuning_type 'full' --model_name_or_path '/work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b' --output_dir '/work/share/huchen1/liangjj/llama_factory' --per_device_train_batch_size '1' --per_device_eval_batch_size '1' --gradient_accumulation_steps '1' --preprocessing_num_workers '2' --lr_scheduler_type 'cosine' --logging_steps '10' --save_steps '100' --eval_steps '100' --learning_rate '5e-5' --max_grad_norm '0.5' --num_train_epochs '4.0' --val_size '0.01' --evaluation_strategy 'steps' --load_best_model_at_end --weight_decay '0.' --warmup_ratio '0.03' --plot_loss --fp16 --save_on_each_node --deepspeed 'deepspeed.json'
c06r3n06: [2024-03-15 10:51:51,889] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:138:main] 0 NCCL_IB_TIMEOUT=22
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:145:main] WORLD INFO DICT: {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [0, 1, 2, 3], 'c06r3n08': [0, 1, 2, 3], 'c06r3n09': [0, 1, 2, 3]}
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:151:main] nnodes=4, num_local_procs=4, node_rank=0
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [4, 5, 6, 7], 'c06r3n08': [8, 9, 10, 11], 'c06r3n09': [12, 13, 14, 15]})
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:163:main] dist_world_size=16
c06r3n06: [2024-03-15 10:51:53,429] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
c06r3n09: [2024-03-15 10:51:59,642] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:138:main] 3 NCCL_IB_TIMEOUT=22
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:145:main] WORLD INFO DICT: {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [0, 1, 2, 3], 'c06r3n08': [0, 1, 2, 3], 'c06r3n09': [0, 1, 2, 3]}
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:151:main] nnodes=4, num_local_procs=4, node_rank=3
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [4, 5, 6, 7], 'c06r3n08': [8, 9, 10, 11], 'c06r3n09': [12, 13, 14, 15]})
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:163:main] dist_world_size=16
c06r3n09: [2024-03-15 10:52:05,054] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
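The --world_info value in the launcher command above is just the worker-to-GPU mapping, base64-encoded by the DeepSpeed runner. A minimal Python sketch for decoding it (standard library only; the literal is copied from the cmd line above, and the decoded result matches the WORLD INFO DICT that launch.py prints on every node):

    # Decode the --world_info payload passed to deepspeed.launcher.launch.
    import base64
    import json

    world_info_b64 = "eyJjMDZyM24wNiI6IFswLCAxLCAyLCAzXSwgImMwNnIzbjA3IjogWzAsIDEsIDIsIDNdLCAiYzA2cjNuMDgiOiBbMCwgMSwgMiwgM10sICJjMDZyM24wOSI6IFswLCAxLCAyLCAzXX0="
    world_info = json.loads(base64.b64decode(world_info_b64))
    print(world_info)
    # {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [0, 1, 2, 3], 'c06r3n08': [0, 1, 2, 3], 'c06r3n09': [0, 1, 2, 3]}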
c06r3n06: [2024-03-15 10:52:19,772] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n06: [2024-03-15 10:52:19,772] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n06: [2024-03-15 10:52:19,773] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n06: [2024-03-15 10:52:19,773] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n07: [2024-03-15 10:52:22,201] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n08: [2024-03-15 10:52:22,201] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n09: [2024-03-15 10:52:26,797] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n09: [2024-03-15 10:52:26,797] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n09: [2024-03-15 10:52:26,797] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n09: [2024-03-15 10:52:26,797] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
c06r3n07: [2024-03-15 10:52:44,920] [INFO] [launch.py:138:main] 1 NCCL_IB_TIMEOUT=22
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:138:main] 2 NCCL_IB_TIMEOUT=22
c06r3n07: [2024-03-15 10:52:44,920] [INFO] [launch.py:145:main] WORLD INFO DICT: {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [0, 1, 2, 3], 'c06r3n08': [0, 1, 2, 3], 'c06r3n09': [0, 1, 2, 3]}
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:145:main] WORLD INFO DICT: {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [0, 1, 2, 3], 'c06r3n08': [0, 1, 2, 3], 'c06r3n09': [0, 1, 2, 3]}
c06r3n07: [2024-03-15 10:52:44,921] [INFO] [launch.py:151:main] nnodes=4, num_local_procs=4, node_rank=1
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:151:main] nnodes=4, num_local_procs=4, node_rank=2
c06r3n07: [2024-03-15 10:52:44,921] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [4, 5, 6, 7], 'c06r3n08': [8, 9, 10, 11], 'c06r3n09': [12, 13, 14, 15]})
c06r3n07: [2024-03-15 10:52:44,921] [INFO] [launch.py:163:main] dist_world_size=16
c06r3n07: [2024-03-15 10:52:44,921] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'c06r3n06': [0, 1, 2, 3], 'c06r3n07': [4, 5, 6, 7], 'c06r3n08': [8, 9, 10, 11], 'c06r3n09': [12, 13, 14, 15]})
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:163:main] dist_world_size=16
c06r3n08: [2024-03-15 10:52:44,920] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
c06r3n09: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix'
c06r3n09: warnings.warn(f"Could not get documentation group for {cls}: {exc}")
c06r3n06: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix'
c06r3n06: warnings.warn(f"Could not get documentation group for {cls}: {exc}")
c06r3n06: [2024-03-15 10:53:02,859] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n09: [2024-03-15 10:53:02,861] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n06: [2024-03-15 10:53:02,868] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n06: [2024-03-15 10:53:02,869] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n06: [2024-03-15 10:53:02,869] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
c06r3n06: [2024-03-15 10:53:02,871] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n09: [2024-03-15 10:53:02,871] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n06: WARNING: Logging before InitGoogleLogging() is written to STDERR
c06r3n06: I0315 10:53:02.872735 876 ProcessGroupNCCL.cpp:686] [Rank 2] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=258831984
c06r3n09: I0315 10:53:02.873093 32465 ProcessGroupNCCL.cpp:686] [Rank 12] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=253251920
c06r3n09: [2024-03-15 10:53:02,873] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n09: I0315 10:53:02.874532 32466 ProcessGroupNCCL.cpp:686] [Rank 13] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=247019408
c06r3n09: [2024-03-15 10:53:02,875] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n09: I0315 10:53:02.876525 32468 ProcessGroupNCCL.cpp:686] [Rank 15] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=274413520
c06r3n09: 03/15/2024 10:53:02 - INFO - llmtuner.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1
c06r3n09: distributed training: True, compute dtype: torch.float16
c06r3n09: 03/15/2024 10:53:02 - INFO - llmtuner.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1
c06r3n09: distributed training: True, compute dtype: torch.float16
c06r3n06: 03/15/2024 10:53:02 - INFO - llmtuner.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1
c06r3n06: distributed training: True, compute dtype: torch.float16
c06r3n09: 03/15/2024 10:53:02 - INFO - llmtuner.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1
c06r3n09: distributed training: True, compute dtype: torch.float16
c06r3n09: 03/15/2024 10:53:02 - INFO - llmtuner.hparams.parser - Training/evaluation parameters Seq2SeqTrainingArguments(
c06r3n09: _n_gpu=1,
c06r3n09: adafactor=False,
c06r3n09: adam_beta1=0.9,
c06r3n09: adam_beta2=0.999,
c06r3n09: adam_epsilon=1e-08,
c06r3n09: auto_find_batch_size=False,
c06r3n09: bf16=False,
c06r3n09: bf16_full_eval=False,
c06r3n09: data_seed=None,
c06r3n09: dataloader_drop_last=False,
c06r3n09: dataloader_num_workers=0,
c06r3n09: dataloader_persistent_workers=False,
c06r3n09: dataloader_pin_memory=True,
c06r3n09: ddp_backend=None,
c06r3n09: ddp_broadcast_buffers=None,
c06r3n09: ddp_bucket_cap_mb=None,
c06r3n09: ddp_find_unused_parameters=None,
c06r3n09: ddp_timeout=1800,
c06r3n09: debug=[],
c06r3n09: deepspeed=deepspeed.json,
c06r3n09: disable_tqdm=False,
c06r3n09: dispatch_batches=None,
c06r3n09: do_eval=True,
c06r3n09: do_predict=False,
c06r3n09: do_train=True,
c06r3n09: eval_accumulation_steps=None,
c06r3n09: eval_delay=0,
c06r3n09: eval_steps=100,
c06r3n09: evaluation_strategy=steps,
c06r3n09: fp16=True,
c06r3n09: fp16_backend=auto,
c06r3n09: fp16_full_eval=False,
c06r3n09: fp16_opt_level=O1,
c06r3n09: fsdp=[],
c06r3n09: fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
c06r3n09: fsdp_min_num_params=0,
c06r3n09: fsdp_transformer_layer_cls_to_wrap=None,
c06r3n09: full_determinism=False,
c06r3n09: generation_config=None,
c06r3n09: generation_max_length=None,
c06r3n09: generation_num_beams=None,
c06r3n09: gradient_accumulation_steps=1,
c06r3n09: gradient_checkpointing=False,
c06r3n09: gradient_checkpointing_kwargs=None,
c06r3n09: greater_is_better=False,
c06r3n09: group_by_length=False,
c06r3n09: half_precision_backend=auto,
c06r3n09: hub_always_push=False,
c06r3n09: hub_model_id=None,
c06r3n09: hub_private_repo=False,
c06r3n09: hub_strategy=every_save,
c06r3n09: hub_token=,
c06r3n09: ignore_data_skip=False,
c06r3n09: include_inputs_for_metrics=False,
c06r3n09: include_num_input_tokens_seen=False,
c06r3n09: include_tokens_per_second=False,
c06r3n09: jit_mode_eval=False,
c06r3n09: label_names=None,
c06r3n09: label_smoothing_factor=0.0,
c06r3n09: learning_rate=5e-05,
c06r3n09: length_column_name=length,
c06r3n09: load_best_model_at_end=True,
c06r3n09: local_rank=0,
c06r3n09: log_level=passive,
c06r3n09: log_level_replica=warning,
c06r3n09: log_on_each_node=True,
c06r3n09: logging_dir=/work/share/huchen1/liangjj/llama_factory/runs/Mar15_10-53-02_c06r3n09,
c06r3n09: logging_first_step=False,
c06r3n09: logging_nan_inf_filter=True,
c06r3n09: logging_steps=10,
c06r3n09: logging_strategy=steps,
c06r3n09: lr_scheduler_kwargs={},
c06r3n09: lr_scheduler_type=cosine,
c06r3n09: max_grad_norm=0.5,
c06r3n09: max_steps=-1,
c06r3n09: metric_for_best_model=loss,
c06r3n09: mp_parameters=,
c06r3n09: neftune_noise_alpha=None,
c06r3n09: no_cuda=False,
c06r3n09: num_train_epochs=4.0,
c06r3n09: optim=adamw_torch,
c06r3n09: optim_args=None,
c06r3n09: output_dir=/work/share/huchen1/liangjj/llama_factory,
c06r3n09: overwrite_output_dir=False,
c06r3n09: past_index=-1,
c06r3n09: per_device_eval_batch_size=1,
c06r3n09: per_device_train_batch_size=1,
c06r3n09: predict_with_generate=False,
c06r3n09: prediction_loss_only=False,
c06r3n09: push_to_hub=False,
c06r3n09: push_to_hub_model_id=None,
c06r3n09: push_to_hub_organization=None,
c06r3n09: push_to_hub_token=,
c06r3n09: ray_scope=last,
c06r3n09: remove_unused_columns=True,
c06r3n09: report_to=['tensorboard'],
c06r3n09: resume_from_checkpoint=None,
c06r3n09: run_name=/work/share/huchen1/liangjj/llama_factory,
c06r3n09: save_on_each_node=True,
c06r3n09: save_only_model=False,
c06r3n09: save_safetensors=True,
c06r3n09: save_steps=100,
c06r3n09: save_strategy=steps,
c06r3n09: save_total_limit=None,
c06r3n09: seed=42,
c06r3n09: skip_memory_metrics=True,
c06r3n09: sortish_sampler=False,
c06r3n09: split_batches=False,
c06r3n09: tf32=None,
c06r3n09: torch_compile=False,
c06r3n09: torch_compile_backend=None,
c06r3n09: torch_compile_mode=None,
c06r3n09: torchdynamo=None,
c06r3n09: tpu_metrics_debug=False,
c06r3n09: tpu_num_cores=None,
c06r3n09: use_cpu=False,
c06r3n09: use_ipex=False,
c06r3n09: use_legacy_prediction_loop=False,
c06r3n09: use_mps_device=False,
c06r3n09: warmup_ratio=0.03,
c06r3n09: warmup_steps=0,
c06r3n09: weight_decay=0.0,
c06r3n09: )
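With the arguments above, the effective global batch size follows directly from the logged values: per_device_train_batch_size and gradient_accumulation_steps are both 1, and launch.py reports dist_world_size=16, so each optimizer step consumes 16 samples. A quick check of that arithmetic in Python:

    # Effective global batch size implied by the logged arguments.
    per_device_train_batch_size = 1
    gradient_accumulation_steps = 1
    dist_world_size = 16  # 4 nodes x 4 GPUs, as logged by launch.py

    global_batch_size = per_device_train_batch_size * gradient_accumulation_steps * dist_world_size
    print(global_batch_size)  # 16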
c06r3n09: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:03,004 >> loading file tokenizer.model
c06r3n09: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:03,004 >> loading file added_tokens.json
c06r3n09: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:03,004 >> loading file special_tokens_map.json
c06r3n09: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:03,004 >> loading file tokenizer_config.json
c06r3n09: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:03,004 >> loading file tokenizer.json
c06r3n09: [WARNING|logging.py:329] 2024-03-15 10:53:03,020 >> You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
c06r3n09: [INFO|configuration_utils.py:727] 2024-03-15 10:53:03,139 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/config.json
c06r3n09: [INFO|configuration_utils.py:792] 2024-03-15 10:53:03,141 >> Model config LlamaConfig {
c06r3n09:   "_name_or_path": "/work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b",
c06r3n09:   "architectures": [
c06r3n09:     "LlamaForCausalLM"
c06r3n09:   ],
c06r3n09:   "attention_bias": false,
c06r3n09:   "attention_dropout": 0.0,
c06r3n09:   "bos_token_id": 0,
c06r3n09:   "eos_token_id": 1,
c06r3n09:   "hidden_act": "silu",
c06r3n09:   "hidden_size": 4096,
c06r3n09:   "initializer_range": 0.02,
c06r3n09:   "intermediate_size": 11008,
c06r3n09:   "max_position_embeddings": 2048,
c06r3n09:   "max_sequence_length": 2048,
c06r3n09:   "model_type": "llama",
c06r3n09:   "num_attention_heads": 32,
c06r3n09:   "num_hidden_layers": 32,
c06r3n09:   "num_key_value_heads": 32,
c06r3n09:   "pad_token_id": -1,
c06r3n09:   "pretraining_tp": 1,
c06r3n09:   "rms_norm_eps": 1e-06,
c06r3n09:   "rope_scaling": null,
c06r3n09:   "rope_theta": 10000.0,
c06r3n09:   "tie_word_embeddings": false,
c06r3n09:   "torch_dtype": "float16",
c06r3n09:   "transformers_version": "4.37.2",
c06r3n09:   "use_cache": true,
c06r3n09:   "vocab_size": 32000
c06r3n09: }
c06r3n09: 
c06r3n06: I0315 10:53:03.861181 877 ProcessGroupNCCL.cpp:686] [Rank 3] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=246126816
c06r3n09: I0315 10:53:03.863322 32467 ProcessGroupNCCL.cpp:686] [Rank 14] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=261693952
c06r3n06: I0315 10:53:03.870285 875 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=269450304
c06r3n06: 03/15/2024 10:53:03 - INFO - llmtuner.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1
c06r3n06: distributed training: True, compute dtype: torch.float16
c06r3n09: 03/15/2024 10:53:03 - INFO - llmtuner.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1
c06r3n09: distributed training: True, compute dtype: torch.float16
c06r3n06: 03/15/2024 10:53:03 - INFO - llmtuner.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1
c06r3n06: distributed training: True, compute dtype: torch.float16
c06r3n09: [INFO|modeling_utils.py:3473] 2024-03-15 10:53:04,763 >> loading weights file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/pytorch_model.bin.index.json
c06r3n09: [INFO|modeling_utils.py:1426] 2024-03-15 10:53:04,780 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
c06r3n09: [INFO|modeling_utils.py:3582] 2024-03-15 10:53:04,780 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
c06r3n09: [INFO|configuration_utils.py:826] 2024-03-15 10:53:04,788 >> Generate config GenerationConfig {
c06r3n09:   "bos_token_id": 0,
c06r3n09:   "eos_token_id": 1,
c06r3n09:   "pad_token_id": -1
c06r3n09: }
c06r3n09: 
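The run passes --deepspeed 'deepspeed.json', and the modeling_utils message above reports that DeepSpeed ZeRO-3 was detected, but the config file itself never appears in this log. What follows is only a sketch of a ZeRO-3 + fp16 configuration that would be consistent with those messages; the actual contents of the deepspeed.json used in this run are an assumption:

    # Hypothetical deepspeed.json consistent with this log (ZeRO stage 3, fp16 training).
    # "auto" values are filled in from the HF TrainingArguments by the Trainer integration.
    import json

    ds_config = {
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
        "gradient_clipping": "auto",
        "fp16": {"enabled": "auto"},
        "zero_optimization": {
            "stage": 3,
            "overlap_comm": True,
            "contiguous_gradients": True,
            "stage3_gather_16bit_weights_on_model_save": True
        }
    }

    with open("deepspeed.json", "w") as f:
        json.dump(ds_config, f, indent=2)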
c06r3n09: [INFO|modeling_utils.py:3582] 2024-03-15 10:53:04,780 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model c06r3n09: [INFO|configuration_utils.py:826] 2024-03-15 10:53:04,788 >> Generate config GenerationConfig { c06r3n09: "bos_token_id": 0, c06r3n09: "eos_token_id": 1, c06r3n09: "pad_token_id": -1 c06r3n09: } c06r3n09: c06r3n08: [2024-03-15 10:53:17,399] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n08: [2024-03-15 10:53:17,399] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n08: [2024-03-15 10:53:17,399] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n08: [2024-03-15 10:53:17,399] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n07: [2024-03-15 10:53:17,557] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n07: [2024-03-15 10:53:17,557] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n07: [2024-03-15 10:53:17,558] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n07: [2024-03-15 10:53:17,558] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) c06r3n07: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n08: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n07: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n08: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n07: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n07: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n08: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n08: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n07: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n07: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n08: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n08: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n07: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix' c06r3n07: warnings.warn(f"Could not get documentation group for {cls}: {exc}") c06r3n08: 
c06r3n08: /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for : No known documentation group for module 'gradio.mix'
c06r3n08:   warnings.warn(f"Could not get documentation group for {cls}: {exc}")
(The same gradio_client UserWarning is emitted repeatedly by the worker processes on c06r3n07 and c06r3n08.)
c06r3n08: [2024-03-15 10:53:58,618] [INFO] [comm.py:637:init_distributed] cdb=None
c06r3n07: [2024-03-15 10:53:58,620] [INFO] [comm.py:637:init_distributed] cdb=None
(init_distributed is logged once per local rank, four times on each node.)
c06r3n06: WARNING: Logging before InitGoogleLogging() is written to STDERR
c06r3n06: I0315 10:53:58.626438 874 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options: NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=256847312
(Ranks 4-7 on c06r3n07 and ranks 8-11 on c06r3n08 log identical ProcessGroupNCCL options, each preceded by the same InitGoogleLogging warning and differing only in the process-group ID.)
c06r3n06: 03/15/2024 10:53:58 - INFO - llmtuner.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1
c06r3n06:   distributed training: True, compute dtype: torch.float16
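(The TIMEOUT(ms): 1800000 above corresponds to the ddp_timeout=1800 entry in the training arguments that follow. Purely as an illustration, a minimal sketch of how such an NCCL process group is created directly with torch.distributed, assuming the RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT/LOCAL_RANK environment variables that deepspeed.launcher.launch exports for every local rank:)

    import datetime
    import os

    import torch
    import torch.distributed as dist

    # Relies on the launcher-provided env vars (RANK, WORLD_SIZE, MASTER_ADDR,
    # MASTER_PORT, LOCAL_RANK); the 1800 s timeout matches TIMEOUT(ms): 1800000.
    dist.init_process_group(backend="nccl",
                            timeout=datetime.timedelta(seconds=1800))
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} ready")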
c06r3n06: 03/15/2024 10:53:58 - INFO - llmtuner.hparams.parser - Training/evaluation parameters Seq2SeqTrainingArguments(
c06r3n06:   _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False,
c06r3n06:   bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True,
c06r3n06:   ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=deepspeed.json, disable_tqdm=False, dispatch_batches=None,
c06r3n06:   do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=100, evaluation_strategy=steps,
c06r3n06:   fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None,
c06r3n06:   full_determinism=False, generation_config=None, generation_max_length=None, generation_num_beams=None, gradient_accumulation_steps=1, gradient_checkpointing=False, gradient_checkpointing_kwargs=None, greater_is_better=False, group_by_length=False,
c06r3n06:   half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False,
c06r3n06:   include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0,
c06r3n06:   learning_rate=5e-05, length_column_name=length, load_best_model_at_end=True, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True,
c06r3n06:   logging_dir=/work/share/huchen1/liangjj/llama_factory/runs/Mar15_10-53-02_c06r3n06, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10, logging_strategy=steps,
c06r3n06:   lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=0.5, max_steps=-1, metric_for_best_model=loss, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=4.0,
c06r3n06:   optim=adamw_torch, optim_args=None, output_dir=/work/share/huchen1/liangjj/llama_factory, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=1, per_device_train_batch_size=1,
c06r3n06:   predict_with_generate=False, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last,
c06r3n06:   remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/work/share/huchen1/liangjj/llama_factory,
c06r3n06:   save_on_each_node=True, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=None, seed=42, skip_memory_metrics=True,
c06r3n06:   sortish_sampler=False, split_batches=False, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None,
c06r3n06:   tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False,
c06r3n06:   warmup_ratio=0.03, warmup_steps=0, weight_decay=0.0,
c06r3n06: )
c06r3n06: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:58,675 >> loading file tokenizer.model (then added_tokens.json, special_tokens_map.json, tokenizer_config.json and tokenizer.json)
c06r3n06: [WARNING|logging.py:329] 2024-03-15 10:53:58,676 >> You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
(Ranks 0-3 on c06r3n08 and, further below, ranks 0-3 on c06r3n07 each log their own "Process rank: N, device: cuda:N, n_gpu: 1, distributed training: True, compute dtype: torch.float16" line, followed by the same Seq2SeqTrainingArguments block shown above; those copies differ only in local_rank and in logging_dir, which points at runs/Mar15_10-53-58_c06r3n08 or runs/Mar15_10-53-58_c06r3n07 on the respective node. The duplicate blocks are omitted here.)
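(The legacy-tokenizer warning above is informational; this run keeps the default behaviour. If the corrected SentencePiece handling were wanted instead, the tokenizer could be loaded with legacy=False as the warning suggests. A minimal sketch, assuming the flag is simply forwarded to the LLaMA tokenizer class; the model path is the one used on the command line:)

    from transformers import AutoTokenizer

    model_path = "/work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b"
    # legacy=False opts into the corrected behaviour described in
    # https://github.com/huggingface/transformers/pull/24565; omitting it keeps
    # the default (legacy) behaviour that this run uses.
    tokenizer = AutoTokenizer.from_pretrained(model_path, legacy=False)
    print(type(tokenizer).__name__, tokenizer.vocab_size)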
c06r3n06: [INFO|configuration_utils.py:727] 2024-03-15 10:53:58,757 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/config.json
c06r3n06: [INFO|configuration_utils.py:792] 2024-03-15 10:53:58,759 >> Model config LlamaConfig {
c06r3n06:   "_name_or_path": "/work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b",
c06r3n06:   "architectures": [ "LlamaForCausalLM" ],
c06r3n06:   "attention_bias": false,
c06r3n06:   "attention_dropout": 0.0,
c06r3n06:   "bos_token_id": 0,
c06r3n06:   "eos_token_id": 1,
c06r3n06:   "hidden_act": "silu",
c06r3n06:   "hidden_size": 4096,
c06r3n06:   "initializer_range": 0.02,
c06r3n06:   "intermediate_size": 11008,
c06r3n06:   "max_position_embeddings": 2048,
c06r3n06:   "max_sequence_length": 2048,
c06r3n06:   "model_type": "llama",
c06r3n06:   "num_attention_heads": 32,
c06r3n06:   "num_hidden_layers": 32,
c06r3n06:   "num_key_value_heads": 32,
c06r3n06:   "pad_token_id": -1,
c06r3n06:   "pretraining_tp": 1,
c06r3n06:   "rms_norm_eps": 1e-06,
c06r3n06:   "rope_scaling": null,
c06r3n06:   "rope_theta": 10000.0,
c06r3n06:   "tie_word_embeddings": false,
c06r3n06:   "torch_dtype": "float16",
c06r3n06:   "transformers_version": "4.37.2",
c06r3n06:   "use_cache": true,
c06r3n06:   "vocab_size": 32000
c06r3n06: }
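(The LlamaConfig above fully determines the model size; the following sketch recomputes the parameter count from the logged values and lands on the same ~6.74B figure that DeepSpeed reports further below when zero.init() finishes. All numbers are taken from the config; nothing else is assumed.)

    # Recompute the LLaMA-7B parameter count from the config values logged above.
    vocab, hidden, inter, layers = 32000, 4096, 11008, 32

    embed = vocab * hidden              # input embedding table
    lm_head = vocab * hidden            # output head (tie_word_embeddings: false)
    attn = 4 * hidden * hidden          # q, k, v, o projections (attention_bias: false)
    mlp = 3 * hidden * inter            # gate, up and down projections
    norms = 2 * hidden                  # two RMSNorm weight vectors per layer
    per_layer = attn + mlp + norms

    total = embed + lm_head + layers * per_layer + hidden   # + final RMSNorm
    print(f"{total:,} parameters (~{total / 1e9:.2f}B)")    # 6,738,415,616 (~6.74B)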
c06r3n07: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:58,783 >> loading file tokenizer.model (then added_tokens.json, special_tokens_map.json, tokenizer_config.json and tokenizer.json)
c06r3n08: [INFO|tokenization_utils_base.py:2025] 2024-03-15 10:53:58,783 >> loading file tokenizer.model (then added_tokens.json, special_tokens_map.json, tokenizer_config.json and tokenizer.json)
(The default-legacy-behaviour tokenizer warning shown above for c06r3n06 is then repeated by every rank on c06r3n07 and c06r3n08.)
c06r3n06: [INFO|modeling_utils.py:3473] 2024-03-15 10:53:58,838 >> loading weights file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/pytorch_model.bin.index.json
c06r3n06: [INFO|modeling_utils.py:1426] 2024-03-15 10:53:58,839 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
c06r3n06: [INFO|modeling_utils.py:3582] 2024-03-15 10:53:58,839 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
c06r3n06: [INFO|configuration_utils.py:826] 2024-03-15 10:53:58,850 >> Generate config GenerationConfig { "bos_token_id": 0, "eos_token_id": 1, "pad_token_id": -1 }
c06r3n08: [INFO|configuration_utils.py:727] 2024-03-15 10:53:58,915 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/config.json
c06r3n07: [INFO|configuration_utils.py:727] 2024-03-15 10:53:58,916 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/config.json
(c06r3n07 and c06r3n08 then print the same Model config LlamaConfig block shown above for c06r3n06.)
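(The "Detected DeepSpeed ZeRO-3: activating zero.init()" messages mean that deepspeed.json, whose contents never appear in this log, requests ZeRO stage 3. Purely as an illustration, a hypothetical minimal config consistent with the observed behaviour (ZeRO-3, fp16, batch sizes and clipping delegated to the HuggingFace Trainer via "auto") could be written as follows; the file actually used in this run may differ:)

    import json

    # Hypothetical deepspeed.json: ZeRO stage 3 with fp16; "auto" values are
    # filled in by the transformers/DeepSpeed integration from the TrainingArguments.
    ds_config = {
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
        "gradient_clipping": "auto",
        "fp16": {"enabled": "auto"},
        "zero_optimization": {
            "stage": 3,
            "overlap_comm": True,
            "contiguous_gradients": True,
            "stage3_gather_16bit_weights_on_model_save": True,
        },
    }

    with open("deepspeed.json", "w") as f:
        json.dump(ds_config, f, indent=2)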
c06r3n08: "hidden_size": 4096, c06r3n08: "initializer_range": 0.02, c06r3n08: "intermediate_size": 11008, c06r3n08: "max_position_embeddings": 2048, c06r3n08: "max_sequence_length": 2048, c06r3n08: "model_type": "llama", c06r3n08: "num_attention_heads": 32, c06r3n08: "num_hidden_layers": 32, c06r3n08: "num_key_value_heads": 32, c06r3n08: "pad_token_id": -1, c06r3n08: "pretraining_tp": 1, c06r3n08: "rms_norm_eps": 1e-06, c06r3n08: "rope_scaling": null, c06r3n08: "rope_theta": 10000.0, c06r3n08: "tie_word_embeddings": false, c06r3n08: "torch_dtype": "float16", c06r3n08: "transformers_version": "4.37.2", c06r3n08: "use_cache": true, c06r3n08: "vocab_size": 32000 c06r3n08: } c06r3n08: c06r3n08: [INFO|modeling_utils.py:3473] 2024-03-15 10:54:00,431 >> loading weights file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/pytorch_model.bin.index.json c06r3n07: [INFO|modeling_utils.py:3473] 2024-03-15 10:54:00,433 >> loading weights file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/pytorch_model.bin.index.json c06r3n08: [INFO|modeling_utils.py:1426] 2024-03-15 10:54:00,472 >> Instantiating LlamaForCausalLM model under default dtype torch.float16. c06r3n07: [INFO|modeling_utils.py:1426] 2024-03-15 10:54:00,474 >> Instantiating LlamaForCausalLM model under default dtype torch.float16. c06r3n08: [INFO|modeling_utils.py:3582] 2024-03-15 10:54:00,473 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model c06r3n07: [INFO|modeling_utils.py:3582] 2024-03-15 10:54:00,474 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model c06r3n07: [INFO|configuration_utils.py:826] 2024-03-15 10:54:00,482 >> Generate config GenerationConfig { c06r3n07: "bos_token_id": 0, c06r3n07: "eos_token_id": 1, c06r3n07: "pad_token_id": -1 c06r3n07: } c06r3n07: c06r3n08: [INFO|configuration_utils.py:826] 2024-03-15 10:54:00,482 >> Generate config GenerationConfig { c06r3n08: "bos_token_id": 0, c06r3n08: "eos_token_id": 1, c06r3n08: "pad_token_id": -1 c06r3n08: } c06r3n08: c06r3n06: pthread_mutex_timedlock() returned 110 c06r3n06: Failed to initialize RSMI device mutex after 5 seconds. Previous execution may not have shutdown cleanly. To fix problem, stop all rocm_smi programs, and then delete the rocm_smi* shared memory files in /dev/shm.: Success c06r3n06: pthread_mutex_timedlock() returned 110 c06r3n06: Failed to initialize RSMI device mutex after 5 seconds. Previous execution may not have shutdown cleanly. To fix problem, stop all rocm_smi programs, and then delete the rocm_smi* shared memory files in /dev/shm.: Success c06r3n06: pthread_mutex_timedlock() returned 110 c06r3n06: Failed to initialize RSMI device mutex after 5 seconds. Previous execution may not have shutdown cleanly. To fix problem, stop all rocm_smi programs, and then delete the rocm_smi* shared memory files in /dev/shm.: Success c06r3n06: pthread_mutex_timedlock() returned 110 c06r3n06: Failed to initialize RSMI device mutex after 5 seconds. Previous execution may not have shutdown cleanly. To fix problem, stop all rocm_smi programs, and then delete the rocm_smi* shared memory files in /dev/shm.: Success c06r3n06: I0315 10:54:07.062819 874 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A c06r3n06: [2024-03-15 10:54:12,115] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 291, num_elems = 6.74B c06r3n07: Loading checkpoint shards: 0%| | 0/33 [00:00> All model checkpoint weights were used when initializing LlamaForCausalLM. 
c06r3n08: [INFO|modeling_utils.py:4350] 2024-03-15 10:54:39,713 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
c06r3n08: [INFO|modeling_utils.py:4358] 2024-03-15 10:54:39,713 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b.
c06r3n08: If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
c06r3n08: Loading checkpoint shards: 100%|██████████| 33/33 [00:27<00:00, 1.20it/s]
c06r3n08: [INFO|configuration_utils.py:779] 2024-03-15 10:54:39,733 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/generation_config.json
c06r3n08: [INFO|configuration_utils.py:826] 2024-03-15 10:54:39,733 >> Generate config GenerationConfig {
c06r3n08:   "bos_token_id": 0,
c06r3n08:   "eos_token_id": 1,
c06r3n08:   "pad_token_id": 0
c06r3n08: }
[per-rank "Loading checkpoint shards" tqdm progress from c06r3n06, c06r3n07, c06r3n08 and c06r3n09 elided; every rank reaches 33/33 in about 27 s at roughly 1.2 it/s]
c06r3n06: 03/15/2024 10:54:39 - INFO - llmtuner.model.patcher - Gradient checkpointing enabled.
c06r3n06: 03/15/2024 10:54:39 - INFO - llmtuner.model.adapter - Fine-tuning method: Full
c06r3n06: 03/15/2024 10:54:39 - INFO - llmtuner.model.loader - trainable params: 6738415616 || all params: 6738415616 || trainable%: 100.0000
c06r3n07: 03/15/2024 10:54:39 - INFO - llmtuner.model.patcher - Gradient checkpointing enabled.
c06r3n07: 03/15/2024 10:54:39 - INFO - llmtuner.model.adapter - Fine-tuning method: Full
c06r3n07: 03/15/2024 10:54:39 - INFO - llmtuner.model.loader - trainable params: 6738415616 || all params: 6738415616 || trainable%: 100.0000
c06r3n08: 03/15/2024 10:54:39 - INFO - llmtuner.model.patcher - Gradient checkpointing enabled.
c06r3n08: 03/15/2024 10:54:39 - INFO - llmtuner.model.adapter - Fine-tuning method: Full
c06r3n08: 03/15/2024 10:54:39 - INFO - llmtuner.model.loader - trainable params: 6738415616 || all params: 6738415616 || trainable%: 100.0000
c06r3n09: 03/15/2024 10:54:39 - INFO - llmtuner.model.patcher - Gradient checkpointing enabled.
c06r3n09: 03/15/2024 10:54:39 - INFO - llmtuner.model.adapter - Fine-tuning method: Full
c06r3n09: 03/15/2024 10:54:39 - INFO - llmtuner.model.loader - trainable params: 6738415616 || all params: 6738415616 || trainable%: 100.0000
[each of these three llmtuner lines is emitted by every rank, i.e. four times per node]
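For reference, the "trainable params || all params || trainable%" summary corresponds to counting parameters that require gradients versus all parameters; with full fine-tuning the two counts match (6,738,415,616, the same 6.74B elements reported by zero.init above). A small sketch of that computation for an already-loaded Hugging Face model (note that under ZeRO-3 the parameters are partitioned, so real implementations read the DeepSpeed-specific size attribute rather than plain numel()):

    import torch.nn as nn

    def param_summary(model: nn.Module) -> str:
        # Reproduce the "trainable params || all params || trainable%" log line
        # for a plain (non-partitioned) PyTorch module.
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        return (f"trainable params: {trainable} || all params: {total} || "
                f"trainable%: {100 * trainable / total:.4f}")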
c06r3n07: Loading checkpoint shards: 100%|██████████| 33/33 [00:27<00:00, 1.20it/s]
c06r3n07: [INFO|modeling_utils.py:4350] 2024-03-15 10:54:39,749 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
c06r3n07: [INFO|modeling_utils.py:4358] 2024-03-15 10:54:39,749 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b.
c06r3n07: If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
c06r3n07: [INFO|configuration_utils.py:779] 2024-03-15 10:54:39,754 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/generation_config.json
c06r3n07: [INFO|configuration_utils.py:826] 2024-03-15 10:54:39,754 >> Generate config GenerationConfig {
c06r3n07:   "bos_token_id": 0,
c06r3n07:   "eos_token_id": 1,
c06r3n07:   "pad_token_id": 0
c06r3n07: }
c06r3n09: Loading checkpoint shards: 100%|██████████| 33/33 [00:27<00:00, 1.20it/s]
c06r3n09: [INFO|modeling_utils.py:4350] 2024-03-15 10:54:39,762 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
c06r3n09: [INFO|modeling_utils.py:4358] 2024-03-15 10:54:39,762 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b.
c06r3n09: If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
c06r3n09: [INFO|configuration_utils.py:779] 2024-03-15 10:54:39,766 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/generation_config.json
c06r3n09: [INFO|configuration_utils.py:826] 2024-03-15 10:54:39,767 >> Generate config GenerationConfig {
c06r3n09:   "bos_token_id": 0,
c06r3n09:   "eos_token_id": 1,
c06r3n09:   "pad_token_id": 0
c06r3n09: }
c06r3n06: Loading checkpoint shards: 100%|██████████| 33/33 [00:27<00:00, 1.19it/s]
c06r3n06: [INFO|modeling_utils.py:4350] 2024-03-15 10:54:39,834 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
c06r3n06: [INFO|modeling_utils.py:4358] 2024-03-15 10:54:39,834 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b.
c06r3n06: If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
c06r3n06: [INFO|configuration_utils.py:779] 2024-03-15 10:54:39,839 >> loading configuration file /work/home/liangjing/.cache/modelscope/hub/skyline2006/llama-7b/generation_config.json
c06r3n06: [INFO|configuration_utils.py:826] 2024-03-15 10:54:39,839 >> Generate config GenerationConfig {
c06r3n06:   "bos_token_id": 0,
c06r3n06:   "eos_token_id": 1,
c06r3n06:   "pad_token_id": 0
c06r3n06: }
c06r3n06: 03/15/2024 10:54:39 - INFO - llmtuner.data.template - Add pad token:
c06r3n07: 03/15/2024 10:54:39 - INFO - llmtuner.data.template - Add pad token:
c06r3n08: 03/15/2024 10:54:39 - INFO - llmtuner.data.template - Add pad token:
c06r3n09: 03/15/2024 10:54:39 - INFO - llmtuner.data.template - Add pad token:
[the "Add pad token:" line is emitted once per rank on every node]
c06r3n07: 03/15/2024 10:54:39 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_en.json...
c06r3n09: 03/15/2024 10:54:39 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_en.json...
c06r3n08: 03/15/2024 10:54:39 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_en.json...
c06r3n06: 03/15/2024 10:54:39 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_en.json...
c06r3n06: Using custom data configuration default-c71a5e5c5041e81e
c06r3n08: Using custom data configuration default-c71a5e5c5041e81e
c06r3n07: Using custom data configuration default-c71a5e5c5041e81e
c06r3n09: Using custom data configuration default-c71a5e5c5041e81e
c06r3n08: Loading Dataset Infos from /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/packaged_modules/json
c06r3n06: Loading Dataset Infos from /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/packaged_modules/json
c06r3n07: Loading Dataset Infos from /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/packaged_modules/json
c06r3n09: Loading Dataset Infos from /work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/packaged_modules/json
c06r3n06: Generating dataset json (/work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96)
c06r3n06: Downloading and preparing dataset json/default to /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
c06r3n06: Downloading took 0.0 min
c06r3n06: Checksum Computation took 0.0 min
c06r3n06: Generating train split
c06r3n06: Generating train split: 52002 examples [00:00, 105128.14 examples/s]
c06r3n06: Unable to verify splits sizes.
c06r3n06: Dataset json downloaded and prepared to /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96. Subsequent calls will reuse this data.
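The "Generating train split: 52002 examples" and cache messages above come from the datasets library building an Arrow cache for the alpaca-format json file; the per-node "Found cached dataset json" entries that follow reuse that cache. A standalone sketch of the equivalent load (the file name mirrors the dataset reported in the log; the instruction/input/output columns are the usual alpaca fields and are an assumption here):

    from datasets import load_dataset

    # Build (or reuse) the same json Arrow cache that the log reports under
    # ~/.cache/huggingface/datasets/json/default-.../
    ds = load_dataset("json", data_files="alpaca_gpt4_data_en.json", split="train")
    print(ds)      # expected: 52002 rows with alpaca-style columns
    print(ds[0])   # first example, e.g. instruction / input / output fields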
c06r3n07: Found cached dataset json (/work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96)
c06r3n07: Loading Dataset info from /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96
c06r3n08: Found cached dataset json (/work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96)
c06r3n08: Loading Dataset info from /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96
c06r3n09: Found cached dataset json (/work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96)
c06r3n09: Loading Dataset info from /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96
c06r3n06: Process #0 will write at /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96/cache-f55b5a094672e9db_00000_of_00002.arrow
c06r3n06: Process #1 will write at /work/home/liangjing/.cache/huggingface/datasets/json/default-c71a5e5c5041e81e/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96/cache-f55b5a094672e9db_00001_of_00002.arrow
[c06r3n07, c06r3n08 and c06r3n09 log the same two "Process #0/#1 will write at ..." cache paths]
c06r3n09: Spawning 2 processes
c06r3n07: Spawning 2 processes
c06r3n08: Spawning 2 processes
c06r3n06: Spawning 2 processes
c06r3n09: Converting format of dataset (num_proc=2):   0%|          | 0/52002 [00:00<?, ? examples/s]
[further "Converting format of dataset" progress output was lost in the captured log]
c06r3n08: Traceback (most recent call last):
c06r3n08:   File "src/train_bash.py", line 14, in <module>
c06r3n08:     main()
c06r3n08:   File "src/train_bash.py", line 5, in main
c06r3n08:     run_exp()
c06r3n08:   File "/work/home/liangjing/LLM/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 31, in run_exp
c06r3n08:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
c06r3n08:   File "/work/home/liangjing/LLM/LLaMA-Factory-main/src/llmtuner/train/sft/workflow.py", line 32, in run_sft
c06r3n08:     dataset = get_dataset(tokenizer, model_args, data_args, training_args, stage="sft")
c06r3n08:   File "/work/home/liangjing/LLM/LLaMA-Factory-main/src/llmtuner/data/loader.py", line 162, in get_dataset
c06r3n08:     all_datasets.append(load_single_dataset(dataset_attr, model_args, data_args))
c06r3n08:   File "/work/home/liangjing/LLM/LLaMA-Factory-main/src/llmtuner/data/loader.py", line 111, in load_single_dataset
c06r3n08:     return align_dataset(dataset, dataset_attr, data_args)
c06r3n08:   File "/work/home/liangjing/LLM/LLaMA-Factory-main/src/llmtuner/data/aligner.py", line 125, in align_dataset
c06r3n08:     return dataset.map(
c06r3n08:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in wrapper
c06r3n08:     out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
c06r3n08:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 558, in wrapper
c06r3n08:     out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
c06r3n08:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3197, in map
c06r3n08:     for rank, done, content in iflatmap_unordered(
c06r3n08:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 658, in iflatmap_unordered
c06r3n08:     raise RuntimeError(
c06r3n08: RuntimeError: One of the subprocesses has abruptly died during map operation. To debug the error, disable multiprocessing.
c06r3n07: Traceback (most recent call last):
c06r3n07: [... identical call chain to the c06r3n08 traceback above ...]
c06r3n07: RuntimeError: One of the subprocesses has abruptly died during map operation. To debug the error, disable multiprocessing.
c06r3n06: 03/15/2024 10:54:47 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_zh.json...
c06r3n09: Traceback (most recent call last):
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 651, in iflatmap_unordered
c06r3n09:     yield queue.get(timeout=0.05)
c06r3n09:   File "<string>", line 2, in get
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/multiprocess/managers.py", line 835, in _callmethod
c06r3n09:     kind, result = conn.recv()
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/multiprocess/connection.py", line 253, in recv
c06r3n09:     buf = self._recv_bytes()
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/multiprocess/connection.py", line 417, in _recv_bytes
c06r3n09:     buf = self._recv(4)
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/multiprocess/connection.py", line 386, in _recv
c06r3n09:     raise EOFError
c06r3n09: EOFError
c06r3n09:
c06r3n09: During handling of the above exception, another exception occurred:
c06r3n09:
c06r3n09: Traceback (most recent call last):
c06r3n09:   File "src/train_bash.py", line 14, in <module>
c06r3n09:     main()
c06r3n09: [... same call chain as the c06r3n08 traceback above, down to datasets/arrow_dataset.py line 3197 in map ...]
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 665, in iflatmap_unordered
c06r3n09:     [async_result.get(timeout=0.05) for async_result in async_results]
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 665, in <listcomp>
c06r3n09:     [async_result.get(timeout=0.05) for async_result in async_results]
c06r3n09:   File "/work/home/liangjing/anaconda3/envs/torch2.1/lib/python3.8/site-packages/multiprocess/pool.py", line 767, in get
c06r3n09:     raise TimeoutError
c06r3n09: multiprocess.context.TimeoutError
c06r3n06: 03/15/2024 10:54:47 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_zh.json...
c06r3n07: 03/15/2024 10:54:47 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_zh.json...
c06r3n08: 03/15/2024 10:54:47 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_zh.json...
c06r3n09: 03/15/2024 10:54:47 - INFO - llmtuner.data.loader - Loading dataset alpaca_gpt4_data_zh.json...
[the "Loading dataset alpaca_gpt4_data_zh.json..." line is repeated by the remaining ranks on each node]
c06r3n06: Running tokenizer on dataset (num_proc=2):   0%|          | 0/100820 [00:00<?, ? examples/s]
c06r3n06: [further tokenizer progress updates and the printed input_ids of the first training example were lost in the captured log; the decoded example resumes below]
c06r3n06: inputs:
c06r3n06: [INST] <<SYS>>
c06r3n06: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
c06r3n06:
c06r3n06: If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
c06r3n06: <</SYS>>
c06r3n06:
c06r3n06: Give three tips for staying healthy. [/INST] 1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.
c06r3n06:
c06r3n06: 2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.
c06r3n06:
c06r3n06: 3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night.
⁇ c06r3n06: label_ids: c06r3n06: [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 29871, 29896, 29889, 382, 271, 263, 6411, 8362, 322, 18254, 768, 2738, 652, 300, 29901, 8561, 1854, 596, 592, 1338, 526, 20978, 573, 310, 263, 12875, 310, 285, 21211, 322, 18655, 1849, 29892, 20793, 26823, 29892, 3353, 2646, 1144, 29892, 322, 9045, 29891, 285, 1446, 29889, 910, 6911, 304, 3867, 596, 3573, 411, 278, 18853, 18254, 374, 1237, 304, 740, 472, 967, 1900, 322, 508, 1371, 5557, 17168, 293, 10267, 2129, 29889, 13, 13, 29906, 29889, 2201, 482, 297, 4943, 9128, 6354, 29901, 1222, 6269, 895, 338, 7618, 1455, 363, 7344, 292, 4549, 289, 2873, 29892, 2301, 7799, 29892, 322, 5881, 29875, 586, 6151, 1070, 9045, 29889, 319, 326, 363, 472, 3203, 29871, 29896, 29945, 29900, 6233, 310, 17768, 403, 14911, 711, 293, 15058, 470, 29871, 29955, 29945, 6233, 310, 14877, 20657, 15058, 1269, 4723, 29889, 13, 13, 29941, 29889, 3617, 3307, 8709, 29901, 24162, 3307, 11029, 8709, 338, 7618, 1455, 363, 9128, 322, 19119, 1532, 29899, 915, 292, 29889, 739, 6911, 304, 1072, 5987, 286, 2092, 29892, 11157, 25323, 3321, 740, 29892, 322, 11286, 9045, 29891, 14321, 322, 5198, 1540, 740, 29889, 319, 326, 363, 29871, 29955, 29899, 29929, 6199, 310, 8709, 1269, 4646, 29889, 0] c06r3n06: labels: c06r3n06: 1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases. c06r3n06: c06r3n06: 2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week. c06r3n06: c06r3n06: 3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night. ⁇ slurmstepd: error: *** JOB 13597996 ON c06r3n06 CANCELLED AT 2024-03-15T10:58:43 *** c06r3n06: Connection to c06r3n06 closed by remote host. pdsh@c06r3n06: c06r3n06: ssh exited with exit code 255
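The run dies while datasets maps the alpaca data with two worker processes: every node except c06r3n06 raises "One of the subprocesses has abruptly died during map operation", and Slurm then cancels the job. The error text itself names the first debugging step: rerun the preprocessing without multiprocessing (for example by dropping the preprocessing worker count to 1 in the training command) so the real exception from the worker surfaces in the main process. A minimal standalone reproduction sketch along those lines; the align function here is only a stand-in for the real alignment logic in llmtuner.data.aligner:

    from datasets import load_dataset

    def align(example):
        # Stand-in for the real alignment/formatting function; replace with the
        # actual mapping logic when reproducing the failure.
        return {"prompt": example.get("instruction", ""),
                "response": example.get("output", "")}

    ds = load_dataset("json", data_files="alpaca_gpt4_data_en.json", split="train")

    # num_proc=None runs the map in the calling process, so whatever killed the
    # worker (OOM, a crashing extension, a pickling problem, ...) is raised
    # directly instead of the generic "subprocess has abruptly died" RuntimeError
    # seen with num_proc=2.
    aligned = ds.map(align, num_proc=None, remove_columns=ds.column_names)
    print(aligned)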