KeyError on ds_config["train_batch_size"]
Traceback (most recent call last):
  File "fine-tune.py", line 159, in <module>
    train()
  File "fine-tune.py", line 151, in train
    model=model, args=training_args, train_dataset=dataset, tokenizer=tokenizer
  File "/home/liuhao/anaconda3/envs/py37_baichuan2/lib/python3.7/site-packages/transformers/trainer.py", line 349, in __init__
    self.create_accelerator_and_postprocess()
  File "/home/liuhao/anaconda3/envs/py37_baichuan2/lib/python3.7/site-packages/transformers/trainer.py", line 3970, in create_accelerator_and_postprocess
    gradient_accumulation_steps=self.args.gradient_accumulation_steps,
  File "/home/liuhao/anaconda3/envs/py37_baichuan2/lib/python3.7/site-packages/accelerate/accelerator.py", line 283, in __init__
    deepspeed_plugin.set_deepspeed_weakref()
  File "/home/liuhao/anaconda3/envs/py37_baichuan2/lib/python3.7/site-packages/accelerate/utils/dataclasses.py", line 678, in set_deepspeed_weakref
    if ds_config["train_batch_size"] == "auto":
KeyError: 'train_batch_size'
I tried adding "train_batch_size": "auto" to ds_config.json, but that produces a different error: TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient'
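For reference, when DeepSpeed is driven through the HuggingFace Trainer, the batch-size-related fields are normally all set to "auto" so the Trainer can fill them in from its own arguments; setting only one of them can leave the config inconsistent. A minimal sketch of the relevant ds_config.json fragment (standard DeepSpeed field names; the rest of the config is assumed unchanged):

```json
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```

Note that the second error is likely unrelated to the DeepSpeed config: sdp_kernel's enable_mem_efficient argument (torch.backends.cuda.sdp_kernel) only exists in newer PyTorch releases, and a Python 3.7 environment caps you at an older torch, so the model code is probably calling into a PyTorch API that is too old — a separate version-mismatch issue.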