For IterableDataset, return DataLoader using self._train_batch_size (#21447)
For IterableDataset, return a DataLoader built with self._train_batch_size. This matches how we generate a regular DataLoader and ensures the correct args.per_device_train_batch_size ends up on each GPU.
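A minimal, illustrative sketch of the idea, not the actual transformers.Trainer code: `SketchTrainer`, `SketchArgs`, and `CountingStream` are hypothetical names used only to show where a per-device `_train_batch_size` would be applied in the IterableDataset branch.

```python
# Hedged sketch only -- the real Trainer has many more options (collate_fn,
# num_workers, sharding across ranks, etc.) that are omitted here.
from dataclasses import dataclass

from torch.utils.data import DataLoader, IterableDataset


class CountingStream(IterableDataset):
    """Stand-in for a streaming dataset with no __len__."""

    def __iter__(self):
        return iter(range(64))


@dataclass
class SketchArgs:
    per_device_train_batch_size: int = 8


class SketchTrainer:
    def __init__(self, train_dataset, args):
        self.train_dataset = train_dataset
        self.args = args
        # Resolved per-device batch size, mirroring the attribute named in the PR.
        self._train_batch_size = args.per_device_train_batch_size

    def get_train_dataloader(self):
        # An IterableDataset cannot use a sampler, so it goes into a plain DataLoader.
        # Using self._train_batch_size here matches the non-iterable path, so each
        # device receives per_device_train_batch_size samples per step.
        if isinstance(self.train_dataset, IterableDataset):
            return DataLoader(self.train_dataset, batch_size=self._train_batch_size)
        return DataLoader(self.train_dataset, batch_size=self._train_batch_size, shuffle=True)


loader = SketchTrainer(CountingStream(), SketchArgs()).get_train_dataloader()
print(next(iter(loader)).shape)  # torch.Size([8]) on this device
```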