distributed FSDP model initialization
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/656

Enable distributed FSDP model initialization. This iteratively moves and shards the model onto GPUs, allowing training of models that exceed single-GPU HBM capacity and cannot be instantiated multiple times on a single host.

The flow is as follows:
1. Rank 0 initializes the whole model on CPU using the existing code paths, while all other ranks initialize an 'empty' model using fake tensors.
2. Once this is complete and initialization moves to FSDP, distributed init traverses the model bottom-up, transferring all params/buffers from rank 0 to all other ranks while simultaneously wrapping modules in FSDP whenever possible (based on the specified config). Modules are thus sharded (and memory usage distributed) at the earliest possible point, using the existing FSDP API/implementation (see the sketch below).

Reviewed By: XiaoliangDai

Differential Revision: D54287718

fbshipit-source-id: 16d63d78065d1fca0c6baf7a385f666a4e1b2a5f
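A minimal sketch of the two-phase flow described above, assuming a PyTorch distributed process group is already initialized. The names `build_empty_or_real_model`, `_sync_from_rank0`, `distributed_fsdp_init`, `build_model`, and `should_wrap` are hypothetical, not the actual d2go APIs, and meta tensors stand in here for the fake tensors mentioned in the summary:

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def build_empty_or_real_model(build_model):
    # Phase 1: only rank 0 materializes real weights on CPU via the
    # existing init code path; other ranks build shape-only modules.
    if dist.get_rank() == 0:
        return build_model()
    # Meta tensors carry shape/dtype without allocating storage
    # (a stand-in for the fake tensors used in the diff).
    with torch.device("meta"):
        return build_model()


def _sync_from_rank0(module, device):
    # Transfer this module's own (non-recursive) params/buffers from
    # rank 0 to all other ranks. All ranks walk the same module
    # structure, so the broadcasts line up.
    for name, param in list(module.named_parameters(recurse=False)):
        if dist.get_rank() == 0:
            data = param.data.to(device)
        else:
            data = torch.empty(param.shape, dtype=param.dtype, device=device)
        dist.broadcast(data, src=0)
        module._parameters[name] = nn.Parameter(data, param.requires_grad)
    for name, buf in list(module.named_buffers(recurse=False)):
        if dist.get_rank() == 0:
            data = buf.to(device)
        else:
            data = torch.empty(buf.shape, dtype=buf.dtype, device=device)
        dist.broadcast(data, src=0)
        module._buffers[name] = data


def distributed_fsdp_init(module, should_wrap, device):
    # Phase 2: bottom-up traversal. Children are synced and (possibly)
    # wrapped before their parents, so full-size replicas are sharded
    # and freed at the earliest opportunity.
    for name, child in list(module.named_children()):
        setattr(module, name, distributed_fsdp_init(child, should_wrap, device))
    _sync_from_rank0(module, device)
    if should_wrap(module):
        # Shard as soon as the config allows; a later parent FSDP wrap
        # skips params already managed by this nested instance.
        module = FSDP(module, device_id=device)
    return module
```

In this sketch, `should_wrap` plays the role of the specified wrapping config, e.g. `lambda m: isinstance(m, nn.TransformerEncoderLayer)`, and `device` would typically be `torch.device("cuda", dist.get_rank())`.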