1. 22 Oct, 2021 1 commit
    • Extend auto shard capabilities to work around torch.fx edge cases. (#817) · 7bdf50a3
      Eugen Hotaj authored
      auto_shard.py currently uses torch.fx to create a symbolic DAG of
      operations and linearizes that DAG into an nn.Sequential so it can later
      be used for model offloading. This works in most cases but runs into
      issues with certain eager-mode constructs, such as dynamic conditionals
      and shape-dependent computation.
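
      For illustration only (not code from this PR), a minimal sketch of the kind of
      data-dependent control flow that torch.fx symbolic tracing cannot capture; the
      module name here is made up:

          import torch
          import torch.nn as nn
          import torch.fx

          class DynamicBranch(nn.Module):
              # The branch depends on a runtime tensor value, so tracing cannot
              # record a single static graph for forward().
              def forward(self, x):
                  if x.sum() > 0:
                      return x * 2
                  return x - 1

          try:
              torch.fx.symbolic_trace(DynamicBranch())
          except Exception as e:
              # torch.fx typically raises a TraceError here: traced proxies
              # cannot be used to drive Python control flow.
              print("tracing failed:", e)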
      
      This PR extends auto_shard.py to first run a preprocessing step that wraps
      any nn.Module that cannot be traced through. It adds a test for dynamic
      conditionals and updates existing test code that was failing.
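
      A minimal sketch of what such a wrapping step can look like, assuming that
      "wraps" means treating the untraceable module as an opaque leaf during tracing;
      the class names below are illustrative, not the PR's:

          import torch
          import torch.nn as nn
          import torch.fx

          class DataDependent(nn.Module):
              # Stand-in for a module whose forward() cannot be traced through.
              def forward(self, x):
                  return x * 2 if x.sum() > 0 else x - 1

          class LeafWrappingTracer(torch.fx.Tracer):
              # Treat selected module types as opaque call_module leaves instead
              # of tracing into their forward().
              def __init__(self, leaf_types):
                  super().__init__()
                  self.leaf_types = tuple(leaf_types)

              def is_leaf_module(self, m, module_qualified_name):
                  return isinstance(m, self.leaf_types) or super().is_leaf_module(
                      m, module_qualified_name
                  )

          model = nn.Sequential(nn.Linear(8, 8), DataDependent(), nn.Linear(8, 8))
          graph = LeafWrappingTracer([DataDependent]).trace(model)
          gm = torch.fx.GraphModule(model, graph)  # traces around, not through, DataDependent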
      
      There are some immediate extensions to this approach which are marked as
      TODO in the code.
  2. 26 Jun, 2021 1 commit
  3. 11 Jun, 2021 1 commit
    • [Offload][feature] Add auto shard functionality to remove requirement of... · cbeda830
      anj-s authored
      [Offload][feature] Add auto shard functionality to remove requirement of nn.Sequential models. (#695)
      
      * auto wrap functionality
      
      * lint and doc strings
      
      * fix lint errors
      
      * lint errors and version skips
      
      * remove mypy checking and add conditional import
      
      * another math.prod instance
      
      * another import fix
      
      * address comments
      
      * lint errors
      
      * address comments
      
      * fix lint errors
      
      * add placeholder nodes to tracker list
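
      A rough sketch of the overall trace-and-linearize idea described in these two
      commits, assuming a strictly feed-forward model; the helper name and the MLP
      below are illustrative, and the real auto_shard.py handles more general graphs:

          import torch
          import torch.nn as nn
          import torch.fx

          def linearize_to_sequential(model: nn.Module) -> nn.Sequential:
              # Naive flattening: only valid when the traced graph is a straight
              # chain of call_module nodes; auto_shard.py covers more cases.
              traced = torch.fx.symbolic_trace(model)
              layers = [
                  traced.get_submodule(node.target)
                  for node in traced.graph.nodes
                  if node.op == "call_module"
              ]
              return nn.Sequential(*layers)

          class MLP(nn.Module):
              def __init__(self):
                  super().__init__()
                  self.fc1 = nn.Linear(8, 16)
                  self.act = nn.ReLU()
                  self.fc2 = nn.Linear(16, 4)

              def forward(self, x):
                  return self.fc2(self.act(self.fc1(x)))

          mlp = MLP()
          seq = linearize_to_sequential(mlp)  # reuses mlp's submodules, in execution order
          x = torch.randn(2, 8)
          assert torch.allclose(seq(x), mlp(x))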