    Implements Blockwise lora (#7352) · 03024468
    UmerHA authored
    
    
    * Initial commit
    
    * Implemented block lora
    
    - implemented block lora
    - updated docs
    - added tests
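
    Blockwise LoRA lets a user give one scale per unet block group instead of a single
    global scale. A minimal sketch of the expansion step is below; `expand_lora_scales`,
    the group names, and the block counts are hypothetical placeholders for illustration,
    not the actual diffusers API (the real logic lives in `unet_loader_utils.py`, and per
    the commit notes only attention modules are scaled).

    ```python
    def expand_lora_scales(scales, blocks_per_group=None):
        """Expand a blockwise scale spec into one float per unet block.

        `scales` may be a single float (applied everywhere) or a dict with
        optional "down", "mid", "up" keys whose values are floats or lists
        (one float per block in that group). Missing groups default to 1.0.
        Group names and block counts here are assumptions for illustration.
        """
        if blocks_per_group is None:
            blocks_per_group = {"down": 2, "mid": 1, "up": 2}
        # A bare float is forwarded as-is to every group.
        if isinstance(scales, (int, float)):
            scales = {group: float(scales) for group in blocks_per_group}
        expanded = {}
        for group, n_blocks in blocks_per_group.items():
            value = scales.get(group, 1.0)
            # A single float for a group is broadcast over all its blocks.
            if isinstance(value, (int, float)):
                value = [float(value)] * n_blocks
            if len(value) != n_blocks:
                raise ValueError(
                    f"Expected {n_blocks} scale(s) for '{group}', got {len(value)}."
                )
            for i, v in enumerate(value):
                expanded[f"{group}.block_{i}"] = v
        return expanded
    ```

    For example, `expand_lora_scales({"down": 0.9, "up": [0.6, 0.8]})` broadcasts 0.9
    over both down blocks, defaults the mid block to 1.0, and assigns the up blocks
    their listed scales.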
    
    * Finishing up
    
    * Reverted unrelated changes made by make style
    
    * Fixed typo
    
    * Fixed bug + Made text_encoder_2 scalable
    
    * Integrated some review feedback
    
    * Incorporated review feedback
    
    * Fix tests
    
    * Made every module configurable
    
* Adapted to new lora test structure
    
    * Final cleanup
    
    * Some more final fixes
    
    - Included examples in `using_peft_for_inference.md`
    - Added hint that only attns are scaled
    - Removed NoneTypes
    - Added test to check mismatching lens of adapter names / weights raise error
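
    The mismatched-lengths check mentioned above can be sketched as follows;
    `set_adapter_weights` is a hypothetical stand-in for the validation inside
    diffusers' `set_adapters`, shown only to illustrate the error the new test asserts.

    ```python
    def set_adapter_weights(adapter_names, weights=None):
        """Pair adapter names with weights, rejecting mismatched list lengths."""
        if isinstance(adapter_names, str):
            adapter_names = [adapter_names]
        if weights is None:
            # No weights given: default every adapter to full strength.
            weights = [1.0] * len(adapter_names)
        elif not isinstance(weights, list):
            # A single scalar/dict is wrapped so one adapter gets one weight.
            weights = [weights]
        if len(adapter_names) != len(weights):
            raise ValueError(
                f"Got {len(adapter_names)} adapter name(s) but "
                f"{len(weights)} weight(s); lengths must match."
            )
        return dict(zip(adapter_names, weights))
    ```

    A list of two names with a list of one weight raises `ValueError`, which is the
    behavior the added test checks.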
    
    * Update using_peft_for_inference.md
    
    * Update using_peft_for_inference.md
    
    * Make style, quality, fix-copies
    
* Updated tutorial; added warning if scale/adapter mismatch
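
    The scale/adapter mismatch warning can be sketched like this; `filter_scale_keys`
    and its arguments are hypothetical names for illustration, not the diffusers
    implementation.

    ```python
    import warnings

    def filter_scale_keys(scales, known_blocks):
        """Warn about, then drop, scale entries for blocks the model lacks."""
        unknown = sorted(set(scales) - set(known_blocks))
        if unknown:
            warnings.warn(f"Ignoring scales for unknown blocks: {unknown}")
        return {k: v for k, v in scales.items() if k in known_blocks}
    ```

    Unknown keys are reported once and silently excluded from the returned dict, so a
    typo in a block name does not fail the whole call.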
    
    * floats are forwarded as-is; changed tutorial scale
    
    * make style, quality, fix-copies
    
    * Fixed typo in tutorial
    
    * Moved some warnings into `lora_loader_utils.py`
    
    * Moved scale/lora mismatch warnings back
    
    * Integrated final review suggestions
    
    * Empty commit to trigger CI
    
* Reverted empty commit to trigger CI
    
    ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
unet_loader_utils.py 5.18 KB