1. 30 May, 2023 1 commit
  2. 28 May, 2023 1 commit
  3. 12 May, 2023 2 commits
  4. 04 May, 2023 1 commit
  5. 03 May, 2023 4 commits
  6. 02 May, 2023 1 commit
  7. 01 May, 2023 1 commit
  8. 24 Apr, 2023 1 commit
  9. 23 Apr, 2023 1 commit
  10. 19 Apr, 2023 1 commit
  11. 17 Apr, 2023 1 commit
  12. 15 Apr, 2023 2 commits
  13. 14 Apr, 2023 1 commit
  14. 13 Apr, 2023 1 commit
  15. 02 Apr, 2023 1 commit
    • Add support for unCLIP SD2.x models. · 809bcc8c
      comfyanonymous authored
      See _for_testing/unclip in the UI for the new nodes.
      
      unCLIPCheckpointLoader is used to load them.
      
      unCLIPConditioning is used to add the image conditioning; it takes as
      input the output of CLIPVisionEncode, which has been moved to the
      conditioning section.
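The node chain described above can be sketched as a plain dict in the style of ComfyUI's API workflow format. All input names, output slot indices, and filenames here are illustrative assumptions, not taken from the actual node definitions in this commit.

```python
# Hypothetical wiring of the new unCLIP nodes. Each entry maps a node id
# to its class and inputs; a list value ["node_id", slot] means "connect
# to that output slot of that node". Slot numbers are assumptions.
workflow = {
    "ckpt": {
        "class_type": "unCLIPCheckpointLoader",       # loads the unCLIP SD2.x checkpoint
        "inputs": {"ckpt_name": "sd21-unclip.ckpt"},  # assumed filename
    },
    "vision": {
        "class_type": "CLIPVisionEncode",             # encodes the reference image
        "inputs": {"clip_vision": ["ckpt", 3],        # assumed output slot
                   "image": ["img", 0]},              # "img" is a hypothetical image-loader node
    },
    "cond": {
        "class_type": "unCLIPConditioning",           # adds the image cond to the text cond
        "inputs": {"conditioning": ["text", 0],       # "text" is a hypothetical text-encode node
                   "clip_vision_output": ["vision", 0]},
    },
}
```

The key relationship from the commit is visible in the last entry: unCLIPConditioning consumes the CLIPVisionEncode output.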
  16. 31 Mar, 2023 1 commit
  17. 29 Mar, 2023 1 commit
  18. 23 Mar, 2023 2 commits
  19. 22 Mar, 2023 2 commits
  20. 21 Mar, 2023 1 commit
  21. 19 Mar, 2023 1 commit
  22. 17 Mar, 2023 1 commit
  23. 14 Mar, 2023 2 commits
  24. 13 Mar, 2023 1 commit
  25. 11 Mar, 2023 2 commits
  26. 10 Mar, 2023 1 commit
  27. 08 Mar, 2023 1 commit
  28. 07 Mar, 2023 1 commit
  29. 06 Mar, 2023 2 commits
  30. 05 Mar, 2023 1 commit
    • Implement support for t2i style model. · 47acb3d7
      comfyanonymous authored
      It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode.
      
      Put the clip vision model in models/clip_vision
      Put the t2i style model in models/style_models
      
      StyleModelLoader loads it and StyleModelApply applies it.
      ConditioningAppend appends the conditioning it outputs to a positive
      conditioning.
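The loader/apply/append chain above can likewise be sketched as a dict in the style of ComfyUI's API workflow format. Input names, output slot indices, and the model filenames are illustrative assumptions, not the real node signatures from this commit.

```python
# Hypothetical wiring of the t2i style-model nodes. A list value
# ["node_id", slot] means "connect to that output slot of that node".
workflow = {
    "clip_vision": {
        "class_type": "CLIPVisionLoader",                    # reads from models/clip_vision
        "inputs": {"clip_name": "clip_vision.safetensors"},  # assumed filename
    },
    "encode": {
        "class_type": "CLIPVisionEncode",                    # encodes the style reference image
        "inputs": {"clip_vision": ["clip_vision", 0],
                   "image": ["img", 0]},                     # "img" is a hypothetical image-loader node
    },
    "style": {
        "class_type": "StyleModelLoader",                    # reads from models/style_models
        "inputs": {"style_model_name": "t2i_style.pth"},     # assumed filename
    },
    "apply": {
        "class_type": "StyleModelApply",                     # applies the style model
        "inputs": {"style_model": ["style", 0],
                   "clip_vision_output": ["encode", 0]},
    },
    "positive": {
        "class_type": "ConditioningAppend",                  # appends to the positive conditioning
        "inputs": {"conditioning_to": ["text_pos", 0],       # "text_pos" is a hypothetical text-encode node
                   "conditioning_from": ["apply", 0]},
    },
}
```

This mirrors the commit's description: StyleModelApply consumes the CLIPVisionEncode output, and ConditioningAppend merges the result into the positive conditioning.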