Commit 056e5545 authored by comfyanonymous

Don't try to get vram from xpu or cuda when directml is enabled.

parent 2ca934f7
@@ -34,6 +34,9 @@ if args.directml is not None:
 try:
     import torch
-    try:
-        import intel_extension_for_pytorch as ipex
-        if torch.xpu.is_available():
+    if directml_enabled:
+        total_vram = 4097 #TODO
+    else:
+        try:
+            import intel_extension_for_pytorch as ipex
+            if torch.xpu.is_available():
...
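For context, here is a minimal, illustrative Python sketch of the detection pattern this diff introduces: when DirectML is enabled, skip the xpu/cuda probes entirely and use a placeholder VRAM value, otherwise try the xpu path (via intel_extension_for_pytorch) first. The function name detect_total_vram is hypothetical, and the cuda fallback is inferred from the commit message since that part of the hunk is truncated above; this is a sketch, not the repository's actual code.

# Illustrative sketch, not ComfyUI's actual function. Mirrors the pattern in
# the hunk above: with DirectML there is no VRAM query, so a placeholder is
# returned (matching the 4097 #TODO), and xpu/cuda are never probed.
import torch

def detect_total_vram(directml_enabled: bool) -> float:
    """Return total device memory in MB."""
    if directml_enabled:
        return 4097  # placeholder, matches the #TODO in the diff
    try:
        # Importing ipex makes torch.xpu usable on Intel GPUs.
        import intel_extension_for_pytorch as ipex  # noqa: F401
        if torch.xpu.is_available():
            props = torch.xpu.get_device_properties(torch.xpu.current_device())
            return props.total_memory / (1024 * 1024)
    except Exception:
        pass
    # cuda fallback assumed from the commit message; not visible in the truncated hunk.
    free, total = torch.cuda.mem_get_info(torch.cuda.current_device())
    return total / (1024 * 1024)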