
AssertionError: Torch not compiled with CUDA enabled #9

Open
EightiesPower opened this issue Mar 21, 2024 · 3 comments

Comments

@EightiesPower

I get the following error when running app.py:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ G:\stable diffusion stuff\Smooth Diffusion\Smooth-Diffusion\app.py:939 in │
│ │
│ 936 │ # css = css_empty │
│ 937 │ css = css_version_4_11_0 │
│ 938 │ │
│ ❱ 939 │ wrapper_obj = wrapper( │
│ 940 │ │ fp16=False, │
│ 941 │ │ tag_diffuser=default.diffuser, │
│ 942 │ │ tag_lora=default.lora, │
│ │
│ G:\stable diffusion stuff\Smooth Diffusion\Smooth-Diffusion\app.py:263 in __init__                │
│ │
│ 260 │ │ │ self.torch_dtype = torch.float16 │
│ 261 │ │ else: │
│ 262 │ │ │ self.torch_dtype = torch.float32 │
│ ❱ 263 │ │ self.load_all(tag_diffuser, tag_lora, tag_scheduler) │
│ 264 │ │ │
│ 265 │ │ self.image_latent_dim = 4 │
│ 266 │ │ self.batchsize = 8 │
│ │
│ G:\stable diffusion stuff\Smooth Diffusion\Smooth-Diffusion\app.py:277 in load_all │
│ │
│ 274 │ │ self.cache_inverse_maxn = 500 │
│ 275 │ │
│ 276 │ def load_all(self, tag_diffuser, tag_lora, tag_scheduler): │
│ ❱ 277 │ │ self.load_diffuser_lora(tag_diffuser, tag_lora) │
│ 278 │ │ self.load_scheduler(tag_scheduler) │
│ 279 │ │ return tag_diffuser, tag_lora, tag_scheduler │
│ 280 │
│ │
│ G:\stable diffusion stuff\Smooth Diffusion\Smooth-Diffusion\app.py:282 in load_diffuser_lora │
│ │
│ 279 │ │ return tag_diffuser, tag_lora, tag_scheduler │
│ 280 │ │
│ 281 │ def load_diffuser_lora(self, tag_diffuser, tag_lora): │
│ ❱ 282 │ │ self.net = StableDiffusionPipeline.from_pretrained( │
│ 283 │ │ │ choices.diffuser[tag_diffuser], torch_dtype=self.torch_dtype).to(self.device │
│ 284 │ │ self.net.safety_checker = None │
│ 285 │ │ if tag_lora != 'empty': │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\diffusers\pipelines\pipeline_utils.py:727 in to │
│ │
│ 724 │ │ │ │ │ f"The module '{module.__class__.__name__}' has been loaded in 8bit a │
│ 725 │ │ │ │ ) │
│ 726 │ │ │ else: │
│ ❱ 727 │ │ │ │ module.to(torch_device, torch_dtype) │
│ 728 │ │ │ │
│ 729 │ │ │ if ( │
│ 730 │ │ │ │ module.dtype == torch.float16 │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\transformers\modeling_utils.py:1902 in to │
│ │
│ 1899 │ │ │ │ " model has already been set to the correct devices and casted to the co │
│ 1900 │ │ │ ) │
│ 1901 │ │ else: │
│ ❱ 1902 │ │ │ return super().to(*args, **kwargs) │
│ 1903 │ │
│ 1904 │ def half(self, *args): │
│ 1905 │ │ # Checks if the model has been loaded in 8-bit │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:1145 in to │
│ │
│ 1142 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ 1143 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 1144 │ │ │
│ ❱ 1145 │ │ return self._apply(convert) │
│ 1146 │ │
│ 1147 │ def register_full_backward_pre_hook( │
│ 1148 │ │ self, │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:820 in _apply │
│ │
│ 817 │ │ │ # track autograd history of param_applied, so we have to use │
│ 818 │ │ │ # with torch.no_grad(): │
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\nn\modules\module.py:1143 in convert │
│ │
│ 1140 │ │ │ if convert_to_format is not None and t.dim() in (4, 5): │
│ 1141 │ │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() els │
│ 1142 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ ❱ 1143 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 1144 │ │ │
│ 1145 │ │ return self._apply(convert) │
│ 1146 │
│ │
│ G:\miniconda3\envs\dev\lib\site-packages\torch\cuda\__init__.py:239 in _lazy_init │
│ │
│ 236 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 237 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 238 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 239 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 240 │ │ if _cudart is None: │
│ 241 │ │ │ raise AssertionError( │
│ 242 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled

I tried uninstalling Torch 2.0.0 and installing 2.2.1. Afterwards it seems to run, but when opening the provided link I just get a "This site can't be reached" error.
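Before swapping torch versions, it can help to confirm whether the installed wheel is actually a CUDA build. A minimal diagnostic sketch (the `torch.version.cuda` / `torch.cuda.is_available()` attributes are standard torch API; nothing here comes from this repo):

```python
def cuda_status():
    """Report whether torch is installed and whether its build supports CUDA."""
    import importlib.util
    if importlib.util.find_spec("torch") is None:
        return {"installed": False}
    import torch
    return {
        "installed": True,
        "version": torch.__version__,
        "cuda_build": torch.version.cuda,        # None on CPU-only wheels
        "cuda_available": torch.cuda.is_available(),
    }

print(cuda_status())
```

If `cuda_build` is `None`, the wheel was built without CUDA and this exact `AssertionError` is expected; reinstalling a CUDA-enabled wheel for your driver version should fix it.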

@JiayiGuo821
Collaborator

Hi @EightiesPower,

The required torch version can vary depending on your device/CUDA setup.
For the "This site can't be reached" error, please try commenting out the last few lines of app.py:

# port = args.port if args.port is not None else default.port
# ipaddr = os.popen("curl icanhazip.com").read().strip()
# print('LINK: http://{}:{}'.format(ipaddr, port))
# demo.launch(server_name="0.0.0.0", server_port=port)
demo.launch()

Let me know if you still see any errors. I will also update the code soon.

@EightiesPower
Author

Alright, after editing app.py I got the WebUI to launch. Thank you!
One more question though, is it possible to do drag editing using this WebUI? If so, how?

@JiayiGuo821
Collaborator

Drag editing is not implemented in this WebUI. We integrate our smooth LoRA into the DragDiffusion framework for drag editing, so you can get it by simply replacing the model there.
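Not from this repo, but a rough sketch of what "replacing the model" could look like using diffusers' standard LoRA loading API; the base repo id and LoRA path below are placeholders, not values from this issue:

```python
def pick_device_dtype(cuda_available):
    """Choose device/dtype the way app.py does: fp16 only when CUDA is present."""
    return ("cuda", "float16") if cuda_available else ("cpu", "float32")

def load_with_smooth_lora(base_repo, lora_path):
    """Load a Stable Diffusion pipeline and swap in a (smooth) LoRA.

    `base_repo` and `lora_path` are hypothetical placeholders.
    """
    import torch
    from diffusers import StableDiffusionPipeline
    device, _ = pick_device_dtype(torch.cuda.is_available())
    dtype = torch.float16 if device == "cuda" else torch.float32
    pipe = StableDiffusionPipeline.from_pretrained(
        base_repo, torch_dtype=dtype).to(device)
    pipe.load_lora_weights(lora_path)  # replace the model's LoRA weights here
    return pipe

print(pick_device_dtype(False))
```

The same idea applies inside DragDiffusion: keep its editing pipeline and point the model-loading step at the smooth LoRA weights.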
