
[Bug]: --use-zluda uses cpu, --use-directml works fine #552

Open · 6 tasks done
picarica opened this issue Oct 26, 2024 · 20 comments

@picarica

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Using ZLUDA does not work with an RX 570 on Windows 10 with the newest drivers.

Steps to reproduce the problem

Clone the repository, then launch with --use-zluda and these options:
--opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half --device-id 1
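A quick way to confirm whether a launch actually picked up the GPU is to ask torch directly; a minimal sketch (assuming you run it with the webui's venv\Scripts\python.exe — under ZLUDA the AMD GPU shows up as a CUDA device):

```python
import importlib.util

def report_device():
    """Report which device torch would use; ZLUDA presents the AMD GPU as a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if torch.cuda.is_available():
        # Under ZLUDA this prints the AMD GPU's name, e.g. "AMD Radeon RX 570"
        return "cuda: " + torch.cuda.get_device_name(0)
    return "cpu only: torch " + torch.__version__

if __name__ == "__main__":
    print(report_device())
```

If this prints "cpu only", a CPU build of torch is installed in the venv and ZLUDA cannot be used until it is replaced.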

What should have happened?

It should use the GPU. It even detects the correct gfx803 agent in the logs, but doesn't use it.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2024-10-26-14-11.json

Console logs

venv "C:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-13-g7c946520
Commit hash: 7c94652083ef40c2cabe384783282c20d2979870
ROCm: agents=['gfx803']
ROCm: version=5.7
ZLUDA support: experimental
Using ZLUDA in C:\stable-diffusion-webui-amdgpu\.zluda
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
You are up to date with the most recent release.
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.3.1+cpu)
    Python  3.10.11 (you have 3.10.6)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half --device-id 1
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX failed to initialize: Failed to import diffusers.pipelines.auto_pipeline because of the following error (look up to see its traceback):
Failed to import diffusers.pipelines.aura_flow.pipeline_aura_flow because of the following error (look up to see its traceback):
cannot import name 'UMT5EncoderModel' from 'transformers' (C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\__init__.py)
ControlNet preprocessor location: C:\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-10-26 16:05:44,242 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from C:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-10-26 16:05:44,860 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
Creating model from config: C:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(

To create a public link, set `share=True` in `launch()`.
Startup time: 24.3s (prepare environment: 20.9s, initialize shared: 1.2s, load scripts: 1.5s, create ui: 0.6s, gradio launch: 6.4s, add APIs: 0.2s, app_started_callback: 1.0s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 11.5s (load weights from disk: 0.9s, create model: 0.4s, apply weights to model: 8.2s, apply float(): 1.5s, calculate empty prompt: 0.6s).

Additional information

I wanted to use this repo on Linux, but ZLUDA never worked there. I heard there were some updates and wanted to try them out, but it doesn't work either. I can only generate using DirectML, which isn't available for Linux, which is a shame. I wish I could find out how to use ZLUDA with my GPU.

@lshqqytiger
Owner

Wipe the venv folder and try again without --device-id 1.

@picarica
Author

Well, it did run, but on the CPU:

Creating venv in directory C:\stable-diffusion-webui-amdgpu\venv using python "C:\Users\picarica\AppData\Local\Programs\Python\Python310\python.exe"
Requirement already satisfied: pip in c:\stable-diffusion-webui-amdgpu\venv\lib\site-packages (22.2.1)
Collecting pip
  Using cached pip-24.2-py3-none-any.whl (1.8 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 22.2.1
    Uninstalling pip-22.2.1:
      Successfully uninstalled pip-22.2.1
Successfully installed pip-24.2
venv "C:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-13-g7c946520
Commit hash: 7c94652083ef40c2cabe384783282c20d2979870
ROCm: no agent was found
ROCm: version=6.1
ZLUDA support: experimental
Failed to load ZLUDA: Could not find module 'C:\Program Files\AMD\ROCm\6.1\bin\rocblas.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Using CPU-only torch
Installing torch and torchvision
Collecting torch
  Downloading torch-2.5.0-cp310-cp310-win_amd64.whl.metadata (28 kB)
Collecting torchvision
  Downloading torchvision-0.20.0-cp310-cp310-win_amd64.whl.metadata (6.2 kB)
Collecting filelock (from torch)
  Using cached filelock-3.16.1-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting networkx (from torch)
  Downloading networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch)
  Downloading fsspec-2024.10.0-py3-none-any.whl.metadata (11 kB)
Collecting sympy==1.13.1 (from torch)
  Downloading sympy-1.13.1-py3-none-any.whl.metadata (12 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch)
  Downloading mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting numpy (from torchvision)
  Using cached numpy-2.1.2-cp310-cp310-win_amd64.whl.metadata (59 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Using cached pillow-11.0.0-cp310-cp310-win_amd64.whl.metadata (9.3 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Using cached MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl.metadata (4.1 kB)
Downloading torch-2.5.0-cp310-cp310-win_amd64.whl (203.1 MB)
   ---------------------------------------- 203.1/203.1 MB 19.0 MB/s eta 0:00:00
Downloading sympy-1.13.1-py3-none-any.whl (6.2 MB)
   ---------------------------------------- 6.2/6.2 MB 31.7 MB/s eta 0:00:00
Downloading torchvision-0.20.0-cp310-cp310-win_amd64.whl (1.6 MB)
   ---------------------------------------- 1.6/1.6 MB 27.8 MB/s eta 0:00:00
Using cached pillow-11.0.0-cp310-cp310-win_amd64.whl (2.6 MB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached filelock-3.16.1-py3-none-any.whl (16 kB)
Downloading fsspec-2024.10.0-py3-none-any.whl (179 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Downloading networkx-3.4.2-py3-none-any.whl (1.7 MB)
   ---------------------------------------- 1.7/1.7 MB 31.1 MB/s eta 0:00:00
Using cached numpy-2.1.2-cp310-cp310-win_amd64.whl (12.9 MB)
Using cached MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl (15 kB)
Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
   ---------------------------------------- 536.2/536.2 kB 18.3 MB/s eta 0:00:00
Installing collected packages: mpmath, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, fsspec, filelock, jinja2, torch, torchvision
Successfully installed MarkupSafe-3.0.2 filelock-3.16.1 fsspec-2024.10.0 jinja2-3.1.4 mpmath-1.3.0 networkx-3.4.2 numpy-2.1.2 pillow-11.0.0 sympy-1.13.1 torch-2.5.0 torchvision-0.20.0 typing-extensions-4.12.2
Installing clip
Installing open_clip
Installing requirements
Installing onnxruntime-gpu
Installing sd-webui-controlnet requirement: fvcore
Installing sd-webui-controlnet requirement: mediapipe
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
Installing sd-webui-controlnet requirement: svglib
Installing sd-webui-controlnet requirement: addict
Installing sd-webui-controlnet requirement: yapf
Installing sd-webui-controlnet requirement: changing albumentations version from None to 1.4.3
Installing sd-webui-controlnet requirement: changing timm version from 1.0.11 to 0.9.5
Installing sd-webui-controlnet requirement: changing pydantic version from 1.10.18 to 1.10.17
Installing sd-webui-controlnet requirement: changing controlnet_aux version from None to 0.0.9
Installing sd-webui-controlnet requirement: insightface
Installing sd-webui-controlnet requirement: handrefinerportable
Installing sd-webui-controlnet requirement: depth_anything
Installing sd-webui-controlnet requirement: depth_anything_v2
Installing sd-webui-controlnet requirement: dsine
Installing sd-webui-prompt-all-in-one: execjs
Installing sd-webui-prompt-all-in-one: pathos
Installing sd-webui-prompt-all-in-one: cryptography
Installing sd-webui-prompt-all-in-one: openai
Installing sd-webui-prompt-all-in-one: boto3
Installing sd-webui-prompt-all-in-one: aliyunsdkcore
Installing sd-webui-prompt-all-in-one: aliyunsdkalimt
You are up to date with the most recent release.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute '_ORTDiffusionModelPart'
ControlNet preprocessor location: C:\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-10-26 17:06:54,755 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from C:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-10-26 17:06:55,504 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
Creating model from config: C:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
    raise head_call_error
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-671d0590-34b6aa48449ce13f4b06a0d9;ff16833d-e2dd-4bd7-98c7-47744cb8b8fe)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3505, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
open_clip_pytorch_model.bin:   2%|█▏                                               | 241M/10.2G [00:03<02:37, 63.1MB/s]
To create a public link, set `share=True` in `launch()`.
open_clip_pytorch_model.bin:   2%|█▏                                               | 252M/10.2G [00:04<06:16, 26.3MB/s]Startup time: 320.7s (prepare environment: 325.8s, initialize shared: 1.1s, load scripts: 1.7s, create ui: 0.6s, gradio launch: 6.6s, app_started_callback: 1.0s).
open_clip_pytorch_model.bin: 100%|████████████████████████████████████████████████| 10.2G/10.2G [02:44<00:00, 61.8MB/s]
C:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return unsafe_torch_load(filename, *args, **kwargs)
Applying attention optimization: sub-quadratic... done.
Model loaded in 191.1s (load weights from disk: 0.9s, create model: 175.4s, apply weights to model: 12.1s, apply float(): 2.0s, calculate empty prompt: 0.6s).
100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [05:07<00:00, 153.58s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 2/2 [02:03<00:00, 61.53s/it]

@lshqqytiger
Owner

lshqqytiger commented Oct 26, 2024

ROCm: no agent was found
ROCm: version=6.1
Failed to load ZLUDA: Could not find module 'C:\Program Files\AMD\ROCm\6.1\bin\rocblas.dll' (or one of its dependencies). Try using the full path with constructor syntax.

Why did the HIP SDK version change? As far as I know, HIP SDK 6.1 does not support your GPU, gfx803. Try again with 5.7.
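The rocblas.dll load failure above can be checked before launching; a minimal sketch (assuming the HIP SDK installer set the HIP_PATH environment variable, as it normally does on Windows — this is an assumption about your install):

```python
import os

def find_rocblas():
    """Return the path to rocblas.dll under the installed HIP SDK, or None if missing."""
    hip = os.environ.get("HIP_PATH")  # set by the AMD HIP SDK installer on Windows
    if not hip:
        return None
    candidate = os.path.join(hip, "bin", "rocblas.dll")
    return candidate if os.path.exists(candidate) else None

if __name__ == "__main__":
    print(find_rocblas() or "rocblas.dll not found; check which HIP SDK version is installed")
```

If HIP_PATH still points at the 6.1 folder after installing 5.7, the launcher will keep looking in the wrong place.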

@picarica
Author

Tried it; that's fixed, but it's still CPU-only:



venv "C:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-13-g7c946520
Commit hash: 7c94652083ef40c2cabe384783282c20d2979870
ROCm: agents=['gfx803']
ROCm: version=5.7, using agent gfx803
ZLUDA support: experimental
Using ZLUDA in C:\ZLUDA
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
Installing sd-webui-controlnet requirement: changing pydantic version from 1.10.18 to 1.10.17
You are up to date with the most recent release.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute '_ORTDiffusionModelPart'
ControlNet preprocessor location: C:\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-10-26 18:30:31,817 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from C:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-10-26 18:30:32,540 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
Creating model from config: C:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1240, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1347, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1854, in _raise_on_head_call_error
    raise head_call_error
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1751, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 400, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 352, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-671d192a-7aca03e441b9b19a5272c06b;2a81f9ea-6a93-4ae0-a019-767663b82bf1)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3505, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
C:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return unsafe_torch_load(filename, *args, **kwargs)

To create a public link, set `share=True` in `launch()`.
Startup time: 27.1s (prepare environment: 25.7s, initialize shared: 1.3s, load scripts: 1.8s, create ui: 0.7s, gradio launch: 6.6s, add APIs: 0.2s, app_started_callback: 1.1s).
Applying attention optimization: sub-quadratic... done.
C:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return unsafe_torch_load(filename, *args, **kwargs)
Model loaded in 21.3s (load weights from disk: 0.9s, create model: 9.7s, apply weights to model: 8.6s, apply float(): 1.6s, calculate empty prompt: 0.5s).
 13%|███████████                                                                        | 4/30 [06:21<38:39, 89.22s/it]
Total progress:  13%|████████▉                                                          | 4/30 [04:06<29:02, 67.02s/it]


@lshqqytiger
Owner

CPU-only torch was already installed at the first launch, the one that failed after wiping venv. You can manually uninstall torch (a few more steps) or just wipe venv again.
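The "wipe venv again" route can be scripted; a minimal sketch (assumes the default venv location inside the repo — the launcher rebuilds the venv, and with it a ZLUDA-compatible torch, on the next run):

```python
import pathlib
import shutil

def wipe_venv(repo_root: str) -> bool:
    """Delete the venv folder so the launcher rebuilds it from scratch.

    Returns True if the venv folder is gone afterwards.
    """
    venv = pathlib.Path(repo_root) / "venv"
    if venv.exists():
        shutil.rmtree(venv)
    return not venv.exists()

# Example (hypothetical path from the logs above):
# wipe_venv(r"C:\stable-diffusion-webui-amdgpu")
```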

@picarica
Author

Okay, I think I got it. It was compiling all that stuff, but it feels slow and I don't see any GPU usage, nor CPU; it's faster than CPU, so I have no idea what it's doing. Also, I installed ZLUDA in a C:\ folder following this guide:
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides#amd-forge-webui-with-zluda
Should I use the built-in ZLUDA that it downloads instead?

venv "C:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-13-g7c946520
Commit hash: 7c94652083ef40c2cabe384783282c20d2979870
ROCm: agents=['gfx803']
ROCm: version=5.7, using agent gfx803
ZLUDA support: experimental
Using ZLUDA in C:\ZLUDA
Installing requirements
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
You are up to date with the most recent release.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute '_ORTDiffusionModelPart'
ControlNet preprocessor location: C:\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-10-26 20:47:03,830 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from C:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
2024-10-26 20:47:04,557 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
Creating model from config: C:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
    raise head_call_error
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-671d392a-59c5d4755eeae9bf371d5c5c;e1860495-f1c9-4f31-9c5e-b050c639f39d)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3505, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.

To create a public link, set `share=True` in `launch()`.
Startup time: 29.9s (prepare environment: 25.9s, initialize shared: 3.1s, load scripts: 1.9s, create ui: 0.5s, gradio launch: 6.9s, app_started_callback: 1.2s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 24.3s (load weights from disk: 1.0s, create model: 10.5s, apply weights to model: 8.8s, apply float(): 2.2s, calculate empty prompt: 1.7s).
Compiling in progress. Please wait...
  0%|                                                                                           | 0/30 [00:00<?, ?it/s]Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
 80%|█████████████████████████████████████████████████████████████████▌                | 24/30 [18:44<03:51, 38.63s/it]
Total progress:  80%|████████████████████████████████████████████████████▊             | 24/30 [15:17<03:51, 38.61s/it]


@picarica
Author

I have no idea why ZLUDA is trying to use so much of my GPU memory:
OutOfMemoryError: CUDA out of memory. Tried to allocate 15.06 GiB. GPU 0 has a total capacity of 8.00 GiB of which 0 bytes is free. Of the allocated memory 4.12 GiB is allocated by PyTorch, and 6.71 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

I have been generating the same images using DirectML, and I heard ZLUDA uses less memory, not more. I also tried ZLUDA with the amdgpu-forge repository: same issue, I can't generate a second image because it crashes.
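The OOM message itself suggests one mitigation for the "reserved but unallocated" memory: setting `PYTORCH_CUDA_ALLOC_CONF`. A sketch of where that could go, assuming the webui is launched via `webui-user.bat`:

```shell
rem Sketch: add near the top of webui-user.bat (Windows batch).
rem expandable_segments:True is the allocator option the OOM error
rem recommends to reduce fragmentation of PyTorch's reserved memory.
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

This does not shrink what the model itself needs; it only lets PyTorch reuse reserved segments more flexibly.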

@lshqqytiger
Owner

ZLUDA does use less memory on the latest AMD cards, such as the RX 7000 or RX 6000 series, and it is much faster (specifically on high-end cards). However, with older cards, the gap between DirectML and ZLUDA is very small; with such cards, in some cases, DirectML outperforms ZLUDA.

@picarica
Author

picarica commented Oct 27, 2024

Got it. Well, I couldn't make it work, so I wanted to roll back to DirectML, but it doesn't launch. Did something get broken on my end?


venv "C:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-13-g7c946520
Commit hash: 7c94652083ef40c2cabe384783282c20d2979870
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
You are up to date with the most recent release.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-directml --disable-safe-unpickle --update-check --listen
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute '_ORTDiffusionModelPart'
ControlNet preprocessor location: C:\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-10-27 11:07:13,946 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [9b3733e5e4] from C:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\lemonMix_v30.safetensors
ERROR:fastapi:Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

*** Error calling: C:\stable-diffusion-webui-amdgpu\extensions-builtin\extra-options-section\scripts\extra_options_section.py/ui
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-amdgpu\modules\scripts.py", line 535, in wrap_call
        return func(*args, **kwargs)
      File "C:\stable-diffusion-webui-amdgpu\extensions-builtin\extra-options-section\scripts\extra_options_section.py", line 30, in ui
        with gr.Blocks() as interface:
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\blocks.py", line 1531, in __exit__
        self.app = routes.App.create_app(self)
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\routes.py", line 221, in create_app
        def login(form_data: OAuth2PasswordRequestForm = Depends()):
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 659, in decorator
        self.add_api_route(
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 598, in add_api_route
        route = route_class(
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 452, in __init__
        self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id)
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\dependencies\utils.py", line 766, in get_body_field
        check_file_field(final_field)
      File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\dependencies\utils.py", line 111, in check_file_field
        raise RuntimeError(multipart_not_installed_error) from None
    RuntimeError: Form data requires "python-multipart" to be installed.
    You can install "python-multipart" with:

    pip install python-multipart


---
ERROR:fastapi:Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
    main()
  File "C:\stable-diffusion-webui-amdgpu\launch.py", line 44, in main
    start()
  File "C:\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 717, in start
    webui.webui()
  File "C:\stable-diffusion-webui-amdgpu\webui.py", line 64, in webui
    shared.demo = ui.create_ui()
  File "C:\stable-diffusion-webui-amdgpu\modules\ui.py", line 303, in create_ui
    with gr.Blocks(analytics_enabled=False) as txt2img_interface:
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\blocks.py", line 1531, in __exit__
    self.app = routes.App.create_app(self)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\routes.py", line 221, in create_app
    def login(form_data: OAuth2PasswordRequestForm = Depends()):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 659, in decorator
    self.add_api_route(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 598, in add_api_route
    route = route_class(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 452, in __init__
    self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\dependencies\utils.py", line 766, in get_body_field
    check_file_field(final_field)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\dependencies\utils.py", line 111, in check_file_field
    raise RuntimeError(multipart_not_installed_error) from None
RuntimeError: Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

Creating model from config: C:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
    raise head_call_error
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-671e10d2-33d3246b05c83e6c29b5c944;54373e17-9cab-4a7b-9817-2fa61dd1af86)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\picarica\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3505, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
C:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return unsafe_torch_load(filename, *args, **kwargs)
Applying attention optimization: sub-quadratic... done.
Model loaded in 10.5s (load weights from disk: 0.1s, create model: 5.6s, apply weights to model: 3.5s, apply half(): 0.2s, calculate empty prompt: 1.0s).
Press any key to continue . . .

I tried multiple times, removing the venv folder each time.

@CS1o

CS1o commented Oct 28, 2024

Don't use these options:
--opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half --device-id 1
A lot of them slow you down massively.

Only use the ones provided by my guide:
--use-zluda --update-check --skip-ort
and, additionally for your GPU, --medvram-sdxl or --medvram.

If you want to use SDXL models fast, you need to make sure you have 32 GB of RAM.
If not, you have to increase your Windows pagefile to 24000 max and 16000 min.
Then reboot the PC.
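Put together, a `webui-user.bat` along those lines might look like this (a sketch; `set COMMANDLINE_ARGS=` is the standard place for launch flags in this webui, and `--medvram-sdxl` is the variant suggested above for SDXL on 8 GB cards):

```shell
rem Sketch of webui-user.bat with only the flags from the guide.
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-zluda --update-check --skip-ort --medvram-sdxl
call webui.bat
```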

The error from your log:

ERROR:fastapi:Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

can be fixed by uninstalling all Python versions via the Control Panel and then installing only Python 3.10.11 64-bit.
Then delete the venv folder and relaunch webui-user.bat.
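If wiping everything feels heavy-handed, the missing package can also be installed into the existing venv directly, as the error text itself suggests (a sketch; the path assumes the default install directory from the logs):

```shell
rem Install the missing dependency into the webui's own venv,
rem not the system Python, so FastAPI can find it at launch.
cd C:\stable-diffusion-webui-amdgpu
call venv\Scripts\activate.bat
pip install python-multipart
```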

@picarica
Author

Without all of them, my VRAM fills up instantly, even with --lowvram. I don't know why it doesn't get freed up.

@CS1o

CS1o commented Oct 28, 2024

Without all of them, my VRAM fills up instantly, even with --lowvram. I don't know why it doesn't get freed up.

Can you show a cmd log when using only --use-zluda --update-check --skip-ort --medvram-sdxl?
Delete the venv folder before launching.

@picarica
Author

Got it, @CS1o. I used your launch arguments, but it still doesn't work:


./webui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on picarica user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################
Requirement already satisfied: pip in ./venv/lib/python3.10/site-packages (24.2)
WARNING: Cache entry deserialization failed, entry ignored
Collecting pip
  Using cached pip-24.3.1-py3-none-any.whl.metadata (3.7 kB)
Using cached pip-24.3.1-py3-none-any.whl (1.8 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 24.2
    Uninstalling pip-24.2:
      Successfully uninstalled pip-24.2
Successfully installed pip-24.3.1

################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.15 (main, Oct 26 2024, 14:58:03) [GCC 13.3.1 20240614]
Version: v1.10.1-amd-15-gcf6c4e97
Commit hash: cf6c4e9765abe987e68a94006cf61672a076042c
ROCm: agents=['gfx803']
ROCm: version=5.7, using agent gfx803
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.3.1
  Using cached https://download.pytorch.org/whl/cu121/torch-2.3.1%2Bcu121-cp310-cp310-linux_x86_64.whl (781.0 MB)
Collecting torchvision
  Downloading https://download.pytorch.org/whl/cu121/torchvision-0.20.1%2Bcu121-cp310-cp310-linux_x86_64.whl (7.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 46.2 MB/s eta 0:00:00
Collecting filelock (from torch==2.3.1)
  Using cached filelock-3.16.1-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch==2.3.1)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.3.1)
  Using cached sympy-1.13.3-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.3.1)
  Using cached networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch==2.3.1)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.3.1)
  Using cached fsspec-2024.10.0-py3-none-any.whl.metadata (11 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
Collecting nvidia-nccl-cu12==2.20.5 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/cu121/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Collecting triton==2.3.1 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/triton-2.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.1 MB)
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch==2.3.1)
  Using cached nvidia_nvjitlink_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting numpy (from torchvision)
  Using cached numpy-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.20.0%2Bcu121-cp310-cp310-linux_x86_64.whl (7.3 MB)
  Using cached torchvision-0.20.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.1 kB)
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.19.1%2Bcu121-cp310-cp310-linux_x86_64.whl (7.1 MB)
  Using cached torchvision-0.19.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.0 kB)
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.19.0%2Bcu121-cp310-cp310-linux_x86_64.whl (7.1 MB)
  Using cached torchvision-0.19.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.0 kB)
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.18.1%2Bcu121-cp310-cp310-linux_x86_64.whl (7.0 MB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Using cached pillow-11.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (9.1 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.3.1)
  Using cached MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch==2.3.1)
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached pillow-11.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.4 MB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached filelock-3.16.1-py3-none-any.whl (16 kB)
Using cached fsspec-2024.10.0-py3-none-any.whl (179 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached networkx-3.4.2-py3-none-any.whl (1.7 MB)
Using cached numpy-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)
Using cached sympy-1.13.3-py3-none-any.whl (6.2 MB)
Using cached MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)
Using cached nvidia_nvjitlink_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl (19.7 MB)
Installing collected packages: mpmath, typing-extensions, sympy, pillow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, MarkupSafe, fsspec, filelock, triton, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jinja2, nvidia-cusolver-cu12, torch, torchvision
Successfully installed MarkupSafe-3.0.2 filelock-3.16.1 fsspec-2024.10.0 jinja2-3.1.4 mpmath-1.3.0 networkx-3.4.2 numpy-2.1.2 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.6.77 nvidia-nvtx-cu12-12.1.105 pillow-11.0.0 sympy-1.13.3 torch-2.3.1+cu121 torchvision-0.18.1+cu121 triton-2.3.1 typing-extensions-4.12.2
Installing clip
Installing open_clip
Installing requirements
Skipping onnxruntime installation.
--------------------------------------------------------
| You are not up to date with the most recent release. |
| Consider running `git pull` to update.               |
--------------------------------------------------------
/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --medvram --update-check --skip-ort --listen
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Checkpoint ponyDiffusionV6XL_v6StartWithThisOne.safetensors [67ab2fd8ec] not found; loading fallback v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights [6ce0161689] from /home/picarica/stable-diffusion-webui-amdgpu/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 104.9s (prepare environment: 107.2s, initialize shared: 0.2s, other imports: 0.3s, load scripts: 0.2s, create ui: 0.4s, gradio launch: 0.1s).
Creating model from config: /home/picarica/stable-diffusion-webui-amdgpu/configs/v1-inference.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1484, in _raise_on_head_call_error
    raise head_call_error
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6720888f-105295ae28c3825c49668071;db1d3816-c041-4cb2-af64-21b75aabc7f5)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 693, in get_sd_model
    load_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3505, in from_pretrained
    resolved_config_file = cached_file(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 6.6s (load weights from disk: 0.6s, create model: 1.2s, apply weights to model: 2.0s, apply half(): 0.3s, calculate empty prompt: 2.5s).

  0%|                                                                                                               | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(j8x23p9jbd32odu)', <gradio.routes.Request object at 0x7f60150b3a30>, 'testing little guy', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_hijack_utils.py", line 36, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
        result = forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
        x = layer(x)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/picarica/stable-diffusion-webui-amdgpu/extensions-builtin/Lora/networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (float) and bias type (c10::Half) should be the same

---
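The final `RuntimeError: Input type (float) and bias type (c10::Half) should be the same` indicates the model weights were loaded in float16 while the input tensor stayed float32. A launch line that forces full fp32 everywhere might avoid the mismatch (a sketch based on the flags from the original Windows command; exact flags may need adjusting for your setup):

```shell
# Force full fp32 so no half-precision weight/bias mismatch can occur
# (gfx803 cards generally lack a usable fp16 path anyway)
./webui.sh --use-zluda --medvram --precision full --no-half --no-half-vae
```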

@CS1o

CS1o commented Oct 29, 2024

Why are you now using Linux and webui.sh?

@picarica
Author

Oh, did I forget to mention it? Or maybe I mixed up the issues. I wanted to get ZLUDA working on Linux, but I was trying it on both Windows and Linux and it worked on neither. I want to move my SD install to Linux so I can use it there.

@CS1o

CS1o commented Oct 31, 2024

Yeah, you didn't mention it.
ZLUDA is not that practical on Linux, since Linux has ROCm, which supports AMD GPUs natively.
With an RX 570 you would need to install the ROCm 5.7 drivers, as support for it was dropped in 6.1.
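To pair a ROCm 5.7 driver with a matching PyTorch, something like the following might work inside the webui venv (an assumption: torch 2.2.2 was the last release line published against the rocm5.7 wheel index; note that even official ROCm wheels do not ship gfx803 kernels, so a source build or community wheels may still be required):

```shell
# Hypothetical: install a torch build compiled against ROCm 5.7 into the active venv
pip install torch==2.2.2 torchvision==0.17.2 --index-url https://download.pytorch.org/whl/rocm5.7
```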

@picarica
Author

picarica commented Nov 3, 2024

Interesting. I tried that once, but I couldn't figure out which libraries I need for native ROCm support. Is a working rocm-smi enough, or do I also need rocBLAS, rocFFT and the like? I'm not sure what I need or how to debug it, or how to find out. I'm running Gentoo.

@picarica
Author

picarica commented Nov 4, 2024

I tried it, but I get these HIP errors and I don't know why. I have HIP installed (emerged), and rocminfo and rocm-smi report the correct info:


./webui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on picarica user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/picarica/stable-diffusion-webui-amdgpu/venv
################################################################

################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4
Python 3.10.15 (main, Oct 26 2024, 14:58:03) [GCC 13.3.1 20240614]
Version: v1.10.1-amd-16-g4730df18
Commit hash: 4730df185b557f1453a0f5f79ffd1fa7b36aae54
ROCm: agents=['gfx803']
ROCm: version=5.7, using agent gfx803
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: 
ONNX: version=1.20.0.dev20240815001+rocm57 provider=DmlExecutionProvider, available=['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
*** Error loading script: main.py
    Traceback (most recent call last):
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/scripts.py", line 515, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "/home/picarica/stable-diffusion-webui-amdgpu/modules/script_loading.py", line 13, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/home/picarica/stable-diffusion-webui-amdgpu/extensions/openpose-editor/scripts/main.py", line 14, in <module>
        from basicsr.utils.download_util import load_file_from_url
    ModuleNotFoundError: No module named 'basicsr'

---
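The `ModuleNotFoundError: No module named 'basicsr'` above is unrelated to ROCm; the openpose-editor extension just imports a package the venv doesn't have. A likely fix, assuming the extension only needs that one dependency installed:

```shell
# Activate the webui venv, then install the missing package the extension imports
source venv/bin/activate
pip install basicsr
```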
ControlNet preprocessor location: /home/picarica/stable-diffusion-webui-amdgpu/extensions/sd-webui-controlnet/annotator/downloads
2024-11-04 11:59:08,744 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Checkpoint ponyDiffusionV6XL_v6StartWithThisOne.safetensors [67ab2fd8ec] not found; loading fallback v1-5-pruned-emaonly.safetensors
Calculating sha256 for /home/picarica/stable-diffusion-webui-amdgpu/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: 2024-11-04 11:59:12,025 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:7861

To create a public link, set `share=True` in `launch()`.
Startup time: 61.4s (prepare environment: 58.9s, initialize shared: 1.7s, load scripts: 7.6s, create ui: 2.2s, gradio launch: 3.1s, app_started_callback: 10.2s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /home/picarica/stable-diffusion-webui-amdgpu/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/picarica/stable-diffusion-webui-amdgpu/configs/v1-inference.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1484, in _raise_on_head_call_error
    raise head_call_error
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 277, in _request_wrapper
    response = _request_wrapper(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 301, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67289b09-1311f3cb394f657d292c7633;85f53a85-b862-4932-89b3-da35d8de5c20)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 693, in get_sd_model
    load_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/picarica/stable-diffusion-webui-amdgpu/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3506, in from_pretrained
    resolved_config_file = cached_file(
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 693, in get_sd_model
    load_model()
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 856, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_models.py", line 440, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2175, in load_state_dict
    load(self, state_dict)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2163, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2163, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2163, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  [Previous line repeated 1 more time]
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2157, in load
    module._load_from_state_dict(
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "/home/picarica/stable-diffusion-webui-amdgpu/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 4964, in zeros_like
    res.fill_(0)
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.



Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.

rocBLAS error: Cannot read /home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx803
 List of available TensileLibrary Files : 
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
"/home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
./webui.sh: line 304: 74406 Aborted                 (core dumped) "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"

picarica commented Nov 4, 2024

Also, about the last part: why does the gfx803 file not exist?
rocBLAS error: Cannot read /home/picarica/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx803

picarica commented Nov 4, 2024

Okay, I fixed the last error with:


[picarica@gentoo ~/stable-diffusion-webui-amdgpu/venv/lib/python3.10/site-packages/torch/lib/rocblas]
$ ln -s /usr/lib64/rocblas/library library

I just symlinked it to my system libraries, where I compiled ROCm for gfx803.

But it still cannot create the model:

RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
