[Bug]: --use-zluda uses cpu, --use-directml works fine #552
Comments
Wipe
Well, it did run, but on CPU.
Why was the HIP SDK version changed? As far as I know, HIP SDK 6.1 does not support your GPU (gfx803). Try again with 5.7.
Tried the fix; it's still CPU-only.
The CPU-only torch was already installed during the first launch, which failed after wiping.
Okay, I think I got it. It was compiling all that stuff, but it feels slow, and I don't see any GPU usage, nor much CPU. It's faster than CPU, so I have no idea what it's doing. Also, I installed ZLUDA in a C:\ folder following this guide.
I have no idea why ZLUDA is trying to use so much of my GPU memory. I have been generating the same images using DirectML, and I heard ZLUDA uses less memory, not more. I also tried ZLUDA with the amdgpu-forge repository, same issue: I can't generate a second image because it crashes.
ZLUDA does use less memory on the latest AMD cards, such as the RX 7000 or RX 6000 series (specifically, high-end cards), and it is much faster. However, with older cards, the gap between DirectML and ZLUDA is very small; with such cards, in some cases, DirectML outperforms ZLUDA.
Got it. Well, I couldn't make it work, so I wanted to roll back to DirectML, but it doesn't launch. Did something break on my end? I tried multiple times, removing the venv folder.
Don't use those options; only use the ones provided by my guide. If you want to run SDXL models fast, make sure you have 32 GB of RAM. The error from your log can be fixed by uninstalling all Python versions via the System Control Panel and then installing only Python 3.10.11 64-bit.
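As a quick sanity check after that cleanup, you can confirm that exactly one 64-bit Python 3.10 resolves (command names are assumptions; on Windows, `py -3.10` may resolve differently than `python`):

```shell
# Sanity check after removing other Python versions (assumes `python`
# now resolves to the one remaining install):
python --version                                          # want: Python 3.10.11
python -c "import struct; print(struct.calcsize('P')*8)"  # want: 64 (64-bit build)
```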
Without all of them, my VRAM fills up instantly, even with --lowvram. I don't know why it doesn't get freed up.
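One possible factor, offered as a guess: torch's caching allocator keeps freed VRAM reserved rather than returning it to the driver, so monitoring tools report it as used. A sketch of allocator tuning, assuming your torch build honors this environment variable (ZLUDA presents the GPU as CUDA, so the CUDA-named variable is the relevant one):

```shell
# Sketch (assumption: honored by your torch build). On Windows cmd,
# use `set` instead of `export`. Asks the caching allocator to garbage-
# collect cached blocks sooner and caps split sizes to limit fragmentation.
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:512
```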
Can you show a cmd log when using only
Got it, @CS1o. I used your launch arguments, but it still doesn't work.
Why are you now using Linux and webui.sh?
Oh, did I forget to mention it? Or did I mix up the issues? I wanted to make ZLUDA work on Linux, but I was trying it on both Windows and Linux, and it worked on neither. I want to switch my SD install to Linux so I can use it there.
Yeah, you didn't mention it.
Interesting. I tried it once, but I couldn't figure out which libraries I need for native ROCm support. Do I only need rocm-smi working, or also rocBLAS, rocFFT, and the like? I'm not sure what I need, how to debug it, or how to find out; I'm running it under Gentoo.
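For a native ROCm torch, rocm-smi alone is not enough: torch links against the ROCm math libraries (rocBLAS, hipBLAS, MIOpen, and friends), and rocBLAS additionally needs per-architecture Tensile kernel files for your gfx target. A rough diagnostic sketch (paths assume a standard /opt/rocm layout, which a Gentoo install may place elsewhere):

```shell
# Rough diagnostic (paths assumed; adjust for your distro's layout):
rocminfo | grep -i gfx                                  # agent should list gfx803
ls /opt/rocm/lib | grep -Ei 'rocblas|hipblas|miopen'    # math libs torch needs
ls /opt/rocm/lib/rocblas/library | grep -i gfx803       # per-arch Tensile kernels
```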
Tried it, but I have those issues with HIP stuff. I don't know why; I have HIP merged, and rocminfo and rocm-smi report the correct info.
And about the last part: why does the gfx803 file not exist?
Okay, I fixed the last error: I just symlinked it to my system libraries, where I compiled ROCm for gfx803. But it cannot create the model.
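For reference, the symlink workaround described above amounts to something like the following. All paths are examples, not the poster's actual ones: the idea is to point torch's bundled rocBLAS kernel directory at a system ROCm build that contains the gfx803 Tensile objects.

```shell
# Sketch with assumed paths: replace torch's bundled rocBLAS kernel dir
# with a symlink to a system ROCm build compiled for gfx803.
SYS_LIB=/opt/rocm/lib/rocblas/library
VENV_LIB=venv/lib/python3.10/site-packages/torch/lib/rocblas/library

ls "$SYS_LIB" | grep -qi gfx803   # confirm the system build has the kernels
mv "$VENV_LIB" "$VENV_LIB.bak"    # keep the original around
ln -s "$SYS_LIB" "$VENV_LIB"
```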
Checklist
What happened?
Using ZLUDA does not work with an RX 570 on Windows 10 with the newest drivers.
Steps to reproduce the problem
clone the repository
launch with --use-zluda and these options:
--opt-sub-quad-attention --lowvram --disable-nan-check --use-zluda --disable-safe-unpickle --update-check --listen --precision full --no-half --device-id 1
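The steps above amount to roughly this invocation (the clone URL is not given in the report and is left as a placeholder; webui.bat on Windows, webui.sh on Linux):

```shell
# Sketch of the reproduction; <repository-url> is a placeholder.
git clone <repository-url> webui && cd webui
./webui.sh --use-zluda --opt-sub-quad-attention --lowvram --disable-nan-check \
  --disable-safe-unpickle --update-check --listen --precision full --no-half --device-id 1
```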
What should have happened?
It should use the GPU. It even detects the correct gfx803 in the logs, but doesn't use it.
What browsers do you use to access the UI?
No response
Sysinfo
sysinfo-2024-10-26-14-11.json
Console logs
Additional information
I wanted to use this repo on Linux, but ZLUDA never worked. I heard there were some updates and wanted to try them out, but it doesn't work either. I can only generate using DirectML, but that isn't available for Linux, which is a shame. I wish I could find out how to use ZLUDA with my GPU.