/g/ - Technology


Thread archived.
You cannot reply anymore.




Tell me why, when I run
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
it comes back with:
ERROR: No matching distribution found for torch==1.12.1+cu113

I mean, it looks like the stuff I need is in the links, so why won't it install? No, I don't know how this works, and AI isn't helping me get this working, either. Please help me out?

I'm trying to install Stable Diffusion WebUI reForge on Linux Mint, and my outdated computer happens to use an Nvidia 470 driver. And no, I can't afford something better right now.
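
A quick sanity check before anything else, assuming the venv is already active: pip only offers wheels that match the interpreter it belongs to, and the +cu113 builds were only ever published for a narrow range of Python versions (a point that comes up later in the thread), so it's worth confirming exactly which Python is in play:

python --version
pip --version                                   # shows which interpreter this pip belongs to
python -c "import sys; print(sys.executable)"   # confirms the venv is actually the one being used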
>>
>>107591064
>Linux Mint
22.1, that is.
>>
>>107591064
>it looks like the stuff I need is in the links
Or not, I am fucking blind!

I think I finally found a link, I'm gonna try that real quick. This thread is a clusterfuck already, all hope is lost, God is dead!
>>
use venv
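
For anyone following along, the bare-bones version of that advice, assuming python3 and its venv module are installed:

python3 -m venv venv                 # create an isolated environment in ./venv
source venv/bin/activate             # use it for everything that follows
python -m pip install --upgrade pip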
>>
>>107591125
I'm already doing that, I wouldn't have gotten this far if I wasn't! It still doesn't solve the issue of the files literally not being in the link provided by PyTorch.
>>
I just need a repository with these files, all I'm getting is:
>ERROR: Could not find a version that satisfies the requirement torch==1.12.1+cu113 (from versions: 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.7.0, 2.7.1, 2.8.0, 2.9.0, 2.9.1)
>ERROR: No matching distribution found for torch==1.12.1+cu113

Turns out the link I mentioned earlier didn't have what I need, and I don't know why this has to be so difficult for a known issue that apparently never got solved. Or was it?...
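
One way to see what the cu113 index actually hosts, rather than guessing from error messages, is to read its simple index page directly; a sketch, assuming curl is available and the index keeps the usual flat-HTML (PEP 503) layout:

curl -s https://download.pytorch.org/whl/cu113/torch/ | grep -o 'torch-1\.12\.1[^<"]*' | sort -u
# each hit is a wheel filename; the cpXY part has to match your Python version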
>>
>>107591064
There is a 1.12.1+cu113 wheel for Python 3.7 -> 3.10.
Looks like Mint 22.1 ships 3.12 by default, so install a compatible Python version.
Retard
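
Mint 22.1 is Ubuntu 24.04-based, so one common route to an older interpreter (an assumption here, not something from the thread) is the deadsnakes PPA, then rebuilding the venv on 3.10 and re-running the original command:

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv
python3.10 -m venv venv              # recreate the venv on the older interpreter
source venv/bin/activate
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113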
>>
>>107591651
What a coincidence, that's what I'm doing right now, I didn't even need you to tell me!!! Let's see if it works once I'm actually finished setting everything up!
>>
Because pyshit is made by updooters, and they remove old versions from their server. So you need to find the .whl file somewhere.
Just give up on that Python crap and install koboldcpp. It has stable diffusion support with cuda and it's one .exe that just werks.
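
If the wheel does turn out to still be sitting on PyTorch's own index (the URL that surfaces later in the thread), pip can also install it straight from the URL; this only works when the cp310 tag in the filename matches the interpreter inside the venv:

pip install https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl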
>>
File: 1662355545048695.jpg (60 KB, 460x460)
>>107591768
>Just give up on that Python crap and install koboldcpp. It has stable diffusion support with cuda and it's one .exe that just werks.
it doesn't fucking work with my GPU...
>>
>>107591788
>it doesn't fucking work with my GPU...
it's not your GPU, it's your OS. Get rid of Linux crap and install Windows.
By the way if your GPU has less than 4GB of VRAM then there's no point using it. It won't work well even for SD1.5-based models.
If your GPU has less than 4GB VRAM then you have to use your CPU. It will be extremely slow. Like 5-10 minutes for a single image kind of slow.
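
If the 2GB card gets used anyway, A1111-style launchers read extra flags from COMMANDLINE_ARGS in webui-user.sh; whether reForge still honours the classic low-VRAM flag is an assumption here, so treat this as a sketch to verify against its own docs:

# webui-user.sh (assumption: reForge keeps the A1111-style launcher and this flag)
export COMMANDLINE_ARGS="--lowvram"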
>>
>>107591814
I got rid of my Windows crap to install Linux! I got tired of it breaking itself, and didn't feel like upgrading to 11.

Also, I just did this >>107591687 and it's still giving me the "The NVIDIA driver on your system is too old (found version 11040)." error, which I've had from the beginning. I installed a .whl file from:
> https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl
But I don't even know if this is enough, sure doesn't look like it, and I don't know what the fuck any of this even is. You're right about PyShit, removing the old versions is bullshit!

>If your GPU has less than 4GB VRAM then you have to use your CPU. It will be extremely slow. Like 5-10 minutes for a single image kind of slow.
I have 2GB, but even if it's only marginally faster, that's still better than nothing. As it is currently, KoboldCpp takes over an hour to generate a single image in CPU-only mode, and I'm using an Intel i7-4910MQ. I told it to use 7 threads, but who knows if it actually did; sure didn't look like it. It does use 7 for text, though, and Kobold generates text at like one word per second, so it can't get much slower than this. Ollama works faster than this, and I'm not even sure if that's using my GPU, so I don't know what's happening, and I'm too sleep deprived for this.
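
One thing worth checking before blaming the driver itself: the "driver is too old" message also shows up when a torch build made for a newer CUDA is the one actually being imported (for example, if only torch got swapped for the cu113 wheel and a cu12x copy is still winning). A quick way to see which build Python really loads:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# ideally this prints 1.12.1+cu113 and 11.3 for this setup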
>>
>>107591814
>>107591968
>But I don't even know if this is enough, sure doesn't look like it
Actually, I decided to read what the fuck I was doing, and I found the other two files. I'm installing them now to see if I can finally get reForge to work.
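
For reference, the matching companion wheels follow the same naming pattern as the torch one above; the exact filenames below are an assumption based on that pattern, so check them against the index listing before relying on them:

pip install https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp310-cp310-linux_x86_64.whl
pip install https://download.pytorch.org/whl/cu113/torchaudio-0.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl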
>>
>>107591814
>>107591968
>>107592028
Well, the UI loads now, but I'm getting a "TypeError: 'NoneType' object is not iterable", so who knows what that means...

My question is: if the files were still in the PyTorch repo, why the fuck didn't the pip command work?
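
On the "why didn't pip see them" question: pip only considers wheels whose filename tags match the running interpreter, so a cp310-only wheel is invisible to a Python 3.12 venv and the resolver reports "no matching distribution" even though the file is sitting right there on the index. The tags a given interpreter will accept can be listed directly (pip debug is marked experimental, but it exists for exactly this):

pip debug --verbose | head -n 30     # the "Compatible tags" section shows the cpXY tags this pip will match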
>>
I'm going to tell you from experience that you're a lot better off using a dedicated ML conda environment so you have all the dependencies at the correct versions.

Also, as much as it pains me, use poetry over raw pip.
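
A minimal sketch of that conda route, assuming Miniconda or Anaconda is already installed (the env name is arbitrary); pinning the interpreter at 3.10 sidesteps the missing-wheel problem entirely:

conda create -n reforge python=3.10
conda activate reforge
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113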
>>
>>107591814
>>107592231
I tried KoboldCpp again, the "oldpc" version, and it's just giving me a "ggml was not compiled with any CUDA arch" warning, and I don't know how to go about fixing that right now. I'm going to bed.
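
For whoever picks this up later: before fighting that warning, it helps to confirm what the card and driver actually report, since a prebuilt "oldpc" binary may simply not include the compute architecture of a GPU this old; a standard query:

nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv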


