/t/ - Torrents

  • There are 33 posters in this thread.



File: 1670434172475387.png (3.09 MB, 1024x1664)
Share any Stable Diffusion related torrents!

SD 2.0 models:

SD 2.1 models:

Anything-v3.0 model (https://www.bilibili.com/read/cv19603218)
Also this is a decent guide:

NovelAI model
I don't see this setting and I followed the page step by step
for more

my personal favourites:
Freckle Mix

Rando Mando Mix (poster over on /b/ /sdg/)

Berry Mix
File: clip skip.png (82 KB, 607x661)
Clip skip isn't a big deal, but it should be toward the bottom of the settings page.
OP, can you explain what the differences are between the 4 models provided in the two separate torrents?
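Since clip skip keeps coming up in these settings screenshots, here's a minimal stdlib-only sketch of what the setting actually selects (not real model code; the list of strings stands in for the per-layer tensors a CLIP text encoder produces):

```python
# Illustrative sketch of "clip skip": the text encoder emits one
# hidden-state output per layer, and clip skip = n means "condition
# the image model on the n-th layer counting from the end" instead
# of the final layer.

def select_clip_layer(hidden_states, clip_skip=1):
    """Pick the hidden states the image model conditions on.

    hidden_states: list of per-layer outputs, index 0 = first layer.
    clip_skip=1 is the default (final layer); clip_skip=2 is what the
    NAI-derived anime models were trained with, which is why the
    setting matters for them.
    """
    if not 1 <= clip_skip <= len(hidden_states):
        raise ValueError("clip_skip out of range")
    return hidden_states[-clip_skip]

layers = ["layer1", "layer2", "layer3", "layer4"]  # stand-ins for tensors
assert select_clip_layer(layers, clip_skip=1) == "layer4"
assert select_clip_layer(layers, clip_skip=2) == "layer3"
```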
Threads on /g/: https://boards.4channel.org/g/catalog#s=Stable%20Diffusion
Guide: https://rentry.org/voldy
News: https://rentry.org/sdupdates3
A curated collection of up to date links and information: https://rentry.org/sdgoldmine
Thanks, but why are there differences in our UIs?
Some of the text is subtly different, why? Also how do I enable dark mode lol
It was either already in dark mode when I launched it or I set that shit so early I forgot that I did. Could be an auto-detect of system-wide dark mode idk.
Retard here, why 2.0 and 2.1? Is 2.1 not just outright better? Are there more models in the 2.0 torrent?
Some say 1.5 is better, some say 2.0 or 2.1 is better, they supposedly fucked up 2.0 so they released 2.1, you might as well have all the models so you can compare results when generating images.
File: file.png (32 KB, 916x406)
there's a huge list of all the various models people have made here, with download links.
I can vouch for SD 1.5 being a very good "complete" model, this is the last one released before people threw a shit fit and started demanding stuff be removed from the training data.
you can also use all the specialized models in the above link and combine them (pic related) using the "voldy" frontend that most people use for making images.
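The combining mentioned above is just a weighted average of the two checkpoints' weights, which is what the webui's merge tab does under the hood. A minimal sketch (real checkpoints are dicts of tensors; plain floats stand in here):

```python
# Weighted checkpoint merge sketch: (1 - alpha) * A + alpha * B,
# applied key-by-key across the two state dicts.

def weighted_merge(model_a, model_b, alpha=0.5):
    """Return a merged state dict. alpha=0 gives pure A, alpha=1 pure B.

    Only keys present in both models are merged; mismatched keys are
    dropped, which is roughly what merge tools do for incompatible layers.
    """
    merged = {}
    for key in model_a.keys() & model_b.keys():
        merged[key] = (1 - alpha) * model_a[key] + alpha * model_b[key]
    return merged

a = {"unet.w": 1.0, "vae.w": 2.0}
b = {"unet.w": 3.0, "vae.w": 4.0}
out = weighted_merge(a, b, alpha=0.25)
assert out["unet.w"] == 1.5  # 0.75 * 1.0 + 0.25 * 3.0
```

This is why the mixes in this thread (Berry, AnythingGape, etc.) are described by their ingredient models and ratios: the merge itself is trivial, the recipe is the work.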
I didn't download the OP torrent but here's a little about the 3 most popular models for anyone looking for what to seed/use.

SD 1.5 is the core model, see here >>1192964

NovelAI's model is also a very good model, it is a commercial product that was apparently leaked. if you want to generate character art this is probably a straight upgrade over SD 1.5

Waifu Diffusion is the other one that can be considered essential, with training on substantial anime content.
It's my personal opinion that Anything-V3.0 has surpassed the other anime specialized models. But everyone should give all the models a spin, they each have their place depending on how you use them.

For those interested in further specializing for porn, gape60 is worth checking out. Despite being trained for niche large insertions and gape, the fact that it was trained on basically only genitalia means that it can improve other models' porn drawing capabilities. The merge with Anything-V3.0 is very good.

File: tmppfj38k4s.png (1.48 MB, 1280x960)
Imo anythingv3 is pretty much the best and most consistently good model!

If you don't mind the anime aesthetic, you can try inputting song lyrics and it will generate a great-looking character on its own.

Other models usually need more elaborate prompting to get good looking results.

Pic related was made by me inputting song lyrics and nothing else.
Just started using this and it's pretty amazing. Eyes get fucked a lot, but I managed to get around it.


I have no idea what merges to do however.
I had eye issues a lot when I first started.

The way I got around it was:
>(detailed eyes and face:1.3)
>Turn off face restore (it breaks things more often than it helps)
>Crank up the resolution. (Of course you need the high res fix then)

The last one helps especially well. I personally usually used 960x1280 (or the other way around if I wanted a pose that fits a landscape image better)

The face tag can cause the pictures to be of only a character's head and shoulders. Put (full body:1.4) at the beginning of your prompt to fix that.

It also helps a lot to ask somebody who makes good pictures for a prompt and then slowly modify it to get what you want.
These prompts will usually already include quality, artstyle, and artist prompts which help a lot.
Keep those near the front of the prompt.
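The (text:1.3) syntax used above is the webui's attention weighting: the number multiplies how strongly those words influence the image. A rough stdlib sketch of how that form gets read out of a prompt (this only handles the explicit "(text:weight)" form, not nesting or bare parentheses, so it's a simplification of what the real parser does):

```python
import re

# Match the explicit "(some text:1.3)" attention form.
ATTN = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Split a prompt into (text, weight) pairs; unweighted text gets 1.0."""
    parts, pos = [], 0
    for m in ATTN.finditer(prompt):
        if m.start() > pos:
            parts.append((prompt[pos:m.start()], 1.0))  # plain text run
        parts.append((m.group(1), float(m.group(2))))   # weighted run
        pos = m.end()
    if pos < len(prompt):
        parts.append((prompt[pos:], 1.0))
    return parts

parts = parse_weights("(full body:1.4), masterpiece, (detailed eyes and face:1.3)")
assert parts[0] == ("full body", 1.4)
assert parts[2] == ("detailed eyes and face", 1.3)
```

Knowing it's just a per-token multiplier explains the advice: keep the heavy weights few and near the front, since everything competes for the same attention budget.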
My first dabbles were with img2img, using a pre-existing photo, and it's pretty neat.

The AI is dumb as fuck but eventually gets it. A taste of the future in a way.

Pic related was my 3rd real attempt, and the og photo is a very shitty one from a videogame, with a breastplate. The AI knew immediately to draw tits.
When will there be an executable file that I can just download and use? I've failed a couple of times to install it because I'm an idiot. And I have an AMD card....
When they finally decide to stop using python and use a proper programming language instead
I'm a pretty dumb anon and it took like 5 minutes to install shit.
Check this out, works alright
It requires some more finagling on AMD but it boils down to pytorch. Basically you have to manually swap out the default Nvidia pytorch with the ROCM version. It's kind of a hack that tricks python into thinking the AMD card is an Nvidia card.
Try this either after you activate your conda environment or you activate your venv:
pip install --upgrade torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2
You may have to try rocm5.1.1 instead of rocm5.2 but the idea is the same.
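After running that pip command, you can confirm which build you actually ended up with: PyTorch version strings carry the backend as a local version tag, e.g. "1.13.1+rocm5.2" for the AMD build or "1.13.1+cu117" for the Nvidia CUDA build. In a live install you'd inspect torch.__version__; the parsing itself is sketched on plain strings here so it runs anywhere:

```python
# Tell which backend a torch wheel targets from its version string.
# The part after "+" is the local version tag pip attaches to the wheel.

def backend_of(version):
    """Return 'rocm', 'cuda', or 'cpu' for a torch version string."""
    _, _, local = version.partition("+")
    if local.startswith("rocm"):
        return "rocm"
    if local.startswith("cu"):
        return "cuda"
    return "cpu"  # plain versions like "1.13.1" are the CPU-only wheel

assert backend_of("1.13.1+rocm5.2") == "rocm"
assert backend_of("1.13.1+cu117") == "cuda"
assert backend_of("1.13.1") == "cpu"
```

If the string still says +cu117 after the install, the swap didn't take and the webui will fall back to CPU.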
>When will there be an executable file that I can just download and use?
they have a website for you guys
What were the lyrics and other settings?
File: self-improvement-guide.png (84 KB, 1123x1204)
What's the best model for realistic-ish/3dcg cosplays? Like nuns or nurses.
>>>/aco/asdg is infatuated with realistic styles so peruse those threads and see what people are using.
File: thamks anon.png (381 KB, 512x512)
This is fuckin' sweet
Are there any guides for voice acting AIs?
I'm seeing a lot of posts about them on other channels.
Had a lot of fun with everything here.
File: 00002-1809894734.png (1.22 MB, 1024x1024)
This was surprisingly easy to do. I can't believe it's only 2023 and I can say that I've pirated an Artificial Intelligence
For voice AI, see TortoiseTTS - https://github.com/neonbjb/tortoise-tts
>SD 2.1 models
how are the new versions better than the previous ones?
just follow the guide you fucking retard. my 9yo son was able to install it by himself
What do I need to download first to get started making waifus?
they are not lol

just stick to 1.5
File: 00911-2676609277-.png (1.01 MB, 768x1152)
Anything V3.0. It makes waifus even if you literally don't put anything in the prompt. Pic related. After you get the hang of it, branch out to other popular models, but you can't go wrong with Anything.
How does this compare to Waifu Diffusion 1.4, the one trained on danbooru images?
Pretty much blows Waifu Diffusion out of the water. Most danbooru tags will work, and if you're used to prompting with WD, Anything will be familiar.
On a related note, this isn't a torrent but it's a free direct download anyway. AbyssOrangeMix 3 A1 is new on the scene and incorporates a lot of models including Anything 3 and Gape60 >>1193150
It's certainly a replacement for AnythingGape and gives Anything 3 a run for its money.
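Since the booru-trained models above (Anything, NAI, WD) respond to danbooru tags, one practical detail: raw tags come with underscores and literal parentheses ("long_hair", "aqua_(konosuba)"). The usual cleanup is to swap underscores for spaces and escape parentheses so the webui doesn't read them as attention weights. A tiny sketch of that convention:

```python
# Convert a raw danbooru tag into webui prompt form:
# underscores become spaces, parentheses get escaped so they aren't
# parsed as (text:weight) attention syntax.

def booru_tag_to_prompt(tag):
    return tag.replace("_", " ").replace("(", r"\(").replace(")", r"\)")

assert booru_tag_to_prompt("detailed_eyes") == "detailed eyes"
assert booru_tag_to_prompt("aqua_(konosuba)") == r"aqua \(konosuba\)"
```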
Oh, I see. I guess AOM3 goes in the models/Lora folder?
Nah it's not a lora, it's a model in its own right. Put it in the models/Stable-diffusion folder.
Oh, the file name tricked me. I thought I read that safetensors go in that folder. So models/Stable-diffusion. Got it.
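The confusion above is common because checkpoints and LoRAs share the same extensions (.ckpt/.safetensors), so only size and origin tell them apart. A hedged sketch of the sorting logic, using the folder names from a stock AUTOMATIC1111 checkout (the 500 MB cutoff is a rough heuristic of mine, not anything official; full checkpoints run 2-7 GB, LoRAs tens to a few hundred MB):

```python
from pathlib import Path

def destination(webui_root, filename, size_bytes):
    """Guess where a downloaded model file belongs under the webui tree."""
    root = Path(webui_root)
    if not filename.endswith((".ckpt", ".safetensors")):
        raise ValueError("not a model file")
    # crude size heuristic: LoRAs are small, full checkpoints are not
    if size_bytes < 500 * 1024**2:
        folder = "models/Lora"
    else:
        folder = "models/Stable-diffusion"
    return root / folder / filename

dest = destination("stable-diffusion-webui", "AOM3A1.safetensors", 4 * 1024**3)
assert dest.parts[-2] == "Stable-diffusion"
```

When in doubt, check where you downloaded it from: model pages usually say whether a file is a full checkpoint or a LoRA.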
Wish the CPU method worked; the tutorial was easy to follow but shit just ain't working. Why do I even have 32GB of RAM
Is there a reason why Torch can't use my GPU, beyond the unhelpful passing mention in the guide's troubleshooting section? Is this common?
Depends on if your GPU is too old (or too new sometimes) or AMD.

Search for AUTOMATIC1111
I found it works where voldy doesn't.
Is it just me or is NAI oversaturated? How can I get it to look more like Anything V3? I'm only using it because some of the more obscure tags I use work better with NAI.
Just a shot in the dark, but have you tried using the Anything VAE with NAI?
Tried to download but only getting a ckpt file I don't know what to do about
Not sure what you mean. You put it in the models/Stable-Diffusion folder just like any other model. I'm not sure what you're lost on, you gotta have Stable Diffusion first and preferably already one of the other models like SD1.5/SD2.1 set up.
Ohh, I see! I kinda misunderstood. I don't have Stable diffusion and was looking for a torrent for it!
The torrents posted here are models. OP didn't explain jack shit. Stable Diffusion itself is free and open source. It's a bit dated but the voldy guide should still be relevant >>1190882
File: 00074-108465056.png (749 KB, 768x768)
Been playing around with Anything v3 a few days now and been happy with some of the results. However, I see many ai artists making like very "shiny" and much better looking pictures than I've been able to. What's the secret?
Pic related is the kind of result I'm usually getting
For these kinds of questions you're probably gonna get more engagement on the generals rather than /t/.
oh also forgot >>>/e/sdg which seems to match the style of your image the most
Aahh, I can see that! I'll ask around there! Thank you!


