Previously, on /asdg/ >>8578561
The cartoony side: >>>/aco/csdg
No DALL-E, Meta AI or Hailuo AI generated stuff.
No futa. >>>/aco/futasdg
Report trolls.
>Guides
General: github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | rentry.org/sdg-link | rentry.org/rentrysd
Hires Fix: rentry.org/hiresfixjan23
Inpainting/Outpainting: rentry.org/drfar | rentry.org/inpainting-guide-SD
Animation: rentry.org/AnimAnon
Create LoRAs, TIs: rentry.org/59xed3 | rentry.org/simplified-embed-training
List of models/LoRAs: rentry.org/acorepo
>Auto1111 extensions
ControlNet: github.com/Mikubill/sd-webui-controlnet | rentry.org/dummycontrolnet
Regional Prompter: github.com/hako-mikan/sd-webui-regional-prompter
Tag autocomplete: github.com/DominikDoom/a1111-sd-webui-tagcomplete
>Related
>>>/b/degen
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/g/sdg
>>>/trash/sdg
>>>/trash/slop
>>>/vg/aids
>>>/vt/vtai
>>>/y/ydg
And that was the archive dump.
Let's fix the fix: >>8614027
The actual previous thread: >>8591458
>>8614027
Would it be too much to ask you to check your files for the prompt, pretty please? Or catbox it?
>>8614092
Sorry, it's stuff I picked up here and there, so no clue about the prompt.
Most likely realistic Pony plus a Jessica Rabbit LoRA.
is an RTX 4060 with 8GB VRAM ok for good local AI? what about an RTX 4070?
>>8614195
It's fine. The more VRAM the better, especially with newer models like Flux and SD3.5, but 8GB is still more than enough for SDXL/Pony and older stuff.
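for what it's worth, SDXL fits in 8GB if you lean on the memory-saving switches. rough, untested sketch with diffusers (assumes diffusers + accelerate are installed; the prompt and step count are just placeholders; cpu offload trades speed for VRAM headroom):

import torch
from diffusers import StableDiffusionXLPipeline

# fp16 halves memory vs fp32; the fp16 variant also downloads smaller weights
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
# shuffle submodules to CPU between uses: slower, but keeps peak VRAM low
pipe.enable_model_cpu_offload()
# decode the VAE in slices so the final decode doesn't spike past 8GB
pipe.enable_vae_slicing()

image = pipe("photo of a red fox in the snow", num_inference_steps=30).images[0]
image.save("fox.png")

in A1111 terms the rough equivalent is launching with --medvram (or --lowvram if it still OOMs).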
DeviantArt: HornylandAdoptables
At least Flux is good for text
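and it's easy to see for yourself. hedged sketch using diffusers' FluxPipeline with the schnell weights (model id, prompt and step count are placeholders; schnell is guidance-distilled, so guidance_scale stays 0):

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # Flux is huge; offload unless you have 24GB+

# Flux renders literal quoted text far more reliably than SD1.5/SDXL
prompt = 'a neon sign that reads "OPEN ALL NIGHT", rainy street at night'
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("sign.png")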
Off topic
Does anyone have a link to anonib?
>>8614588
>>8614612
Hot damn, nice!
>>8612827
it's pretty solid on AMD with Linux if you don't mind the slight frustration with ROCm
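quick way to check the ROCm install is actually wired up. should-work sketch (assumes a ROCm build of PyTorch; ROCm builds reuse the torch.cuda namespace, so the usual calls apply):

import torch

# torch.version.hip is set only on ROCm builds (None on CUDA builds)
print("hip build:", torch.version.hip)
print("gpu visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # smoke test: one matmul on the card
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())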
>>8614210
nice, that's exactly my budget
>>8614788
>>8614790
>>8614793
>>8614796
>>8614797
This is what AI slop looks like
>>8614195
8 gigs of VRAM is fine for gaming, but it's a little low for machine learning stuff. You can do some generations, but you'll be heavily limited by resolution. I'd look for a 12GB card at least. 3060 12GB cards go for the same price as the 4060 8GB or less, and if you're lucky you may be able to find a 4060 12GB for around the same price as the 8GB. For a little more you could get a 4060 Ti 16GB.
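before buying anything, it's worth seeing what your current card actually reports. minimal sketch (assumes an Nvidia card with a working PyTorch install):

import torch

# free/total VRAM in bytes for the current device
free, total = torch.cuda.mem_get_info()
print(f"total: {total / 2**30:.1f} GiB, free: {free / 2**30:.1f} GiB")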
>>8614905
i know, but i only use laptops, and laptops with more than 8GB VRAM are absurdly overpriced
>>8614905
>i know. but i only use laptops.
So you're talking about a mobile GPU? VRAM is still the most important factor, but be aware that mobile chips are much weaker than their desktop counterparts.
>>8614905
use an eGPU
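if you do go the eGPU route, make sure torch actually picks it up instead of the laptop's internal chip. rough sketch (assumes a CUDA build of PyTorch):

import torch

# list every GPU torch can see; the eGPU should show up alongside the mobile one
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(i, props.name, f"{props.total_memory / 2**30:.1f} GiB")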
these threads are huge advertisements for Nvidia cards. phones and laptops just won't work well enough at all
>>8615718
As always, an excellent set.
>>8615736
i'd actually consider getting a higher-VRAM AMD card desu, AMD SD is good enough for me.
>>8615754
i bought an AMD laptop when i was a teenager and had a bad time. Nvidia forever now lol
>>8615707
Holy fucking shit. This is my first time seeing AI images and I'm at a loss for words. We have unironically come so far as a society, we can generate girls like this with the click of a button. I wish she was real :(
>>8615903
this. and that's still just base Flux, which isn't really NSFW. imagine a Pony Flux or a bigASP Flux