Discussion of Free and Open Source Diffusion Models

Prev: >>108240824

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://thetacursed.github.io/Anima-Style-Explorer/

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg
>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
https://www.anistudio.ai/
Do not post any gens in this trollbake.
>>108248215
elaborate?
>>108248224
nah
repoastan:
Do I need to include the
>the focus of the image is..
>it's a medium closeup shot blah
in the captions or is it just token bloat?
Also, what LoRA training settings are good for a large dataset? 200+ images.
>julienbake
Local is dead
>>108248364
>Local is dead
we just got an LLM revolution with Qwen 3.5, Alibaba is cooking. wait for them to finish their Chinese New Year vacation and they'll give us Qwen Image 2.0 and Z-Image edit, trust the plan
wan 2.6 is coming soon too! just keep praying. hell, maybe we should buy some cloud credits just to show alibaba how much we love their models. i just donated stardust for illustrious 3.5 open source. vote with your wallets!!!!
>>108248382
bot
>>108248399
I'm not a bot, I can say faggot kek
>>108248382
i'm very worried about that Qwen Image 2.0. The results seem somewhat inferior to Qwen Image 2512, and it has a serious East Asian bias.
>>108248199
Why is AniStudio not in OP?
>>108248472
the important part will be the edit. like Klein: completely useless as a pure image model, great at editing
>>108248382
how did Qwen 3.5 revolutionize LLMs?
>>108248490
>>108248368
>>108248401
>>108248496
No, maybe 0.01% better than Haiku or GPT mini at some hyper-specific task.
For RP and ERP, "local models = API models" is the biggest cope meme ever.
Local LLMs will never match API models in overall quality, reasoning, and capabilities.
Has anyone tried a dual 3090 setup? i'm thinking about doing it for a meaty 48GB of VRAM. could i do it on my 10th-gen Intel motherboard?
>Local textgen eating good with Qwen 3.5
>Local video still stuck on an 8 month old model
sigh...
>>108248597
I know someone with a 3x 3090 setup.
2x is probably the max you can fit in a normal PC, provided you have something like a 1200W PSU.
anything more and you'll need to start doing some kind of mining rig setup.
>>108248597
I'm on a 24+32GB setup, but I'm also using low-power workstation cards. As the other anon points out, your biggest worry is going to be power.
>>108248654
>>108248691
I can always buy a bigger PSU. i'm worried my motherboard's chipset will cuck the cards and make them run significantly slower.
i'm using an Asus Z490-F.
>>108248751
Check the dimensions first to see if the cards even fit.
>>108248597
to do what? has there been a breakthrough for layer distribution?
>>108248811
>I'm thinking about doing it for a meaty 48 GB of VRAM
doesn't work like that, it will still be two separate 24GB cards for all intents and purposes
>>108248811
>>108248826
So how are these fags on youtube building multi-5090 setups and taking full advantage of them for AI?
>>108248844
Maybe it works for LLMs because they don't require that much speed, but if the whole model doesn't fit on each GPU by itself, the cards have to exchange information, creating a colossal bottleneck (just like RAM fallback does)
>>108248844
The youtube fags are mostly doing LLM stuff, which does benefit from pooling multiple GPUs together. With diffusion models you're still limited by the VRAM of a single card, but you can split the components up (model/clip/vae) to avoid offloading shenanigans.
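[To illustrate the component split anon describes, a minimal sketch with diffusers, assuming a recent version that supports device_map="balanced" for pipelines; the checkpoint repo and prompt are placeholders. Whole components get placed across the visible GPUs, but no single component is ever sharded between cards, so the biggest one still has to fit on one GPU.]

```python
# Sketch: component-level split across two GPUs with diffusers.
# "balanced" assigns whole components (unet, text encoders, vae) to
# different cards -- it does not shard any single component.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder checkpoint
    torch_dtype=torch.float16,
    device_map="balanced",  # spread components over all visible GPUs
)

# The pipeline moves tensors between devices for you at call time.
image = pipe("1girl, cowboy shot, night city", num_inference_steps=28).images[0]
image.save("out.png")
```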
>>108248874
shiiiiiet. back to the drawing board, i guess.
>>108248751
Your mobo won't cuck the cards. The PCIe bandwidth is fast enough.
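[If you'd rather verify than trust anon: a quick sketch using pynvml (the nvidia-ml-py bindings, assumed installed) that prints the PCIe generation and lane width each card actually negotiated, so a card stuck in a chipset slot shows up immediately.]

```python
# Sketch: print the negotiated PCIe link for every NVIDIA GPU.
# A card hanging off the southbridge/chipset will typically report x4 here.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
    nvmlDeviceGetCurrPcieLinkGeneration, nvmlDeviceGetCurrPcieLinkWidth,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        h = nvmlDeviceGetHandleByIndex(i)
        print(f"GPU {i}: {nvmlDeviceGetName(h)} "
              f"PCIe gen{nvmlDeviceGetCurrPcieLinkGeneration(h)} "
              f"x{nvmlDeviceGetCurrPcieLinkWidth(h)}")
finally:
    nvmlShutdown()
```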
>>108248751
I had a 4090 and 3090 in a 9900K system, both in waterblocks so they were in slots 1 and 2 (the bottom slot is controlled by the southbridge, which is 4x). Model load times feel slow, and time-to-first-token is higher than on a PCIe 4.0 setup, but not that bad. Actual tok/s were really good; apparently the way ollama and LM Studio parallel the layers on each card means that during generation there is barely any crosstalk. I imagine for training the performance would be shit, but I never tried that. I took the 3090 out a few weeks back to run a few tests on the single 4090 and have been too lazy to drain/refill the loop to put it back in.
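[For reference, the layer split anon describes is exposed directly in llama.cpp's Python bindings; a minimal sketch assuming llama-cpp-python built with CUDA, with the model path and split ratio as placeholders. Each card holds a contiguous slice of layers, so only small activations cross between cards per token, which is why tok/s barely suffers.]

```python
# Sketch: splitting an LLM's layers across two GPUs with llama-cpp-python
# (the same mechanism ollama / LM Studio use via llama.cpp).
from llama_cpp import Llama

llm = Llama(
    model_path="model.Q4_K_M.gguf",  # placeholder GGUF path
    n_gpu_layers=-1,                 # offload all layers to GPU
    tensor_split=[0.5, 0.5],         # fraction of the model per card
)

out = llm("The quick brown fox", max_tokens=32)
print(out["choices"][0]["text"])
```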