iscussion of Free and Open Source Diffusion Models
Prev: >>108066594

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg
Oh neat a troll bake
>>108071064
I thought you were banned?
Why change the OP from the template in the OP rentry?
>>108071084It's called a hijack
>>108071073
>he respects them
I just say no and post more.
>>108071088why didn't you bake?
>>108071097ME??!!!
>julienbake
>>108071052
>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>108071110that's hot
>>108071052
Why is AniStudio not in the OP?
>>108070488
If he had posted it with a chart showing same-seed examples of the ~2000 "style clusters" to begin with, people might have taken it better. Without that, though, the model is just a gotcha. I'll never understand why he even bothered to do the clusters if he wasn't going to document them.
apache2 anima when?
Any tutorials on video gens? I want to make good bouncing tits
All rights reserved models when?
>>108071327
>write what you want to happen in the prompt, simply and direct
>load appropriate loras if applicable
>????
>profit
I'm getting impatient for z-edit, bros
give me kinosovl or give me death
>>108071500cool
Does the base Klein model use more VRAM for whatever reason? When I tried it, it took so much longer per step, it was abhorrent.
I heard a rumor about Anima:
booru tags first, then natural language = anime style
natural language first, then booru tags = cartoony
>>108071574
My very rough guess as to why: undertraining, so the NL side bleeds more from the base model. The NAI leaked model was like that too.
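Purely to illustrate the rumored ordering (these prompts are made up, not from anyone's tests):
tag-first: 1girl, solo, long hair, school uniform, cherry blossoms. A girl stands beneath a blooming tree at sunset.
NL-first: A girl stands beneath a blooming tree at sunset. 1girl, solo, long hair, school uniform, cherry blossoms.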
>>108071557
Well, cfg > 1, so it would double compared to cfg = 1.
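To spell that out: with cfg > 1 the sampler runs the model twice per step (conditional and unconditional) and blends the two predictions, so per-step time and activation memory roughly double versus a cfg = 1 distilled model. A minimal sketch of the idea (names are illustrative, not ComfyUI's actual API):
[code]
# Minimal sketch of why cfg > 1 costs roughly 2x per step:
# the sampler calls the model twice and mixes the two predictions.
def cfg_denoise(model, x, sigma, cond, uncond, cfg_scale=1.0):
    if cfg_scale == 1.0:
        # distilled / turbo style: a single forward pass per step
        return model(x, sigma, cond)
    # guidance: two forward passes per step, hence roughly double the time
    pred_cond = model(x, sigma, cond)
    pred_uncond = model(x, sigma, uncond)
    return pred_uncond + cfg_scale * (pred_cond - pred_uncond)
[/code]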
So our thread got hijacked, as this post says >>108071088. Is there a protocol for how to proceed?
>>108071702To fix it we need to post various 1boys kneeling
>>108071608cute
>>108071702
What do you mean?
Oh wait, I see:
>iscussion of Free and Open Source Diffusion Models
They intentionally deleted the "D". I can't believe they would vandalize the OP like this.
shitting myself saars
Blessed thread of frenship
I'm once again asking for an Anima NAG implementation.
>>108071899vibe code it
>>108071923vibes was off
base is a shit
How did they make Anima so good?
>>108071911try with someone not famous.
>>108071110He needs to go to the horspital.
>>108072139
Just imagine if they had used an actual LLM, the 4B one. Why would they use 0.6B? The text encoder would offload to CPU anyway and barely slow things down. A real shame.
>>108071775lmao
>>108072139well now they will have the funds to try a bigger text encoder if they want
>>108072095examples?
>>108072176
>why would they use 0.6b
There is an extremely good reason for this, and if you use your brain you can figure it out.
>>108071052
Is the Arc B580 a good slopping card? I found one pretty cheap (and every other 12GB+ card is already as expensive as a 5070 at MSRP). I read the nip benchmark and it said:
>"If you're allergic to python or git or don't feel like you can create a workflow for VAE splitting, I wouldn't recommend it."
Does that mean I have to fiddle with files instead of the UI or something? The benchmark is also pretty old, so I want to know whether Intel slopping is still maintained and improving.
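For what it's worth, "VAE splitting" in that benchmark most likely means tiled/sliced VAE decode, i.e. decoding the latent in chunks so the decode step fits in limited VRAM; ComfyUI exposes this as a tiled VAE decode node, and in diffusers it's a one-liner. A minimal sketch, assuming a diffusers SDXL pipeline on a CUDA-style setup (the model ID and prompt are just examples; an Arc card would use the XPU/IPEX backend instead):
[code]
# Minimal sketch of tiled VAE decode with diffusers (illustrative, not a B580 recipe).
# enable_vae_tiling() decodes the latent in tiles so the VAE step doesn't spike VRAM.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example model ID
    torch_dtype=torch.float16,
)
pipe.enable_vae_tiling()          # tile the VAE decode instead of one big tensor
pipe.enable_model_cpu_offload()   # optional: lowers peak VRAM further (needs accelerate)

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
image.save("out.png")
[/code]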
>>108072176enlighten us, faglord
>>108072238
The text encoder offloads to CPU only in inference, and only in Comfy. Make the TE 4B and now people with 8GB GPUs can't train loras because the training script can't load the TE.
SDXL won because people with dogshit gaming GPUs can still train loras. None of you want to admit that for some reason.
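A quick back-of-envelope check on that argument (weights only, bf16, ignoring activations, gradients, and the diffusion model itself, so the real footprint would be higher):
[code]
# Rough weight-memory math for the text encoder size argument above.
def weight_gb(params_billion, bytes_per_param=2):  # 2 bytes/param for bf16/fp16
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(f"0.6B text encoder: ~{weight_gb(0.6):.1f} GB")  # ~1.1 GB
print(f"4B   text encoder: ~{weight_gb(4.0):.1f} GB")  # ~7.5 GB, most of an 8 GB card
[/code]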
>>108072431like it
do we have to do a second exodus to get away from avatarfags again?
>>108072193i think it would be really cool if you posted this or a variation of it three to four times a day for the next month or so
>>108071327
Don't use tit bouncing LoRAs unless you really, really want bouncing tits. They make movement all weird otherwise.
>>108072489it's culture
>>108072507i think it would be really cool if you posted this or a variation of it three to four times a day for the next month or so
>>108072453how many total members does this server actually have lmao? it can't be very many, it's not linked from anywhere as far as I can tell
>>108072507bruh, no offense and shit, but she's ugly. she looks like chucky daughter or something. she's certainly cool, but still ugly
>>108072507sexo with jebby
>>108072095
>how did they make a snake oil that isn't as bad as other snake oils?
They didn't. Reminder that local has 0 researchers. Your entire finetune pipeline is shoveling garbage into a model and crying when it doesn't magically surpass SDXL. You still rely on outdated LoRA tech. You'll probably get a bloated $5M Pony v10 trained on even more slop before you ever see real innovation in local, because improving takes technical knowledge, not a couple of rented GPUs and money from Comfy.
>>108072617
>outdated lora tech
Is there something better than LoRA out there for adding new data quickly? Genuine question.
>>108072631i think it would be really cool if you posted this or a variation of it three to four times a day for the next month or so
>>108072634
No. And add outdated tagging models to the list. The entire local finetuning workflow is neolithic. Lodestones could burn through $5M and still push out failbakes because he doesn't have dedicated researchers on payroll like any SaaS company does.
>>108072665You are pretty clever.
>>108072670for you
>>108072124
>>108072634
>>108072665
>outdated lora tech
>is there something better?
>no
Lmao
>>108072453>no image found in the archiveso you are the mentally ill catjaknigger who is avoiding his ban... not the first time this is happening.
>>108072697
>>108072846Nice
>>108072846>redacted pussy
>>108072835
I just looked at this shit for the first time and it's so outdated lmao. Why is SarahPeterson in the community profiles, kek? Multiple other users 404 on CivitAI now too.
>>108072861he turned into a furry within a year.
What are some translation models available for Comfy that can do NSFW, especially Chinese?
Okay, I’m reading /ldg/cord and I’m 100% Team AnimAnon. AnimAnon is the only one with the balls to push back against /ldg/cord, which is just Comfy + Comfy simps. I’d make a fake email just to join and back him up.
>>108073005yes "anon" you're 100% on team thread spammer nothing suspicious about that
>>108072697>>108072846clean style
Catjak is the typical snitch, teacher’s pet cartoon kid.
>>108073010Fuck off. At least he has more balls than you, he dares to stand up to your Node overlord. Siding with Comfy doesn’t make you successful like him. You’re still the same loser and retard.
>>108073035lol
>>108073035ani you are too retarded to pull this off
Best local image-to-video with low VRAM? Grok doesn't work since it's not local. I tried LTX-2 but it doesn't load for some reason. Wan 2.5/2.6 seems to use credits. I tried Wan 2.2 but it also doesn't seem to load. What else can I try?
Oh mmfh Comfy… schllrrp thanks for yet another half baked anime model… slrps almost as good as SDXL… shlrrrk~mff two more fine tunings… sluuurp and*hah*thanks for the Apache license too… schlrrp*pop*.
>>108073101Well LTX-2 and Wan 2.2 are the best you're getting for a while so you better figure it out.
>>108073005this discord literally is not linked from anywhere and it's not like we can somehow DM each other here, there's no way anyone new has joined it recently
>>108073101animatediff
Am I fucking retarded? I can't find anything on how to use Qwen to translate text in Comfy.
>>108073101
>I tried LTX-2 but it doesn't load for some reason
So you just said "Oh I guess it doesn't work then" and fucked off? How do people bumble through life with this little curiosity?
>>108073181
Well, if you must ask, I did try debugging it, but nothing fixed it.
>>108073113
I got Wan 2.2 to work. I just had to lower the resolution. I guess I might have hit a limitation of my 1050 Ti. I will continue to experiment.
>>108072611
>facebookroastieshatingoncutejapanesevollyballplayer.jpg
>>108072615
This guy has the right idea!
>>108073174
Maybe I am feeding the troll, but why aren't you using llama.cpp or text-generation-webui for this?
>>108073349
I'd rather stick to software I know. IIRC I installed something with llama. What webui would I use?
>>108071052ty 4 fagollage
>>108073375
>Local Text
>>>/g/lmg
>>108071270
>I'll never understand why he even bothered to do the clusters if he wasn't gonna document them
There was a rumor that he had a version of v6 without the artists obfuscated, for his own use. Regardless of whether that's actually true, his reasoning doesn't matter. He's a huge fag who fucked with the artist tags of both models on purpose. At least now he's withered away into obscurity.
>mfw I still have ponyv7 saved on my nvme
might declass it to the archival NAS soon
cute girls doing cute things
>>108073375
Well, Comfy isn't designed to run LLMs; the existing LLM capabilities are mainly for supplementing image generation (e.g. prompt enhancement), so they are limited and run far more clunkily than an LLM frontend.
I use text-generation-webui for local models, as I mentioned. There are also other frontends like kobold.cpp that you may want to try. For API models I use open-webui.
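If you do go the llama.cpp route, here's a minimal sketch of driving it from a script for the Chinese translation use case. The model filename and port are assumptions (use whatever you actually run); the /v1/chat/completions route is llama.cpp's OpenAI-compatible server endpoint:
[code]
# Minimal sketch: translate a Chinese prompt via a local llama.cpp server,
# started e.g. with: llama-server -m qwen2.5-7b-instruct-q4_k_m.gguf --port 8080
# (model file and port are illustrative)
import requests

def translate_zh_to_en(text):
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system",
                 "content": "Translate the user's text from Chinese to English. Output only the translation."},
                {"role": "user", "content": text},
            ],
            "temperature": 0.2,
        },
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(translate_zh_to_en("一个女孩在海边奔跑"))
[/code]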
calm down passfag
I apologize for posting a r-ddit screenshot. I found this comment discussing why Anima uses a 0.6B text encoder. That has also been discussed here before, so I thought some of you might be interested in reading it. I cannot verify the correctness of anything besides Anima being based on Cosmos-Predict2-2B-Text2Image, but it seems to make sense.
>>108073406
It's not even worth archiving.
>>108073406
>>108073426
It was doomed before it even started:
https://desuarchive.org/g/thread/101382433/#q101387228
https://desuarchive.org/g/thread/101375708/#101384102
>>108073444
Yeah bro, I remember, I was the one posting the Halloween Lain pics and doing tests with samplers/style clusters.
A complete fucking disaster.
>>108073444
Lol. The only thing the brony achieved was to prove again that you can't unfuck a poorly trained shitty base model with finetuning.
fuck poors
fuck vramlets
fuck anima
fuck artists fuck coomers fuck furries fuck devs fuck avatartroons fuck sloppers fuck trainers
>>108073450>A complete fucking disaster.Well at least we got some laffs from that retard who couldn't figure out which seed had the cat kekkk
>>108073469but anima filters out the poors with older cards
Is there a way to match the VAE-encoded output of LTX without using the VAE? Can the process be reverse-engineered and replicated with a simple filter or shader?
>>108073441catbox?
>>108073497
What the fuck does this mean? Do you mean not using VAE compression for i2v? No, because diffusion works in the compressed latent space and needs the exact latent dimensions to work properly.
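To make the point concrete: the VAE encoder is a learned, multi-layer conv net that maps pixels into a compressed latent space with its own channel semantics, not a fixed filter you could replicate in a shader. A minimal sketch with an SD-style image VAE for illustration (the weights repo named here is just an example; LTX's video VAE additionally compresses the time axis):
[code]
# Sketch of what a VAE encode produces, using an SD-style image VAE as an example.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # example weights
img = torch.rand(1, 3, 512, 512) * 2 - 1        # fake image, pixels scaled to [-1, 1]
latent = vae.encode(img).latent_dist.sample() * vae.config.scaling_factor
print(latent.shape)  # torch.Size([1, 4, 64, 64]): 8x downsampled, learned channels
[/code]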
>ani you are too retarded to pull this off
My first ZIM LoRA wasn't very successful. I used stock settings in OneTrainer with a tried-and-tested dataset, and while it was picking up on the likeness a bit, it feels underbaked. I'll retry with a higher LR and timestep shift. It feels like no one knows good settings yet; suggestions vary wildly.
>My first ZIM lora wasn't very successful.
How do quadrillion b blob model enjoyers deal with the fact that we are returning to 8GB VRAM tradition?
>>108073564
>It feels like no one knows good settings yet; suggestions vary wildly
There is little to no public discussion about this for newer models; that has been the trend for a while. It seems most people just shit out garbage LoRAs with Civit defaults and don't investigate further. The handful of people interested in training higher-quality ones seem to discuss their findings behind the Discord information black hole.
Oh well. I would honestly rather not train LoRAs than subject myself to a trannycord channel.