Discussion of Free and Open Source Diffusion Models

Prev: >>108290374

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
Blessed thread of frenship
dude weed
>mfw Resource news

03/04/2026
>Helios: Real Real-Time Long Video Generation Model
https://pku-yuangroup.github.io/Helios-Page/
>Toward Early Quality Assessment of Text-to-Image Diffusion Models
https://github.com/Guhuary/ProbeSelect
>CFG-Ctrl: Control-Based Classifier-Free Diffusion Guidance
https://hanyang-21.github.io/CFG-Ctrl
>SIGMark: Scalable In-Generation Watermark with Blind Extraction for Video Diffusion
https://jeremyzhao1998.github.io/SIGMark-release
>Flimmer: Video LoRA training toolkit for diffusion transformer models
github.com/alvdansen/flimmer-trainer

03/03/2026
>Alibaba's Qwen tech lead steps down after major AI push
https://techcrunch.com/2026/03/03/alibabas-qwen-tech-lead-steps-down-after-major-ai-push
>Adaptive Spectral Feature Forecasting for Diffusion Sampling Acceleration
https://hanjq17.github.io/Spectrum
>Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance
https://github.com/showlab/Kiwi-Edit
>Let Your Image Move with Your Motion! -- Implicit Multi-Object Multi-Motion Transfer
https://ethan-li123.github.io/FlexiMMT_page
>Neural Discrimination-Prompted Transformers for Efficient UHD Image Restoration and Enhancement
https://github.com/supersupercong/uhdpromer
>OmniLottie: Generating Vector Animations via Parameterized Lottie Tokens
https://openvglab.github.io/OmniLottie
>Flow-Factory: A Unified Framework for Reinforcement Learning in Flow-Matching Models
https://github.com/X-GenGroup/Flow-Factory

03/02/2026
>Accelerating Masked Image Generation by Learning Latent Controlled Dynamics
https://github.com/Kaiwen-Zhu/MIGM-Shortcut
>Open-sourced a one-click ComfyUI setup for RTX 50-series on Windows
https://github.com/hiroki-abe-58/ComfyUI-Win-Blackwell
>stable-diffusion-webui-codex v0.2.0-alpha
https://github.com/sangoi-exe/stable-diffusion-webui-codex
>ComfyUI SeedVR2 Tiler
https://github.com/BacoHubo/ComfyUI_SeedVR2_Tiler
>>108295980
why is this schizo lurking here now? go back to /sdg/ you fucking parasite
https://rentry.org/debo
Now that local is dead, which cloud model should we switch to? Seedream is less censored, but Nano Banana is overall better.
>>108295994He has nobody to talk to in his thread
>>108295929
Thank you for baking this thread, anon
>>108295944
>>108295957
Thank you for blessing this thread, anon
>>108296012the only one being a bitch about him is you
>>108296036everyone hates you debo, you aren't welcome here, get the fuck out to your /sdg/ asylum and see your fellow schizos avatarfags
>>108296036Odd thing to post but facts and statistics prove otherwise.
What's the cheapest way to build a machine devoted to Stable Diffusion? One of those AI mini PCs?
>>108296008
seedream 5.0 lite is absolute shit and even more censored than 4.0/4.5. 5.0 still produces that same seedream face look that's baked into the previous models.
>mfw Research news

03/04/2026
>BrandFusion: A Multi-Agent Framework for Seamless Brand Integration in Text-to-Video Generation
https://arxiv.org/abs/2603.02816
>From "What" to "How": Constrained Reasoning for Autoregressive Image Generation
https://arxiv.org/abs/2603.02712
>TC-Padé: Trajectory-Consistent Padé Approximation for Diffusion Acceleration
https://arxiv.org/abs/2603.02943
>DREAM: Where Visual Understanding Meets Text-to-Image Generation
https://arxiv.org/abs/2603.02667
>Generative Visual Chain-of-Thought for Image Editing
https://pris-cv.github.io/GVCoT
>SemanticDialect: Semantic-Aware Mixed-Format Quantization for Video Diffusion Transformers
https://arxiv.org/abs/2603.02883
>StepVAR: Structure-Texture Guided Pruning for Visual Autoregressive Models
https://arxiv.org/abs/2603.01757
>Conditioned Activation Transport for T2I Safety Steering
https://arxiv.org/abs/2603.03163
>NOVA: Sparse Control, Dense Synthesis for Pair-Free Video Editing
https://arxiv.org/abs/2603.02802
>Beyond Language Modeling: An Exploration of Multimodal Pretraining
https://beyond-llms.github.io
>FiDeSR: High-Fidelity and Detail-Preserving One-Step Diffusion Super-Resolution
https://arxiv.org/abs/2603.02692
>Preconditioned Score and Flow Matching
https://arxiv.org/abs/2603.02337
>Kling-MotionControl Technical Report
https://arxiv.org/abs/2603.03160
>Cultural Counterfactuals: Evaluating Cultural Biases in Large Vision-Language Models with Counterfactual Examples
https://arxiv.org/abs/2603.02370
>Semantic Similarity is a Spurious Measure of Comic Understanding: Lessons Learned from Hallucinations in a Benchmarking Experiment
https://arxiv.org/abs/2603.01950
>ProGIC: Progressive and Lightweight Generative Image Compression with Residual Vector Quantization
https://arxiv.org/abs/2603.02897
>RealOSR: Latent Guidance Boosts Diffusion-based Real-world Omnidirectional Image Super-Resolutions
https://arxiv.org/abs/2412.09646
>>108296008>>108296094>This post is off topic.
>>108296094
True, even the arenas say Seedream 4 was their best model. What is going on with China? ByteDance is making Seedream worse and censoring Seedance, WAN is going closed, and now Alibaba is canceling Qwen too.
>>108296008>>108296094>>108296116
he just doesn't stop crying, constantly refreshing the thread to cry about every post. what causes such behavior?
How fucked am I to be stuck with a 12GB GPU for this in the current year?
>>108296129that's fine for non video stuff
>>108296129it's great for running SDXL-based models, which are the peak of local development anyway.
>>108296129
You should be able to run Anima with that much VRAM, and it's the best anime model
>>108296139
I thought you needed 24GB to be in-spec for SDXL, especially with LoRA workflows.
>>108296129
That'll be able to handle the two best models, Z Base and Anima
>>108296146>tamzaiyKVNO
>>108296146omg bruh another schizo? it never stops or what?
>>108296143
Having a lot of LoRAs downloaded along with using stuff like regional prompter at 1024x resolution though... it makes me consider getting rid of the 5070 and just getting a 3090 from someone reputable like I originally planned, just for peace of mind.
>>108296146>tamzaiy:https://www.pixiv.net/en/users/38224130Hello, Kino department?
I think the sad part is he gave up on being witty or even making gens. He lacks the cognitive ability to make anything halfway decent so he only posts his garbage in his containment to cope with feeling like he belongs in his dead thread. I expect people to be able to use models to make interesting stuff at least, not resorting to spamming and seething all day.
>>108296156
It's the same one, he just wants to go all out because he's been outshined by the dev, who to be honest is way better at being a schizo than he is because he can at least hide in the crowd better. He's not great at it but still better than this guy
>>108296166
Why not try to get a TI and sell the regular model? I think a 5000 series card will be better for you if they start expanding on the new AI features that I keep forgetting about.
>>108296174
>Why not try to get a TI and sell the regular model?
Unobtanium by this point I fear. The market is scarce.
>this collage>the images being posted>whatever the fuck is happening with these thread starting postsWhat schizo meltdown is happening this week?
>>108296174
Bro, your general is dying at /adt/'s hands, you are the only one carrying the torch, you're lost, forget about /ldg/ and go back to discord, let the titanic sink.
>>108296182
Damn, personally for me the 3090 seems like a hard sell outside of LLM stuff and stacking multiple.
>>108296194
OG schizo is trying to reclaim his throne and failing at it. He can't even make coherent gens or posts anymore. He's washed, I tell you, completely washed up
>>108296146>BakerAnon:https://www.pixiv.net/en/users/110320313Holy fuck this anon cooks anime!
>>108296213
>I think a 5000 series card will be better for you if they start expanding on the new ai features that I keep forgetting about.
I figured even those could not make up for the VRAM deficit on the order of 12GB versus 24GB, something to do with network layers I was told.
>>108296213>outside of LLM stuff and stacking multiple.Why not for prompting images?
>>108296215"anime" kek
>>108296225
I guess you can, but the newer cards are faster and you're less VRAM constrained with image gen compared to LLMs. You also can't stack GPUs for image generation like you can with LLMs. I think 24GB is the sweet spot, and if the 5000 super cards had actually launched those would have been the best imo. Still, read up on VRAM requirements for the models you use.
>>108296174>I expect people to be able to use models to make interesting stuff at least not resorting to spammingthe irony, you're spamming the same exact shit over and over
>>108296262I see you bitching but I never see any gensI think this fail gen is still better than anything you can make btw
>>108295943>>108296048>>108296174>>108296213>>108296261>Avatar or Signature useYou know what to do sirs, avatarfaggots aren't welcome here
When will local reach this level of kino?
>>108296327
>https://www.pixiv.net/en/users/76374114
This is what I call "good anime"! they surely deserve a general of their own!
You're really crashing out today
>>108296271
Part of why you failed is your misuse of terms. I'm not sure what to call your condition, but this inability to grasp basic concepts has been hurting you for years. I'm protesting an entity that has been harassing people he dislikes for years and can't even enjoy having the thread he fought so hard for all to himself. I amaze myself with how fast I can make these in 60 steps; I typically do 150, might see how 300 looks with this model.
>>108296327
Really crashing out today I see
>>108296420
>I'm protesting
spamming the same exact image is "signature use" so yeah. you also always start your filename with "00", which means everyone can easily recognize you. if you want to play the avatarfag, you have /sdg/ for that, this place is not made for self-centered drama queens such as you
>>108296420Are you done crying yet?
>>108296348Are you done crying yet?
>>108296348
>>108296441
/ldg/ has been getting dismembered by /adt/ since November, loser, but you're still clinging to Ani and Debo, you're incapable of seeing the present as it is faggot.
>>108296302
>>108296102
>>108296484
>you're still clinging to Ani and Debo
to be fair, debo is still trying to invade this place >>108295980
>>108296484He is a pussy, he still lives in 2024
Oh, alright.It's 3 schizos all intertwining together
>>108296493
Shut up, any anon could impersonate debo, /adt/ rider
>name dropping /adt/>now we pretend to be rational and mask the problem
>>108296505hard to impersonate the news spam, only a turboautist like him would make the effort to do something like that, the troll would be way too sophisticated to be worth the kek
DEBO DEBO DEBOANI ANI ANIBASEDJACK BASEDJACK BASEDJACKthat's all this thread has become
>>108296441
>>108296348
You are a pussy clinging to the past if you are incapable of seeing the present
>>108296518
And why not /adt/? Why are you omitting /adt/?
>>108296540
He read the book Gaslighting for Dummies
>>108296518>that's all this thread has becomeif only we could enforce words filters on generals so that schizos wouldn't be able to spam their favorite words, oh and IDs would help a lot too
>>108296551/adt/ rider uses ecker, he can bypass that
>>108296540
>>108296549
>exactly one minute apart
try to be more patient the next time you want to do some samefagging "anon"
>>108296093
No, you need a dedicated graphics card. And as for cheap, there's nothing cheap here. If you're trying to buy a system now you're fucked.
>>108296569>He was even less effective when we had the IP counter. Do we know why 4chan removed that btw?
>>108296564
Shut up "mr /adt/ white knight" /adt/ raider. We have been botted by you since November. People know it and that's why your general is dead.
>>108296569>He follows the same pattern>"This is why /ldg/ is dead">>108296582>thats why your general is dead.LMAOOOO, the jokes write themselves
>>108296575
No idea, I miss it desu
>>108296591
I know... I think what makes it genuinely hilarious is that he's resentful of his position and can't comprehend why, after what I think is almost 4 years now, things have gotten worse for him.
local diffusion
lot of crying in this thread when you could just be using API nodes
>>108296650I like these gens
>>108295929
I'd like to use AI to assist me in comic production. Like "draw me a cocked HK P7 M13 9mm with its magazine, here's the rough sketch, and here's the callout sheet of the HK P7 M13 for your reference" kind of usage. Is there any way to achieve this in a local UI without having to train a new LoRA for every single type of gun?
>>108296886
local models are dumb, they barely know any proper nouns. you're better off just asking nano banana for something like this sadly.
>>108296886most likely not, you'll need to make loras
>>108296958cutie
>>108296982
You're like the psycho ex-girlfriend completely obsessed with the guy that dumped her after cumming all over her face. Get some help, man...
Imagine being this obsessed with imaginary boogeymen.
>>108296979the beauty of qwen 2512.
>>108295929hey guys, is ESRGAN still king for upscaling ?what do you recommend ?
I struck a nerve
bix nood
I don't see the difference with /sdg/ anymore, this place has become a garbage bin for avatarfags to spam their slop on
>>108297394yeah. mods should delete /sdg/ at this point. no reason to have two of the same thread on the board
>>108297394it's literally just three schizoids spamming constantly day after day.
>>108297517
There are only a couple actual anons posting here to keep them entertained and thus contained. All the old posters have moved to a different thread. You'd think they'd realize, seeing how slow the thread is without the spam.
ACK
>>108297575why are you guys obsessively stalking every post I make
>>108297713>why are schizos from /ldg/ obsessive about anything
>>108297517>All the old posters have moved to a different threadYou wish kek
>>108297713I saw it on discord uwu
He's still at it?
yeah he's got TDS (tdrussell derangement syndrome)
real question not trolling. do newfrens actually fall for posts like >>108297394 and >>108297517 ? also related why does anon make such posts? it seems like they have sour grapes or are upset by this thread existing no?
>>108297754>mfw
>>108297517where can i find state-of-the-art technical discussion of free and open source local diffusion models, tell me anon
>>108297765huggingface anima discussion page, unironically
TDRUSSEL!!! UPDATE YOUR DAMN OUTDATED BOORU DATASET!!!!
>>108297775but the anon who made that post doesnt even use anima so he wasnt talking about that obviously
>>108297778Let him cook
>>108297789don't care
>>108297743It's his own personal hell. His fate lies in endlessly seething and coping. I almost feel bad for him. Almost.
>>108297778
>outdated
It was only 2 months outdated when training started, and it's a huge pain in the ass to update. Especially since I would have to scrape everything including metadata myself, since I don't think there's any public dataset more up to date than the stuff on HF.
>>108297750more.
Wow this general so shit
>>108297815cared enough to reply :]
Worst general in this board desu
He's chimping out.
Something smells fishy.
>>108297878Yes, it's this general, is dead
>>108297822
You're right, but this general is dead, you know?
>>108297436>mods should delete /sdg/ at this point.do you really want ran to win? kek
>>108297575Why do you post in a dead general?
>>108297778Yes but this general is dead, so there is nothing we can do
>>108297743Do you know what happened to /ldg/? Why it's so dead?
absolute meltie
Yep, we are dying
>>108297855
>>108297754Real question why it so dead(ldg)?
>>108297885>>108297900>>108297906what kind of mental illness is this?
>>108297765Where i can find a real thread? This one is dead
>>108297942He used to try harder now it's just boring
>>108297446How bad, but this general is dead
>>108297853Just ask in NoobAI, everyone there has one ready
Is he upset because based tdrussel posts here?
>>108297945Now it just dead, like this general
Dead
>>108297713hungry for any sort of news
>>108297969Who?
>>108297980This general, this general is dead
>>108297982
Yeah, because of RAM. I paid $130 for 2x32GB DDR4 CL18. It's now over $700. Everything's fucked.
>>108297993>Everything's fucked.This general is fucked desu
>>108298006Nah, this general is more like a dead general
>>108298009shut up zoomer
>>108298023>shut upLike this dead general?
>>108298030when will the mods clean up this spam shit?
The funny thing is the schizo samefag is not entirely wrong. This thread would move pretty slow if not for their efforts to make it the most active general on this dying board.
So dead, what happened?
r u done crying now
>>108296619This
>>108298100True
Dead general
https://github.com/Comfy-Org/ComfyUI/pull/12773Oh shit, we'll soon get an improved version of LTX2
>>108298164https://ltx.io/model/ltx-2-3
>>108298087>>108298095Very interesting
>>108298223Nah outpainting has been around for eons
>>108298223Wow, that post changed my life /adt/ has some talented individuals.
Klein is such a sleeper.
>>108298243If you think that's good you should see their ANCHOR, it's genuinely a pleasure
>>108298223Most interesting post I saw in my life, thanks.
>>108298181
>What is the difference between LTX-2 and LTX-2.3?
>LTX-2.3 brings four major improvements over LTX-2.
>A redesigned VAE produces sharper fine details, more realistic textures, and cleaner edges.
>A new gated attention text connector means prompts are followed more closely — descriptions of timing, motion, and expression translate more faithfully into the output.
>Native portrait video support lets you generate vertical (1080×1920) content without cropping from landscape.
>And audio quality is significantly cleaner, with silence gaps and noise artifacts filtered from the training set.
>Should I upgrade from LTX-2 to LTX-2.3?
>Yes — LTX-2.3 delivers sharper output, better prompt adherence, cleaner audio, and significantly improved image-to-video across the board. The one exception: if your workflow relies on custom LoRAs, those will need to be retrained for the 2.3 latent space before you migrate. See the Migration Guide for details.
>>108298223> style was murderedThat's generic anime style, what's the difference?
>>108282473Very interesting
>>108298295He can't tell the difference
>>108298266
>>108298306Wow, does he has a Patreon? I want to commission him
>>108298306
>>108298223
What the fuck, why does their thread have such high quality while ours is infested with trannies and avatar troons?
>>108298222nice quote anon
>>108298306Again, talented individuals this /adt/ guys, never cease to amaze me.
>>108298334I dunno, if I were an anime genner I'd post there, don't you think?
>>108298306That anon is real? Is that a leak from some anime studio?
>>108298223>>108298306I don't get it, why does it seem like they're talking to the void ?
>>108298386I think you have to speak sanskrit to understand them.
>>108298376Is that made by diffusers?
>>108298312
>>10829840310/10
>>108298403yes
>>108298295
This is a question I've wondered about for a long time. Is anon incapable of telling the difference between "AI style" and not? Or is he able to, but chooses not to acknowledge it because more authentic styles are "difficult" to achieve and thus out of his ability?
>>108298403Kino
>>108298403That's an artist with all the bells and whistles
>>108298403
Nah, I think he drew it and then digitized it
>>108296132
What can I do with 24GB running XL and whatever the latest SD models are for image generation that I simply cannot with 12 or even 16GB? I have a lot of LoRAs downloaded, but I would like to train my own, perhaps, or at least experiment with doing so to prevent reliance on civitai stuff. Maybe.
>>108298435
>that I simply cannot with 12 or even 16GB?
Run video models.
>but I would like to train my own
You can even train on 8GB.
>XL
Antiquated lineage. For anime use Anima https://huggingface.co/circlestone-labs/Anima
>>108292876And who would've realized it? Hard to when you're looking at such quality chil-I mean art
>>108298463
>>108298306
>>108298463
Their gens are so interesting! Impossible to tell that they're sublimating pedophilic sexual impulses into drawings!
>>108298450
Seriously? I was under the impression, based on 8GB experience, that in order to combine a workflow consisting of many tokens/tags, 1024x resolutions, hires fix, regional prompt, and the many LoRAs downloaded, and not compromise on image quality or resolution, you need 16-24GB. I was also told (albeit 2 years ago by now) that low rank and high rank LoRAs are simply impossible unless you've got 16 or ideally 24. I suppose if I had shelled out 800 euros for a 3090, I would have had fun dabbling in video gen too by now.
>>108298407
>>108298498
>many tokens/tags
Does not affect VRAM usage.
>1024x resolutions, hires fix
Tiled sampling fixed this.
>regional prompt
Does not affect VRAM usage.
>using the many lora's downloaded
Barely affects VRAM usage.
>and not compromise on image quality
Quality is unaffected by VRAM.
>or resolution
See second answer.
>low rank and high rank loras are simply impossible
Most if not all trainers have low VRAM presets for every model. Offloading to regular RAM can also help.
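For the hires-fix point, a back-of-envelope sketch of why tiling keeps VRAM flat: the dominant decode cost is the full-resolution activation tensors, and a tiled pass only ever holds one tile's worth at a time. The 128-channel width and fp16 here are assumptions for illustration, not any UI's real allocator.

```python
BYTES_FP16 = 2
CHANNELS = 128  # assumed width of the widest decoder feature maps


def decode_activation_mb(width, height, tile=None):
    """Rough peak activation size for one decoder layer, in MiB."""
    if tile is not None:
        # a tiled decode only materializes one tile at a time
        width, height = min(width, tile), min(height, tile)
    return width * height * CHANNELS * BYTES_FP16 / 2**20


full = decode_activation_mb(2048, 2048)             # 2x hires fix of 1024x1024
tiled = decode_activation_mb(2048, 2048, tile=512)  # 512px tiles
print(f"full: {full:.0f} MiB, tiled: {tiled:.0f} MiB")  # full: 1024 MiB, tiled: 64 MiB
```

The tradeoff is sequential tile passes (slower) and possible seams, which overlap/blending mitigates, but peak memory no longer scales with output resolution.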
>>108298498
>regional prompt
modern models can do a lot without it because they have spatial reasoning
>>108298295
The hair and shirt of the middle one are shaded differently than the rest. Might not be the best example here, but Banana is incapable of reproducing styles that aren't from its training data and makes things look extra generic and "soulless", with the upside of anatomical correctness, which is half the reason the side view looks inoffensively plausible. Another thing that didn't turn out well is multi views of a room interior, so I had to go autistic with manual editing, reprompting and rerolling to get workable results, reminding me that my hypocrite ass wanted to become an artist.
>>108298606do you actually like any banana gens? Leaving aside that it's "le pro slop"
>>108298626yeah they're funny
>>108298533
>Tiled sampling fixed this.
Doesn't this cause problems of its own, like edge seams and attention context loss, besides much slower sequential speed? You can run full latent diffusion without tiling, preserving full context, on 24 no? I noticed the loss of attention context a lot in my use of regional prompts.
>regional prompt
I am under the impression this depends on resolution, with the count of attention masks and multiple conditioning passes increasing memory and adding overhead, especially at 1024x resolution.
>lora's barely affect vram usage
This seems misleading when you take into account low rank versus high rank LoRAs. Their number and modifier layers still create GPU tensors and modify attention layers. Though UIs like Forge sometimes merge LoRAs into the model temporarily, reducing overhead at the cost of time and offloading to system RAM.
>quality unaffected by VRAM
Quantization/reduced precision? Attention slicing? CPU offloading? Tiled diffusion?
>trainers have low vram presets
I suppose you are right that high rank is possible today on low VRAM, but there may be instability and the training times will spike.
>offload to RAM
Still a significant penalty (10x slower) and seems to risk crashes or instability.
>>108298667
I'm struggling to come to grips with someone who can reference concepts like rank, attention, quantization, conditioning, etc. while also being under the impression that you need 24GB to get the most out of something as old as XL. Are you feeding these replies into Gemini and regurgitating its answers? The bottom line is that 8GB is painful but doable, 12-16 is fine for image generation (except Qwen and another large model that I'm forgetting), and 24+ clears everything easily.
>xhe forgets that going from unet to diffusion transformer models has increased vram requirements>flux has way more parameters than sdxl mixes ever will
desu no one uses flux anymore
>>108298626I don't mind it in faux traditional style. Plus the fact it can do logical backgrounds sends it a mile ahead of n** in the background department, since "things don't make sense" is a bigger issue to me. Can be used to make comics too.
>>108298728
>something as old as XL
My setup has not changed that significantly over 2023-2025, as much as I'd like, so I've been stuck in a bit of an awkward spot. I like to generate SDXL model/shitmix based images (especially retro styled anime or realistic game textures) at 1024x1024 resolution with 35-65 steps, 1-3 LoRA triggers, and adetailer and hires passes for final generations along with inpainting, with some regional prompting if multiple subjects are involved for example. I do this in Forge/NeoForge. I have not tested newer models yet, so a lot of my questions are speculation or guesswork, based on what updates I have made, about squeezing as much as possible out of low VRAM GPUs on Ampere as opposed to Blackwell or high VRAM Ampere. I have felt the limitations in several instances though. Brute force capacity or suffer severe time penalties and quality loss has been my understanding since the original spec for SD, which as I recall lists 24GB as a requirement to run with no optimizations, which to my understanding are all tradeoffs.
>>108297853Is this really you? Accept the invite in my email :p
summoned here? ahh fuck
>>108298523
>>108298774
A robust XL workflow is no match for a 16GB card. A 4GB card would be excruciating, nearing (if not) impossibility, but with 16GB you are not suffering _severe_ penalties. Arguably the biggest loss of quality comes from quants, and even then the "loss" of q8 is virtually nonexistent. In addition, quanted XL models aren't really a thing. A small number do exist, but most quanting efforts have been focused on larger models because there has been little if any need for it with XL. To say "you will suffer greatly using XL with anything less than 24GB" is highly dubious. The only difficult and painful part of your defined goal would be regional prompting, and not for its compute requirements but rather the fact that it kinda sucks and has not received much focus since modern models have a degree of spatial awareness.
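On the "q8 loss is virtually nonexistent" point, a toy sketch of why: a symmetric int8 round-trip bounds per-weight error at half a quantization step, which is tiny at typical weight magnitudes. This is a simplification; real q8-style formats (e.g. GGUF q8_0) quantize per small block rather than per whole row.

```python
import random


def q8_roundtrip(weights):
    """Symmetric int8 quantize -> dequantize; returns (dequantized, step size)."""
    scale = max(abs(w) for w in weights) / 127.0  # one scale for the row
    quants = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return [q * scale for q in quants], scale


random.seed(0)
w = [random.gauss(0.0, 0.02) for _ in range(1024)]  # typical weight magnitudes
deq, scale = q8_roundtrip(w)
max_err = max(abs(a - b) for a, b in zip(w, deq))
print(f"max abs error: {max_err:.2e}, quant step: {scale:.2e}")
```

Round-to-nearest guarantees `max_err <= scale / 2`, so the per-weight error sits orders of magnitude below the weights themselves.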
>>108298908and they give the other thread shit for bad gens lmao
>>108298884>>108298917dont you have an alimony payment to make
>>108298916
What if I wanted to make the jump to modern models and higher resolution gens, escape my little niche?
>>108298908
>>108298921what a strange thing to say
Your tastes are those of a pleb. For this reason, your opinions are disregarded.
i dont like the lack of 1girlsfag do better fagollages
>>108298945
>>108298924
If you had less than 12 then I would feel sorry for you. But you'll be fine with 16.
>>108298924
>higher resolution
You'll run into the limitations of a given model's ability to generate extreme resolutions itself before you hit a wall due to lack of compute, honestly.
>>108298933
>>108298989so real king
>decide to check out reddit for AI>make a fake account>start looking at AI shit>reddit starts recommending me srilankan subredditsNo wonder everyone thinks we're all indian
>>108298979
>But you'll be fine with 16.
But not completely SOL for images only on 12?
>>108298979
>You'll run into the limitations of a given models ability to generate extreme resolutions itself
Is 1024x1024, or ditto but hires fixed by 2x, not extreme?
worse samplers are more gooder, in mysterious ways. just one of the reasons local is beettten ahem, better.
>>108299175this one is what I meant.
>>108295929
what can I realistically expect to run with 8GB of VRAM, 16GB regular RAM? What kind of results would I get? I'm wondering if it's worth the time and effort to set this up now or if I should wait until I can upgrade my RAM and do it after.
we are hungry :)https://suno.com/s/xBlXU9lhOP3hBdwM
>>108299180
>>108299292lcm, cfg=10
>>108299198
I don't know about what >>108298916 has to say about it, but when I was on a 3070 I would regularly exceed 16GB of system RAM for my image prompting.
>>108299478*readings taken from the console in taskmgr
best nsfw realistic amateur 1girl model for vramlets?
>>108296094Do the ridges of an NES cart feel good on female nipples?
Jesus, I leave this bread for 3 years and the same avatarfags are still fagging it up. Imagine being that pathetic
>>108298222It's interesting: The slop gets more and more accessible (I don't care, it's fine, whatever. If it's fun do it, idgaf). But the promise of easily creating great stuff (or "pseudo-great" if that term offends you) quickly escape the grasp of the average user. I genned a bunch of shit for maybe a year or so before burning out, now it feels like I've been left in the dust and all the shitty tool-operating knowledge I gained from tweaking the knobs and levers is useless. I see some great amazing gens made by people and think "fuck man, how the hell am I gonna catch up with this?". Even if this is all hilarious because wErE nOt aCtuAlLy cReAtInG aNyThiNg, it still branches off into schools of thought and sifting through reams and reams of eldritch scrolls (models) and components that quickly get dizzying in scope unless you've been studying every single advancement every day something changes.
>>108299843which ones are the avatarfags? i only check this thread like once every 6 months
>>108299876The gay ones with their avatars.
>>108299860Other than proompting, everything else is still the same. It's genuinely better for prompting, it can interpret my retarded thoughts pretty well now.
>>108298728So what I'm hearing is, get a 3090 at minimum if you want to YOLO prompt everything with no compromises.
>gen>assets don't show upThanks Comfy.
>>108300061>update>get blue screens while genningit's like the fennec faggot is in cahoots with nvidia and saas corpos or somethingno, no way, he'd never betray local
>>108299903i don't know what that meanscan you point to the posts
works on my machine :)
gave wan2.2animate-move a try on the hf space.
source video: >>>/pw/19984525
source image: https://files.catbox.moe/tzvwne.jpeg
can I run wan2.2animate on a 3090 with 32GB RAM? what limitations would 64GB RAM lift (longer videos)? what would gen times be like? on the hf space 4 seconds took 11 minutes, so I assume about 30 mins on a 3090 at 480p?
what is the best method to generate a lora for z image? I'm using ai-toolkit. Do I train the lora on the base model and then use it with the base, or with the turbo? Or do I train the lora directly on the turbo using the workarounds in ai-toolkit?
anyone else experienced severe slowdowns with anima after last comfy pull? had to do git reset to make it usable again. also had some python errors showing up in console but i don't understand this shit
>>108300285
no, but i believe you, it gets shittier with every pull
anyways, i asked the free claude to vibecode a custom node based on this >>108291118 and i was surprised it actually worked for anima. not lossless with default settings (apparently it uses some sort of clever mechanism to select which steps to skip, but only for flux models), but if you adjust settings, i get an honest x1.4 speed increase
>>108296008I can save local I just need a few GPUs and people to trust me to cook, I have quite interesting plans if I had the GPUs...
Lol
Julien is so desperate and bad at falseflagging
Im thinking suicide soon
Trellis 2
>>108300351Agentic dataset collection and LoRA training + agentic image generation with fine tuned model VLLM with multiple personalities specializing in Conducting, Control Net Posing, LoRA selection, and judging result. Why the fuck hasn't anyone done this? You basically almost get proprietary model quality for free
>>108300244
>so I assume about 30mins on a 3090 at 480p?
Nah, it's not that long. Like 5-8 minutes depending on how long you want to stomach a few extra steps for quality. Generally you want to use a lightning LoRA with it.
Most workflows also come with a continue node so you can kind of keep it going forever. The real limitation of the model is that it kind of sucks for anything that isn't 1 girl dancing.
>>108298966That's an old photo. This cutie must be dead by now.
>>108300354
he doesn't do it at all. you just seethe at his success and false flag a punching bag due to your mental illness
>>108295929
>Local Model Meta: https://rentry.org/localmodelsmeta
>I haven't updated this in awhile. Sorry. I've been busy. I'll try to get back to it over the next couple of weeks, same with the Wan rentry. If not, someone else can take over.
time to stop including it
what the fuck is this comfy.aimdo shit, cumfart? and why do i get illegal address errors?
>>108300578
Sure you don't
Thanks for yet again admitting we are talking about you and not "your coworker"
Guys
Julien is an expert in advanced topics like linked lists
Therefore he is an expert in high performance inference computing
He even built a full ui wrapper all by himself in just over 2 years
sad
>>108300654
ani lied about being his coworker? no way! ani never lies after all, right??
>>108300356
Nice.
Trellis 2.
Nippon vacation status?
>>108300314
catbox that node please?
>>108300665
this so much this
lol
>>108300923
be aware it's broken at the core (100% made by free claude). use lite node
https://files.catbox.moe/g07g91.zip
>>108300952
thank you
>>108300665
The fact that such a subhuman retard is giving conference talks despite not having written even a single line of inference code is hilarious
>>108301072
yet he's out there doing stuff and you're just here seething for no reason at all
>>108301072
in JAPAN senpai, they give space to anyone willing to go. did you check the other guys who gave speeches? it's literally a bunch of nobodies, women and browns
>>108300665
Thoughts? Even GPU manufacturers don't want anything to do with Python. Why does Comfy insist on it?
I wanna buy a gaming PC to generate easily on local. What graphics card would be enough to generate fast SDXL pictures using Illustrious loras?
If it could be minimally expensive it would help me.
>>108301217
python was good when hardware was cheap and abundant. we're in a new reality now and people are finally starting to pay attention to efficiency. adversity breeds opportunity and i'm unironically hyped for the future of ai without bloat
>samefagging intensifies
>>108301217
>>108301333
12GB 3060 RTX
t. VRAMlet connoisseur
>>108296306
seedance 2 will soon be censored, and they'll add a filter for celebrities...
btw, i'd like to take this opportunity to tell you that i've created my own discord server called bchan. :-)
discord VAaTvbH7
ldg sisters, you are welcome :)
>tfw no thicc big frap latina gf
>>108301333
I would say get a 5060ti at minimum
>>108301333
a 16gb vram card with 32gb of ram will be good enough, but anything higher than sdxl and z image turbo will have issues. I would recommend you spend big and buy a 4090 or 5090. Look for beefy 4090/5090 prebuilds that have 64gb of ram. I spent nearly $4000 on my 5090/64gb ddr5 ram build last summer and have no regrets.
>>108301749
>you're just here seething for no reason at all
yet another masterclass in projection from Ani
>still replying to himlol
>>108301781
don't think it's ani but anon's right. you're seething at someone more successful because you're a nobody that didn't achieve anything in this field. you contribute nothing but useless drama to the threads
>>108301217
What's the discord server?
>needs over 2 years to figure out how to build an imgui wrapper
>still crashes all the time and compilation is shit
>zero contributions to sd.cpp (which is doing all the work and MIT)
I even respect that turk furkan more, he contributes more to the ecosystem lol
>>108301802
i would say most of the work is GGML anyway, which is ggerganov and co
>>108301781
>don't think
you certainly don't
if you did, you'd have hung yourself a long time ago, Julien
giga OT but widely known technique for image quality: upscale only to then downscale for sharpness
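A minimal sketch of that trick with Pillow; note this is an illustration only, assuming plain Lanczos resampling stands in for whatever actual upscaler (an ESRGAN-class model, a hires-fix pass, etc.) you'd run in a real workflow, and the helper name is made up:

```python
from PIL import Image


def sharpen_via_downscale(img: Image.Image, factor: int = 2) -> Image.Image:
    """Upscale, then resample back to the original size.

    The Lanczos downsample has mild edge-enhancing behavior (its kernel
    has negative lobes), so the round trip reads as sharper than the input.
    """
    w, h = img.size
    # Stand-in for the real upscale step (normally a model upscaler).
    big = img.resize((w * factor, h * factor), Image.LANCZOS)
    # Downscale back to the original resolution for the sharpening effect.
    return big.resize((w, h), Image.LANCZOS)
```

In a ComfyUI graph the same idea is just an upscale node followed by a downscale/resize node back to the target resolution.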
Fresh
>>108301867
>>108301867
>>108301867
Fresh
it's over
We're so back