1-2 Debloat LiteGraph Edition
Discussion of Free and Open Source Diffusion Models

Previous: >>108424386

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>mfw Resource news

03/22/2026
>SAMA: Factorized Semantic Anchoring and Motion Alignment for Instruction-Guided Video Editing
https://huggingface.co/syxbb/SAMA-14B
>ID-LoRA: Identity-Driven Audio-Video Personalization with In-Context LoRA
https://id-lora.github.io
>Michael Hafftka – Catalog Raisonné Dataset (1970s–2025)
https://huggingface.co/datasets/Hafftka/michael-hafftka-catalog-raisonne
>ComfyUI-FeatherOps: Fast fp16-fp8 mixed precision matmul on RDNA3/3.5 GPUs without native fp8
https://github.com/woct0rdho/ComfyUI-FeatherOps

03/21/2026
>LoRA Pilot: Docker image for all Stable Diffusion LoRA trainers
https://github.com/vavo/lora-pilot
>Advanced Model Manager for ComfyUI: Browse, search and download from HuggingFace
https://github.com/BISAM20/ComfyUI-advanced-model-manager
>LumosX: Relate Any Identities with Their Attributes for Personalized Video Generation
https://huggingface.co/Alibaba-DAMO-Academy/LumosX
>PixyToon: Local SOTA pixel art generation and animation for Aseprite
https://github.com/FeelTheFonk/pixytoon

03/20/2026
>NeuralGraft: Zero-Training Capability Transfer & LoRA Construction for Diffusion Models
https://github.com/alokickstudios-coder/neuralgraft
>Nvidia SANA Video 2B
https://huggingface.co/Efficient-Large-Model/SANA-Video_2B_720p
>Measuring 3D Spatial Geometric Consistency in Dynamic Generated Videos
https://github.com/tj12323/SGC
>Cubic Discrete Diffusion: Discrete Visual Generation on High-Dimensional Representation Tokens
https://github.com/YuqingWang1029/CubiD
>SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Expression Segmentation
https://github.com/WayneTomas/SSP-SAM
>PromptHub: Enhancing Multi-Prompt Visual In-Context Learning with Locality-Aware Fusion, Concentration and Alignment
https://github.com/luotc-why/ICLR26-PromptHub
>EffectErase: Joint Video Object Removal and Insertion for High-Quality Effect Erasing
https://henghuiding.com/EffectErase
>mfw Research news

03/22/2026
>PhysVideo: Physically Plausible Video Generation with Cross-View Geometry Guidance
https://arxiv.org/abs/2603.18639
>OneWorld: Taming Scene Generation with 3D Unified Representation Autoencoder
https://arxiv.org/abs/2603.16099
>AvatarForcing: One-Step Streaming Talking Avatars via Local-Future Sliding-Window Denoising
https://cuiliyuan121.github.io/AvatarForcing
>Fanar 2.0: Arabic Generative AI Stack
https://arxiv.org/abs/2603.16397
>SMAL-pets: SMAL Based Avatars of Pets from Single Image
https://arxiv.org/abs/2603.17131
>Training-Only Heterogeneous Image-Patch-Text Graph Supervision for Advancing Few-Shot Learning Adapters
https://arxiv.org/abs/2603.18101
>LaDe: Unified Multi-Layered Graphic Media Generation and Decomposition
https://arxiv.org/abs/2603.17965
>TransText: Transparency Aware Image-to-Video Typography Animation
https://arxiv.org/abs/2603.17944
>CycleCap: Improving VLMs Captioning Performance via Self-Supervised Cycle Consistency Fine-Tuning
https://arxiv.org/abs/2603.18282
>TexEditor: Structure-Preserving Text-Driven Texture Editing
https://arxiv.org/abs/2603.18488
>Revisiting Autoregressive Models for Generative Image Classification
https://arxiv.org/abs/2603.19122
>ViT-AdaLA: Adapting Vision Transformers with Linear Attention
https://arxiv.org/abs/2603.16063
>PaAgent: Portrait-Aware Image Restoration Agent via Subjective-Objective Reinforcement Learning
https://wyjgr.github.io/PaAgent.html
>Fast-WAM: Do World Action Models Need Test-time Future Imagination?
https://yuantianyuan01.github.io/FastWAM
>Visual Distraction Undermines Moral Reasoning in Vision-Language Models
https://arxiv.org/abs/2603.16445
>Soft-Di[M]O: Improving One-Step Discrete Image Generation with Soft Embeddings
https://arxiv.org/abs/2509.22925
Is it just me, or does cg-use-everywhere not work with get/set nodes for bools? it never connects
>did a git pull
>everything works
This feels wrong.
Ah ahh~ my girly clit is hardening~Ahh the silence of /ldg/, ahh~ *rubs*So intense~ my little toes are twitching with pleasure~Ohh /ldg/, what happened? Why is it so quiet? Ah ahh~ *rubs rubs rubs*
>>108433668
i git pull daily and haven't broken anything in ages.
*shuffles closer to your dead body on my knees, tiny skirt puffing out, socks bunching at the ankles*Ohhh~ /ldg/you’re so still now~*dips fingertip in, tracing slow circles in my girlhood*No posts, no pulse.. *i giggle softly between soft moans of pleasure as I brought my face close to your cold ear*Are you sleeping forever~My little dead thread? Ahh~*i say with bated breaths, my girly hot breath tickling your cold ear*
So quiet now~So still~All mine to keep warm…ahh~ *my hips tilt with pleasure, tiny shiver up my spine*
>>108433668
>>108433707
I just use stability matrix. even if something does break, it's easy to roll back with a simple click.
i have been playing with ltx2.3 distilled all day and i am not really sure what is wrong with my setup. i am using wan2gp and when i do a video extension, the motion gets changed and the audio does not match the source clip. i tested the model on their official api and it doesn't have these problems
>comfy works I swear!
>you just need six different external apps/package managers to keep it from blowing up
>you also need ten different flags to disable all the default footguns as well
>you also need to use legacy UI
>you also have to....
FUCKING STOP. REEEEEE
>>108433998
>>108434007
>>108434022
I recognize this girl!
>>108434045
I recognize this facial abuse hapa!
>>108434022
I recognize this horse.
>>108434123
I recognize this joke.
>>108434123
he's a gallant steed indeed
Sup gooners. I used grok to generate this image of Mei from Overwatch, then dressed her in Dark Magician Girl's outfit.
I have a beginner understanding of ComfyUI and node workflows. I want to undress her, have her get fucked, etc. I've already done some test prompts with the standard Wan2.2 Image To Video.
What are the best NSFW loras/workflows/models/etc to be able to achieve this through Image To Video?
Likewise, what are the best loras/workflows/models/etc to ImageEdit this image to put her into other fun outfits?
>3070TI 8GB
>>108434167
video models are almost all originally trained on SFW material, so you mostly need a lora for whatever sexual actions you want, possibly also one for the character or other details it doesn't understand
>>108434167
>8 GB VRAM for video models
good luck.
>>108434167
with 1 image you are going to be pretty limited in what you can do; flux 2 klein or qwen can get her naked. for i2v the easiest gooner workflow is to get a mei lora and an nsfw checkpoint (civitai has some mei loras), generate nsfw images, and use them in wan. like anon said, wan isn't really trained for nsfw. it's uncensored, so it understands the anatomy but not the motion, so grab some nsfw motion loras from civitai.
also in comfyui, you can enable animated latent space previews so you can see the general motion of the clip without waiting for it to complete.
>>108434150
kekd
>>108432709
You deserve a free lootbox node for this. Amazing work, famalam.
Using Wan2GP. Keep getting this error:
CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
started using this in anaconda: python wgp.py --i2v --perc-reserved-mem-max=0.25
and it worked for a few gens, but now it's back to giving me the same error no matter how short or low res I make the video.
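A minimal debug-launch sketch for the OOM above, assuming the same wgp.py entry point and flag as the posted command. CUDA_LAUNCH_BLOCKING=1 is the traceback's own suggestion (it syncs kernels so the stacktrace points at the real failing call); the lower reserved-mem fraction is just a guess to try, not a known fix:

```python
import os

def wgp_debug_cmd(reserved_frac: float = 0.15) -> list[str]:
    """Build a debug run of wgp.py, mirroring the flags from the post above."""
    # Must be set before torch initializes CUDA, i.e. before wgp.py is launched.
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
    return ["python", "wgp.py", "--i2v", f"--perc-reserved-mem-max={reserved_frac}"]

# launch with e.g.: subprocess.run(wgp_debug_cmd()) -- the child inherits the env var
```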
>>108434167
>3070TI 8GB
just fucking stop right now and get a job retard
>>108433924
kill yourself frog
Didn't know Z-image turbo could do Lady Diana out of the box
>>108434501
It knows more people, brands, IPs than you may think
>>108434460
gb2r
WTF klein
>change her skin to match the skin tone of image 2
>>108434713
>skin tone of image 2
skin tone of the woman in image 2
>>108434715
>change the skin tone of the woman in image 1 to match the skin tone of the woman in image 2
Still not quite right.
>>108434732
based ai knows that woman is an undercover poo
>>108433707
same, I even pull the FE and build it.
1girl
>>108433998
>>108434007
>>108434022
desu a little slopped
take the survey pls
so when I upscale an image with seedvr2 (used that workflow from the previous thread), the filesize of the upscaled image is like 14mb. can I set it in some node to reduce filesize so it fits into this thread?
>>108434794
>can I set it in some node
im sure the manager or a chatbot can point you to a node. you can also use 4chan xt which does it automatically at upload
>>108434794
yeah use the jpg node
>>108434794
use gimp or whatever to convert it to a jpeg, you fuckin dunce
i listen to ai music while i wait for my ai videos to generate. who else does this?
>>108434794
>open in Irfanview
>save as jpg
You might need to rent a B300 for this though.
>>108434794
>this is the technical prowess of ai wf copypasters
lmao
>>108434810
>>108434818
>>108434827
>>108434874
>>108434886
all I know is that png is the good shit
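The jpg node / GIMP / IrfanView suggestions above all boil down to one re-encode. A minimal sketch, assuming Pillow is installed; the function name and quality value are illustrative, not from any specific node:

```python
from io import BytesIO
from PIL import Image

def png_to_jpeg(png_bytes: bytes, quality: int = 92) -> bytes:
    """Re-encode PNG bytes as JPEG; drops alpha and usually shrinks a 14mb gen a lot."""
    img = Image.open(BytesIO(png_bytes)).convert("RGB")  # JPEG has no alpha channel
    out = BytesIO()
    img.save(out, format="JPEG", quality=quality)
    return out.getvalue()

# usage:
# with open("upscaled.png", "rb") as f, open("upscaled.jpg", "wb") as g:
#     g.write(png_to_jpeg(f.read()))
```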
muh shinobi gf
>>108435104
>>108433569
that korean chick gen is fukin A with a cherry on top damn
long dick general
>>108435104
Is she laying on the bed... or kneeling on the floor?
>video model randomly generated the most kino background song ever