Previous /sdg/ thread : >>108330259

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
Stability Matrix: https://github.com/LykosAI/StabilityMatrix

>Z-Image
https://comfyanonymous.github.io/ComfyUI_examples/z_image
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Flux.2 Dev/Klein
https://comfyanonymous.github.io/ComfyUI_examples/flux2
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://huggingface.co/black-forest-labs/FLUX.2-klein-4B
https://huggingface.co/black-forest-labs/FLUX.2-klein-9B

>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF

>Anima
https://huggingface.co/circlestone-labs/Anima

>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image

>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info

>Index of guides and other tools
https://rentry.org/sdg-link

>Related boards
>>>/aco/sdg
>>>/b/degen
>>>/d/ddg
>>>/e/edg
>>>/gif/vdg
>>>/h/hdg
>>>/r/realistic+parody
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vp/napt
>>>/vt/vtai

OP https://rentry.co/twkuk8tz
>mfw Resource news

03/11/2026
>anima-preview2.safetensors
https://huggingface.co/circlestone-labs/Anima/tree/main/split_files/diffusion_models
>Reviving ConvNeXt for Efficient Convolutional Diffusion Models
https://github.com/star-kwon/FCDM
>BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers
https://github.com/EdwardChasel/BinaryAttention
>QUSR: Quality-Aware and Uncertainty-Guided Image Super-Resolution Diffusion Model
https://github.com/oTvTog/QUSR
>InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing
https://github.com/OpenGVLab/InternVL-U
>SD Forge Nvidia VFX: VideoSuperRes extension for Forge Neo
https://github.com/Haoming02/sd-forge-nvidia-vfx
>Nvidia_RTX_Nodes_ComfyUI: "RTX Video Super Resolution"
https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI
>Gemini Embedding 2: Natively multimodal embedding model
https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-embedding-2

03/10/2026
>HiAR: Efficient Autoregressive Long Video Generation via Hierarchical Denoising
https://jacky-hate.github.io/HiAR
>Scaling Test-Time Robustness of Vision-Language Models via Self-Critical Inference Framework
https://github.com/KaihuaTang/Self-Critical-Inference-Framework
>FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models
https://github.com/JREion/FVG-PT
>Kaleidoscope - Index, Search, Invoke ComfyUI Workflows
https://github.com/svenhimmelvarg/kaleidoscope
>App Mode, App Builder, and ComfyHub
https://blog.comfy.org/p/from-workflow-to-app-introducing

03/09/2026
>DiffiT: Diffusion Vision Transformers for Image Generation
https://github.com/nvlabs/diffit
>Self-Supervised Flow Matching for Scalable Multi-Modal Synthesis
https://bfl.ai/research/self-flow
>MatAnyone 2: Scaling Video Matting via a Learned Quality Evaluator
https://pq-yang.github.io/projects/MatAnyone2
>mfw Research news

03/11/2026
>Streaming Autoregressive Video Generation via Diagonal Distillation
https://arxiv.org/abs/2603.09488
>Prompt-Driven Color Accessibility Evaluation in Diffusion-based Image Generation Models
https://arxiv.org/abs/2603.09832
>FrameDiT: Diffusion Transformer with Frame-Level Matrix Attention for Efficient Video Generation
https://arxiv.org/abs/2603.09721
>When to Lock Attention: Training-Free KV Control in Video Diffusion
https://arxiv.org/abs/2603.09657
>SODA: Sensitivity-Oriented Dynamic Acceleration for Diffusion Transformer
https://arxiv.org/abs/2603.07057
>CogBlender: Towards Continuous Cognitive Intervention in Text-to-Image Generation
https://arxiv.org/abs/2603.09286
>Component-Aware Sketch-to-Image Generation Using Self-Attention Encoding and Coordinate-Preserving Fusion
https://arxiv.org/abs/2603.09484
>TIDE: Text-Informed Dynamic Extrapolation with Step-Aware Temperature Control for Diffusion Transformers
https://arxiv.org/abs/2603.08928
>RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning
https://arxiv.org/abs/2603.09160
>Prune Redundancy, Preserve Essence: Vision Token Compression in VLMs via Synergistic Importance-Diversity
https://arxiv.org/abs/2603.09480
>The Coupling Within: Flow Matching via Distilled Normalizing Flows
https://arxiv.org/abs/2603.09014
>Training-Free Coverless Multi-Image Steganography with Access Control
https://arxiv.org/abs/2603.09390
>IntroSVG: Learning from Rendering Feedback for Text-to-SVG Generation via an Introspective Generator-Critic Framework
https://arxiv.org/abs/2603.09312
>Evolving Prompt Adaptation for Vision-Language Models
https://arxiv.org/abs/2603.09493
>Reasoning-Oriented Programming: Chaining Semantic Gadgets to Jailbreak Large Vision Language Models
https://arxiv.org/abs/2603.09246
>B-DENSE: Branching For Dense Ensemble Network Supervision Efficiency
https://arxiv.org/abs/2602.15971
>>108351402
ty for bake
>>108351357
huh?
>>108351494
sai is dead as far as I can tell. sdg has no relation tho. our stable is lower case

>>108351594
the real stability was the frens we made along the way

>>108351638
hmm
but all of our frens left
what does that mean?

>>108351895
idk i hope they're happy tho

>>108351912
based and trufrenpilled

Last one from me
Good night anons

>>108352052
night

>>108352052
gn

>>108351594
>>108351895
>>108351920
>>108352080
Which model? I also want to try genning cool space stuff

>>108352098
z-image-turbo. the raw model does an ok job with space stuff but these gens have loras that make the scenes work better
I think you can get really cool space stuff with almost any model tho
gn