Previous /sdg/ thread : >>107891148

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
Stability Matrix: https://github.com/LykosAI/StabilityMatrix

>Z-Image Turbo
https://comfyanonymous.github.io/ComfyUI_examples/z_image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
https://huggingface.co/jayn7/Z-Image-Turbo-GGUF

>Flux.2 Dev
https://comfyanonymous.github.io/ComfyUI_examples/flux2
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://huggingface.co/city96/FLUX.2-dev-gguf

>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image
https://huggingface.co/QuantStack/Qwen-Image-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Distill-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF

>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2
https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF
https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF
https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF

>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://github.com/maybleMyers/chromaforge
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info

>Index of guides and other tools
https://rentry.org/sdg-link

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/tg/slop
>>>/trash/sdg
>>>/vp/napt
>>>/r/realistic+parody
i miss schizo anon
G'mornin Anons, sup.
>>107900395
nice
model?
>>107900524
Ty, https://www.reddit.com/r/StableDiffusion/comments/1qg5ph5/flux_2_klein_4b_vs_9b_multi_camera_angles_one/
>>107899825
he's watching tv but he also is the tv
Is it normal that with Forge Neo on an RTX 3060, ZIT renders pretty fast (the GPU fans don't even spin up and the computer stays responsive), but with Flux or Qwen it takes three million years per picture, the fans go crazy, and the computer even freezes for several seconds here and there? Are these models a lot heavier for you too?
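A quick back-of-the-envelope check explains the pattern: once a model's weights no longer fit in VRAM, the backend starts offloading layers to system RAM over PCIe, which is exactly the fans-screaming, system-freezing slowdown described. A minimal sketch, using assumed round parameter counts (~6B for Z-Image Turbo, ~20B for Qwen-Image, ~32B for FLUX.2-dev; verify against each model card before trusting the numbers):

```python
# Back-of-the-envelope: do a model's weights alone fit in a 12 GiB card?
# Parameter counts below are rough assumptions, not authoritative figures.

def weight_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of the weights alone, in GiB (ignores
    activations, text encoder, and VAE, which add several GiB more)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

VRAM_GIB = 12.0  # RTX 3060

models = [("Z-Image Turbo", 6), ("Qwen-Image", 20), ("FLUX.2-dev", 32)]
formats = [("fp16", 2), ("fp8/Q8 GGUF", 1)]

for name, params in models:
    for fmt, bpp in formats:
        size = weight_gib(params, bpp)
        verdict = "fits" if size < VRAM_GIB else "spills to system RAM"
        print(f"{name:14s} {fmt:12s} ~{size:5.1f} GiB -> {verdict}")
```

Even before counting activations or the text encoder, the two big models blow past 12 GiB in almost any format, while the small one stays resident, so ZIT runs entirely on the GPU and the others crawl through PCIe offloading. Lower-bit GGUF quants or `--lowvram`-style offload modes are the usual workaround.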
>mfw Resource news

01/18/2026
>VIBE: Visual Instruction Based Editor
https://huggingface.co/iitolstykh/VIBE-Image-Edit
>Arthemy Live Tuner SDXL ComfyUI
https://github.com/aledelpho/Arthemy_Live-Tuner-SDXL-ComfyUI
>Pixel-Perfect Aligner (AI Fix) for GIMP 3
https://github.com/CombinEC-R/Pixel-Perfect-Aligner
>Stable AI Flow: Phase-Locked Live AI Filter
https://github.com/anttiluode/StableAIflow
>ComfyUI-Flux2Klein-Enhancer: Conditioning enhancement node for FLUX.2 Klein 9B
https://github.com/capitan01R/ComfyUI-Flux2Klein-Enhancer
>DiffusionDesk: Self-hosted Creative AI server integrating stable-diffusion.cpp and llama.cpp
https://github.com/Danmoreng/diffusion-desk
>WAN 2.6 Reference-to-Video is available in ComfyUI
https://blog.comfy.org/p/wan26-reference-to-video

01/17/2026
>FLUX.2 Prompting: Prompting Guide - FLUX.2 [klein]
https://docs.bfl.ai/guides/prompting_guide_flux2_klein

01/16/2026
>ComfyUI-CapitanFlowMatch: Optimal samplers and schedulers for rectified flow models
https://github.com/capitan01R/ComfyUI-CapitanFlowMatch

01/15/2026
>FLUX.2 [klein]: Generate and edit in less than a second with state-of-the-art quality
https://bfl.ai/models/flux-2-klein
>ComfyUI-TBG-ETUR: 100MP Enhanced Tiled Upscaler & Refiner Pro. Enhance Your Images with TBG's Upscaler
https://github.com/Ltamann/ComfyUI-TBG-ETUR
>Comfy-Org/flux2-klein-9B split files
https://huggingface.co/Comfy-Org/flux2-klein-9B/tree/main/split_files
>GGUF quantized version of FLUX.2-klein-9B
https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF
>NVIDIA Reportedly Ends GeForce RTX 5070 Ti Production, RTX 5060 Ti 16 GB Next
https://www.techpowerup.com/345224/nvidia-reportedly-ends-geforce-rtx-5070-ti-production-rtx-5060-ti-16-gb-next
>Preprocessor and Frame Interpolation Workflows in ComfyUI
https://blog.comfy.org/p/preprocessor-and-frame-interpolation

01/14/2026
>GLM-Image: Auto-regressive for Dense-knowledge and High-fidelity Image Generation
https://z.ai/blog/glm-image
>mfw Research news

01/18/2026
>EditEmoTalk: Controllable Speech-Driven 3D Facial Animation with Continuous Expression Editing
https://arxiv.org/abs/2601.10000
>VERHallu: Evaluating and Mitigating Event Relation Hallucination in Video Large Language Models
https://arxiv.org/abs/2601.10010
>Action100M: A Large-scale Video Action Dataset
https://arxiv.org/abs/2601.10592
>RSATalker: Realistic Socially-Aware Talking Head Generation for Multi-Turn Conversation
https://arxiv.org/abs/2601.10606
>CURVE: A Benchmark for Cultural and Multilingual Long Video Reasoning
https://arxiv.org/abs/2601.10649
>Where Does Vision Meet Language? Understanding and Refining Visual Fusion in MLLMs via Contrastive Attention
https://arxiv.org/abs/2601.08151
>VideoHEDGE: Entropy-Based Hallucination Detection for Video-VLMs via Semantic Clustering and Spatiotemporal Perturbations
https://arxiv.org/abs/2601.08557
>Ground What You See: Hallucination-Resistant MLLMs via Caption Feedback, Diversity-Aware Sampling, and Conflict Regularization
https://arxiv.org/abs/2601.06224
>Object-WIPER: Training-Free Object and Associated Effect Removal in Videos
https://sakshamsingh1.github.io/object_wiper_webpage
>QCaption: Video Captioning and Q&A through Fusion of Large Multimodal Models
https://arxiv.org/abs/2601.06566
>RenderFlow: Single-Step Neural Rendering via Flow Matching
https://arxiv.org/abs/2601.06928
>Evaluating the encoding competence of visual language models using uncommon actions
https://arxiv.org/abs/2601.07737