Previous /sdg/ thread : >>107899825

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
Stability Matrix: https://github.com/LykosAI/StabilityMatrix

>Z-Image Turbo
https://comfyanonymous.github.io/ComfyUI_examples/z_image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
https://huggingface.co/jayn7/Z-Image-Turbo-GGUF

>Flux.2 Dev
https://comfyanonymous.github.io/ComfyUI_examples/flux2
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://huggingface.co/city96/FLUX.2-dev-gguf

>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image
https://huggingface.co/QuantStack/Qwen-Image-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Distill-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF

>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2
https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF
https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF
https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF

>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://github.com/maybleMyers/chromaforge
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info

>Index of guides and other tools
https://rentry.org/sdg-link

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/tg/slop
>>>/trash/sdg
>>>/vp/napt
>>>/r/realistic+parody
>mfw Resource news

01/19/2026
>kohya-ss/sd-scripts v0.10.0 released
https://github.com/kohya-ss/sd-scripts/releases/tag/v0.10.0
>Radiance: Professional HDR Image Processing Suite for ComfyUI
https://github.com/fxtdstudios/radiance
>M3DDM+: An improved video outpainting by a modified masking strategy
https://github.com/tamaki-lab/M3DDM-Plus
>ShapeR: Robust Conditional 3D Shape Generation from Casual Captures
http://facebookresearch.github.io/ShapeR
>VidLeaks: Membership Inference Attacks Against Text-to-Video Models
https://zenodo.org/records/17972831
>Moonworks Lunara Aesthetic Dataset
https://huggingface.co/datasets/moonworks/lunara-aesthetic

01/18/2026
>VIBE: Visual Instruction Based Editor
https://huggingface.co/iitolstykh/VIBE-Image-Edit
>Arthemy Live Tuner SDXL ComfyUI
https://github.com/aledelpho/Arthemy_Live-Tuner-SDXL-ComfyUI
>Pixel-Perfect Aligner (AI Fix) for GIMP 3
https://github.com/CombinEC-R/Pixel-Perfect-Aligner
>Stable AI Flow: Phase-Locked Live AI Filter
https://github.com/anttiluode/StableAIflow
>ComfyUI-Flux2Klein-Enhancer: Conditioning enhancement node for FLUX.2 Klein 9B
https://github.com/capitan01R/ComfyUI-Flux2Klein-Enhancer
>DiffusionDesk: Self-hosted Creative AI server integrating stable-diffusion.cpp and llama.cpp
https://github.com/Danmoreng/diffusion-desk
>WAN 2.6 Reference-to-Video is available in ComfyUI
https://blog.comfy.org/p/wan26-reference-to-video

01/17/2026
>FLUX.2 Prompting: Prompting Guide - FLUX.2 [klein]
https://docs.bfl.ai/guides/prompting_guide_flux2_klein

01/16/2026
>ComfyUI-CapitanFlowMatch: Optimal samplers and schedulers for rectified flow models
https://github.com/capitan01R/ComfyUI-CapitanFlowMatch

01/15/2026
>FLUX.2 [klein]: Generate and edit in less than a second with state-of-the-art quality
https://bfl.ai/models/flux-2-klein
>ComfyUI-TBG-ETUR: 100MP Enhanced Tiled Upscaler & Refiner Pro. Enhance Your Images with TBG's Upscaler
https://github.com/Ltamann/ComfyUI-TBG-ETUR
>mfw Research news

01/19/2026
>Your One-Stop Solution for AI-Generated Video Detection
https://arxiv.org/abs/2601.11035
>PhysRVG: Physics-Aware Unified Reinforcement Learning for Video Generative Models
https://arxiv.org/abs/2601.11087
>CoDance: An Unbind-Rebind Paradigm for Robust Multi-Subject Animation
https://lucaria-academy.github.io/CoDance
>SoLA-Vision: Fine-grained Layer-wise Linear Softmax Hybrid Attention
https://arxiv.org/abs/2601.11164
>ATATA: One Algorithm to Align Them All
https://arxiv.org/abs/2601.11194
>Enhancing Vision Language Models with Logic Reasoning for Situational Awareness
https://arxiv.org/abs/2601.11322
>When Are Two Scores Better Than One? Investigating Ensembles of Diffusion Models
https://arxiv.org/abs/2601.11444
>MHA2MLA-VLM: Enabling DeepSeek's Economical Multi-Head Latent Attention across Vision-Language Models
https://arxiv.org/abs/2601.11464
we haven't seen baker anon since around xmas. would he really vanish without even saying goodbye?
>>107914677
idk. it is depressing. we seem to be in the latter days of sdg. i am pretty burnt out.
>>107914783
post cadence hasn't been too bad lately, keeping close to a thread a day. I do miss a lot of posters though. maybe we'll see them again some day
klein lora attempt 1 (0.3) worked well enough
I created a kpop band. kek

Full video
https://files.catbox.moe/cljkr0.mp4
>>107915098
nice. how do you rate f2k trainability compared to chroma or zimg?

>>107915215
this was all local? pretty nice pacing and scene structure. the faces get pretty demonic in parts tho
how long did the whole project take?
>>107915340
>how do you rate f2k trainability compared to chroma or zimg?
taking into account that the more i've trained loras the more i've learned what works and what doesn't, and that i don't tend to go back to retrain old loras with new knowledge... i know chroma well, and i know what doesn't work on it; what works is a challenge because of the nature of training (datasets, samplers, optimizations)

given that:
chroma is the hardest and longest to train, but if done right it's like you add your character/style to the model
z-image is fast to train but tends to overfit (because it's not a base model + i didn't spend too much time on it)
klein is not even fully implemented (in onetrainer anyway) and it's as fast if not faster than z, and so far seems to preserve the base model stuff without bullying it (chroma) or overfitting (z). kind of the best of both worlds, but i'm still ironing out some quirks. it took 6 hrs or so to do 50 epochs (around 2500 steps), using 250+ source images with meh captions, batch 10 at res 1024

same thing on chroma would've been maybe 15-20 hrs (if not more), and i never tried that many source images on z, but probably around 5-6 hrs too
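for anyone sanity-checking those numbers, here's a rough sketch of how total steps fall out of images/batch/epochs. the "repeats" knob is my assumption (anon didn't mention it; kohya/onetrainer-style trainers have something like it), it's just what makes 250 imgs at batch 10 over 50 epochs land near 2500 steps instead of 1250:

# rough lora step math; only 250 images, batch 10, 50 epochs, ~2500 steps come from anon's post,
# the "repeats" parameter is a hypothetical dataset setting for illustration
import math

def total_steps(num_images: int, repeats: int, batch_size: int, epochs: int) -> int:
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

print(total_steps(250, 1, 10, 50))  # 1250 - 50 epochs alone doesn't reach ~2500
print(total_steps(250, 2, 10, 50))  # 2500 - ~2 repeats per image (or ~500 images) does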
gonna leave the chromagirl training overnight, see how it does at 100 epochs lel
>>107915340
>how long did the whole project take?
About 3-4 days, all local. Yeah, quality is bad because of just 12gb. Still amazing what you can do with just 12gb.