Previous /sdg/ thread : >>107080769
>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Stability Matrix: https://github.com/LykosAI/StabilityMatrix
>Early Preview UI
AniStudio: https://github.com/FizzleDorf/AniStudio
>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image
https://huggingface.co/QuantStack/Qwen-Image-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Distill-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF
>Flux.1 Krea
https://docs.comfy.org/tutorials/flux/flux1-krea-dev
https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev
https://huggingface.co/QuantStack/FLUX.1-Krea-dev-GGUF
>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2
https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF
https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF
https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF
>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://github.com/maybleMyers/chromaforge
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF
>Models, LoRAs & upscaling
https://civitai.com
https://tensor.art
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info
>Index of guides and other tools
https://rentry.org/sdg-link
>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/tg/slop
>>>/trash/sdg
>>>/vp/napt
>mfw Resource news

11/03/2025
>Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications
https://github.com/Egg-Hu/SMI
>H2-Cache: A Novel Hierarchical Dual-Stage Cache for High-Performance Acceleration of Generative Diffusion Models
https://github.com/Bluear7878/H2-cache-A-Hierarchical-Dual-Stage-Cache
>Understanding the Implicit User Intention via Reasoning with Large Language Model for Image Editing
https://github.com/Jia-shao/Reasoning-Editing
>Rethinking Robust Adversarial Concept Erasure in Diffusion Models
https://github.com/Qhong-522/S-GRACE
>SuperScaler Pipeline for ComfyUI
https://github.com/tritant/ComfyUI_SuperScaler

11/02/2025
>Local Dream 2.2.0 - batch mode and history
https://github.com/xororz/local-dream/releases/tag/v2.2.0
>Qwen3-VL support merged into llama.cpp
https://github.com/ggml-org/llama.cpp/pull/16780
>Sana-Video-2B Weights Released
https://github.com/NVlabs/Sana/blob/main/asset/docs/model_zoo.md#sana-video

10/31/2025
>Raylight: Enable true multi gpu capability in Comfy UI using XDiT XFuser and FSDP
https://github.com/komikndr/raylight
>Emu3.5: Native Multimodal Models are World Learners
https://emu.world
>OmniX: From Unified Panoramic Generation and Perception to Graphics-Ready 3D Scenes
https://yukun-huang.github.io/OmniX
>Enhancing Temporal Understanding in Video-LLMs through Stacked Temporal Attention in Vision Encoders
https://alirasekh.github.io/STAVEQ2
>SageAttention 3 on an RTX 5080 (Blackwell) under WSL2 + CUDA 13.0 + PyTorch 2.10 nightly
https://github.com/k1n0F/sageattention3-blackwell-wsl2
>Dynamic Prompting with Rich Textbox
https://github.com/GreenLandisaLie/ComfyUI-RichText_BasicDynamicPrompts
>ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation
https://github.com/nv-tlabs/ChronoEdit
>Big Tech Is Spending More Than Ever on AI
https://www.wsj.com/tech/ai/big-tech-is-spending-more-than-ever-on-ai-and-its-still-not-enough-f2398cfe?st=hHSNn7
>mfw Research news

11/03/2025
>Can MLLMs Read the Room? A Multimodal Benchmark for Verifying Truthfulness in Multi-Party Social Interactions
https://arxiv.org/abs/2510.27195
>DANCER: Dance ANimation via Condition Enhancement and Rendering with diffusion model
https://arxiv.org/abs/2510.27169
>Generating Accurate and Detailed Captions for High-Resolution Images
https://arxiv.org/abs/2510.27164
>Spiking Neural Networks: The Future of Brain-Inspired Computing
https://arxiv.org/abs/2510.27379
>Fine-Tuning Open Video Generators for Cinematic Scene Synthesis: A Small-Data Pipeline with LoRA and Wan2.1 I2V
https://arxiv.org/abs/2510.27364
>MoRE: 3D Visual Geometry Reconstruction Meets Mixture-of-Experts
https://g-1nonly.github.io/MoRE_Website
>Phased DMD: Few-step Distribution Matching Distillation via Score Matching within Subintervals
https://arxiv.org/abs/2510.27684
>Imbalanced Classification through the Lens of Spurious Correlations
https://arxiv.org/abs/2510.27650
>Sketch-to-Layout: Sketch-Guided Multimodal Layout Generation
https://arxiv.org/abs/2510.27632
>Who Made This? Fake Detection and Source Attribution with Diffusion Features
https://arxiv.org/abs/2510.27602
>Semantic Frame Aggregation-based Transformer for Live Video Comment Generation
https://arxiv.org/abs/2510.26978
>E-MMDiT: Revisiting Multimodal Diffusion Transformer Design for Fast Image Synthesis under Limited Resources
https://arxiv.org/abs/2510.27135
>>107099657
Good OP selection. This character is quite my style. Anon should post more.
i miss schizo anon
>>107101880
Thank you for the page 10 bump.
good morning anon
>>107100084
looks fresh and tasty
This last one is with chroma, the others are with dype + krea, and one is with base flux.
>gm
>>107099657
I see that you want more Ney'liri. Goood.
There is a whole set on PixAI, account name: Aelf
>>107102066
Good morning, nice birds. The details look very accurate in the dype + krea gens. The colors in the flux and chroma gens look warmer, maybe more pleasing, though the birds stand out more by comparison in the dype + krea gens.
>>107102350
Nice ones
>>107102832
Personally, I prefer base flux. Most gens are nice. With dype + krea or even chroma, 3/4 are trash. The details with dype are really good, even for small objects and faces.
>>107100084
I wish I had some rn
>>107101991
>>107103010
gm
>>107102350
>Ney'liri.
love this character. ocdonut?
>>107103029
>or even chroma, 3/4 are trash.
I feel like I have a pretty high success rate with chroma. But I guess I don't care much if a finger is broken or something. I do wonder when the next big flux-based model is coming. I was expecting a flood of chroma-based models after it was finished, but it hasn't happened.
>vibe killed
>>107102613
I like the intense reds.
>>107103029
The fine details with dype, like the feathers and hands, are impressive.
>>107103578
Your comic gens are fun.
>>107103737
>Your comic gens are fun.
ty, I'm very amused by the expressions it's been making. It's also fun to try to imagine what story is being told, even if it's rather incoherent, lol.
Your animations are super chill as always.
Morning anons
>>107104257
Maybe the janny could instead take a look at /ldg/, which for the past week has been raided by copy-pasted comments from old threads spammed in reply to almost every post?
>>107104386
gm
>>107104606
Keep your /ldg/ problems in /ldg/, thanks.
>>107104689
>keep your ldg problems in ldg, thanks
Sit down, internet janitor wannabe.
>nigbo
Missed out on Godzilla day yesterday
>>107105456
How are my beloved monarch girl's adventures going nowadays?
>>107105480
The new LoRA is even more flexible, allowing more styles and things to bleed through. I might merge it with an older LoRA though; sometimes the character doesn't come through as well.
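For anyone curious, merging two LoRAs like that usually means weighted-averaging their matching weight tensors. A minimal sketch of the arithmetic, assuming plain Python dicts of floats stand in for the real safetensors tensors, and a hypothetical 50/50 ratio (the anon's actual files and ratio are unknown):

```python
# Hypothetical sketch of blending a newer LoRA with an older one.
# Real LoRAs are safetensors files of torch tensors; plain lists of
# floats stand in here so the weighted average is easy to follow.
def merge_loras(lora_new, lora_old, alpha=0.5):
    """Blend keys present in both dicts; keep the newer LoRA's extras."""
    merged = {}
    for key, t_new in lora_new.items():
        if key in lora_old and len(lora_old[key]) == len(t_new):
            t_old = lora_old[key]
            # alpha weights the newer LoRA, (1 - alpha) the older one
            merged[key] = [alpha * a + (1 - alpha) * b
                           for a, b in zip(t_new, t_old)]
        else:
            merged[key] = list(t_new)  # tensor unique to the newer LoRA
    return merged

new = {"down.weight": [1.0, 2.0], "up.weight": [0.0, 4.0]}
old = {"down.weight": [3.0, 0.0]}
print(merge_loras(new, old, alpha=0.5))
# {'down.weight': [2.0, 1.0], 'up.weight': [0.0, 4.0]}
```

Tools like kohya's merge scripts or PEFT do the same thing over the actual tensors, with per-LoRA ratios instead of a single alpha.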
>>107105777
Loving the worlds you create, nice. I still have your Rothschild assassination sequence.
>>107105861
Thanks, buddy.