Previous /sdg/ thread : >>108624673

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
Stability Matrix: https://github.com/LykosAI/StabilityMatrix

>Z-Image
https://comfyanonymous.github.io/ComfyUI_examples/z_image
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Flux.2 Dev/Klein
https://comfyanonymous.github.io/ComfyUI_examples/flux2
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://huggingface.co/black-forest-labs/FLUX.2-klein-4B
https://huggingface.co/black-forest-labs/FLUX.2-klein-9B

>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF

>Anima
https://huggingface.co/circlestone-labs/Anima

>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image

>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info

>Index of guides and other tools
https://rentry.org/sdg-link

>Related boards
>>>/aco/sdg
>>>/b/degen
>>>/d/ddg
>>>/e/edg
>>>/gif/vdg
>>>/h/hdg
>>>/r/realistic+parody
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vp/napt
>>>/vt/vtai

OP https://rentry.co/twkuk8tz
First for nigbophilia
Is there a ZIT image edit workflow?
>mfw Resource news

04/18/2026
>Rose: Range-Of-Slice Equilibration PyTorch optimizer
https://github.com/MatthewK78/Rose

04/17/2026
>ControlFoley: Unified and Controllable Video-to-Audio Generation with Cross-Modal Conflict Handling
https://yjx-research.github.io/ControlFoley
>TokenGS: Decoupling 3D Gaussian Prediction from Pixels with Learnable Tokens
https://research.nvidia.com/labs/toronto-ai/tokengs
>MM-WebAgent: A Hierarchical Multimodal Web Agent for Webpage Generation
https://aka.ms/mm-webagent
>Qwen2D-VAE
https://huggingface.co/Anzhc/Qwen2D-VAE
>ComfyUI HY-World 2.0 — WorldMirror 3D
https://github.com/AHEKOT/ComfyUI_HYWorld2
>Anima Style Explorer: A free web tool for ComfyUI styles
https://anima.mooshieblob.com
>Stanford AI Index Report 2026
https://hai.stanford.edu/assets/files/ai_index_report_2026.pdf

04/16/2026
>Motif-Video 2B: A micro-budget text-to-video diffusion transformer from Motif Technologies
https://motiftech.io/videoshowcase
>HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds
https://huggingface.co/tencent/HY-World-2.0
>ErnieTurbo_extracted_lora
https://huggingface.co/GuangyuanSD/ErnieTurbo_extracted_lora/tree/main

04/15/2026
>DisCa: Accelerating Video Diffusion Transformers with Distillation-Compatible Learnable Feature Caching
https://huggingface.co/tencent/DisCa
>Lyra 2.0: Explorable Generative 3D Worlds
https://research.nvidia.com/labs/sil/projects/lyra2
>AniGen: Unified S3 Fields for Animatable 3D Asset Generation
https://github.com/VAST-AI-Research/AniGen
>T2I-BiasBench: A Multi-Metric Framework for Auditing Demographic and Cultural Bias in Text-to-Image Models
https://gyanendrachaubey.github.io/T2I-BiasBench
>Generative Refinement Networks for Visual Synthesis
https://github.com/MGenAI/GRN
>VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization
https://videoflextok.epfl.ch
>mfw Research news

04/18/2026
>Reconstruction of a 3D wireframe from a single line drawing via generative depth estimation
https://arxiv.org/abs/2604.13549
>SHIFT: Steering Hidden Intermediates in Flow Transformers
https://arxiv.org/abs/2604.09213
>Dual-Modality Anchor-Guided Filtering for Test-time Prompt Tuning
https://arxiv.org/abs/2604.12403
>Fragile Reconstruction: Adversarial Vulnerability of Reconstruction-Based Detectors for Diffusion-Generated Images
https://arxiv.org/abs/2604.12781
>Redefining Quality Criteria and Distance-Aware Score Modeling for Image Editing Assessment
https://arxiv.org/abs/2604.12175
>SIC3D: Style Image Conditioned Text-to-3D Gaussian Splatting Generation
https://arxiv.org/abs/2604.08760
>Tora3: Trajectory-Guided Audio-Video Generation with Physical Coherence
https://arxiv.org/abs/2604.09057
>Anthropogenic Regional Adaptation in Multimodal Vision-Language Model
https://arxiv.org/abs/2604.11490
>VisPCO: Visual Token Pruning Configuration Optimization via Budget-Aware Pareto-Frontier Learning for Vision-Language Models
https://arxiv.org/abs/2604.15188
>AmodalSVG: Amodal Image Vectorization via Semantic Layer Peeling
https://arxiv.org/abs/2604.10940
>Training-Free Model Ensemble for Single-Image Super-Resolution via Strong-Branch Compensation
https://arxiv.org/abs/2604.11564
>Unified Multimodal Uncertain Inference
https://arxiv.org/abs/2604.08701
>Distorted or Fabricated? A Survey on Hallucination in Video LLMs
https://github.com/hukcc/Awesome-Video-Hallucination
https://arxiv.org/abs/2604.12944
>Robustness of Vision Foundation Models to Common Perturbations
https://arxiv.org/abs/2604.14973
>ReflectCAP: Detailed Image Captioning with Reflective Memory
https://arxiv.org/abs/2604.12357
>LOLGORITHM: Funny Comment Generation Agent For Short Videos
https://arxiv.org/abs/2604.09729
>PRADA: Probability-Ratio-Based Attribution and Detection of Autoregressive-Generated Images
https://arxiv.org/abs/2511.20068
>>108633181
yes
>>108633205
Doesn't look like it: https://aiarena.alibaba-inc.com/corpora/arena/leaderboard?arenaType=TI2I
>>108633181
what is zit image edit?
>>108633239
Z-Image Turbo - I'm looking for Image to Image.
>>108633281
well search for z image turbo i2i or img2img then, u wont find anything for "edit"
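fwiw plain img2img in comfy is just VAEEncode feeding KSampler at partial denoise, no "edit" node involved. rough sketch in API format below; the checkpoint filename is a placeholder and Z-Image ships its own example workflow with different loader nodes, so treat this as the generic checkpoint-based pattern, not the canonical Z-Image setup:

```python
import json
import urllib.request

# Minimal img2img graph in ComfyUI API format: encode the source image,
# then sample over that latent at partial denoise instead of from empty latent.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder filename
    "2": {"class_type": "LoadImage", "inputs": {"image": "input.png"}},
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a mixtape cassette cover", "clip": ["1", 1]}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "", "clip": ["1", 1]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["3", 0], "seed": 0, "steps": 8, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 0.6}},  # <1.0 keeps the source image's structure
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "i2i"}},
}

def queue_prompt(wf, host="127.0.0.1:8188"):
    """POST the graph to a running ComfyUI instance (default port assumed)."""
    req = urllib.request.Request(f"http://{host}/prompt",
                                 data=json.dumps({"prompt": wf}).encode())
    return urllib.request.urlopen(req)
```

denoise around 0.4-0.7 is the knob: lower keeps more of the source, 1.0 ignores the input latent entirely.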
>>108633403
lel nice
>>108633403
i am the laundry now
>a mixtape cassette cover
it kinda works lel
>>108633557
>>a mixtape cassette cover
fun idea
>>108633603
i stole it from somewhere
>A mixtape from the 1990s lying on a rug in a clutter-filled bedroom. The case stands upright and is made of clear plastic. [INSIDE THE SLEEVE] [OUTSIDE THE SLEEVE]. A nostalgic photograph of the cassette tape lying on a faded carpet. The word "CASSETTE" can be seen written on it. The image captures the handwritten typography, and faint tape-track reflections. Ambient teal-green bedroom light creates a soft, retro vibe.
>>108634100
me in the back
i hate comfyui so g'damn much
>>108634653
its all we got :(
>>108634718
sad but true
i'm gonna marry comfyui
>>108635062
sorry, you're not comfyui's type. she's into rich venture capitalists
>>108635117
who says i'm not? maybe i'm only pretending to be broke
>>108635062
so you can beat him to death? do it
>>108635209
who? the fennec? if i did that he won't give me his ui's hand in marriage. what'd he do now anyway?
>>108635194
lemme hold a mil
>>108635226
whatever he did made (un)comfyui a pain in the ass to use
>just update the nodes, trust me bro
>just update the main tree, trust me bro
>your update broke custom nodes
>WONTFIX lol dont use custom nodes, use our shitty built in ones instead
and so on
also luminodes way too specific to lumi
>>108635246
>also luminodes way too specific to lumi
i mean... but what in particular? i'm guessing the llm providers
>>108635275
i like how you have "lumi wildcard processor" "lumi wrap text" "lumi show text" and they're just copies of other nodes
>>108635300
i don't like installing the mega packs is all. also, impactpack decided to vibe code a whole ass replacement for dynamicprompts and bungled the yaml handling. i actually swiped his processor node but swapped the og dynamicprompts back in.
>>108635306
yah i just use the dynamicprompts one, the impact ones are good if you want to type loras in, which i dont anymore
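for reference the core of a dynamicprompts-style wildcard processor is tiny, which is why every pack has a half-working clone. minimal sketch below, not the actual dynamicprompts or impact code; `expand` and its arguments are made up for illustration, and real processors load the `wildcards` dict from .txt/.yaml files:

```python
import random
import re

def expand(template, wildcards=None, rng=None):
    """Expand {a|b|c} variant groups and __name__ wildcard lookups,
    the two core behaviors of a dynamicprompts-style processor.
    `wildcards` maps a wildcard name to its list of choices."""
    wildcards = wildcards or {}
    rng = rng or random.Random()
    # __name__ -> one random entry from that wildcard's list
    template = re.sub(r"__([\w/]+)__",
                      lambda m: rng.choice(wildcards[m.group(1)]),
                      template)
    # replace innermost {a|b} with one option; loop handles nesting
    pat = re.compile(r"\{([^{}]*)\}")
    while pat.search(template):
        template = pat.sub(lambda m: rng.choice(m.group(1).split("|")),
                           template, count=1)
    return template

# e.g. expand("a {red|blue} __animal__", {"animal": ["cat", "dog"]})
# picks one of: "a red cat", "a red dog", "a blue cat", "a blue dog"
```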
>>108635335
idk, i just tell codex to make whatever node i want and throw it in. like the plush llm shit was kinda retarded, like why is all this shit on the main node? i never got around to making more providers because openrouter is "good enough". reminds me i need to yeet plush bc i'm sick of seeing its cutesy "tickled 23 nodes :))))))" shit
>>108635241
whats in it for me :)
>>108635381
there are so many vibecoded llm nodes out there lel
probably most custom nodes by now are vibe coded
>>108635401
i put it in the registry mainly so if anyone tries to use my catboxes the damn thing will work. its good practice bc i get to pretend i have users and not break things willy nilly. reminds me i need to make the llmpromptprocessor accept images for multimodal models. maybe lm studio/ollama providers.
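for anyone wanting to roll their own: comfyui just imports your package and reads NODE_CLASS_MAPPINGS, the rest is plain python, so the boilerplate is small. sketch below; the class name and the pass-through "llm" logic are made up, it only shows the node shape plus an optional IMAGE input for the multimodal case:

```python
# Minimal ComfyUI custom-node skeleton. ComfyUI discovers nodes by importing
# the package's __init__.py and reading NODE_CLASS_MAPPINGS.
class LLMPromptProcessor:  # hypothetical node, not the real plush/lumi one
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "text": ("STRING", {"multiline": True}),
                "prefix": ("STRING", {"default": "masterpiece, "}),
            },
            # an optional IMAGE input is how you'd feed multimodal models
            "optional": {"image": ("IMAGE",)},
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "process"
    CATEGORY = "utils/text"

    def process(self, text, prefix, image=None):
        # stand-in for the actual LLM call: just prepend the prefix
        return (prefix + text.strip(),)

NODE_CLASS_MAPPINGS = {"LLMPromptProcessor": LLMPromptProcessor}
NODE_DISPLAY_NAME_MAPPINGS = {"LLMPromptProcessor": "LLM Prompt Processor"}
```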
>>108635355
>whats in it for me :)
you can come visit my lavish SF office when I buy it with your money
>>108635401
llama.cpp is all you need
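and since llama.cpp's llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint, a local provider is basically one http post. sketch assuming the default 127.0.0.1:8080 and a server already running; the helper name and system prompt are made up:

```python
import json
import urllib.request

def build_chat_request(prompt, host="127.0.0.1:8080"):
    """Build a request for llama.cpp's llama-server, which serves an
    OpenAI-compatible /v1/chat/completions endpoint. The running server
    already has a model loaded, so no API key or model field is required."""
    payload = {
        "messages": [
            {"role": "system",
             "content": "You expand terse tags into detailed image prompts."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"http://{host}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# with a server running:
# resp = urllib.request.urlopen(build_chat_request("a mixtape cassette cover"))
# text = json.loads(resp.read())["choices"][0]["message"]["content"]
```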
>>108635416
>>108635416
my kind of girl
gn all
>>108635548
gn!
>>108635548
gn
me too
i miss schizo anon