Previous /sdg/ thread: >>108719619

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Advanced UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge Classic: https://github.com/Haoming02/sd-webui-forge-classic
Stability Matrix: https://github.com/LykosAI/StabilityMatrix

>Z-Image
https://comfyanonymous.github.io/ComfyUI_examples/z_image
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Flux.2 Dev/Klein
https://comfyanonymous.github.io/ComfyUI_examples/flux2
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://huggingface.co/black-forest-labs/FLUX.2-klein-4B
https://huggingface.co/black-forest-labs/FLUX.2-klein-9B

>Chroma
https://comfyanonymous.github.io/ComfyUI_examples/chroma
https://huggingface.co/lodestones/Chroma1-HD
https://huggingface.co/silveroxides/Chroma-GGUF

>Anima
https://huggingface.co/circlestone-labs/Anima

>Qwen Image & Edit
https://docs.comfy.org/tutorials/image/qwen/qwen-image
https://huggingface.co/Qwen/Qwen-Image

>Text & image to video - Wan 2.2
https://docs.comfy.org/tutorials/video/wan/wan2_2

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://tungsten.run
https://yodayo.com/models
https://www.diffusionarc.com
https://miyukiai.com
https://civitaiarchive.com
https://civitasbay.org
https://www.stablebay.org
https://openmodeldb.info

>Index of guides and other tools
https://rentry.org/sdg-link

>Related boards
>>>/aco/sdg
>>>/b/degen
>>>/d/ddg
>>>/e/edg
>>>/gif/vdg
>>>/h/hdg
>>>/r/realistic+parody
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vp/napt
>>>/vt/vtai

OP https://rentry.co/twkuk8tz
now this is pod racing
>mfw Resource news

05/01/2026

>Representation Fréchet Loss for Visual Generation
https://github.com/Jiawei-Yang/FD-loss

>Caption Generator Pro: Tkinter app for generating image captions with LLaVA-style models
https://github.com/CoolGenius-123/Caption-Generator-Pro

>Metascan v0.3.0 Update
https://github.com/pakfur/metascan/releases/tag/v0.3.0

>Phosphene: Local video and audio generation for Apple Silicon (LTX2.3)
https://github.com/mrbizarro/phosphene

>MoCapAnything V2: End-to-End Learning of Generalizable Motion
https://animotionlab.github.io/MoCapAnythingV2

>Diffusers <0.37.1 Security Vulnerability - Code Injection
https://github.com/huggingface/diffusers/security/advisories/GHSA-98h9-4798-4q5v

04/30/2026

>ProcFunc: Function-Oriented Abstractions for Procedural 3D Generation in Python
https://github.com/princeton-vl/procfunc

>Efficient, VRAM-Constrained xLM Inference on Clients
https://github.com/deepshnv/pipeshard-mlsys26-ae

04/29/2026

>Z-Anime | Full Anime Fine-Tune on Z-Image Base
https://huggingface.co/SeeSee21/Z-Anime

>QuantVideoGen: Auto-Regressive Long Video Generation via 2-Bit KV-Cache Quantization
https://github.com/svg-project/Quant-VideoGen

>World-R1: Reinforcing 3D Constraints for Text-to-Video Generation
https://github.com/microsoft/World-R1

>Benchmarking Layout-Guided Diffusion Models through Unified Semantic-Spatial Evaluation in Closed and Open Settings
https://github.com/lparolari/cobench

>VibeToken: Scaling 1D Image Tokenizers and Autoregressive Models for Dynamic Resolution Generations
https://github.com/SonyResearch/VibeToken

>OmniVTG: A Large-Scale Dataset and Training Paradigm for Open-World Video Temporal Grounding
https://github.com/oceanflowlab/OmniVTG

>Refinement via Regeneration: Enlarging Modification Space Boosts Image Refinement in Unified Multimodal Models
https://github.com/LeapLabTHU/RvR

>SketchVLM: Vision language models can annotate images to explain thoughts and guide users
https://sketchvlm.github.io
>mfw Research news

05/01/2026

>AesRM: Improving Video Aesthetics with Expert-Level Feedback
https://arxiv.org/abs/2604.28078

>TripVVT: A Large-Scale Triplet Dataset and a Coarse-Mask Baseline for In-the-Wild Video Virtual Try-On
https://arxiv.org/abs/2604.27958

>HiMix: Hierarchical Artifact-aware Mixup for Generalized Synthetic Image Detection
https://arxiv.org/abs/2604.27903

>Frequency-Aware Semantic Fusion with Gated Injection for AI-generated Image Detection
https://arxiv.org/abs/2604.27875

>Improving Calibration in Test-Time Prompt Tuning for Vision-Language Models via Data-Free Flatness-Aware Prompt Pretraining
https://arxiv.org/abs/2604.27715

>Leveraging Verifier-Based Reinforcement Learning in Image Editing
https://arxiv.org/abs/2604.27505

>Post-Optimization Adaptive Rank Allocation for LoRA
https://arxiv.org/abs/2604.27796

>Generate Your Talking Avatar from Video Reference
https://gseancdat.github.io/projects/TAVR

>AdvDMD: Adversarial Reward Meets DMD For High-Quality Few-Step Generation
https://arxiv.org/abs/2604.28126

>The Effects of Visual Priming on Cooperative Behavior in Vision-Language Models
https://arxiv.org/abs/2604.27953

>Visual Generation in the New Era: An Evolution from Atomic Mapping to Agentic World Modeling
https://arxiv.org/abs/2604.28185

>Are DeepFakes Realistic Enough? Exploring Semantic Mismatch as a Novel Challenge
https://arxiv.org/abs/2604.28022
Hi, sorry for the dumb questions, I am new to the PC scene. I am looking to buy this computer. Could it be used to gen AI, to some extent? I recall reading that Nvidia GPUs are more compatible, but can AMD ones still be used, or is it too complicated atm? Also, would 1179€ be a bad price, prebuilt?

Case: 2x USB 3.0, tempered glass, 4x 120mm ARGB fans
Motherboard: B450M (Wi-Fi + Bluetooth, Gigabit LAN, USB 3.0, Audio)
Power Supply: 600W 80 Plus
CPU: AMD Ryzen 7 5700X, 8 cores / 16 threads, up to 4.60 GHz
CPU Cooler: Tower air cooler
Storage: 1TB M.2 NVMe SSD
RAM: 32GB DDR4 3200MHz
Graphics Card: RX 9060 XT 16GB
Operating System: Windows 11 Professional
>>108735176
amd not ideal for localgen. someone else might know the state of things better, but my money is on "it's more complicated". the rest seems fine, but i usually ask /pdbg/ >>108730477 about that kinda shit
>>108735196
/pcbg/ i mean
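For what it's worth, a quick way to find out whether a given card is usable for local gen is to check whether PyTorch actually sees it; on AMD that means installing the ROCm build of torch (Linux), and ROCm devices still report through the `torch.cuda` API. A minimal sketch, assuming PyTorch is installed (`gpu_status` is a hypothetical helper, not from any of the tools linked above):

```python
def gpu_status():
    """Report whether a PyTorch build can see a usable GPU.

    ROCm builds of torch expose AMD devices through the torch.cuda
    namespace, so this same check works for Nvidia and AMD cards.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # Device 0 name, e.g. "NVIDIA GeForce RTX 4090" or an AMD gfx id
        return "GPU available: " + torch.cuda.get_device_name(0)
    return "no GPU visible to torch (CPU only)"

if __name__ == "__main__":
    print(gpu_status())
```

If this prints the CPU-only line on an AMD box, the card itself may be fine and it's the torch build (CUDA wheel instead of ROCm) that's the problem.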
where'd everyone go :(
to the empty room, salud!
...or raccoons?
https://suno.com/s/YAHh8u3s2oywMCg4
https://www.youtube.com/watch?v=IRn7UHX75mo
Last one from me good night anons
i miss schizo anon