/g/ - Technology
Discussion and Development of Local Image and Video Models

Previous: >>108517229

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP
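ComfyUI also exposes an HTTP API on its default port (8188), so you can queue API-format workflow graphs without touching the browser UI. A minimal stdlib sketch of wrapping a graph for the `POST /prompt` endpoint (the host/port and the `client_id` value here are assumptions for illustration):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI listen address

def build_prompt_request(graph: dict, client_id: str) -> urllib.request.Request:
    """Wrap an API-format workflow graph (node-id -> node dict) as a
    request for ComfyUI's POST /prompt endpoint."""
    payload = json.dumps({"prompt": graph, "client_id": client_id}).encode()
    return urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# To actually queue a gen (needs a running ComfyUI instance):
# urllib.request.urlopen(build_prompt_request(graph, "anon"))
```

Progress and results can then be polled from `GET /history`; the graph itself is what you get from "Save (API Format)" in the Comfy menu.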

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage
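On sharing metadata: catbox keeps uploads byte-for-byte, which is why workflows embedded in PNG tEXt chunks survive there but die on boards that re-encode images. A stdlib-only sketch of pulling those chunks out of a raw PNG (the `prompt`/`workflow` key names are ComfyUI's convention; other UIs use different keys):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt key/value pairs from raw PNG bytes.

    Each PNG chunk is: 4-byte length, 4-byte type, body, 4-byte CRC.
    Gen metadata usually lives in tEXt chunks near the top of the file.
    """
    assert data[:8] == PNG_SIG, "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + body + CRC
        if ctype == b"IEND":
            break
    return out
```

Feed it `open("gen.png", "rb").read()` and check for a `workflow` or `prompt` key before asking anons whether metadata survived.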

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
Blessed thread of frenship
>>
At the end of 2025, we received our last batch of decent new models: Flux Klein, Z-Image, and Qwen 2512. 4 months later, where are the finetunes?
>>
>>108520023
lodestones (chroma) is finetuning Z-Image.
>>
>>108520023
Z image base released in January of this year.
Previous finetroons took up to 8 months.
>>
>mfw Research news

04/03/2026

>Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models
https://arxiv.org/abs/2604.02265

>Can Video Diffusion Models Predict Past Frames? Bidirectional Cycle Consistency for Reversible Interpolation
https://arxiv.org/abs/2604.01700

>Why Instruction-Based Unlearning Fails in Diffusion Models?
https://arxiv.org/abs/2604.01514

>SteerFlow: Steering Rectified Flows for Faithful Inversion-Based Image Editing
https://arxiv.org/abs/2604.01715

>MAR-MAER: Metric-Aware and Ambiguity-Adaptive Autoregressive Image Generation
https://arxiv.org/abs/2604.01864

>Low-Effort Jailbreak Attacks Against Text-to-Image Safety Filters
https://arxiv.org/abs/2604.01888

>HieraVid: Hierarchical Token Pruning for Fast Video Large Language Models
https://arxiv.org/abs/2604.01881

>UniRecGen: Unifying Multi-View 3D Reconstruction and Generation
https://arxiv.org/abs/2604.01479

>Large-scale Codec Avatars: The Unreasonable Effectiveness of Large-scale Avatar Pretraining
https://junxuan-li.github.io/lca

>Semantic Richness or Geometric Reasoning? The Fragility of VLM's Visual Invariance
https://arxiv.org/abs/2604.01848

>Omni123: Exploring 3D Native Foundation Models with Limited 3D Data by Unifying Text to 2D and 3D Generation
https://arxiv.org/abs/2604.02289

>Reinforcing Consistency in Video MLLMs with Structured Rewards
https://arxiv.org/abs/2604.01460

>Model Merging via Data-Free Covariance Estimation
https://arxiv.org/abs/2604.01329

>Steerable Visual Representations
https://arxiv.org/abs/2604.02327

>Attention at Rest Stays at Rest: Breaking Visual Inertia for Cognitive Hallucination Mitigation
https://arxiv.org/abs/2604.01989

>ViT-Explainer: An Interactive Walkthrough of the Vision Transformer Pipeline
https://arxiv.org/abs/2604.02182

>Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models
https://arxiv.org/abs/2511.18123
>>
>>108520023
just stop posting. go for a walk or take a nap.
>>
>>108520053
i was told that 4 months ago. still no finetunes
>>
>>108520063
sdxl took longer than 4 months to tune
>>
>>108520071
SDXL released July 26 2023
NovelAI v3 (sdxl) released November 15 2023
only 3.5 months, and they converted it to vpred too
>>
>>108520101
what does NAI have to do with how long local tunes take
>>
>>108520063
what do you want from a finetune? clearly all the finetunes for zit/zib, flux 2 and qwen don't count.
is there some hyper specific criteria you need met before it counts?
>>
For anon who was looking for help at the end of the last thread
>>108520043
I can't speak as to whether or not your comfy workflow is broken though.
>>
File: ComfyUI_00140_.png (1.64 MB, 1200x1600)
>mfw (mono filament wire)
>>
cozy breas
>>
File: succ_6749.jpg (1000 KB, 2008x1568)
>>108520101
> SDXL released July 26 2023
It's hard to believe it hasn't even been three full years. Feels like an eternity since SDXL came out.
>>
>>108520160
and you don't blame the software itself shitting the bed for months?
>>
>>108520004
Thank you for baking this thread, anon
>>108520014
Thank you for blessing this thread, anon
>>
damn, things have progressed so far since SDXL. We had Dall-E 3, GPT-o, Nano Banana 2, and now per-pixel intelligence with Uni-1. Even ComfyUI and CivitAI grew up, both started only supporting small local models but now feature the best API models the world has to offer. It's great coming back to our roots and remembering where things started with SD1.4, back when we were all stuck using local. Now we can gen 4k outputs with Seedream in under 10 seconds, how crazy is that?
>>
File: HE9wsBHbUAIsLPb.jfif.jpg (62 KB, 768x1344)
>>
>>108520507
Nope. All my gens with anon's chosen artist were busted too, on Forge Neo. Checkpoint didn't know their artist.
>>
>>108520527
going to be interesting to watch it unfold over the next few years, a lot of the smaller companies or companies that don't have brand recognition are going to go tits up.
autodesk, adobe and pixologic will all run their own models that get fully integrated into their applications.
chatgpt, x and google's models are already mspaint for boomers and normies to make memes.
hollywood studios will run their own proprietary models.
ironically the local coomers will be the biggest winners, nothing drives tech adoption like pornography. just look at grok.
>>
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/older_comfy_pre_feb2026/LTX-2%20-%20I2V%20and%20T2V%20Basic%20(Custom%20Audio).json

ltx is pretty great at lip syncing with custom audio: even a 20s clip worked

https://files.catbox.moe/20mzie.mp4
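If you want to sanity-check a workflow JSON like that before loading it into Comfy, a rough sketch (field names assumed from common ComfyUI exports: the API export is a dict of node-id -> node with a `class_type`, the UI export has a top-level `nodes` list — verify against your own files):

```python
import json
from collections import Counter

def summarize_workflow(path: str) -> Counter:
    """Count node types in a ComfyUI workflow JSON, handling both
    the API export and the UI export layouts."""
    with open(path) as f:
        wf = json.load(f)
    if isinstance(wf, dict) and "nodes" in wf:  # UI export
        types = [n.get("type", "?") for n in wf["nodes"]]
    else:                                       # API export
        types = [n.get("class_type", "?") for n in wf.values()
                 if isinstance(n, dict)]
    return Counter(types)
```

Any unfamiliar type in the output is a custom node you'd need installed before the workflow will run instead of erroring out with missing-node popups.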
>>
https://huggingface.co/SanDiegoDude/JoyAI-Image-Edit-Safetensors
Did anyone try that new edit model?
>>
>>108520789
see how well ltx works for synced audio? harry potter is saved.

https://files.catbox.moe/kh3rpd.mp4
