/g/ - Technology

Discussion and Development of Local Image and Video Models

Previous: >>108517229

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
Blessed thread of frenship
>>
At the end of 2025, we received our last batch of decent new models: Flux Klein, Z-Image, and Qwen 2512. 4 months later, where are the finetunes?
>>
>>108520023
lodes (of Chroma) is finetuning Z-Image.
>>
>>108520023
Z image base released in January of this year.
Previous finetroons took up to 8 months.
>>
>mfw Research news

04/03/2026

>Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models
https://arxiv.org/abs/2604.02265

>Can Video Diffusion Models Predict Past Frames? Bidirectional Cycle Consistency for Reversible Interpolation
https://arxiv.org/abs/2604.01700

>Why Instruction-Based Unlearning Fails in Diffusion Models?
https://arxiv.org/abs/2604.01514

>SteerFlow: Steering Rectified Flows for Faithful Inversion-Based Image Editing
https://arxiv.org/abs/2604.01715

>MAR-MAER: Metric-Aware and Ambiguity-Adaptive Autoregressive Image Generation
https://arxiv.org/abs/2604.01864

>Low-Effort Jailbreak Attacks Against Text-to-Image Safety Filters
https://arxiv.org/abs/2604.01888

>HieraVid: Hierarchical Token Pruning for Fast Video Large Language Models
https://arxiv.org/abs/2604.01881

>UniRecGen: Unifying Multi-View 3D Reconstruction and Generation
https://arxiv.org/abs/2604.01479

>Large-scale Codec Avatars: The Unreasonable Effectiveness of Large-scale Avatar Pretraining
https://junxuan-li.github.io/lca

>Semantic Richness or Geometric Reasoning? The Fragility of VLM's Visual Invariance
https://arxiv.org/abs/2604.01848

>Omni123: Exploring 3D Native Foundation Models with Limited 3D Data by Unifying Text to 2D and 3D Generation
https://arxiv.org/abs/2604.02289

>Reinforcing Consistency in Video MLLMs with Structured Rewards
https://arxiv.org/abs/2604.01460

>Model Merging via Data-Free Covariance Estimation
https://arxiv.org/abs/2604.01329

>Steerable Visual Representations
https://arxiv.org/abs/2604.02327

>Attention at Rest Stays at Rest: Breaking Visual Inertia for Cognitive Hallucination Mitigation
https://arxiv.org/abs/2604.01989

>ViT-Explainer: An Interactive Walkthrough of the Vision Transformer Pipeline
https://arxiv.org/abs/2604.02182

>Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models
https://arxiv.org/abs/2511.18123
>>
>>108520023
just stop posting. go for a walk or take a nap.
>>
>>108520053
i was told that 4 months ago. still no finetunes
>>
>>108520063
sdxl took longer than 4 months to tune
>>
>>108520071
SDXL released July 26 2023
NovelAI v3 (sdxl) released November 15 2023
only 3.5 months, and they converted it to vpred too
>>
>>108520101
what does NAI have to do with how long local tunes take
>>
>>108520063
what do you want from a finetune? clearly all the finetunes for zit/zib, flux 2 and qwen don't count.
is there some hyper specific criteria you need met before it counts?
>>
For anon who was looking for help at the end of the last thread
>>108520043
I can't speak as to whether or not your comfy workflow is broken though.
>>
File: ComfyUI_00140_.png (1.64 MB, 1200x1600)
>mfw (mono filament wire)
>>
cozy breas
>>
File: succ_6749.jpg (1000 KB, 2008x1568)
>>108520101
> SDXL released July 26 2023
It's hard to believe it hasn't even been three full years. Feels like an eternity since SDXL came out.
>>
>>108520160
and you don't blame the software itself shitting the bed for months?
>>
>>108520004
Thank you for baking this thread, anon
>>108520014
Thank you for blessing this thread, anon
>>
damn, things have progressed so far since SDXL. We had Dall-E 3, GPT-o, Nano Banana 2, and now per-pixel intelligence with Uni-1. Even ComfyUI and CivitAI grew up, both started only supporting small local models but now feature the best API models the world has to offer. It's great coming back to our roots and remembering where things started with SD1.4, back when we were all stuck using local. Now we can gen 4k outputs with Seedream in under 10 seconds, how crazy is that?
>>
File: HE9wsBHbUAIsLPb.jfif.jpg (62 KB, 768x1344)
>>
>>108520507
Nope. All my gens with anon's chosen artist were busted too, on Forge Neo. Checkpoint didn't know their artist.
>>
>>108520527
going to be interesting to watch it unfold over the next few years, a lot of the smaller companies or companies that don't have brand recognition are going to go tits up.
autodesk, adobe and pixologic will all run their own models that get fully integrated into their applications.
chatgpt, x and google's models are already mspaint for boomers and normies to make memes.
hollywood studios will run their own proprietary models.
ironically the local coomers will be the biggest winners, nothing drives tech adoption like pornography. just look at grok.
>>
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/older_comfy_pre_feb2026/LTX-2%20-%20I2V%20and%20T2V%20Basic%20(Custom%20Audio).json

ltx is pretty great at lip syncing with custom audio: 20s even worked

https://files.catbox.moe/20mzie.mp4
>>
https://huggingface.co/SanDiegoDude/JoyAI-Image-Edit-Safetensors
Did anyone try that new edit model?
>>
>>108520789
see how well ltx works for synced audio? harry potter is saved.

https://files.catbox.moe/kh3rpd.mp4
>>
>>108520925
does chroma give that body shape with just prompt?
>>
>>
is z even trainable? I haven't seen a single lora that can remotely generate something resembling a real penis, yet klein 9b had decent loras that could do it within the first couple of weeks after its release.
>>
>>108520904
>>108520925
>>108520935
>>108520949
>>108520955
>>108520961
I've got a friend that was asking for these images without the bikini on. Could you please provide these for scientific purposes?
>>
>>108521097
https://civitai.com/search/models?baseModel=ZImageBase&sortBy=models_v9&query=penis
>>
>>108520904
Too old btw
>>
File: ComfyUI_00227_.png (3.65 MB, 1248x1824)
>>
File: HEpqyd-aEAAX8Wt.jfif.jpg (475 KB, 1584x2816)
>>
Blogpost time.
Experimenting with anima lora training again. I tried training with diffusion-pipe two months ago, but got bad results and the tool was imo quite bad. Using good old sd-scripts now and getting better results, though it's not there yet.
I am experimenting with using a regularization dataset for this style lora. And the results are mixed. Setting aside the fact that it takes twice as long as training without one (and anima is already slower to train than sdxl), when it works it seems to improve the results a bit. But often it makes the lora stop working: it decides to draw an unrelated style (seemingly a mix of stuff from the reg dataset) for some prompts (I couldn't find a pattern; all of them have the TW). And this persists across seeds of the prompt and epochs of the lora. Very weird.
Maybe a larger regularization dataset might help. I am using only 40 images (80 in the training dataset; the reg dataset is automatically repeated so there is no mismatch, however).
I went with a middle-of-the-road --prior_loss_weight value of 0.66. I am skeptical that a bigger value would help, considering the problems I am facing, but maybe it performs better with lower values like 0.25-0.33?
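For anyone wondering what that flag actually controls: prior-preservation training just down-weights the loss contribution of regularization samples relative to training samples. A minimal sketch of the weighting in plain Python (not sd-scripts' actual implementation; the function name is made up):

```python
def weighted_training_loss(per_sample_losses, is_reg, prior_loss_weight=0.66):
    # Combine per-sample losses into one scalar. is_reg[i] marks sample i
    # as coming from the regularization set; those samples contribute
    # prior_loss_weight instead of the full 1.0 weight.
    weighted = [
        loss * (prior_loss_weight if reg else 1.0)
        for loss, reg in zip(per_sample_losses, is_reg)
    ]
    return sum(weighted) / len(weighted)
```

So at 0.66 a reg sample pulls the weights about two-thirds as hard as a training sample; dropping toward 0.25-0.33 turns the reg data into more of a gentle anchor than a second training target.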
Currently using NLP captions for training, it might perform differently with tag captions, or it might not.
The trigger word for my lora is "x style art.". There is no equivalent to this in the regularization dataset captions, so maybe adding "y style art", "z style art" to those captions might help prevent unrelated styles from showing up?
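If you try the dummy-trigger idea, a throwaway script like this saves editing the reg captions by hand (directory layout and trigger phrase are placeholders; sd-scripts-style sidecar .txt captions assumed):

```python
from pathlib import Path

def prepend_trigger(caption_dir, trigger):
    # Prepend a trigger phrase to every .txt caption in the
    # directory that doesn't already start with it.
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(trigger):
            txt.write_text(f"{trigger}, {caption}\n", encoding="utf-8")
```

Running it twice is harmless since captions that already carry the trigger are skipped.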
1/2
>>
>>108521266
Maybe I am approaching this with the wrong philosophy? Regularization images, I believe, are supposed to be different members of the same "class" as whatever I am training, so that the training doesn't fry the model's knowledge, and also serve as general-purpose help against overfitting. Given that I am training a style lora, I grabbed artworks from different artists. Maybe something else, like images of concepts underrepresented in my dataset, would work better? Or would this backfire and make my lora completely less able to draw regularization data content in the correct style?
There is so little decent resources for this shit man. Trial and error shots in the dark until something works.
Also a bunch of other shit I want to test, like the network_args "train_llm_adapter=True" (might fry the lora but seems worth testing) or shift values (I've attempted to train a few flow loras in the past, though never experimented with this in detail. Currently using 3.0)
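For reference on what the shift value does, assuming anima uses the SD3-style timestep shift that most recent flow models borrow: it remaps every noise level sigma in [0, 1] so the schedule spends more steps at high noise. A quick sketch:

```python
def shift_sigma(sigma, shift=3.0):
    # SD3-style timestep shift: sigma' = s*sigma / (1 + (s-1)*sigma).
    # shift > 1 skews the schedule toward high noise; shift == 1 is the identity.
    return (shift * sigma) / (1.0 + (shift - 1.0) * sigma)
```

With shift=3.0 the midpoint sigma=0.5 maps to 0.75, so the model trains on noisier latents on average; the endpoints 0 and 1 are unchanged.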
Thanks for reading my blog. Open to suggestions.
2/2
>>
I'm so hungry for a pizza rn
>>
>>108521272
*make my lora less able to draw regularization data content
*There are so few decent resources for this shit man
I re-wrote this part and left some Frankenstein sentences from the previous draft, my bad.
>>
>>108521266
>>108521272
small models just suck at learning texture, they all have this overly smooth look to them. the same happened with SDXL. try the exact same thing on Qwen and it will work fine
>>
>>108520003
ez bait
>>
>>108521266
>>108521272
Low and slow is the name of the game.
>The trigger word for my lora is "x style art."
Why not use the format the model itself uses for artists, i.e. @artist?
>>
>>108521266
>Currently using NLP captions for training, it might perform differently with tag captions, or it might not.
tdrussell explained how he captioned his dataset on the hf repo so you should probably follow that
>>
>>108521328
>>108521339
downgrade desu
>>
>>108521266
it looks like you're frying the fuck out of it


