/g/ - Technology



Discussion and Development of Local Image and Video Models

Previous: >>108558395

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>mfw Resource news

04/09/2026

>MAR-GRPO: Stabilized GRPO for AR-diffusion Hybrid Image Generation
https://github.com/AMAP-ML/mar-grpo

>HybridScorer: Score, sort, and cut large sets down fast with GPU-accelerated AI review
https://github.com/vangel76/HybridScorer

04/08/2026

>OrthoFuse: Training-free Riemannian Fusion of Orthogonal Style-Concept Adapters for Diffusion Models
https://github.com/ControlGenAI/OrthoFuse

>MIRAGE: Benchmarking and Aligning Multi-Instance Image Editing
https://github.com/ZiqianLiu666/MIRAGE

>Few-Shot Semantic Segmentation Meets SAM3
https://github.com/WongKinYiu/FSS-SAM3

>PoM: A Linear-Time Replacement for Attention with the Polynomial Mixer
https://github.com/davidpicard/pom

>RS Nodes for ComfyUI: Comprehensive custom node pack focused on LTXV audio-video generation, LoRA training and post-processing
https://github.com/richservo/rs-nodes

>FLUX.2 Small Decoder: Distilled VAE decoder for faster decoding and lower VRAM usage
https://huggingface.co/black-forest-labs/FLUX.2-small-decoder

>Nvidia snaps up AI chip packaging capacity as TSMC expands in U.S.
https://www.cnbc.com/2026/04/08/tsmc-nvidia-advanced-packaging-intel.html

04/07/2026

>Anima preview3 released
https://huggingface.co/circlestone-labs/Anima#preview3

>FrameFusion Image Interpolation: Compact image interpolation model for generating in-between frames
https://github.com/BurguerJohn/FrameFusion-Model

>An Inside Look at OpenAI and Anthropic’s Finances Ahead of Their IPOs
https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9

>PrismML debuts energy-sipping 1-bit LLM in bid to free AI from the cloud
https://www.theregister.com/2026/04/04/prismml_1bit_llm

>ComfyUI Hires Fix Ultra - All in One
https://github.com/ThetaCursed/ComfyUI-HiresFix-Ultra-AllInOne

>ATSS: Detecting AI-Generated Videos via Anomalous Temporal Self-Similarity
https://github.com/hwang-cs-ime/ATSS
>>
>mfw Research news

04/08/2026

>GenLCA: 3D Diffusion for Full-Body Avatars from In-the-Wild Videos
https://onethousandwu.com/GenLCA-Page

>Grounded Forcing: Bridging Time-Independent Semantics and Proximal Dynamics in Autoregressive Video Synthesis
https://arxiv.org/abs/2604.06939

>Evolution of Video Generative Foundations
https://arxiv.org/abs/2604.06339

>VersaVogue: Visual Expert Orchestration and Preference Alignment for Unified Fashion Synthesis
https://arxiv.org/abs/2604.07210

>Controllable Generative Video Compression
https://arxiv.org/abs/2604.06655

>Not all tokens contribute equally to diffusion learning
https://arxiv.org/abs/2604.07026

>FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching
https://arxiv.org/abs/2604.06757

>Holistic Optimal Label Selection for Robust Prompt Learning under Partial Labels
https://arxiv.org/abs/2604.06614

>Towards Robust Content Watermarking Against Removal and Forgery Attacks
https://arxiv.org/abs/2604.06662

>PhyEdit: Towards Real-World Object Manipulation via Physically-Grounded Image Editing
https://arxiv.org/abs/2604.07230

>Noise Constrained Diffusion (NC-Diffusion) Framework for High Fidelity Image Compression
https://arxiv.org/abs/2604.06568

>RefineAnything: Multimodal Region-Specific Refinement for Perfect Local Details
https://limuloo.github.io/RefineAnything

>Visual prompting reimagined: The power of the Activation Prompts
https://arxiv.org/abs/2604.06440

>MoRight: Motion Control Done Right
https://research.nvidia.com/labs/sil/projects/moright

>Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM
https://arxiv.org/abs/2604.06832

>DesigNet: Learning to Draw Vector Graphics as Designers Do
https://arxiv.org/abs/2604.06494

>FP4 Explore, BF16 Train: Diffusion Reinforcement Learning via Efficient Rollout Scaling
https://arxiv.org/abs/2604.06916

>When to Call an Apple Red: Humans Follow Introspective Rules, VLMs Don't
https://arxiv.org/abs/2604.06422
>>
File: 1765189754590886.png (256 KB, 872x775)
>>108562783
>even a ComfyUi employee fell for it
lol, lmao even
>>
>>108563499
i don't get it
>>
File: file.png (495 KB, 905x722)
>>108563499
now I understand why it's called HappyHorse, they all have horse faces kek
>>
>>108563499
>literal who
>>
>>108563514
So that's the power of API?
>>
>>108563505
>new model under the pseudonym 'happyhorse' gets teased on arenas, beats top API model seedance 2
>news spreads of this new model, people wondering if it's the new google VEO, others speculating it's china because of the name
>jeet vibe-codes a generic pop-up throwaway website for 'happy-horse ai' claiming it is SOTA, 15b parameters, and will be locally released
>the exact same grift that happened with the 'mogao' model that turned out to be seedream api
>localbrowns itt fall for it >>108555676
>it spreads, chinese anime man falls for it (picrel)
>gets reposted to reddit, redditoids fall for it
>kijai shuts him down and calls it fake, which it obviously is
>doesn't matter, news spreads and now everyone thinks a SOTA video model will be released locally within 48 hours
the backlash will be funny when it releases as API-only and the comments get flooded with outrage, even though whatever company behind it never even claimed it would be local. though it's still deserved, as every model should be local (even if it's fun laughing at localkeks)
>>
File: this.png (257 KB, 562x416)
>>108563557
>the backlash will be funny when it releases as API-only and the comments get flooded with outrage
based, at least it had the effect to create some hate to an API model, can't wait to see the comments
>>
File: happyhorse05.png (92 KB, 875x808)
>>108563567
it's kind of sad to see so many people get hyped for nothing, when the reality is we won't receive a model like this for at least another full year
>>
File: 1753632922542443.png (424 KB, 640x360)
>>108563573
>we won't receive a model like this for at least another full year
bold to assume we'll ever get something better, it's obvious now that Alibaba has abandoned us, do you even realize that we've been waiting for Z-image edit for more than 4 months now? Pretty fair to say this is all over, chinese culture won
>>
whats with all these xitter screenshots and old memes
>>
>>108563591
none of my images made it in the fagollage too. sad! :(
>>
>>108563573
>it's an open model so the benchmarks are magically true!!
kek
>>
>>108563591
we have a new model we have to fud
it isn't released yet so we're trying to get ahead of it
>>
>>108563594
how does that relate to what was posted
>>
>>108563602
thread being cringe :)
>>
>>108563601
>we have a new model we have to fud
I'm ok with fudding closed source models desu
>>
>>108563605
you seem upset
>>
File: 1749992175112340.jpg (923 KB, 1536x1536)
>muh fagollage
lmao
>>
>>108563557
the model looks like complete slop, i don't think anyone will care one way or the other.
i guess it might be ok if it is an API model and it can hold character likeness properly, bytedance kind of killed seedance 2.0 over that shit.
>>
>>108563617
>i don't think anyone will care one way or the other.
if it was local it would've been hyped though, it's way better than LTX 2.3 and Wan 2.2
>>
>dont care about sneederboards
>never tricked into getting hyped for nothing
Join me anon
>>
>>108563557
Source that it's fake?
>>
File: 1748109295030791.png (281 KB, 947x899)
>>108563640
KJGod said it
>>
File: FAKE.png (99 KB, 1198x933)
>>108563640
>>108563651
https://github.com/brooks376/Happy-Horse-1.0/issues
kek
>>
File: take this my kind sir.png (89 KB, 333x498)
>>108563655
https://github.com/brooks376/Happy-Horse-1.0/issues/3#issue-4225521889
>The author is using a deceptive title and README to exploit the open-source community's trust.
>The open-source community is a place for developers to share and collaborate, not a dumping ground for your vanity metrics or clickbait schemes.
>>
File: ComfyUI_Anima_00038_.png (1.68 MB, 1024x1024)
ACEStep 1.5 XL Turbo. I am speechless, these are all first shots.

J-Core/Ballad
https://vocaroo.com/12JLSQwAuKIH

Electronic, Hatsune Miku included in prompt-
https://vocaroo.com/1l5RndzPCRbL

Country-
https://vocaroo.com/104rQ4A0Ux62

Gabber-
https://vocaroo.com/1b9C9ss8CTh9

Prompts are all enhanced with Gemini's help. Lyric alignment tends to be perfect now. We're now at Udio/Suno v5 territory
>>
File: __00037_.png (1.59 MB, 1296x832)
>>
>>108563674
>The open-source community is a place for developers to share and collaborate, not a dumping ground for your vanity metrics or clickbait schemes.
lmaaaao
tell that to cumfart please
>>
>>108563679
>Electronic, Hatsune Miku included in prompt-
it doesn't sound like miku at all, and the sound is still metalic as fuck, why did you use the turbo model though? the sft is supposed to have better quality right?
>>
>>108563655
fingers crossed it's just an api model.
an open model on par with seedance would be the death of /ldg/
>>
>Alibaba
>Trusting this lab to release anything SOTA when they never released Qwen Image 2 despite publishing its parameter count, and top researchers from the team departed shortly after the CEO said he's unhappy with the state of open source

Unless BFL steps in and gives us a good video model, they have no reason to give us one themselves. There's no competition and no reason to release.
>>
>>108563693
>an open model on par with seedance would be the death of /ldg/
nothing is close to seedance, I've seen some videos from HappyHorse they are mid as fuck
https://youtu.be/mmk9C6bkV_c?t=161
>>
>>108563640
>saaars that is fake???????
this is the website. if you believe this is legit, you're brown: https://happyhorse-ai.com
the github repo with the 'source code' is full of made up bullshit as well. the most obvious tell is that they call it 'happy horse' when artificialanalysis lets companies use code-names for models (mogao = seedream, blueberry/strawberry were flux-2). they admit directly it's a pseudonym:
>We’ve added a new pseudonymous video model to our Text to Video and Image to Video Arenas.‘HappyHorse-1.0’
this is so that if the model turns out to be dogshit, the company behind it can just silently pull it without the whole world knowing that openai/grok's latest api model is a complete flop. saars are using this to their advantage to make fake websites surrounding these codenames to get people to enter their login information or pay crypto for 'credits' to use it
>>
>>108563702
seedance today isn't the "hollywood killer" that it was when they first showed it off, they bricked the model with their face detection slop.
>>
>>108563688
Definitely sounds like her, at least in style; the faded sounds in that particular song were prompted for (glitchy synths were included in many parts, so it glitched out her voice). That is roughly what you'd get out of a cloud model like Suno after prompting for her, I'd imagine it gets better with LoRAs.

>why did you use the turbo model though? the sft is supposed to have better quality right?

I always test Turbo first especially since on S it had better creativity, I'll try SFT/Base later. So far what this model has outputted puts it way above S SFT though.
>>
File: 1775573203286570.png (1.01 MB, 1950x1405)
>>108563729
obviously, hollywood threatened them with lawsuits, the """based""" chinks bent the knee to the jews
>>
>>108563739
>this model has outputted puts it way above S SFT though.
>I'll try SFT/Base later.
how do you know that if you haven't tested SFT yet? what?
>>
>>108563750
local diffusion?
>>
File: 9679858348383.jpg (1.1 MB, 1664x2432)
>>
>>108563476
Is there anything that rivals nano-banana or similar in terms of being able to just describe what you want to see, rather than a whole "prompt engineer" slopfest? I want to connect SillyTavern to ComfyUI and gemma 4 likes to generate intricate descriptions of settings
>>
File: LMAOOOOO.gif (2.77 MB, 400x253)
>>108563769
>Is there anything that rivals nano-banana or similar in terms of being able to just describe what you want to see, rather than a whole "prompt engineer" slopfest?
IS THIS NIGGA SERIOUS??
>>
>>108563769
no, not anything close really
>>108563779
yeah it's kinda grim
>>
File: michael-2354825030.jpg (44 KB, 600x600)
>>108563779
>use the word banana
>a nigger chimps out at me
I don't know what I was expecting
>>
>>108563769
It is common to use an LLM between yourself and an image model, yes.
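A minimal sketch of that "LLM in the middle" pattern, assuming a local OpenAI-compatible chat endpoint (the kind llama.cpp's server and similar backends expose). The endpoint URL, model name, and instruction wording here are placeholders, not anything SillyTavern actually ships:

```python
import json

# Wrap a terse scene description in a rewrite instruction before it is
# sent to a local LLM; the LLM's answer is what gets handed to the image
# model. EXPAND_INSTRUCTION is an illustrative assumption, not a preset.
EXPAND_INSTRUCTION = (
    "Rewrite the following scene description as a detailed image-generation "
    "prompt: subject, setting, lighting, camera, style tags. One paragraph."
)

def build_expand_request(description, model="local-llm"):
    """Build the chat-completions payload that asks the LLM to expand a prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": EXPAND_INSTRUCTION},
            {"role": "user", "content": description},
        ],
        "temperature": 0.7,
    }

payload = build_expand_request("1girl, rainy neon street at night")
print(json.dumps(payload, indent=2))
# The actual call would then be something like:
#   requests.post("http://127.0.0.1:8080/v1/chat/completions", json=payload)
```

Whatever frontend you use, the shape is the same: short human input in, sloptastic detailed prompt out, image model never sees your two-word description.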
>>
File: 1763748844098575.png (694 KB, 1199x675)
>>108563769
Nano Banana isn't even the best image model anymore, I think GPT-Image 2 dethroned it
https://xcancel.com/WolfRiccardo/status/2041573176681918728
>>
local diffusion?
>>
>>108563743
I mean regular SFT, not the new XL.
>>
so close
https://huggingface.co/SG161222/SPARK.Chroma_v1
>>
File: 34708937240872480804762.jpg (212 KB, 1672x941)
>>108563798
do you get paid to shill chatgpt here? you've been spamming this slop for days.
>>
>>108563798
oh cool. How do I run the publicly released weights of GPT-Image 2? Is a p40 good enough or do I need two?
>>
>>108563807
The v1 was done two weeks ago. Can anyone make a q8 goof?
>>
>>108563769
Qwen Image 2, but it's closed and has only been rumored to open (doubt it). Joy AI Image Edit seems like a viable alternative to Qwen; in fact it's better at prompt following than everything else we have, but Comfy hasn't added support for it yet.
>>
>>108563809
>>108563814
look at those localkeks seething and jealous that we get quality toys while all you can have is cucked plastic slop keek
>>
>>108563814
ComfyUI has Partner Nodes you can check out, but it's not added there yet. Stay tuned, just check the OP
>>
>>108563818
>only has been rumored to open (doubt it)
that chink insider used to be reliable, now he can't stop missing lol >>108563557
>>
Very organic
>>
>>108563823
For some reason I'm willing to pay thousands to setup a 6x 3090 rig with 512 GB of DDR4 (before it went nuts) but it feels like a waste of money to give ((cloud providers)) $0.01 per image.
>>
>>108563831
the thing is that being rich is useless on local diffusion models, we don't have a giant local model that would rival the best API models, it doesn't exist
>>
>>108563819
>quality toys
larping as a white person driving a car is peak entertainment to an indian.
>>
File: remind me of this yeah.png (275 KB, 640x428)
>>108563849
I'm not the one who made that image, I'm showing you the realism and you focus on something completely unrelated like the color of the skin, reminds me of something...
>>
File: deepdream2-1346427197.jpg (153 KB, 736x413)
>>108563847
I have hope. It wasn't too long ago I was thirsting over Deep Dream and imagining how cool it'd be to run that. If local is only as good as SOTA was 6 months ago (more like a year for image gen right now) it's worth it just for the freedom and FOSS spirit.
>>
how's the fudding proceeding friends? did we manage to gaslight anyone?
>>
feels like artist tags go to shit when you try to introduce sex tags on anima
>>
>>108563911
A1111 is so based, he should come back and put ComfyAPI to his place
>>
>>108563915
this is true in my experience as well. every tag is a style tag really. the more you add, the further it drifts
>>
>>108563873
the "moon" being a gen of a still frame from a blurry 720p bodycam, not even a video because openai doesn't do that anymore.
>>
>>108563896
they do it for free even
>>
>>108563923
sdwebui had its time (in the corpo I currently work for I actually coded a UI similar to it for internal usage for promo material creation), but I elected to use comfy as the backend because it's just so flexible to work with
>>
>>108564033
this so much. comfyui made by comfyorg is such a well-coded, convenient and flexible user interface, nothing comes even close. i'd pay for it if I could, right fellow customers?
>>
File: kek.png (372 KB, 449x401)
>>108564039
>i'd pay for it if I could
are you that broke ani? I thought you had secured some investments in japan? maybe you should come back to comfy and ask him for the 1 million grant again, you're talking about him in such a nice way on /ldg/ I'm sure he'll consider giving some of those bits to such a good non-treasonous friend
>>
>>108564039
>frontend
I specifically mentioned I use its backend, fucking retard.
Noodles scare the normal employees since they have cattle level intelligence.
>>
>>108564080
It's cool that there are people in this world for whom lines are too hard to understand. I love living in a world where that is true
>>
>>108564080
>>108564129
>move your mouse with your feet, now!
>what do you mean you don't want to deal with that because there's more elegant ways to manipulate a mouse, like hands for example...
>just say that you're too retarded to do it
ComfyAPI shills in a nutshell
>>
Hey /ldg/,

I’ve been working on Spellcaster, an open-source plugin that seamlessly integrates 30+ AI tools directly into GIMP and Darktable. It uses ComfyUI as the backend engine, running entirely locally on your own GPU. It is essentially the GIMP version of what you guys are doing/using.

My goal was to bring Photoshop-level AI features to open-source editors, without the steep learning curve, cloud requirements, or subscription fees.

What it does right inside your editor:

Inpaint & Outpaint: Generative fill to change objects or extend your canvas.

Enhance & Fix: 1-click AI upscaling, background removal, face restoration, and object erasure.

Relighting & Video: Change lighting direction on portraits (IC-Light) or turn still layers into short video clips (Wan 2.2).

One-Click Install: The installer handles all the backend complexity (detects GPU, downloads models, sets up ComfyUI, and links to your editor).

https://github.com/laboratoiresonore/spellcaster/blob/main/README.md

I am looking for collaborators / feedback. There are a couple of advanced features that I'd like to implement next:
-full LTX2 support
-parsing any workflow method (at the moment, the script is designed with noobs in mind but comfyui veterans will likely want to use their own workflows and special sauces)
-making the "studio" system as advanced as possible
-clever refactoring and reorganizing the script
-better theming

The script is pretty recent but it's good enough that it is all I personally use, instead of comfy
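For anyone curious what "uses ComfyUI as the backend engine" looks like in practice: it mostly means POSTing workflow graphs to ComfyUI's local HTTP API. A minimal sketch, assuming a stock instance on the default port; the one-node graph is illustrative only (real graphs come from the editor's "Save (API Format)" export and reference actual model files):

```python
import json
import uuid
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address

def build_queue_payload(workflow, client_id=None):
    """Wrap a workflow graph in the envelope the /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id or uuid.uuid4().hex}

def queue_workflow(workflow):
    """POST the workflow to a locally running ComfyUI instance."""
    data = json.dumps(build_queue_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative stub graph (a single checkpoint-loader node):
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "model.safetensors"},
    }
}
print(json.dumps(build_queue_payload(workflow, client_id="demo"), indent=2))
```

A plugin like this just swaps values into a pre-exported graph (prompt text, mask, image path) and queues it; results come back via ComfyUI's history/websocket endpoints.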
>>
>>108562337
> >Nobody here talks about ACEStep 1.5 XL which just dropped

> https://ace-step.github.io/ace-step-v1.5.github.io/#XLDemos

> It's a different class of model bros, I'm not hearing any slop...
lol remember when locals have claiming that acestep is at suno/udio level of quality?
what level is now then?
>>
>>108564129
if you give the corpo drone a noodle gui... he's not going to understand, that's just how it is. You'd expect people here to be more tech savvy but even subhuman retards found their way here instead of staying in plebbit sooo whatever.
>>
>>108564146
How does it differ from Krita? If you keep the deeper AI controls like loras and sampler settings on the surface and not buried in menus like krita, it may be worth considering.
>>
>>108564154
>remember when locals have claiming that acestep is at suno/udio level of quality?
ESL gibberish crying about local. it would be funny if it wasn't so goddamn always.
>>
listen up bro, if you don't like comfyorg product you are barely human and don't really deserve to live, alright? comfyorg will shit in our mouths because that's what they like and you must enjoy it goyim
>>
How can I get videos with lipsync? I've seen some around but I don't know which model can generate them. Is it an online service only? Which one?
>>
>>108564146
Looks cool, thanks. Any demonstration video?
>>
>>108564247
add lipsync to an existing video? generate videos with audio and lipsync?
>>
File: 94685637372.jpg (1.43 MB, 1664x2432)
>>
>>108563679
For the anon who mentioned instrumentals last thread, here's Spaghetti Western
https://vocaroo.com/1i8vvdmjVnDD

This is a genre the previous version could not do at all, and even its base model struggled with, that's a one shot from Turbo. Only issue I've noticed so far is it speaks some stuff in parentheses out loud in the midst of instrumental, which is not hard to fix in post processing and perhaps SFT does better here.

>>108564154
Well, with LoRAs the previous version absolutely was. However, LoRA training to improve everything is tedious. This one doesn't need LoRAs and has much better musical knowledge out of the box, which is now for the first time competitive with Udio/Suno at just 4B.
>>
>>108564260
1) add lipsynced voice generation to an existing video.
2) Generate videos with audio or lipsync (if 1 isn't possible)
I'm curious about both options; from how often they show up in the usual tiktok feed, they seem pretty fast to make. Generating with sound is a new thing for me.
>>
we need more comfyui product discussion itt. come on bitch, more engagement
>>
test
>>
>>108564262
>>108563755
Ummmmmmmmmm... prompt?
>>
Grok is much better with lyrics than everything else kek
https://vocaroo.com/1gBbejpLw6s9
>>
>tdrussel
>diffusion pipe: initial commit 2 years ago
But that didn't trigger FUD. It was Anima that put a target on him. If we follow the money, who could feel threatened by Anima but not diffusion-pipe? I think the answer is NovelAI. They're funding the troll farm.
>>
>>108564493
Nah *someone* is just very mad about comfy's decision
>>
>>108564310
>Only issue I've noticed so far is it speaks some stuff in parentheses out loud in the midst of instrumental

I see what the issue is kek, it should all be in brackets instead, parentheses are only for whispers and background noises.
>>
>>108564598
that someone knows what comfy truly wants, he's known him for a long time after all!
>>
File: 1753954094570315.png (1.25 MB, 2900x1501)
the small gemma 4 models are so ass on vision task, it's a shame they went for a smaller mmproj relative to the 26 and 31b models
>>
File: 1762691525124100.jpg (771 KB, 1328x1840)
ayo
>>108564685
yeah theyre cooked.
even the MOE is shit btw
I went back to qwen3vl
>>
>>108564685
Use gemini 3.1 API instead of kekked localslop
>>
>>108563769
Yeah, Flux Klein.
>>
>>108564727
Klein is not NBP tier with prompts or text because it's not autoregressive. It's very impressive for what it is, and it probably doesn't get any better than what Klein does with prompts for its particular architecture, but it's still not quite there yet.
>>
https://civitai.com/articles/28368/chenkinnoob-xl-v05-is-coming-soon

We are thrilled to announce that ChenkinNoob-XL-V0.5, the direct successor to V0.2, has completed its training phase and will be officially released on April 10th (Beijing Time)!

After months of architectural refactoring and dataset expansion, V0.5 is no longer just a "gacha toy." We have pushed it to industrial-grade productivity standards.

What to Expect in V0.5:

Massive Dataset Leap: Built directly upon V0.2, we have added 2.17 million high-quality, open-source game-related images. The total training dataset now reaches ~12 million images, effortlessly capturing the latest anime art styles and popular characters.

Pro-Level Aesthetics: Built with industrial-grade standards, V0.5 fundamentally eliminates the cheap "AI-generated look," ensuring top-tier composition, lighting, and native anime aesthetics.

A Mysterious Ecosystem Addition: Alongside the V0.5 base model, we will also be releasing a highly capable new model within the ckn ecosystem. What exactly is it? We'll leave that as a surprise for you to guess until release day!

The wait is almost over. Get ready for the next evolution of anime AI generation.

Stay tuned for April 10th!
>>
>>108564801
>sdxl
I sleep
>>
>>108564801
Stop tuning sdxl jesus christ
>>
>>108564801
>SDXL still gets updates in 2026

Just why?
>>
>>108564801
tranimakeks seething. SDXL won.
>>
File: REEEEE.png (52 KB, 220x220)
>>108564801
>SDXL
WHY?? We now have Z-image base and Klein 4b, what is wrong with youuu??
>>
File: mogged.jpg (2.27 MB, 7508x1867)
>>108564685
>>108564690
kek
>>
>>108564829
You unironically don't understand chinese culture. They don't want new models, they just want more SDXL slop because it's fast and that's where the millions of character loras are.
>>
File: 1772090458692516.jpg (1.06 MB, 2560x1753)
>>108564862
moral of the story, stick with gemini if you want to caption images lol
>>
>>108564801
both of the god damn promotional images have 6 fingers on one of the hands
>>
File: 1771787291393912.jpg (41 KB, 512x1024)
what's the lora training sample aesthetic called?
>>
>>108564690
It's not local
>>
>>108564989
>he knows local is too ass to generate such a good image
sad :(
>>
>>108563681
kek'd
>>
File: 1747212654368284.png (1.11 MB, 1976x1163)
>>108564989
>its not local
>>
>>108565108
based, bodied that freak
>>
>>108565108
Oh shit the rtx node now does latent too?
>>
>>108565134
nope, just a subgraph
>>
can you really run gemma4:31b on a 4090?
>>
Is this real?
https://happyhorse.app/
>>
>>108563476
Kik Epp23g
Tele Bgftg33

Can anyone help me get a better result training a gf Lora?
>>
>>108565177
no, they're all fakes, HappyHorse is a codename anon, it won't be the real name after the reveal
>>
File: _AnimaPreview3_00105_.jpg (425 KB, 1160x1696)
>>
qrd on anima v3?
>>
File: _AnimaPreview3_00119_.jpg (504 KB, 1160x1696)
>>
>>108563514
>4changs
>>
File: o_00053_.png (1.39 MB, 1152x896)
>>
>>108565373
Quite good i enjoy it
>>
>>108565394
but is it cute and funny yet?
>>
>>108565148
q4 is nearly indistinguishable from q8 and runs even on a 3090
>>
File: 1774081832750822.png (39 KB, 220x220)
>I decide to take a look at what StabilityAI is doing in the year of our Lord 2026
https://xcancel.com/StabilityAI/status/2021322296707908034#m
>safety, safety safety
I see that you never changed, after all those years
>>
>>108565426
>q4 is nearly indistinguishable from q8

Holy cope.
>>
File: lmao.jpg (2.04 MB, 7961x2897)
>>108565426
>q4 is nearly indistinguishable from q8
https://www.youtube.com/watch?v=H47ow4_Cmk0
>>
>>108565497
>smugly posts an image model quant example when a text model quant is being discussed
i wish i was as cringeproof as you are, nonny
>>
>>108565497
>comparing a 30b dense to some flux-sized image model
baka?
>>
>>108565503
try to use a Q4 text encoder and see how it goes keek
>>
>>108565526
Bigger models get hurt less with quanting. Q4 30B is absolutely perfectly fine.
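The back-of-envelope math behind that claim, for anyone arguing past each other: weight memory scales with bits per parameter. This counts weights only — KV cache, activations, and the per-block scale metadata that real quant formats like Q4_K carry are ignored, so actual files run a bit larger:

```python
def weight_gib(params_billion, bits_per_weight):
    """Approximate weight-only memory: params * (bits / 8) bytes, in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# A 30B dense model at common precisions:
for name, bits in [("bf16", 16), ("q8", 8), ("q4", 4)]:
    print(f"{name}: {weight_gib(30, bits):.1f} GiB")
```

Which is the whole argument in three lines: q8 of a 30B (~28 GiB) doesn't fit a 24 GB card, q4 (~14 GiB) does with room for context.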
>>
>>108565545
it also depends on how much training the model got, if it's undertrained, quants won't hurt much, but if it's a bit overtrained like gemma, every weight counts, so...
>>
>>108565545
please, your dosage of copium is dangerous
>>
q1 is possible, bonsai showed us so shut the fuck up about your precious '''lossless''' q8
lol
>>
>>108565710
retard vramlet, you never quanted a model in your life, shut your fucking cakehole about things you don't know about.
>>
File: 1762080854657300.jpg (452 KB, 1232x1944)
>tfw reze lost
>>
I don't browse these threads much but I have a question on hardware.

I currently have an ubuntu system I just run for shit projects. Can I just slap my 3090 in it and would it work right away with local LLMs? I've heard nvidia drivers are a pain on linux.

It's currently on my main windows PC right now but I can't multitask whenever I have a model loaded on it.
>>
>>108565229
well...we still call it nano banana though it was a codename
>>
>>108565806
>I just slap my 3090
I mean if it isn't a complete potato like 8g ddr3 system, yes.
>local LLMs
Not the llm thread but sure.
>I've heard nvidia drivers are a pain on linux?
They werk fine for the most part, it's just bunch of cultists seething.
Plus nvidia is a lot better for AI irrespective of your OS.
>>
>>108565826
>Not the llm thread but sure.
That was pretty fucking stupid of me but yeah I could use it for local diffusion too.

It's an old system with a Ryzen 2600 but it is ddr4 at least.
>>
>>108564801
What's the fucking point of waiting a day to release / announcing a day before?

>>108565785
You in the nice-girl thread too?
>>
File: 1758671165167472.jpg (706 KB, 1328x1840)
>>108565873
of course
>>
>>108565912
Brofist, I posted that pic. Once in a while I still feel the magic of the internet.
>>
>>108565912
looks like some SDXL tune given the eyes and fingers. is it?
>>
File: o_00059_.png (1.25 MB, 1152x896)
>>
>>108566124
>>108565108
>>
File: o_00060_.png (1.83 MB, 896x1152)
>>
>>108565108
What's in optimize node anon?
>>
>>108563679
This time around, I do not see advantages to using SFT. The gap between it and Turbo is much lower than on previous version, and I do not notice too much difference in sound quality, plus I think Turbo being more creative still stands.

This seems to be general consensus on Discord as well, everyone is using Turbo, though I'm admittedly a bit worse at prompting SFT and tuning its settings so maybe it's just bias (this is default settings on qinglong UI with steps changed to 50).

Here are some samples from XL SFT, these are prompted with Gemini's help and a tiny change will make something sound 20x better so take results with grain of salt

Keygen music- https://vocaroo.com/11B2ndXidclH
Very sensitive on that one and made everything really fast paced, will prob. need LoRA for that, but chiptunes sound more authentic

Denpa/hyperpop with romaji lyrics
https://vocaroo.com/19gaTvuK3VQg

Eurobeat
https://vocaroo.com/1mF8vI2ppaK6
>>
>>108566261
how good is it at actually singing all the lyrics in the prompt? the previous version of acestep pretty much never got them all
>>
>>108564690
These images look so great on mobile, then you look at them on a monitor and you can see all their flaws
>>
>>108565497
isnt this image like 3 years old by now? image/videos gguf models got deprecated a long time ago
>>
>>108566325
>how good is it at actually singing all the lyrics in the prompt?

XL Turbo gets it right almost every seed, same with SFT. Not perfect, but both have extremely high pass rates, so much it's not a concern anymore.
>>
>>108566351
>image/videos gguf models got deprecated a long time ago
??? What were they replaced by???
>>
>>108566352
Though, as before, what you put in the duration matters. A short duration or slow bpm with too-long lyrics can speed the vocals up or increase errors, but it's generally much more forgiving and seems to adapt really well even if you mess up the duration.
>>
>>108566356
fp8 scaled and mixed-precision models. the whole purpose of gguf models was to save vram, not enhance quality; since vram management and speed have improved by a ton there's no incentive to use gguf anymore, they're slower to load and clunkier to run, especially on video models
>>
File: o_00062_.png (1.62 MB, 896x1152)
>>
>vram management and speed has improved by a ton
>>
File: deWA_zi_00005_.png (3.12 MB, 1792x1140)
this is two days old but I just saw it

>Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign
https://thehackernews.com/2026/04/over-1000-exposed-comfyui-instances.html
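The common thread in these incidents is the bind address: a stock ComfyUI install listens on 127.0.0.1, and it only becomes reachable from outside when started with --listen (which binds all interfaces) on a machine with a public IP. The difference in miniature, shown with plain sockets rather than ComfyUI itself:

```python
import socket

# A server bound to 127.0.0.1 is reachable only from the same machine;
# one bound to 0.0.0.0 accepts connections on every interface, which is
# what ends up indexed by internet scanners.
def bind_demo(host):
    """Bind an ephemeral TCP port on `host` and report the bound address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0 = let the OS pick a free port
    addr = s.getsockname()[0]
    s.close()
    return addr

print(bind_demo("127.0.0.1"))  # loopback only: the safe default
print(bind_demo("0.0.0.0"))   # every interface: the exposed case
```

If you need remote access, tunneling (e.g. ssh -L 8188:127.0.0.1:8188 user@gpu-box) keeps the port off the public internet without exposing an unauthenticated UI.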
>>
File: 1775738237.jpg (125 KB, 848x414)
>She is being embraced from behind by a large, muscular man in plate armor with his head mare
Was supposed to write bare.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.