/g/ - Technology

Discussion of Free and Open Source Diffusion Models

Prev: >>108035720

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>>108037746
can you stop putting your uninspired gens into the OP? you are too mentally unhinged to decide what quality is so you should stop baking
>>
>>108037789
Why do you keep saying things about yourself? You cry day in and day out like an autistic baby.
>>
>>108037798
why are you replying to yourself with different personalities?
>>
File: 00016-2017891922.jpg (1.02 MB, 2048x2688)
>>108037804
>>
Blessed thread of frenship
>>
>>108037746
>puts gens from trannies in op but routinely skips the majority of the gens in threads
it does seem like you have to be mentally ill to be a baker no matter what
>>
File: 00068-4101171308.jpg (1.06 MB, 2048x2688)
>>
brownoid thread of local
>>
gemini image captioning is soo much better than joy caption beta one. makes genning with qwen 2512 a lot better.
>>
>>108037746
Thank you for baking this thread, anon
>>108037820
Thank you for blessing this thread, anon
>>
>>108037854
Qwen Image Max, as all your filenames say, isn't the same thing as 2512; it's API only lol
>>
>>108037859
yeah? I'm using API nodes
>>
>>108037864
fucking based
>>
Surprisingly good at miniature stuff, not even my fetish.
>>
>>108037873
i made this image
>>
>>108037859
i can run the qwen model locally but it's just too slow. I'm just being lazy and burning my remaining credits on some third-party AIO api website before i cancel my sub. Lightning qwen loras ruin the details and running the full 50 steps hogs my system resources.
>>
>>108037871
first happy KomfyBux spender I've seen lol
>>
local hardware is holding back local.
>>
>>108037920
my gigantic penis is holding back your mom from breathing
>>
Anima is giving me sd 1.5 vibes when I use incorrect steps and cfg. Which is a good thing.
>>
>>108037854
is this 2512 or some closed-source model?
>>
File: s4tt.png (1.98 MB, 2432x1664)
>>108037810
would rape
>>
File: hh6.jpg (1.18 MB, 2560x3072)
what retarded thing am I doing that is causing the double outline thing with Klein?
>>
>>108038003
you are dim
>>
File: ComfyUI_09507.png (3.31 MB, 2160x1440)
>>108037655
Anon, I hate to be the one to tell you this... but you're terminally face-blind.
>>
Interesting, anima completely ignores any kind of lora for me. It doesn't even break the gen.
>>
>>108038030
horse face
>>
>>108038032
wat
Komfy doesn't load loras that aren't architecturally compatible at all
no loras are known to exist for Anima ATM so yeah
>>
>>108038032
kinoooo
prompt?
>>
>>108038062
i forgot
>>
>>108038046
It does load for every other image model I've used so far.

>>108038062
It's a super secret prompt that I developed while eating breakfast.
>>
File: ih77.png (2.28 MB, 2432x1664)
>>
>>108038083
It's truly making me feel nostalgic for sd 1.5, because it feels like I've aged 20 years in the past 3.
>>
I love how evil people here are
>>
File: jf5.png (2.81 MB, 2432x1664)
>I love how evil people here are
>>
>>108038104
what did anon mean by this
>>
I just gotta say, I hate luddites. Today I had a classmate in college telling someone that soon the AI companies will run out of money and we will go back to how things were. Like, let's pretend that's gonna happen (it won't lol); the retard doesn't realize we have local models.
>>
File: 1741856043746911.jpg (900 KB, 1184x1776)
>>
>>108038153
why arent you his gf?
>>
Where were you when ComfyOrg saved local?
>>
>>108038153
you can both burn in hell
>>
>>108038163
my issue with comfy is that it is too good for a local tool, allowing anyone with a decent gpu to have this much power is too dangerous
>>
>>108038174
fuck you asshole, I asked a fair question
>>
>>108038174
I said both of you can burn in hell
>>
>>108038030
too much synthetic in the dataset bubby
>>
File: ghd46.png (1.91 MB, 2432x1664)
>>108038163
wishing auto was still alive
>>108038153
I love you
>>108038164
I love you too
>>108038182
and you
>>
>I love you
>>
I don't care for ani's shitty wrapper. I don't care for ran's shitty gens. I don't care for the retarded schizo blood feud.

Rot in Hell.
>>
i only care about when seedream 5 releases
>>
File: 1763321540089056.png (3.94 MB, 2304x1152)
>>108038097
>sd 1.5
If it's not 1.4, leave it on the floor
>>
File: 1756917990114388.png (3.55 MB, 1536x1536)
>>108038195
>wishing auto was still alive
Crazy how after all these years SD1.4/SD1.5 in Auto is the best method for making tiling images
>>
is z-image base fp32 better at realism than the fp16? The fp16 seems melty to me and base loras don't really work well on it
>>
File: 1763644878214905.png (3.53 MB, 1248x1680)
>>
I'm feeling sad about that
>>
File: 1746990612599006.jpg (834 KB, 1248x1680)
>>
>>108038003
Latent upscale at any point in the workflow?
>>
hibernation mode: engage
>>
>>108037920
Nvidia slaves are holding back local.
>>
>>108038582
proof you arent one?
>>
Which schizos are real? Ani seems to be real because there's someone seething about his rentry and someone always tries to sneak anustudio into the thread.
Is his archnemesis ran also real or is it actually a figment of his imagination?
>>
Claude Sonnet 5: The “Fennec” Leaks

Fennec Codename: Leaked internal codename for Claude Sonnet 5, reportedly one full generation ahead of Gemini’s “Snow Bunny.”

Imminent Release: A Vertex AI error log lists claude-sonnet-5@20260203, pointing to a February 3, 2026 release window.

Aggressive Pricing: Rumored to be 50% cheaper than Claude Opus 4.5 while outperforming it across metrics.

Massive Context: Retains the 1M token context window, but runs significantly faster.

TPU Acceleration: Allegedly trained/optimized on Google TPUs, enabling higher throughput and lower latency.

Claude Code Evolution: Can spawn specialized sub-agents (backend, QA, researcher) that work in parallel from the terminal.

“Dev Team” Mode: Agents run autonomously in the background; you give a brief, and they build the full feature like human teammates.

Benchmarking Beast: Insider leaks claim it surpasses 80.9% on SWE-Bench, effectively outscoring current coding models.

Vertex Confirmation: The 404 on the specific Sonnet 5 ID suggests the model already exists in Google’s infrastructure, awaiting activation.
>>
>>108038292
>seems melty
Lower shift
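For reference, "shift" here is the flow-matching timestep shift exposed by the sampler. If you happen to be running z-image through diffusers instead of a UI, a minimal sketch of the knob (diffusers support for z-image is assumed, and the default value is checkpoint-dependent):
[code]
from diffusers import FlowMatchEulerDiscreteScheduler

# Standalone construction just to show the parameter; in practice you would
# swap it into an already-loaded pipeline, e.g.
#   pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
#       pipe.scheduler.config, shift=2.0)
scheduler = FlowMatchEulerDiscreteScheduler(shift=2.0)  # a lower value than whatever the checkpoint config ships with
print(scheduler.config.shift)
[/code]
In ComfyUI the equivalent knob is just the shift value on the relevant model sampling node.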
>>
>>108038558
yup, that's probably it huh?
>>108038616
We'll never know and that's probably for the best
>>
https://files.catbox.moe/lcq3mx.flac
:^) fireside song
>>
absolutely love flux klein 9b for i2i. it's even fantastic for i2i that isn't just localized edits but also "put x and y together into a new image", which it still does fine. but i'm just now properly trying t2i for the very first time, and what the FUCK is with all this mangled anatomy and double-headed people for something as simple as x straddling the chest of y? is there a subset of resolutions i'm meant to be targeting, as with SDXL, or is klein 9b just kinda bad at t2i (especially anatomy/human posing, which BFL models have always been deliberately lobotomized on)? i'm assuming i just didn't notice because in i2i mode this is patched over by the fact it'll just copy the anal sex pose from the input picture, which really makes you wonder why they would bother lobotomizing the t2i in the first place.
>>
File: out.png (1.74 MB, 1024x1024)
>>108038249
https://github.com/leejet/stable-diffusion.cpp/pull/914 sd.cpp can do it with flux, qwen, z-image, and even sdxl. Picrel is https://civarchive.com/models/1666480?modelVersionId=1772536 with the prompt "anime style, no_humans, grass texture, from above" and --circular
>>108038582
As someone with both, nvidia cards give you a bit more than double the it/s and t/s for local models compared to an AMD card of matching compute power, while the AMD card is a pain in the ass to set up software for, only to wind up running faster on vulkan compute than rocm. Software that runs on pytorch basically needs to use AMD's rocm docker container as a base if you want 9 sec gens instead of 8 min gens (gpu pegged to 100%, hanging the OS gui, with 9/16GB vram used).
It's up to AMD to compete, not for the user to buy into their worse option and hope they'll keep promises they've failed to keep for years while Lisa Su goes up on stage at CES and talks about running comfy on the latest CDNA apus, even if nvidia are total scumfucks that rape you for an extra 8gb.
>>
File: out.png (1.66 MB, 1024x1024)
>>108038249
>>108038705 (Me)
flux 4b distilled prompt "realistic top down photo of a cherry wood floor"
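For the curious: seamless tiling like this is usually implemented by switching the model's convolutions to circular padding so the latent wraps around at the borders, which is presumably what --circular in that sd.cpp PR boils down to. A minimal PyTorch sketch of the same trick, assuming a diffusers-style pipeline object called pipe:
[code]
import torch

def make_tileable(module: torch.nn.Module) -> None:
    # flip every Conv2d to circular padding; the output then tiles seamlessly
    for layer in module.modules():
        if isinstance(layer, torch.nn.Conv2d):
            layer.padding_mode = "circular"

# apply it to both the denoiser and the VAE of an already-loaded pipeline, e.g.
# make_tileable(pipe.unet); make_tileable(pipe.vae)
[/code]
Same idea as the tiling option in Auto mentioned earlier in the thread.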
>>
>>108038705
You do know that Google's NPUs have just beaten nvidia at realtime generation, right? It's called Project Genie. You basically get the shell of an fmv game, except you have full control, and it's all ai generation.

I think it's fascinating. I doubt Google has an interest in making a consumer ai device / card, but they have taken the lead in *diffusion*
>>
>>108038732
Can they make a tiling model? This seems like something that ai would be better at than humoids

remember alienskin?
>>
How do you get Z Image Turbo to make super enormous boob size?
No matter what I put, the size stays within human limits. Is there a way, or is Z Image Turbo too locked into realistic proportions?
>>
>>108038741
i hate you for not knowing we have multiple already
>>
>>108038758
just use your imagination
>>
>>108038735
I'm talking about consumer devices specifically, since I don't follow SaaS/cloud news very closely. If I could buy a google npu for 350 dollars and run diffusion and llms on it faster than a similarly priced GPU, I certainly wouldn't be against it, as someone who has no brand loyalty to any tech company and knows they're all out to get your money.
>>108038741
I don't know if a model specifically for generating tiled outputs is needed when this method gives such usable results already.
>>
>>108038153
>you morons are supporting a tranny nocoder on welfare and his vendetta against an innocent dev
no one supports you though
>>
how often should you upgrade your gpu?
>>
>>108038792
When it breaks or can't run what you need it to anymore.
>>
>>108038792
if you don't have a 6k you should, likely the last affordable AI gpu ever made
>>
>>108038817
no lies detected, but we'll all be on npus soon
>>
>>108038826
nigga pussy?
>>
>>108038792
upgrade each time you're outright bottlenecked/blocked from doing something you'd really like to do. Ideally that's either (a) something you can do slowly on current hardware (and enjoy) that you could do ~4x faster or more on the new card, making it an outright different experience, or (b) something you can't do at all but are confident you'd spend a lot of time on if you could. E.g. if you have 10GB of SDXL Realism gens on your disk you should definitely upgrade if it'll enable you to run ZIT/Flux models, but knowing you like to gen a lot of anime Illu/Pony doesn't necessarily mean you should upgrade to run those; same for accessing WAN/LTX unless you already gen with those and want a speedup.
The other situation is if you're making a bet on future pricing and availability that you're confident in and the upgrade would be at least somewhat meaningful. In the past a decent heuristic was "if you might want to upgrade, just upgrade now [unless the next gen's release is literally <1 month away]", since waiting never really got you bargains. Now, if anything, waiting tends to cost you more, so upgrade as soon as you have any good reason, making sure you sell your old gpu back into the second-hand market or donate it to a friend.
>>
>>108038758
Z image is a pretty stiff model. If you don't specify ethnicity, it'll give you Asian, and even when you do try to spawn in a European, it'll cycle between the same handful of European faces. It can't do panty lines (one of my favorite modifiers to add). If you ask for a girl in diapers with a pacifier, or even a woman in diapers, you'll get a toddler. This one baffles me because it has clearly been trained on diapers, pacifiers, and adult women. Why can't I gen a woman in a diaper?

It simply can’t combine or mix its training data very well. It’s an accurate model, but not a robust one at all.
>>
>>108038834
sounds like good advice, just upgrade when you're really bottlenecked or can't do something you’d love to do, especially if it’s a big jump or future-proofing a bit, don’t wait too long or you’ll miss out.
>>
>>108037940
catbox?
>>
>>108038817
the 6000 pro? it's actually a "good deal" at $8k now, relatively speaking. 5090s are spiking to over $4k while the 6000 pro has held its launch price. the 5090 is absolutely not worth it at these prices in comparison.
>>
>>108038838
That's been my experience as well. Is there a better alternative? A model that follows the prompt a bit better.
>>
>>108038083
very nice
>>
>>108038705
How are they supposed to compete when pytorch sold out to Nvidia and laid the torch.cuda landmine, plus other shit like CUDA-only kernels exists? Retarded researchers and devs still hardcode torch.cuda, because every learning material says cpu=cpu, gpu=cuda, and people buy Nvidia cards because they don't want to wait a few years for other vendors to catch up or lose speed to fast CUDA-only kernel implementations. And this only makes the situation worse.
Imagine if we had a similar problem with CPUs.
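To illustrate the complaint: the device string itself is the easy part (ROCm builds of pytorch even reuse the torch.cuda namespace), the real lock-in is the CUDA-only custom kernels. A minimal sketch of device-agnostic selection instead of hardcoded .cuda() calls:
[code]
import torch

def pick_device() -> torch.device:
    # ROCm builds of PyTorch expose AMD gpus through the cuda namespace,
    # so this branch covers both NVIDIA and AMD
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple silicon
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4, device=device)  # no hardcoded x.cuda() anywhere
[/code]
None of this helps when a repo ships fused CUDA kernels with no fallback, which is the actual landmine.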
>>
>>108038874
try qwen2512, it's in a similar weight class speed-wise if you're talking about z-image base. qwen also has an 8-step turbo lora
>>
>>108038869
Yeah, the 6k pro. If you buy that you are set for video/image models for the next 5 years. For llms, sadly, even a whole 6k pro isn't enough to hit peak, but it will certainly help you run some nicer models
>>
>>108038889
Can you teach me how to use qwen?
>>
>>108038895
just download the comfyui example workflow, it has a version with the turbo lora as well. it's not a difficult model to use and lora training is exceptionally easy
>>
>>108038889
Ok, I'll try it out.
Is the prompting similar to Z image Turbo? Where it prefers long detailed descriptions? Or is it more like tags?
>>
>>108038831
tpu for now. Ironwood is the most powerful diffusion chip (idk if the chip itself is called Ironwood or the board etc) in the world:
https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/ironwood-tpu-age-of-inference/


https://medium.com/@finomicsedge/googles-secret-chip-beats-nvidia-at-scale-but-you-can-t-buy-it-fb0b8661fd99
>At the individual chip level, they’re remarkably similar. Ironwood delivers 4,614 teraflops (TFLOPS) of FP8 compute power. Blackwell’s B200 configuration hits 4,500 TFLOPS. That’s essentially a tie, close enough that arguing about the difference wastes time.
>>
File: 1701421993843728.png (1.15 MB, 2400x1704)
>>108037940
Choujin X-esque
>>
HOW THE FUCK DO YOU PREVENT KLEIN FROM GIVING PEOPLE PLASTIC SKIN AND INCREASING CONTRAST AND SATURATION TO ABSURD LEVELS WHEN UPSCALING ASDASDASDAAAGHH
>>
>>108038928
Why are you generating porn? God is watching everything you do.
>>
i was gone for a month and just dipped back into local sota. after checking the backthreads, it looks like people are still mainly evaluating their impressions of how superficially coherent and correct output is, not necessarily in combination with base model knowledge and instruction following.
the added step of needing a sota llm to wield a model is kind of the opposite of what needs to happen. i get that it's an incremental step towards understanding prompt/token relation to get to truer natural language control, but it's like admitting "we get that you're all too illiterate to write, and we're too embarrassed to expose how inadequate our text encoders and model comprehension are, so we're cutting humans out of the process".
not saying there've been no improvements or that the edit models aren't awesome, but it mainly feels like a lot of the hoorah is from people who never learned how to prompt chroma.
>>
>>108038664
https://files.catbox.moe/4b1z7n.flac
>>
>>108038908
all models are trained on long descriptions now but that doesn't mean tags don't work. but it won't understand tag soup the same way sdxl models do. it's still a relatively rigid chinese model, just a bit different than z.
>>
>>108038928
just prompt bro
>>
>>108038817
I have dgx spark
>>
>>108038969
wow just come out and tell everyone ur retarded
>>
>>108038969
>>108038972
It isn't a secret, I didn't recommend it because it is 7x slower than a 6k pro
>>
>>108038977
a 6000 pro = 2 dgx sparks
96gb < 256 gb
>>
>>108038985
That doesn't change the memory bandwidth, it is still 7x slower.
>>
So from my experiments with anima, it seems to have nearly noob-tier drawfag knowledge and quality with vastly superior prompt comprehension; it can do several subjects without blending the details and generally understands prompts extremely well compared to SDXL models. All of this without the glossy waishit look.
Feels like a better lumina, although lumina still has numerous art styles i really like.
>>
>>108038990
doesn't change the fact of what I said
>>
>>108038968
What exactly? I tell it specifically not to change the rest of the image, to keep contrast, saturation, colors, color temperature etc. the same, to not change the atmosphere, vibe, color palette, etc., and nothing works
>>
>>108038985
>>108038998
>>
>>108038969
https://youtu.be/L9QZ97y9Exg You should try what this guy did but with image gen, maybe it can scale up too.
>>
>>108039005
more vram is strictly a win condition
have you not learned from 3090 stories
>>
>>108039036
proof?
>>
>>108039036
Spark is a fancy testbench. Not an inference machine.
>>
>>108039040
everyone said the same until they OOMed in the used-3090 era
>>
^ indians don't have audio on their internet cafe pcs
>>
>loses
>start not to quote
>>
>>108038999
maybe try upscaling with a normal upscaler like a GAN or whatever, then try fixing the soupy slurry bits with klein at that point, i2i same res -> same res? i haven't tried this, sorry. frankly imo native 1-2 megapixel generations are a big enough canvas for pretty much any composition, and i'll happily just zoom that image rather than fuck around trying to upscale it without wrecking things.
>>
>>108039050
>comparing a 3090 consumer gpu vs a 6000 workstation gpu
lol
>>
File: 1762231608793752.png (81 KB, 761x440)
Even a 5090 beats your toy
>>
>>108039036
looks like this is rent free lol
people are so mad when you have high vram
>>
samefag
>>
File: method.png (689 KB, 2000x755)
ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation
ChronoEdit-14B enables physics-aware image editing and action-conditioned world simulation through temporal reasoning. It distills priors from a 14B-parameter pretrained video generative model and separates inference into (i) a video reasoning stage for latent trajectory denoising, and (ii) an in-context editing stage for pruning trajectory tokens. ChronoEdit-14B was developed by NVIDIA as part of the ChronoEdit family of multimodal foundation models. This model is ready for commercial use.

ChronoEdit Method Overview

Overview of the ChronoEdit pipeline. From right to left, the denoising process begins in the temporal reasoning stage, where the model imagines and denoises a short trajectory of intermediate frames. These intermediate frames act as reasoning tokens, guiding how the edit should unfold in a physically consistent manner. For efficiency, the reasoning tokens are discarded in the subsequent editing frame generation stage, where the target frame is further refined into the final edited image.


https://huggingface.co/collections/nvidia/chronoedit
>>
File: 325873.png (1.46 MB, 960x1680)
good morning saars, hope everyone had a comfy night of sleep
>>
>>108039200
I made this
>>
>>108038944
https://files.catbox.moe/69ilj4.flac

I is quality learing how into prompt
>>
>>108038139
Even if they ran out of money, does he think everyone is just going to forget how useful AI has been and stop? Seriously? Pandora's box has been opened. It's not going away.
>>
>>108039245
good morning, how is /adt/l? dead and irrelevant as always?
why don't you go back to >>>/h/8821784 they miss you there
>>
>>108039310
nta but ai isn't useful, look at the slop in this thread
>>
>>108038139
I had a classmate look me dead in the eye and say the ai was coming. And I thought. uh. huh, well that will fuck the shit.
>>
>>108039331
Useful enough to keep replacing employees, thus saving companies millions.
>>
>>108039346
no it isn't, it's just being used as an excuse to fire people after the post-covid over-hiring
>>
>>108039346
Women never talk to me. dgaf
>>
>>108039356
Yes we do
>>
Thinking about how my cute cousin lost weight and all the cute pudge is gone. Life is so cruel.
>>
>>108039358
ywnbaw, sir
>>
>>108039368
Proud biofem, fuck off
>>
>>108039370
farts
>>
>>108039383
*shits down your throat*
>>
nah got all the aids I need for now
>>
File: 00006-1353063321.png (1.11 MB, 832x1216)
>>
Julien is crying again
>>
File: o_00342_.png (929 KB, 1152x896)
>>
:^) I now have unlimited far right extremist Christian music. Now I just need to LTX this stuff.
>>
>>108039441
that's not healthy
>>
File: 00005-1376452162.jpg (82 KB, 512x768)
>>
>>108039456
anima sloppa
>>
>>108039456
Anime is welcome here, but not children.
>>
>>108039470
that's barely a humanoid, look at those hands, fucking disgusting abomination
>>
>>108039456
Can you make her sing this?
>>108039293
>>
>>108039487
yeah
>>
File: 000046-13427613528.png (874 KB, 1024x1536)
Dear ignorant, surely-not-/adt/ shitposters: chibi is a style that uses a big head and a tiny body for comedy and cuteness, and adults get chibi versions all the time in anime and manga.
>>
do chibis also have 3 fingers on their left hands?
>>
Why does a general cum lora not exist for Wan? Every single one is an action, like facial cumshot, cum in mouth, etc. What if I just wanted cum on the body but without the motion from the lora? I tried using various cum loras with only the low lora for detail, but it still shows cum shooting in random spots.
>>
File: ComfyUI_00022_.jpg (26 KB, 512x512)
>>108039494
Yeah, chibi adaptation is part of 4chan culture
chibi hobby
chibi board
chibi website
>>
File: 4386628782.png (1.35 MB, 960x1680)
>>108039326
Not as dead as /y/ sir
Can't post 1boys standing there sir
>>
guys i got an interview as diffuser, wish me luck
>>
>>108039510
>this thing I have to explain to you is your own culture
you think I was born yesterday?
>>
I keep procrastinating and genning images
help
>>
It's a little frustrating that this still needs to be said, but there is a general for posting anime, it's called /adt/, and there are no schizos and bots like here and the people are less toxic. Posting anime here does not contribute to the overall mood, thanks.
>>
I wonder if ace step 1.5 will be any better than HeartMuLa. The main problem is HeartMuLa barely seems to follow the tags for style.
>>
>>108039499
If he posted his gen in /adt/, this wouldn’t even be an issue, because we would have helped him desuuu
>>
File: 00007-13643268.png (2.15 MB, 1152x1536)
>>
>>108039570
4 fingers
>>
>>108039564
We need to drag him back there
>>
>>108039520
It's all good, I understand. Your real home general is the one where you laugh and have fun, not the one you keep bumping so it won't die. Welcome.
>>
>>108039601
Thanks mate
>>
File: ComfyUI_00326_.jpg (67 KB, 512x768)
>>
>>108039618
ai slop
>>
>>108039618
>512x768
>two tails
>slopped details
what exactly are you trying to do/test?
>>
>>108039293
https://files.catbox.moe/clf7y9.flac
>>
>>108039629
She's real.
>>
>>108039632
he is trying to poison our brains
>>
>>108039655
serious response pls. did a new anime model release?
>>
>>108039660
fuck you
>>
>>108039660
Yes. Just as shit as sdxl and promotes tech regression via forcing booru tags.
>>
>>108039674
I'm so tired of this shit, POORS are holding us back I really mean it, imagine running a 2b slop model in current year, I hate them so fucking much
>>
>>108039674
>Just as shit as sdxl
do you mean base SDXL or illustrious/noobai(sdxl based)? because XL/Noob are perfectly fine. They just needed to get away from the outdated clip encoder
>>
File: ComfyUI_09520.png (3.34 MB, 1440x2160)
>>108038181
>The background is a sprawling gothic townscape under a large red moon. Buildings feature ornate spires and crumbling stone architecture. A central building has a shark-like head protruding from its roof with an open mouth. Wrought iron fences line a pathway leading to a stone archway. Numerous glowing lanterns illuminate the scene, casting warm light on the buildings and surrounding foliage. The foreground is filled with dense patches of red and pink flowers. Several spherical structures with internal lighting are visible on the left side. A dark teal sky provides backdrop for the crimson moon. Cobblestone streets wind through the town. Detailed architecture includes gargoyles and pointed roofs. The scene evokes a slightly eerie, fantastical atmosphere.
This was captioned from Darkstalkers background art, nothing in there seems like it would lead to a less realistic look (maybe "fantastical"?), but I can't force it to look more photo real... not that I shoot for extreme realism or anything. My Jenny LoRA is only meant to be as flexible as I can possibly get it after all.

>>108038758
>How do you get Z Image Turbo to make super enormous boob size?
You don't. Use Klein Edit and make them absurdly giant if you want.
>>
>>108039687
>perfectly fine
retard
>>
>>108039681
HeartMuLa is 3B. There's an unreleased 7B version, apparently.
>>
>>108039698
we are talking about image models retard
>>
File: Flux2-Klein_00517_.png (394 KB, 544x544)
why the fuck do people even like anime? rofl
>>
>>108039706
I'm not.

>we're his children
>God damn the ...
>the cross that has been broken
:dancing hamster:
>>
>>108037746
>queen of spades
Kys coomer
>>
aw shit it's morning, gotta brush my teeth again.
>>
File: o_00350_.png (1.76 MB, 896x1152)
>>
File: radiance_x32.jpg (134 KB, 1280x1280)
>>108039430
cute
>>
>select all nodes
>convert to subgraph
>edit subgraph widget
>show all
Can anyone explain why I still have to keep using F*rge or S*armUI?
>>
>>108039780
>inpaints your uncomfy post
>>
>>108039785
>he doesn't use SAM3
>he still do manual inpainting in 2026
yeah, please go back to /ic/
>>
brown
>>
File: o_00352_.png (1.93 MB, 1280x768)
>>
File: F2Kb__00001_.png (1.32 MB, 1024x1024)
She's a gud singer. Not real tho. She's not real. She's not a real person. She's not real, you can't meet her, she's not a real person. She's pretty, she's not a real person. You can't meet her.

https://files.catbox.moe/xg8hlo.flac
>>
>>108039687
>prompt bleed out the ass
>same lack of understanding of 3d space as any sdxl shitmix
>muh hands
>quality tags lol
>0.6B text encoder
>>
File: o_00353_.png (1.48 MB, 1280x768)
>>
is the frontend finally not slow as shit now in comfy?
>>
>>108039962
it's worse
>>
>>108039962
it's still slow and unfortunately i don't think it'll ever be fixed. there's definitely something going on with the frontend that causes the ui to lag to shit after a while. i wish i could figure out what the issue is.

It seems like if you have a large complex workflow, each queued job makes the UI progressively worse. In my case, after the 20th job, it becomes unusable and I have to refresh. Refreshing immediately fixes it.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.