/g/ - Technology

It is What It is Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106683092

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
Comfy + dragged + street + shot
>>
Local Chads, We WILL Rise Again!
>>
oh lol >>106685675

honestly starting to genuinely consider just genning a full 20 steps and skipping the lightning dickery altogether. i'm honestly fine with my 81-85s/it at 1280x720. after the 60fps interpolation muh pp is satisfied.
>>
new Qwen Edit is HOT GARBO
>>
>>106685732
Any heavy movement wan gen looks like shit with 20 steps. Needs like 40-50
>>
>>106685711
Nay.

>>106685716
Continue rising, anon.
>>
>>106685626
i don't understand your funny words. i'm simply just paranoid.

>>106685628
some tags are definitely overtrained. various keywords like "cute" drastically change the style which is fucking annoying.
>>
Remember that no sane human being works for free
Remember that Comfy is a human
Remember that it is fine for Comfy to focus on API
Remember that Comfy is right
Remember that you are going through a typical teenage phase with 'muh local'
>>
>>106685743
oh bother.
>>
>>106685743
Just use WAN on the api and you can get speedy high quality gens without a long wait :)
>>
who is still comfy with Forge UI?
>>
i saw someone recommend 128gb of ram even if you have a 5090? does that hold any weight?
>>
> catgirl, cat ears, heart-shaped pupils, swallowing

what's your go-to prompt when bored?
>>
>>106685807
>futanari, taker pov
i'm bored ALL the time ;)
>>
>>106685763
>Remember that you are going through a typical teenage phase with 'muh local'
nay. it has been essential for most software ecosystems, everything else turns to shit.

besides, if comfy drops local, whatever API service wins won't even need comfy in the end once they hit their cash cow phase
>>
File: Wanimate_00108.mp4 (924 KB, 614x832)
924 KB
924 KB MP4
i love soft saggy boobs so much
>>
>>106685716
show me youtube ai videos made with local open source video models. Nobody serious in the content creation scene is using open source video models for content creation. These are glorified toys compared to saas models. The nsfw aspect of FOSS video models means jack shit if you get a high rate of shitty sloppy results.
>>
>>106685819
dont reply to bait
>>
>>106685803
i think so. there is probably still offloading happening, and offloading to ram rather than reloading from ssd is faster. browser UIs are ram hungry. some stuff you run off the CPU also uses lots of ram.

It'll probably be easier with more than 64GB RAM. I certainly use my 95
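rough sketch of why the spare system RAM matters, using diffusers since another anon here runs it (this is not ComfyUI's exact mechanism, and the SDXL checkpoint id is just an example): with CPU offload the weights sit in RAM and each component is streamed to the GPU only while it runs, so peak RAM use is roughly the full model size even though VRAM stays low.

import torch
from diffusers import AutoPipelineForText2Image

# example checkpoint; any diffusers pipeline works the same way
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

# weights stay in system RAM; each part (text encoder, unet, vae) is moved to
# VRAM only for its forward pass, then moved back. low VRAM, but RAM-hungry,
# which is why 64GB+ is the comfortable floor for the big video models.
pipe.enable_model_cpu_offload()

image = pipe("1girl, cat ears, city street at night", num_inference_steps=20).images[0]
image.save("offload_test.png")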
>>
>>106685803
Well, if you're going to use models that are over 32gb then you will be offloading; that said, they would have to be huge models for you to actually need 128gb just for offloading
>>
>>106685803
128gb is the standard in general these days, where 64 is the new minimum.
by new i mean im pretty sure even gaymers were already upgrading to 64 like 4 years ago.

personally ive just been too lazy to get a whole new kit and getting another 32 for me is a bit of a waste because corsair are jewey on their prices
so yeah 128gb.
>>
>>106685853
64gb is minimum for videogen. 16gb can handle sdxl.
>>
>>106685803
what's $200 more if you can afford a 5090
>>
I mean, if you have a 5090, then just get the 128gbs of ram, you OBVIOUSLY have money to spare.
>>
>>106685821
looks nice, workflow?
>>
File: 1640398356709.jpg (50 KB, 768x512)
50 KB
50 KB JPG
>>106685763
Remember, these chinks are popular because of the local. why weren't they on api from the start?
>>
>>106685853
>personally ive just been too lazy to get a whole new kit
it's annoying anyhow, since a whole lot of (most) mainboards won't actually run your old 32-64GB 2-stick kit at full speed alongside, say, another new 64/96/128GB 2-stick kit, even if they have four slots
>>
>>106685763
I’m just glad he’s getting paid
SD has been my favorite pastime since I learnt about it, so im elated whichever way things go
>>
>>106683213
So which is better, illustrious or noob? I'm getting short on tags with illustrious and if noob knows more stuff it would help me a lot, but I don't want to use a worse model.
>>
has anyone had success with the native comfyui workflow for Wan2.2 Animate? i get errors... I hoped that they took the time to implement it well.
>>
File: radiance.png (2.53 MB, 832x1488)
2.53 MB
2.53 MB PNG
>>106685927
neither is clearly better - it depends on what you prompt, just try noob
>>
>>106685927
>I'm getting short on tags with illustrious and if noob knows more stuff it would help me a lot
if it would personally help you a lot in your specific case, then obviously go with noob, retard.
>>
>>106685911
that too. it's a headache i'm not interested in when i can either upgrade to AM5 for DDR5 or wait for AM6, at which point even getting raped on speed still leaves me running twice as fast as what i'm at now.
>>
also special shout out to the jewish cumfart dev that completely fixed all the memory issues, i can keep cracking gen after gen now with no OOMs. (even with KJ nodes :) )
>>
>>106685937
I don't know how good noob is. But I always get some untagged things in my dataset because danbooru 2023 doesn't have enough tags
>>
>>106683246
>>106683284
So the only good options are non local models, which are censored anyways because fuck me?
>>
>>106685962
then simply try it and use your brain to figure out the rest
>>
File: ComfyUI_00484_.png (2.59 MB, 1280x1280)
2.59 MB
2.59 MB PNG
>>
File: 00205-422349755h.jpg (1006 KB, 2048x2688)
1006 KB
1006 KB JPG
>>106685927
noob clears easily. I was out of the game for a few months, but unless they made a new model, noob absolutely destroys illustrious with its knowledge and colors; the problem is people get filtered by the cfg rescale needed to make it better

I say it clears chroma out of the box because it has styles that actually anchor and none of the common token bleed.
>>106685762
This shit is a serious issue and it forces me to describe things that typically can be said in one word to avoid the bleed; it can basically override loras
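since "cfg rescale" keeps coming up with the noob/vpred models, a quick sketch of the usual formula (from the zero-terminal-SNR paper; individual node implementations may differ slightly): the CFG output is rescaled to match the standard deviation of the positive prediction, then blended back with a factor $\phi$ (commonly around 0.7).

\[
\hat{x} = x_{\text{neg}} + w\,(x_{\text{pos}} - x_{\text{neg}}),\qquad
\hat{x}_{\text{rescaled}} = \hat{x}\cdot\frac{\operatorname{std}(x_{\text{pos}})}{\operatorname{std}(\hat{x})},\qquad
x_{\text{final}} = \phi\,\hat{x}_{\text{rescaled}} + (1-\phi)\,\hat{x}
\]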
>>
>>106685803
Get it. More ram in any capacity is better. Keeps the OOM goblin away, ya know
>>
File: 1624482885784.gif (1.79 MB, 275x275)
1.79 MB
1.79 MB GIF
>tfw my 5090 blocks the second pcie slot for my 4090
>>
File: file.png (1.6 MB, 832x1488)
1.6 MB
1.6 MB PNG
>>106685976
nah the newer local models are nice too
>>
>>106685958
Yeah, my computer completely stopped hanging. I'll name my first born son after him. comfyanon can go fuck himself
>>
>>106686054
can your power supply even handle both of those cards at the same time?
>>
File: 0924222053404-iCmCQpHF77q.jpg (713 KB, 2755x1549)
713 KB
713 KB JPG
not bad
>>
>>106686054
just get copper wire and make the slot into a cable
>>
>>106686054
Couldn't you get external pcie slots for those beasts?
>>
>>106686054
I have my 5090 master ice, bought pre-tariff, in my flux pro case, and for it to fit I have to put it in a lower pcie slot, and only a select few cards will even fit. it's fucked
>>
>>106686054
if you meet the power requirements you can possibly use a pcie-to-pcie riser board/cable from ali or w/e to physically get the card installed at a different angle / in a slightly different place
>>
File: radiance.png (2.87 MB, 832x1488)
2.87 MB
2.87 MB PNG
>>
>>106686054
Stop buying plastic garbage and just use api nodes. No new models are coming to local anyway
>>
Please ignore the Californian schizo post like
>>106686120
and his posting periods pretty much show this is him.
If he does not stop I will start the wheelchair gens
>>
>>106686073
I got 1300w, but I'll be able to get 1600 if needed. The 4090 will just be a vram cuck.

>>106686091
>>106686103
It's what I started looking at. I have a massive case, it has a standing mount. So pair that with the vertical mount, it might just fit.

>>106686092
The cards definitely need to become smaller again. Or I will become a watercooling sadist.
>>
Please ignore the ComfyUI shill
>>106686131
If ComfyUI isn’t removed from the OP I will start the gpt posting
>>
>>106686141
What fucks me up is that the shroud and the actually usable heat-dissipating portions are a fraction of the card size on a ton of models, and they basically do nothing.
>>106686154
You have a thread for that, are you lonely?
>>
you won't do shit, bitch
>>
what does this error mean? i don't think i did anything to screw with the genning process
>>
File: radiance.png (3.33 MB, 832x1488)
3.33 MB
3.33 MB PNG
>>106686141
the cards currently have no reason to become smaller; we have a gorillion of the highest-tech transistors and they need cooling too
>>
I am seeing a lot of workflows for Wan2.2, why none for the new Wan2.5??
>>
>>106686166
lmao that is someone trying to connect to your instance.
you did put a password on your comfy, right anon?
>>
>>106686166
Been wondering the same thing.
>>
>>106686185
>>106686166
>not using wireguard
>>
>>106686166
Isn't that when you try to start another comfyui?
>>
File: radiance.png (2.4 MB, 832x1488)
2.4 MB
2.4 MB PNG
radiance anime looks are getting pretty stable
>>
>>106686185
>lmao that is someone trying to connect to your instance.
i thought you had to open your ports specifically for that, if you're implying that every instance of comfyui is open to the public, then that would be a major breach.

so i call bullshit
>>
>>106686206
nah, i get it too randomly so idk.
>>
>>106686214
Yes, you need to open your ports. I get the same error and my comfy is offline
>>
>>106686214
https://www.shodan.io/search?query=port%3A8188+http.html%3Acomfyui+

uwuuuuuu
thankies for letting me gen on ur pc mommy
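if you want to check whether your own instance is exposed like the ones in that shodan search, a minimal sketch (assumes the default ComfyUI port 8188, the same one shodan indexes; the hostname is a placeholder you'd replace with your public IP, and you'd run this from outside your LAN):

import socket

HOST = "your.public.ip.example"  # placeholder: your public IP or hostname
PORT = 8188                      # ComfyUI's default port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect((HOST, PORT))
        print("reachable from the internet - put it behind a VPN or an authed reverse proxy")
    except OSError:
        print("not reachable from here")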
>>
>>106686185
that did give me a 0.5 sec mini heart attack, thank you anon

>>106686206
i did not try to start another one though
>>
>>106686159
But they have to look cool, bro.

>>106686169
We need freon coolers.
>>
if you invade my comfyui, then you better prompt something good or i'll find you
>>
>>106686228
reminder that it is a felony and you will get raped for doing stupid shit.

i found an instance where someone had installed a monero miner in their comfy, very funny desu
>>
>>106685700
>https://comfyanonymous.github.io/ComfyUI_examples/wan22/
needs to be updated with the new version
https://blog.comfy.org/p/wan-25-preview-api-nodes-in-comfyui
>>
>>106686255
No apis, please, thank you for understanding
>>
>>106686240
it's gonna be a futa ani, but she will give you a kiss, and maybe (if you are lucky and she likes you) it will become a pov blowjob <3 hopping on shodan now to mess with fucks
>>
Damn it, all these nodes I want to use but not compatible. Is there an equivalent that is compatible?
>>
>>106686247
are those comfys left accessible on purpose?
>>
>>106686303
Looks like shit and you need serious mental health services, please fuck off you and your faggots have never stopped seething at this thread. Do that for a few months and you'll be more accepted you drunk fucking loon
>>
>>106686303
>it will be 1girl slop, but with penis
shameful display, futa will not impress me
>>
Hello, new to this space! Where can I download the Wan 2.5 weights? It looks really good!
>>
>>106686317
some of them are likely honeypots. just avoid all USA instances.
i found one in france where they were gen'ing realistic loli, lmao.
sure would be a shame if someone were to report him to the cloud provider
>>
>even ani exploits comfy's shit security
>>
why would anyone open their ports but not secure them?
>>
>106686330
i know you think you're being clever but it's really quite pathetic. there are so many easier ways to troll people. you're a little fotm trollbait slut who will never amount to anything. give your hardware to the poor and leave.
>>
File: reactions (6).png (163 KB, 327x316)
163 KB
163 KB PNG
>>106686330
>guaranteed replies
>>
>>106686347
The low iq group of /sdg/ posters expect anon to be as dumb and ignorant as they are. I wonder how many of them have malware on their PC from downloading the random links the disabled one spams daily
>>
>>106686330
It's only on ComfyUI API for now, but ComfyUI has stated they are committed to local models so I expect a lot of their API-only models to release openly in the coming months. They recently received a $17m investment which just shows how committed to local models they are. Big things are ahead, just buckle up and sit back!
>>
also, any idea why when i try genning something without changing any settings, the gen process just stops instantly
is it because i didn't change seed and it would just lead to the same result?
>>
exploiting the comfy cloud will be fun
>>
File: 1750505822452221.png (313 KB, 1506x1181)
313 KB
313 KB PNG
Some people are coping hard in the comment section lol
https://xcancel.com/Alibaba_Wan/status/1970697244740591917#m
>>
>>106686379
Ouch... hate to break it to you but your wallet is dry XD. This is what happens when you attempt to use an API model without having enough credits.
>>
>>106686379
yeah, comfy just skips anything that has the same metadata as the last gen
>>
File: 1730496387632695.png (104 KB, 1361x675)
104 KB
104 KB PNG
>>106686383
I have a feeling they'll release it at some point. they'll realize this isn't the model to rival veo3 and no one will care about wan 2.5 as an API model; you can only be successful in the API space if your model has something the others don't, like Seedream is the most realistic model, Midjourney is the most aesthetic one... etc
>>
https://youtu.be/SN0ecySFdkw
>>
>>106686383
it's not cope, it's coordinated astroturfing and shilling

is it really just one retard going on and on about api shit? time to add that to the filter too, fucking christ
>>
There is a coordinated effort from comfyorg to shill closed models in these threads. Appropriate action needs to be taken on the OP.
>>
>>106686430
You don't understand how deep the bad blood runs in this thread. We have the pastebin in the OP for a reason, he just has more spergs on his side like Rocket sperg and ani who have a personal vendetta against the thread
>>
>>
>>106686430
>>106686440
>everyone I dislike is one guy
that's the kind of cope I'm talking about
>>
>>106686427
I'm not watching an hour-long video for an API model release. Alibaba decided to go SaaS, good for them, but that also means they're not welcome here anymore
>>
>>106686450
How odd that after midnight in California all of the posting stops. You're a low functioning loser who has been getting your ass handed to you for 3 years. You could spend all this time making the thread we let you have better.
>>
>>
>>106686060
What newer models?
>>
I expect despite this being on topic he's going to lose his mind and spam false reports on me again because handicap symbols drive him berserk
>>
>>106686022
What do I have to adjust vs illustrious? I've never used noob. Is training different?
>>
File: schizo behavior.png (39 KB, 1066x259)
39 KB
39 KB PNG
>>106686463
3 years? what? I've been here less than a year, who the fuck do you think I am? I am not your poopdickschizo boogeyman
https://www.youtube.com/watch?v=oO_jtlib_x4
>>
>>106686490
It's a vpred model so you will need to adjust settings and work at a significantly lower cfg scale, there should be a ton of guides for it by now
>>106686494
It never catches on
It never works
Yet you persist and fail, same canned response regardless of what random subject you and your band of retards spend every waking hour trying to push every week.
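for the anons asking what "vpred" actually changes, a short refresher on the standard v-prediction parameterization (general diffusion math, not anything noob-specific): the network predicts a velocity instead of the noise, which is why the UI has to be told the checkpoint is vpred, and these finetunes are usually paired with zero-terminal-SNR schedules, lower CFG and CFG rescale.

\[
x_t = \alpha_t x_0 + \sigma_t \epsilon,\qquad
v_t = \alpha_t \epsilon - \sigma_t x_0,\qquad
\hat{x}_0 = \alpha_t x_t - \sigma_t\, v_\theta(x_t, t)
\]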
>>
Seems like ComfyUI is causing more trouble than it's worth... the recent shilling of API models leaves a sour taste. Perhaps it's time we re-evaluate what is best for local diffusion.
>>
>>106686511
Comfy never was local it only was convenient because of adaptability
>>
>>106686511
>Perhaps it's time we re-evaluate what is best for local diffusion.
What do you suggest Ani?
>>
Why does the ComfyUI Wan2.2 Animate template require me to install custom nodes? No other template required me to do that. Is it not fully implemented or what's going on?
>>
>>106686510
Vpred?
>>
Speaking of vpred, why doesn't neoforge support it?
>>
File: 00161-2983497195.jpg (820 KB, 2688x3072)
820 KB
820 KB JPG
I misjudged the HD model, it seems to perform at an insane level at higher resolutions (slow as fuck gens) and handles composition better. the weak point is the aggressive tagging fucking with the style, which can be tamed with a strong lora (not ideal, some of the lora strengths overpower the base model, but it is mandatory for stability with HD)
I need to figure out how to gaslight the model into perfect hands which I could do with the other models
>>106686559
Read up on it, trust me, it might be more of a pain in the ass; many anons decide to stick with the normal model because they can't get it right and the model needs a fuck ton of fidgeting depending on how you prompt
>>
>>106686550
where'd you find that? if it wasn't on comfy.org then it's not official
>>
>>106686579
The workflow is natively available in comfyui and comes with the github install
>>
>>106686550
yes, it has some KJ nodes I think, so it's not totally native...
>>
>>106685999
model?
>>
>>106686570
No offense anon, but the image you posted seems generic as fuck.
>>
File: 00166-2437028273.jpg (861 KB, 2688x3072)
861 KB
861 KB JPG
>>106686681
It does what it needs to do
>>
File: FluxKrea_Output_2616262.png (3.5 MB, 1536x1536)
3.5 MB
3.5 MB PNG
ABSOLUTE SAAS VICTORY
>>
>>106686698
What differences do you notice whith illustrious? Your images seem very generic, dunno if you tried style loras
>>
>>106686719
>uses a local model (Flux Krea) to make fun of local models
that's some next level trolling kek
>>
Imagine having to resort to using nunchaku
Imagine having to use comfyui because you need to use nunchaku
Holy fuck what a cucked life, thank fucking GOD I'm not poor LOL
>>
>>106686819
I use diffusers :)
>>
Civitai lora trainer is the most cucked thing ever. You have to tiptoe around tags so there are no no-no words, then you have to wait hours just so it starts to train epochs, and the example images it creates are complete horseshit that have nothing to do with how well the lora is trained.
>>
>>106686828
>Civitai lora trainer
Do people really do this? If you can run the model you can most likely train it yourself
>>
>>106686849
>Do people really do this? If you can run the model you can most likely train it yourself
I have so much buzz and nothing to do with it.
>>
>>106686511
> we
absolutely end yourself
>>
>>106686570
>>106686698
you dont need chroma if all you're doing is genning dogshit like the ones youre posting
>>
>>106686828
I don't see any advantages over local. Most of what I see on civitai is shit anyways.

Is there any non local model better than what you can get with illustrious?
>>
>>106685700
Remove cumfart from the OP
>>106685763
The l in /ldg/ stands for local
Don't like it? Fuck off
>>106686303
Kys avatartranny faggot
>>106686430
Local diffusion general
>>
>>106686719
kek
>>
>>106686828
>>106686875
same, i exploited the buzz system back when it wasn't cucked and don't know what to do with all of it.
i wanted to train some chroma loras but if it's as faggy as you say it is then there's no point.
>>
>>106686926
>i wanted to train some chroma loras but if it's as faggy as you say it is then there's no point.
That's what I'm experimenting with right now since it lets me use 1024 res with batch 4. Might be worth testing the automatic settings it gives, actual restarts with cosine. Right now I'm testing
 {
"engine": "kohya",
"unetLR": 0.0003,
"clipSkip": 1,
"loraType": "lora",
"keepTokens": 0,
"networkDim": 32,
"numRepeats": 1,
"resolution": 1024,
"lrScheduler": "constant",
"minSnrGamma": 5,
"noiseOffset": 0.1,
"targetSteps": 2415,
"enableBucket": true,
"networkAlpha": 64,
"optimizerType": "AdamW8Bit",
"textEncoderLR": 0,
"maxTrainEpochs": 30,
"shuffleCaption": false,
"trainBatchSize": 4,
"flipAugmentation": false,
"lrSchedulerNumCycles": 1
}
>>
File: wan22_00651.mp4 (1.37 MB, 832x448)
1.37 MB
1.37 MB MP4
>>
https://www.reddit.com/r/StableDiffusion/comments/1npdv2l/qwen_edit_2509_is_awesome_but_keep_the_original/
>I noticed this with trying a go-to prompt for 3D: 'Render this in 3d'. This is pretty much a never-fail style change on the original QE. In 2509, it simply doesn't work.
maybe that's because it's a finetune and they got some catastrophic forgetting?
https://en.wikipedia.org/wiki/Catastrophic_interference
>>
File: 1747762384954430.jpg (49 KB, 612x765)
49 KB
49 KB JPG
>>106686166
since some version that was released 3 weeks back, comfy doesn't guard against remote connections the way it used to.
well, the protection itself hasn't changed, but something else has. especially when used with a particular custom node.
>>
>>106687008
>satan being cast out of the kingdom of heaven
>>
>>106687026
going by the last two threads, this might be the case. it just struggles with some things that the original model can easily do.
>>
>>106686917
>Is there any non local model better than what you can get with NOOB?
the only thing novelai does better is text, but keen eyes can spot nai from a mile away and it's kind of a turn off
there's a reason you don't see anon taking the piss with it, cause it's actually not better
>>
File: saas wonned.png (70 KB, 1069x843)
70 KB
70 KB PNG
>go saas
>get rich
it's that simple. nobody will release local models anymore after this, especially not china.
>>
File: ComfyUI_00374_.jpg (500 KB, 1624x1176)
500 KB
500 KB JPG
Damn, seems like the new Qwen Image Edit model update did actually lose the capability to turn drawings into photos.
That's weird.
Tried the nunchaku, fp8 and bf16 version now with my old workflow where I swapped the text encode node and cranked the cfg up to 4.0.
That's really disappointing.
>>
File: 1754658031815855.png (172 KB, 1910x1119)
172 KB
172 KB PNG
>>106687048
it increased a lot in september because they released a lot of models (LLMs and diffusion models), and a lot of people talked about that company positively, for shareholders that's a good sign to buy the stock
>>
File: Comparison.jpg (2.91 MB, 1936x1320)
2.91 MB
2.91 MB JPG
The new 3.0 version of the NetaYume Lumina finetune is a decent incremental upgrade in most ways IMO, nothing crazy, just better small details like her fingers / the "02" text on her suit / some of the panels and stuff here for most gens
Might want to update the OP link with the new version:
https://civitai.com/models/1790792?modelVersionId=2203741
>>
>>106687065
> peso
of course the api shill and shitposter is brazilian
>>
>>106687064
I think they can't really improve the model further with some extra finetuning, all this will do is erase more and more concepts, they need to redo this shit from scratch
>>
>>106687048
Just download comfyui and SaaS won't be such a headache to deal with anymore. You can even combine SaaS solutions with local models in the same workflow
>>
>>106687072
I'll have to take it for a spin soon. Are artist styles any better?

>>106687072
>Might want to update the OP link with the new version:
>https://civitai.com/models/1790792?modelVersionId=2203741
>>106685700
>>
>>106687072
Nice, better character style also! Will share your info in /adt/, thanks!
>>
comfy is brazilian?
>>
>>106687076
how is this shitposting?
>>
>>106687072
Her suit is less detailed in the newer one
>>
File: 00177-686266751.jpg (832 KB, 2688x3072)
832 KB
832 KB JPG
I'm really disappointed with DPM++ 2M's performance but I was never impressed with it in the first place compared to other samplers, colors are always bad and comp is worse than euler a
>>
>>106686994
Honestly CivitAI's default settings with 0.0005 LR / Cosine With Restarts @ 3 restarts work great for Chroma after trying a couple Chroma loras there, I do recommend setting dim / alpha to 32 / 32 as you did though.
>>
>>106687076
>poopdickschizo is Eddy Gordo
kek
>>
>>106687083
This is ComfyUI's entire business model btw: linking all SaaS together so it's no longer a headache to manage 50 different logins and payments. It's in Comfy's best interest to see more models switch to API, as that is what the $17m investment was for. Investors gave Comfy $17m to try and salvage the $17b they spent on building the API models in the first place.
>>
File: ChromaRadiance.png (1.44 MB, 832x1488)
1.44 MB
1.44 MB PNG
>>106686239
>We need freon coolers.
more nuclear power plants, replace one or more walls of our houses with external AC units, live with simulated waifus or husbands?

sounds like a plan for a political party
>>
is there an ai general that doesn't have dedicated schizos? it's just so tiresome. can't we just talk about the actual ai images instead of making everything into a cult of personality?
you just really do not matter in the grand scheme of anything, no one does.
>>
>106687109
Post smells like our main schizo
>>
>>106687048
It was only a matter of time, but can't complain as wan 2.1/2.2 are still amazing products for free. Going from animatediff to wan is something else. It's a shame about the current wan optimization projects, as they completely came to a halt.
>>
>>106687065
they almost doubled their stock in 1 month by releasing some slop models locally, people are so fucking gullible it's comical
>>
>>106687101
how many steps/epochs?
>>
>>106687111
/odg/ is in the draft phase. a true general dedicated to 100% open weight models. no API, no comfyUI, no civitai, no SaaS. it will be painful at first but it's the only way forward for people who actually care about the open-weight ecosystem
>>
File: 1754350297334688.jpg (47 KB, 637x679)
47 KB
47 KB JPG
>>106687087
>>106687103
He's dancing all the way to the bank
>>
>>106687033
isn't it just a matter of having ports forwarded or not? how else could someone connect?
>>
>>106687087
no he's Canadian and white
>>
File: ComfyUI_00376_.jpg (461 KB, 1680x1176)
461 KB
461 KB JPG
>>106687064
>>106687082
Huh, wait a second. Using the old prompt produced nothing, but changing the wording to be more explicit with the input image gets me closer to photoreal again. Well, more uncanny, but it's progress.
Strange.
>Old prompt: Change the image to a highly detailed photograph of a real human, keeping facial features and clothing exactly the same.
>New prompt: Change the image from a drawing to a highly detailed photograph of a real human.
>>
File: file.png (2.46 MB, 832x1488)
2.46 MB
2.46 MB PNG
>>106686483
wan/hidream/qwen/hunyuanimage and possibly others are good for some uses and good technology that would become "generally" good with finetuning if nothing else is released

Chroma is very versatile, Chroma radiance also appears to be coming together now
>>
>>106686828
They actually added negative prompts in the trainer seemingly just to support Chroma sample images better, they're good enough for it IMO. I've had no issues with having to edit captions either even for NSFW stuff (using purely natural language captions from jailbroken Gemini 2.5 Pro)
>>
>>106687149
it should work with either wording, that's what the old model was able to do
>>
>>106687131
>no comfyUI
>>>/g/adt
but you have vramlets as well so
>>
File: chromaradiance.png (2.62 MB, 832x1488)
2.62 MB
2.62 MB PNG
>>
>>106687138
kek
>>
>>106687160
Does /a/ not allow AI threads?
>>
>>106687142
look into what causes the exception and go from there. what allows it to work is absolutely retarded.
>>
>>106687160
ComfyUI is linked in that OP, tardyboy. and what a surprise, a bunch of SaaS slop is as well. Where comfy goes, SaaS follows. It's time to cut comfy from the OP or split threads, way too many shills now thanks to API nodes.
>>
File: file.png (203 KB, 1406x726)
203 KB
203 KB PNG
I got Wan 2.2 set up following the Rentry https://rentry.org/wan22ldgguide

It mentions swapping out the Q8 model for lower versions if you have less than 24gb VRAM. Are the models listed in pic rel the Q8 ones? Also I have 24gb (3090) but the process kills itself when trying to run WanVideo Model Loader. I had similar issues on Wan 2.1. Would anyone have a solution, like a specific workflow that doesn't totally fill 24gb VRAM (and then die)?
>>
>>106687125
Thinking in steps for Lora training is silly IMO since it depends directly on the number of images in the dataset as opposed to more or less actual training being done. I did one there with 60 images at batch size 4, 55 epochs, no repeats, that came out well, for example, it's kinda subjective though how many epochs is "enough".
>>
>>106687186
So you just troll others into doing the work for you or
>>
File: chromaradiance.png (2.8 MB, 832x1488)
2.8 MB
2.8 MB PNG
>>106687111
not really. you need to be a little resistant to trolling and dumb takes on the internet.

there will be SaaS trolls or even paid shills, there will be anti ai or anti people (irl celebs and developer) trolls, there will be other nonsense. likewise you'll get me the dude who just likes a dev model or w/e.
>>
>>106686570
Don't listen to the haters, but I know you won't anyway
Looks great, love the skin shade on it, not sure if it's chroma or your lora, but the matt gloss looks better than the shiny gloss a lot of SD has.
>>
>>106687033
Don't bring up any more specifics about this right now please.
>>
>>106687199
>Thinking in steps for Lora training is silly IMO since it depends directly on the number of images in the dataset as opposed to more or less actual training being done.
I'm confused because I was told, for example, XL usually converges at 1600 steps. And it would make sense to use steps since the number of steps per epoch changes depending on how many images are in the set. So saying "you should usually be okay with 50 epochs" could mean XX number of steps or YY number of steps. Am I missing something?
>>
File: Capture.png (14 KB, 298x236)
14 KB
14 KB PNG
>>106687190
None of those are Q8. You can find Q8 and lower quants here https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF/tree/main
You should have no trouble using Q8 with 24gb of VRAM with just a little offloading. If you're using KJ workflow increase the blocks to swap until you stop OOMing
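for anyone wondering what the "blocks to swap" slider is actually doing, a toy sketch of the idea (not KJNodes' actual implementation): the swapped transformer blocks are parked in system RAM and each one is moved to the GPU only for its forward pass, so more swapped blocks means less VRAM but more PCIe traffic per step.

import torch.nn as nn

class SwappedBlock(nn.Module):
    """Park a block in system RAM and bring it to the GPU only while it runs."""
    def __init__(self, block: nn.Module, device: str = "cuda"):
        super().__init__()
        self.block = block.to("cpu")   # lives in RAM between uses
        self.device = device

    def forward(self, x):
        self.block.to(self.device)            # stream weights into VRAM
        out = self.block(x.to(self.device))   # run the block on the GPU
        self.block.to("cpu")                  # give the VRAM back
        return out

# e.g. wrap the last N blocks of a transformer stack (hypothetical model object):
# model.blocks[-N:] = nn.ModuleList(SwappedBlock(b) for b in model.blocks[-N:])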
>>
File: radiance.png (2.29 MB, 832x1488)
2.29 MB
2.29 MB PNG
>>106687190
i see basically no reason to NOT run a q8 with your 3090. maybe something else like your browser is also using your VRAM?

anyhow if the process kills itself try the comfyui multigpu distorch loader instead and try giving it 4 or 8GB system RAM for starters, see if that doesn't solve everything.
>>
>>106687064
(I kinda like the grainy style on the left one ///)
>>
>>106687220
Steps is a way better metric for loras. Ignore retards saying to think in epochs. A dataset of 10 images for 10 epochs (only 100 steps) is nowhere near enough training, while a dataset with 500 images and 10 epochs (5000 steps) will be perfect for a style lora for example.
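the two views are reconcilable, since epochs convert to steps mechanically. a small sketch of the kohya-style arithmetic (assuming no gradient accumulation), checked against the numbers in this thread:

import math

def total_steps(num_images: int, repeats: int, batch_size: int, epochs: int) -> int:
    # one epoch = every image seen once (times its repeats), grouped into batches
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

print(total_steps(10, 1, 1, 10))    # 100  - the "nowhere near enough" case above
print(total_steps(500, 1, 1, 10))   # 5000 - the style lora case above
# the civitai config posted earlier (batch 4, 30 epochs, targetSteps 2415)
# implies a dataset of roughly 322 images: 2415 / 30 * 4 = 322
print(total_steps(322, 1, 4, 30))   # 2430, within rounding of the 2415 target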
>>
>>106687178
can't seem to find anything about it, maybe it's better this way lol
>>
>>106687222
>>106687231
Thank you anons, weird how the Rentry implies you're downloading Q8 and then links to something else.
>>
Remember to invest now in Wan2.5 [preview] before the prices increase with Wan2.5 [full]!
>>
>>106687249
yes. shouldn't work for too long anyway
>>
>>106687084
Some are a little bit stronger, nothing huge though
>>
>>106687097
how so? i disagree
>>
File: 1742586674799495.png (234 KB, 540x473)
234 KB
234 KB PNG
>32gb
>1500 dollars
that's a better deal than the 5090 no?
>>
>>106687186
filzzefaggot you're such a fucking annoying retard, I hope you literally kys irl
>>
File: file.png (86 KB, 1092x441)
86 KB
86 KB PNG
>>106687250
sure.

BTW the other anon >>106687222 shows the block swap node. it is a variant on the same thing and using them is fine too. i just prefer these here myself.
>>
>>106687285
Shading on her ankles, creases on waist, shading and highlights on her breasts
>>
File: LIAR.png (248 KB, 1448x1075)
248 KB
248 KB PNG
he lied to us!
>>
File: Radiance.png (2.68 MB, 768x1344)
2.68 MB
2.68 MB PNG
>>106687207
>>106687167
What settings are you using that it doesn't draw you the thin wires and random color particles in the pic? I keep getting heavy grain in radiance. Did you do second pass?
>>
File: qie.jpg (918 KB, 1680x2352)
918 KB
918 KB JPG
>>106687234
I stole that from another Anon in a thread a few weeks ago to try Qwen Image Edit with.
>>106687149
Alright, it appears that it is indeed a prompt issue, rather than an issue with the model. (Depending on how you want to see it).
So, for bf16 anime->real does still work, but needs more deliberate prompting. Hope that Anon from the last thread is still here.
>>
File: radiance.png (2.8 MB, 832x1488)
2.8 MB
2.8 MB PNG
>>106687300
if software support was the same it'd be a better deal I think, even if arguably also the compute is slower

but it isn't. therefore the vast majority will use and recommend nvidia.
>>
>>106687064
Yeah I experienced the same thing last thread. My prompt was more basic 'make the picture real', but it worked without problems with the old qwen edit. I'm still not 100% sold on it.
>>
>>106687300
>AMD
> not NVIDIA with Cuda cores
> why?
Dogshit is cheaper than steak, and you can eat a lot of it for almost free, maybe you should try an all-dogshit diet, just to save up some money
>>
>>106687314
ah yeah i agree desu i wanna see more tests
>>
https://github.com/comfyanonymous/ComfyUI/pull/10006

letting retards near computers should be a criminal offense.
>>
>>106687334
doesn't AMD support CUDA with that Zluda shit?
>>
>>106687330
just for curiosity sake, can you try just the text encoder/clip at bf16 and keep the model fp8?
>>
Is there a Rentry for setting up (the good previous version of) Qwen Image Edit?
>>
File: radiance.png (2.85 MB, 832x1488)
2.85 MB
2.85 MB PNG
>>106687327
no second pass. currently 45 steps (probably excessive) euler beta1_1

but I'm not even sure it's that. also try updating comfyui and the newest https://huggingface.co/lodestones/chroma-debug-development-only/tree/main/radiance_continue - it's possibly just the newer dev checkpoint?
>>
>>106687300
considering the raw compute difference, no, that sounds like a bad deal. If you also factor in the shitshow that is rocm then it becomes a very very bad deal.
>>
>>106687343
>stealing a joke made on the linux repo
Are AI fags ever able to do something original
>>
>>106687357
no fun allowed
PR's are very serious business
>>
File: 1743436945715306.mp4 (575 KB, 1000x800)
575 KB
575 KB MP4
there's a cool thing on Wan 2.5 that lets you add an input image to incorporate to the I2V process, I wonder if you can do something like this on Wan 2.2
>>
>>106687353
I'm running yesterday's uploaded Q8.
>excessive
Chroma loves lots of steps. I'm even running 60 on normal models when finalizing gens lol.
>>
>>106687388
You can definitely just add it as a reference photo
>>
>>106687346
only if youre a turbo brain-haver it seems
>>
>>106687388
You can but it would be painful
>>
File: 1344859.mp4 (3.34 MB, 928x1248)
3.34 MB
3.34 MB MP4
>>
got wan 2.2 running perfectly the way i want it, quality/speed balanced
but boy it sure doesn't like doing anything i ask it to beyond really simple movements like "raises skirt", anything more than that and it just kinda shrugs off the idea. meanwhile 2.1 handled it all fine.
is it a lora thing? cfg scale? i don't even have the lightning lora enabled at all on the high pass.

these loras from civitai seem kinda, dogshitpoopoo and almost make the images worse sometimes. limited body jiggle is also something im noticing.

https://civitai.com/models/1984727/ass-ripple

https://civitai.com/models/1343431/bouncing-boobs-wan-14b

>>106687429
for me
>>
>>106687429
For you.
>>
>>106687481
what is disabling lora on the high pass supposed to do?
>>
>>106687524
because the 2.1 lora won't be loaded. it's on my list of shit to google this morning in between like 20 other things
>>
>>106687472
love it
>>
>>106687154
>wan/hidream/qwen/hunyuanimage/chroma
None of those have tag based training like illustrious. Everyone is using illustrious because normal prompting sucks.
>>
>>106687574
>Everyone is using illustrious
Because they are retarded ESLs
>>
Stop using 2.1 loras with 2.2. Either run the correct 2.2 lightning lora or not at all
>>
Man, now we can do anything with all those gens and models from the past, hope you didn't delete any 1.5 model anon
>>
>nigbo
>>
so much more is possible with wan2.5 now. thank you comfy for adding it to the api
>>
>>106687149
had no luck with your prompt, still getting slight changes but no 'real' photo. I'll try playing around more with more creative prompting
>>
>>106687587
no.
>>
>>106687587
2.2 lora is literally broken while 2.1 works perfectly.
>>
>>106687587
and what are you going to do about it if i dont do as you say, pussy
>>
File: qie2.jpg (1.91 MB, 1680x4704)
1.91 MB
1.91 MB JPG
>>106687348
I only have the nunchaku version of the model downloaded anymore besides the bf16, but here are the results.
Top 2 gens are nunchaku + bf16 CLIP with old/new prompt
Next 2 gens are nunchaku + fp8 CLIP with old/new prompt
res_2s + bong_tangent at 30 steps.

What I noticed, however, is that the gens start off as photoreal but gradually move towards the input image hard.
So I reduced my step count from 30 (res_2s) to 10 with the nunchaku version. That's the last gen.
So, it could (perhaps) be a scheduler issue? Unsure.
>>
>>106687645
Forgot to include a gen, but it was basically the same again, anyway.
>>
>>106687645
I'm using the simple sched + euler (so pretty standard) with 20 steps 2.5 cfg
got the same problem with 40 steps 4cfg.
What's your cfg?
>>
>>106687661
4.0
>>
i can run the full wan2.2 animate with their code and man, the results are so much better than using either the KJ or comfy native workflows.
I think they are not doing the preprocessing part correctly or skipping it?
>>
>>106687681
more likely using a better preprocessing than native tools, DWPose is known to be flawed
>>
>>106687587
yeah, i have no clue if it's a meme or something, but don't use a lightning lora on the high noise pass, only use it on the low.
>>
>>106687587
And this, ladies and gentlemen, is why APIs will win out over local
>>
>will
they already did
>>
>>106687681
where could i find a workflow example for that pretty please
>>
File: file.png (648 KB, 1026x867)
648 KB
648 KB PNG
>>106687645
>>106687661
Alright, I think I figured it out. It IS a scheduler issue.
I checked the Qwen Image Edit repo and their scheduler config denotes a max_shift of 0.9.
Adding the ModelSamplingFlux node before the sampler stops the inference from nudging towards the input image too hard and basically reverting any "changes" it has already made.

This is Nunchaku + fp8 CLIP.
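for reference, the thing base_shift / max_shift feed into is the flux-style timestep shift; a sketch of the standard formula (node internals may differ in how they pick the shift from resolution). shift above 1 bends the schedule toward high-noise timesteps, shift below 1 keeps more of the steps at low noise where fine edit details get resolved, which fits the behaviour described above.

def shift_sigma(sigma: float, shift: float) -> float:
    # standard SD3/Flux time shift: >1 spends more steps at high noise,
    # <1 spends more steps at low noise
    return (shift * sigma) / (1.0 + (shift - 1.0) * sigma)

for s in (0.9, 1.0, 3.1):
    print(s, [round(shift_sigma(t / 10, s), 3) for t in range(9, 0, -1)])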
>>
>>106687101
I put a parallel run with the same dataset, let's see how it goes

{
"engine": "kohya",
"unetLR": 0.0005,
"clipSkip": 1,
"loraType": "lora",
"keepTokens": 0,
"networkDim": 32,
"numRepeats": 1,
"resolution": 1024,
"lrScheduler": "cosine_with_restarts",
"minSnrGamma": 5,
"noiseOffset": 0.1,
"targetSteps": 1610,
"enableBucket": true,
"networkAlpha": 32,
"optimizerType": "AdamW8Bit",
"textEncoderLR": 0,
"maxTrainEpochs": 20,
"shuffleCaption": false,
"trainBatchSize": 4,
"flipAugmentation": false,
"lrSchedulerNumCycles": 3
}
>>
Why should Wan2.5 be released locally? Nobody has the local hardware to run it properly. It's way bigger than Wan2.2, to the point where 96gb isn't even enough.
>b-but quants and speedslop loras!
All this does is degrade the quality of the outputs and ruin the reputation of the model. Time and time again we see retards in these threads concluding that models are 'shit' and 'slopped' because they're trying to fit the model into their shitty 12gb 3060 through some shitty custom node quant workflow. If 2.5 was released locally, all the news surrounding it would be about how it's slower and worse than 2.2, and how 'censored' it is because it wasn't trained on 500000+ hours of tattooed 480p goyporn.

alibaba released wan 2.5 as an API model and their stock soared 10%. it's clear now that this was the right move
>>
File: 1728837529372924.png (37 KB, 470x385)
37 KB
37 KB PNG
>>106687680
you might be onto something, a lower number of steps does indeed produce a working image, albeit with a lot of imperfections; in my case it didn't even convert the background (6 steps, 2.5 cfg). If I went for higher steps it would revert to the anime image.
>>106687740
bro you're fucking right like what the FUCK.
Prompt is still finicky as fuck, so you can't just write 'make the image real' anymore, but this stopped the reverting.
>>
File: 00194-2766220383.jpg (430 KB, 2688x3072)
430 KB
430 KB JPG
>>106687111
He just decided to be schizo out the gate and never stopped, if not for him there wouldn't be three threads
>>106687216
It's a small circle of bitter schizos
>>
Didn't forge/neoforge get a massive update? Why are you still using comfy?
What a cucked community
>>
File: ComfyUI_01119_.jpg (367 KB, 2432x1664)
367 KB
367 KB JPG
>>
>>106687821
>base_shift
>max_shift
>shift
>shift_terminal
bruh...
>>
>>106687842
still better than the fucking shift at 3 of the old model.
>>
>>106687842
but what is shift and shift_terminal on comfyui? does that even exist? and you have to put the scheduler on exponential?
>>
File: radiance.png (1.47 MB, 1488x832)
1.47 MB
1.47 MB PNG
>>106687392
i run a full checkpoint from the last 24h or so

> Chroma loves lots of steps.
yes, I suppose so.

anyhow I got you a cleaned up workflow here, prompt/seed doesn't result in anything too remarkable: https://litter.catbox.moe/btq75xxj6ciytqo4.json
>>
>>106687835
listen bucko, i just wasted so much of my limited lifespan installing/uninstalling requirements.txt and all the gay little other bits and bobs to get shit working, the fact that some jew did it in one PR is honestly impressive and im grateful to em.
>>
Is ani here? I have a question
>>
>>106687835
>Why are you still using comfy?
why shouldn't I use comfy? it supports every single local model so far
>>
File: 00198-2766220383.jpg (882 KB, 2688x3072)
882 KB
882 KB JPG
I was talking about how hungry chroma is with steps which makes me curious to try a high res pass at 30 steps
>>
how many of you goys are running 3200mhz ram? trying to get a performance benchmark idea here.
it only just occurred to me that it's probably limiting how fast my wan2.2 gens could be, given 3200mhz is probably kinda slow by comparison to current year's speeds
>>
File: 1734314091930057.mp4 (1.67 MB, 816x560)
1.67 MB
1.67 MB MP4
>>
>>106687919
For the second pass it matters less. The first is more important because there is a difference in details and small errors being fixed when running 30/45/60+ steps.
>>
>>106687889
?
>>
>>106687897
please continue using comfy, that way we can laugh at your fried slop outputs thanks to comfy's inability to implement local models properly (so many more important api to support, please understand!)
https://github.com/comfyanonymous/ComfyUI/pull/9882
>>
File: 1729964042027969.png (1.07 MB, 784x1232)
1.07 MB
1.07 MB PNG
>>106687740
>Change picture 1 from an anime drawing to a photograph of a real person in a real place
it sucks because you have to specify the subjects, also it changed the beach bg to well... that and also changed the camera
>>
>>106687821
meh, that's too much effort for a model that only produces pure slop when going for 2 characters
>>
File: radiance.png (3.14 MB, 1488x832)
3.14 MB
3.14 MB PNG
>>106687929
cool gen
>>
>>106687935
I don't care about HunyuanImage, refiner is a meme and I don't want to support that
>>
File: Qwan_00025_.jpg (795 KB, 2640x1984)
795 KB
795 KB JPG
>>106687821
Yeah, the prompt is still more sensitive, but I'm glad we got it working in the end.

This got me curious and I checked the Qwen Image (Non-Edit) repo as well, they're also using a 0.9 max_shift there.
Adding that to the workflow doesn't change too much, except mess up the left side of the image a bit.
>>
>>106687932
My experience with HD and a good prompt/lora is that you can be aggressive with denoise and it corrects a lot of errors, granted you can maintain hands well, which is easy with a good lora set in my case. HD is being slept on in my opinion
>>
>>106687945
is it me or do they look like the chinese version of them? I think the finetune has biased the model towards chinese people (I think they only used images of chinese people for the finetune)
>>
>>106687957
>HD is being slept on
Because it has worse fine detail. It's shit.
>>
>>106687953
this. who cares about hunyuan's local model honestly when seedream 4.0 exists. please continue to focus on the important stuff. thanks so much for your hard work, looking forward to trying wan 2.5 later today!
>>
File: radiance.png (2.63 MB, 1488x832)
2.63 MB
2.63 MB PNG
>>
File: radiance.png (2.69 MB, 1488x832)
2.69 MB
2.69 MB PNG
>>
>>106687965
It has better composition for complex scenes from my testing, and a good lora will make up for the detail loss. you wouldn't know that on its face, but when you explore a bit you can see it
>>
>>106687978
>imagine running a 9b model to get an SD1.4-tier image
localkeks are so funni
>>
>chroma
lines
>radiance
blocks

can't win
>>
https://xcancel.com/higgsfield_ai/status/1970754649042145504#m
>the best open source model
>now with sound
ITS NOT OPEN SOURCE REEEEE
>>
>>106687481
1. Could be down to cfg being too low (high noise side) and possibly the strength of any loras you may have enabled being set too high (aside from lightx2v). You don't always need to run every lora at 1.00 strength.
2. Using multiple loras can lower or reduce details of the output (at least with i2v), in that case try lowering the strength of said loras especially on the low noise side.
3. Some of those civitai loras are indeed sometimes just dogshit, like that ass ripple lora.
>>
>>106688029
reminder that this is comfyorg's goal: conflating local with API models to confuse people into promoting API as 'locally uncensored' to bait more people into paying for API services. half the sd retards don't even realize wan2.5 isn't a local model because it was shilled all over. same with qwen max llm. retards spouted 'china good!' for years and now everyone thinks china = local.
>>
File: radiance.png (2.52 MB, 1488x832)
2.52 MB
2.52 MB PNG
>>106687993
yes, the anime waifu support is getting nice again on radiance
>>
chroma went from mildly interesting to downright embarrassing after seedream. china showed the world how a proper model should be baked
>>
>>106687586
Is there any guide for training and prompting with normal prompts?
I'm also a retarded ESL and I don't know exactly how to put the information about the image into words for those models
>>
>>106688055
>the anime waifu support is getting nice again on radiance
how? it barely knows a few anime characters and has no artist tags
>>
>>106688055
This looks exactly like anythingv3 from 2022, except it takes 10x longer
>>
>>106688063
Just train it lolololololololol
>>
>>106687840
Fag
>>
>>106688082
thanks
>>
bro is trying so hard it's cute
>>
File: Qwan_00033_.jpg (598 KB, 3358x1232)
598 KB
598 KB JPG
>>106687942
>Change the image from a drawing to a highly detailed photograph of a real human.
nunchaku, fp8 CLIP, 30 steps
shift 1 in the first and shift 3.10 in the second gen
max_shift 0.9
res_2s
beta_57
cfg 4.0
>>
>>106688101
can you use this to make your comparisons? it's hard to know which one is which
https://github.com/BigStationW/Compare-pictures-and-videos
>>
>>106688094
Good image if it was Chroma but it's a shame it's 8x slower than SDXL, still doesn't make up for the time vs results
>>
give it to me straight anons, is 16GB enough for 720p Wan2.2 with quants, or should I be saving up for a bigger card?
I keep getting OOMs and exploding RAM, I've tried maxing out the block swap node in the KJ workflow as shown in
>>106687222
but even setting swap blocks to 40 and using Q4_K_S ggufs it just seems to ignore whatever I ask it to do, balloons to like 22GB VRAM, and then stops doing anything
>>
but can qwen do vore?
it sure can do gore, i nearly threw up
>>
>>106688130
Wait for 24gb supers. 24gb is the bare minimum for acceptable outputs if you want to run modern models
>>
>>106687645

Is this the new Qwen Edit? I've been trying to do similar stuff with previous qwen but the result is always just a slightly more "Realistic styled"
>>
Now that Alibaba has abandonned us, what's our next cope?
>>
>>106687645
>So, it could (perhaps) be a scheduler issue? Unsure.
they say here they're using the exponential scheduler >>106687821
>>
File: ComfyUI_00282_.jpg (608 KB, 2408x1176)
608 KB
608 KB JPG
>>106688109
Will do (or something similar) for the next comparisons. Sorry.

>>106688152
Yes. However, it worked fine with the old one as well.
This is the same image I did with the old model a few weeks ago.
Input image - Qwen Image Edit output - Wan upscale.
>>
>>106688155
chroma remains the eternal beacon of localcope. just keep giving scatstone more money and it will lock in by epoch 500
>>
>>106687840
Plastic looking AI slop, no soul
>>
>>106688176
retard
>>
>>106688184
It's a trash gen, don't like it, sorry.
>>
>>106688168
That's pretty much good enough for the type of output I want. Would you share workflow?
>>
>>106688176
>Plastic
huh?
>>
>>106688194
It looks literally just like the anime. Continue being retarded.
>>
>>106688111
That's not true
The tagging is a problem. I hope I can showcase the model better the more I use it; there was a time when anons used to claim noob was worthless, until I started using it and got good with it. I like to show the process of understanding the model, but I feel like lora creation is key, sadly.
>>
>>106688204
he’s seething because it looks better than 2.5d chromasloppa
>>
File: file.png (2.75 MB, 832x1488)
2.75 MB
2.75 MB PNG
>>106688063
makes a diverse range of anime/comic waifus with clothes and accessories
>>
File: Chroma2k-test_00011_.jpg (682 KB, 1248x1824)
682 KB
682 KB JPG
>>
>>106688204
For you, the pedophile author, it's plastic and feels overly smooth and artificial
Also KYS pedo who can't listen to other people's opinions
>>
File: lol.png (284 KB, 536x453)
284 KB
284 KB PNG
>>106688223
"They are beautiful hands", that's what I would have said if we were in 2022.
>>
Debo really ripping from my old playbook to grief the thread, even more pathetic
>>
>>106688176
it really does look like dogshit doe
>>
>>106688130
I don't usually use kj.
i see no reason why it wouldn't work with the multigpu node.

yes, the 32gb 5090s have both a compute and VRAM advantage and will be faster, but that doesn't mean 16GB of VRAM won't work, just more slowly.
>>
so many redditors here today
>>
>>106688149
thanks for the honesty anon
is a 3090 still worth it? hoping the 5080 Super will push the prices down
>>
>>106688262
just buy a 5090
>>
>>106688262
As a last resort maybe. It’s painfully slow, slower than 16gb cards, but at least it won't oom as easily
>>
>>106688228
Keep giving your opinions freely anon. He's a snowflake, had similar experience with him on other gens. Don't know why he shares gens if he can't handle when people give negative opinions
>>
>>106688271
It's ironic the SaaS shit started the moment a certain schizo with that card got priced out
>>
File: file.png (2.78 MB, 1488x832)
2.78 MB
2.78 MB PNG
>>106688237
definitely is not rare, but even now it's back to a good range of 1girls
>>
>>106688298
Don't like those kawaii fags, or whatever they're called, that anon is a pedo, hopefuly he will gonna end up getting arrested
>>
>>106688130
yes, 5060ti 16gb. bought it when i saw the jap benchmarks pointing out it was the only low end straggler that DIDN'T oom testing 720p.
>>
>>106688326
pedo website btw.
Reddit is calling you need to go back
>>
>>106687840
KYS pedo, don't pretend to disguise your rampant pedophilia with your "cute" gens.
>>
>>106687840
coming in clutch to say this gen is cute
>>
>>106687840
Massive win, you triggered the schizoboomer brown somehow. Post more
>>
>>106688359
Free speech website btw
Reddit is calling you need to go back
>>
>>106688377
>we did it, we beat poopdickschizo
is it the 4chan version of "owning the libs"?
>>
>>106688363
holy melty
>>
>>106688384
It's a term made by the schizo; he likes to say retarded catch phrases. you should have seen the beans era where he would post that non stop every thread like clockwork. He really wonders why the thread he kept is dead
>>
>>106688377
i'll still never get why they even run ops on the locals threads. Like, is this supposed to get us to stop using local models? what does this accomplish? we just laugh at the schizos and keep genning/talking.
>>
I don't care, it's my opinion, pedos like you are famous for samefagging when someone calls you out.
>>
>>106688109
30 steps reverts to anime for me, 2.5cfg was also not enough. 20 steps + 4 cfg is working

in your example shift1 looks much better. I'm starting to think I might've fucked my wf somewhere, as I'm not getting your same results.
>>
>>106688379
why would I go to Reddit?
it's not a pedo site
>>
my favorite schizo catch phrases are lil bro and i accept your concession
they say it like clockwork every thread
>>
>>106688196
For the old one or the new one?

>>106688410
Have you added what I did in >>106687740 ?
>>
File: 1757830010588863.jpg (351 KB, 1440x1440)
351 KB
351 KB JPG
I mean who the fuck actually uses API. It's the most lame shit in existence.
>>
>>106688411
It's an echo chamber, maybe you like it if you can't handle when someone calls you out.
>>
>>106688444
>It's an echo chamber
nta but going by that logic then this place is an echo chamber too so he's in the right place lmao
>>
>>106688443
The main thread schizo got a 3090 which is aging with newer models and he can't get a new card from his parents, it's just that simple, same posting patterns and behavior. It's kind of ironic with how hard he went to defend XL because he could lord over other anons when it first came out. He was too autistic to make anything good and still got laughed at
>>
File: radiance.png (2.45 MB, 1488x832)
2.45 MB
2.45 MB PNG
>>106688130
also you say quants, I think it ought to work even with q8 on 16GB vram, again assuming you can dump a bunch of layers and other stuff like the text encoder to RAM and/or unload them

maybe the resolution forces unloading the checkpoint and perhaps even tiled decoding when you get to VAE, IDK. but it should work.
>>
new bread doko?
>>
>>106688459
Same. This is my place, not moving. Cope and seethe when you get called out. Faggot.
>>
>>106688463
Well tbf XL with an IL finetune still the most kino anyways.
>>
>>106688473
when can I finally 3d print my ai genns and eat them?
>>
>>106688490
Yeah, but this was during the man-face base XL era, when retards were fighting me on the predictable downfall of SAI due to their training practices and drunken-sailor spending on DoA models
>>
>>106688488
I accept your concession lil bro.
>>
>>106688467
with around 64gb, yeah. I think everyone saying you can run q8 on a 16gb card are forgetting to mention that extremely important tidbit. Or, don't even realize they're paging onto their nvme ssd's and brutally raping down its lifespan.

oh boy. more waiting for me kek must resist the urge to let corsair rape me out of $120 for 32gb of ddr4 again.
>>
Is it a good idea to train the new Qwen Image Edit with Dall-e 3 images so that I can ask it to turn regular or anime images into that style? Will that work well?

What trainer to use?
>>
new
>>106688541
>>106688541
>>106688541
>>106688541
>>
>>106688504
Call me whatever you want, lil bro, schizo, boomer.
My message, and what I'm trying to say, is >>106687840.
This gen is smooth pedophile slop, I don't like it and it's clear that the author is an undercover pedophile masquerading behind cute generations with faggy butterflies. From there, call me whatever you want.
>>
>>106688552
what do you mean?
>>
>>106688512
it'll be years of genning until you can kill a decent nvme with a few gb every wan gen

and the amount of additional free RAM required to offload is about what you offload.


