/g/ - Technology

Never Converge Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106540158

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://rentry.org/wan22ldgguide
https://github.com/Wan-Video
https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: Yay.gif (496 KB, 640x640)
Made a gif of this >>106540916 for anyone that is interested. Just looping in celebration.
>>
Cursed thread of foeboat
>>
File: 1_00097_.mp4 (453 KB, 640x640)
>>106542161
Thanks for making that, the cup part was silly.
>>
File: hy_generation_test_0.jpg (329 KB, 2048x2048)
I did a few more tests with hunyuan image. Each one with the "reprompt" enabled and then disabled. The initial prompts I used are captions generated from the original illustrations by Qwen 7B VL.

spoiler: sloppy slop but maybe the higher resolution and vae will make it good for upscaling output from other models.
>>
>>106542182
That's the reason I edited it. The best bit was at the start.
>>
>>106542161
I'll always celebrate Futaba and 4chan.
>>
blessed thread of anime
>>
>>106542163
Ahaha based ty anon.
>>
>>106542184
>3dpd background with anime face
This model is so over.
>>
The reprompt didn't change much here
>>
>>106542182
catbox?
>>
>>106542184
>real ass shoes
>>
File: 00000_721664149.png (2.53 MB, 1152x2016)
>>106542182
You're welcome. I downloaded it from Civit, I have no idea who she is, I don't like anime, kek.

Remember to post anime here! It's important for /ldg/ unity!
>>
>>106542192
i just love me pixar girls so much. i know everyone got it out of their system back with sd1.4, i'm a late bloomer.
>>
File: hy_generation_test_1.jpg (274 KB, 2048x2048)
The reference for this prompt was a knights of sidonia manga cover
>>
>>106542206
y-you don't know yotsuba, the mascot of 4chan?
>>
>>106542196
>>106542087
"The girl wiggles her breasts before she smiles and blinks at the camera."
>>
No reprompt. The reprompter deleted information about the character / franchise name, very helpful.
>>
>>106542097
that works i guess but the 4step loras make it shit the bed
>>
File: hy_generation_test_2.jpg (317 KB, 2048x2048)
Midjourney-flavored slop?
>>
What's the best model for horror?
>>
Why did the quality of this thread suddenly drop so low?
>>
No reprompt, followed the original prompt better.
>>
>>106542184
They're made to look like slop on purpose. Being open source, they intentionally make it look unrealistic
>>
>>106542239
640x640
>>
File: 1_00100_.mp4 (447 KB, 640x800)
>>
>>106542184
Does it know artists?
>>
File: hy_generation_test_3.jpg (445 KB, 2048x2048)
Last one, reference was a technical diagram for a Soviet rocket. Wondering if it can do fine detail lines/text with the high res vae.
>>
>>106542251
I tried a few so far and got nothing. Actually couldn't get it to deviate from its base generic moe slop style at all for anything anime-like.
>>
File: v4.jpg (386 KB, 844x1200)
>>106542223
Which language version is that? Because Vol. 4 is this.
>>
>chroma-unlocked-v50-annealed
what the fuck is annealed in this context?
>>
>>106542272
some schizo bake experiment. avoid
>>
>>106542272
dont even bother. stick to wan and qwen
>>
File: 1_00102_.mp4 (597 KB, 640x800)
>>106542195
>>
>>106542272
Why are you using v50 at all? Stick to Base or HD.
>>
>>106542217
No, no idea, never watched it.
But that doesn't matter, keep posting it!
>>
>>106542288
>>106542290
uh, i was already downloading it kek. damn, that bad? im gonna try it
>>
>>106542272
Use either Chroma base or Chroma HD. The numbered versions 49, 50 and 50A are abortions. You can also get 48 as a benchmark.
>>
>never watched it
>>
File: 1_00103_.mp4 (692 KB, 640x800)
>>106542241
>>
>>106542293
>>106542299
I like testing models to see the differences on fixed seeds. That's why. I downloaded base and HD too so i'll keep those.

Gonna try Qwen and the others later.
>>
>>106542135
Now that Nvidia will finally be offering 24 gb at a somewhat not completely insane price, what is the state of the current open source video models? Can they finally support start and end frames? Or is it still just prompt based?
>>
>try comfyui native t2v
>blurry and noisy
gonna have to bite the bullet on that wrapper huh
>>
>>106542248
stay mad vramlet
>>
File: 1376452934921.jpg (785 KB, 1134x1600)
>>106542266
>>
>>106542184
>>106542195
Crazy how one is SEXO! SEXO! SEXO! while the other is Ehhh, Yeah. I guess I would.
>>
File: 1_00104_.mp4 (1.75 MB, 640x896)
>>106542294
>>
>>106542314
i'm doing start and end frames right now but it's for looping pron
>>
>>106542314
Why buy a 5080 super when you can just buy a 5090?
>>
>>106542310
I don't blame you, since lode doesn't explain literally anything ever. If I remember right, annealed was a merge that one random autist requested on his discord server and has next to no use case.
>>
>>106542314
>not completely insane price
You do realize it's going to be vastly more expensive than the 4090 still. You won't get it for MSRP.
>>
File: 1733668812356.webm (2.43 MB, 848x480)
Videogen peaked with HunVid.
>>
File: 1733674062968.webm (1.64 MB, 960x544)
>>106542347
>>
File: 1733745391308.webm (2.69 MB, 1280x720)
>>106542357
>>
>>106542340
I can't imagine spending tens of thousands and not even documenting anything. Like if he wants this to be a base model to be trained, how about some basic training settings for a full finetune?
>>
Am I tripping or does Qwen not look much different than Noob/Illustrious semi-realistic generations?
>>
>>106542383
qwen looks like coherent sd 1.5
>>
>>106542383
get your eyes checked
>>
File: 1729289276497737.jpg (770 KB, 1416x2120)
>>
>>106542366
prompt for this one?
>>
>>106542383
it was probably trained on slopped aibooru outputs so...
>>
File: 1756135024080396.jpg (872 KB, 1416x2120)
>>
File: 1_00106_.mp4 (957 KB, 640x928)
>>106542192
>>
>>106542368
Lode is a super autist so I don't really think much of him not being straightforward. What does surprise me is how *nobody* in his circle has gone out of their way to help out the community and get some basic guides out for the model.
>>
>>106542334
>>106542345
The 5070 Ti Super is rumored to be sub 800 bucks. Better bang for your buck than a 5090 or 4090.
>>
After training a few loras and using it pretty often... chroma is just too hit or miss for me to want to sit here and gen constantly. The variance of quality when it comes to seed is actually insane. Super frustrating to deal with.
>>
>>106542444
what part of "You won't get it for MSRP" do you not understand
>>
>>106542424
I don't know. It's from anon https://desuarchive.org/g/thread/103457674/#q103458870
>>
>>106542444
>bang for your buck
If you're pinching pennies you're in the wrong hobby
>>
>>106542237
depends on what you mean by horror but any good model should play nice with "horror \(theme\)"
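The backslashes in that tag matter: most SD frontends treat bare parentheses as attention-weight syntax, so parentheses inside booru tags have to be escaped. A tiny hypothetical helper (the function name is mine, not any UI's API) to illustrate the convention:

```python
def escape_tag(tag: str) -> str:
    """Escape parentheses in a booru tag so a prompt parser doesn't
    read them as attention-weighting groups."""
    return tag.replace("(", r"\(").replace(")", r"\)")

# escape_tag("horror (theme)") yields the prompt-safe form: horror \(theme\)
```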
>>
>>106542444
How fucking poor are you that you can't just get a 5090? You could probably save up in a few months if you really want it. I agree with the other guy, wrong hobby to be a fucking miser.
>>
File: 1396157476318.png (705 KB, 3304x3317)
How do I run pseudo hiresfix/upscale/detailer or whatever you wanna call it, on chroma? The external hiresfix script doesn't work on flux architecture and the flux upscale controlnet doesn't seem to do anything on Chroma. If it can be done with just UltimateSD upscale, then what settings? All I currently get are just enhanced image artifacts and not polishing.
>>
File: 1_00109_.mp4 (793 KB, 640x832)
>>106542192
>>106542211
>>
>>106542482
holy shit, I was just about to ask that.
>>
File: 1_00110_.mp4 (455 KB, 640x640)
>>106542482
>>
>>106542444
Not sure why you're getting so much negative attention. It's the best price/performance ratio. The 5000 series has been gathering dust at MSRP for over a year at this point, so past the initial rush I imagine the same will be true of these.
>>
>>106542522
see
>>106542449
>>
>>106542445
Chroma really needs a long, detailed prompt to get consistent results. The less detail you give it, the wackier the output is going to be.
>>
>>106542482
vae decode. upscale image with upscaler node. vae encode, sampler at low denoise
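The last step is the key: the second sampler pass runs at low denoise, so only a fraction of the steps actually resample. A sketch of that, assuming the common steps * denoise convention (helper name and rounding are my assumptions, not any node's actual code):

```python
def denoise_steps(total_steps: int, denoise: float) -> int:
    """Number of steps a low-denoise img2img pass actually samples under the
    usual steps * denoise convention. Around 0.2-0.4 polishes the upscaled
    image; values near 1.0 repaint it and discard the composition."""
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    return max(1, round(total_steps * denoise))
```

So 20 steps at 0.3 denoise resamples only 6 steps, which is why the pass tightens detail instead of regenerating the whole image.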
>>
>>106542528
I already mentioned that in my post. 5000 series hasn't exactly been flying off the shelves and I don't expect the revised models to either considering we're in a recession.
>>
>>106542496
Reminds me of those old esurance commercials. Also
>she won’t stop yapping
Kek, it’s almost charming at this point
>>
>>106542445
it made me go back to slopmixes on sdxl.
the prompt adherence can be superior but if i get another random art style change, even though it has been described in detail in the prompt, i'm just going to delete chroma and shitpost here about it
>>
>>106542535
Not latent upscale?
>>
>>106542347
>Videogen peaked with HunVid.
It didn't, but I am excited to go back to legacy video models with a GPU from 2030 that can make a 5 second video in 15 seconds in fp16 at 720p and see what could have been if only we had the compute at the consumer level

>>106542544
5090 still sells out like hotcakes. I expect the super to as well because it is above 16gb of vram
>>
>>106542469
post your 4x 6000 pro sis, you're not a retarded poorfag, right? lool
>>
>>106542583
I guess that depends on your country. Here it's never gone out of stock, hence my perspective.
>>
>>
>>106542583
>Videogen peaked with HunVid.
NTA, but I think it would be more correct to say that Hunyuan peaked with Hunvid.
>>
>>106542586
There's a big difference between a 5070, a 5090 and a 6000 Pro. But yeah bro, have fun genning on SDXL or whatever you will do on that shitty card. But I get it, you're poor, you don't have disposable income, so even $800 is a bad choice.
>>
how the fuck do i load gguf's with kijai's wrapper
>>
>>106542606
>me, NTA, but an sdxl genner on a 5070ti just catching random strays for no reason
Damn anon you don’t have to do us like that
>>
>>106542586
You got told to buy a 5090 and you skip to demanding a 4x 6000 Blackwell setup, you're retarded. A 5090 is the best bang for your buck. No need to settle for a 5070 LMAO.
>>
>>106542611
You just load the model in the model loader node.
>>
fucking around with infinite talker and it's cool but still has janky lip-sync issues because it's on outdated wan 2.1. Got it to work with a janky 2.2 all-in-one porn model and it was obviously better at lip-syncing even through all the noise and jank.

I'm assuming all the comfyui nerds are making 2.1 workflows for a reason though. What is it waiting on?
>>
>>106542637
No he wants a card that's already obsolete and can't even run modern models of 2025 let alone models in the next 2 years.
>>
>>106542535
>sampler at low denoise
how low? i keep getting these black scanlines on the upscale.
>>
>>106542637
>>106542606
>poorfag cope
Lol. Imagine not having like 5-7k$ after like a month of work, poorfag children.
>>
>>106542706
broski is larping
>>
>>106542706
I don't know why you seethe like this
>>
>>
File: hy_generation_test2_0.jpg (490 KB, 2048x2048)
More hunyuan slop!
>>
File: hy_generation_test2_1.jpg (384 KB, 2048x2048)
Hunyuan seems to really like this inset photo frame composition. Not sure what triggers it
>>
File: hy_generation_test2_2.jpg (347 KB, 2048x2048)
Was aiming for a flatter shading style here, but at least it went with the limited palette.
>>
>>106542733
I want to know how two seemingly unrelated companies managed to produce the exact same slopped anime style.
>>
>>
File: hy_generation_test2_3.jpg (304 KB, 2048x2048)
>>106542760
They really may be training on each other's outputs + the same prepared datasets. I think that at the least they're doing their deranged "aesthetic tuning" pass with very similar targets.
>>
>>106542760
A general model is going to be reduced to the average of the dataset, ultimately what the consensus of what "anime" means.
>>
>>106542710
>>106542709
It's ok you're a poorfag who larps as having money for spending 2k$, you can just find another hobby lol
>>
>>106542767
Aka synthetic data, a well-known secret: the chinks just use AI to train the AI
>>
File: hy_generation_test2_4.jpg (312 KB, 2048x2048)
This one is interesting because the prompt actually avoided including the name "asuka" or anything that specific, while still trying to describe her.
>>
>>106542774
>no proof
>no terminal of your 4x6000PRO
LOL
>>
>>106542780
He's a tourist.
>>
>>106542774
Do you actually think it's difficult to have more than $10,000 in a bank account? Get a job anon. Even a normal incel can work 40 hours a week and make $1000/week and keep all that money living at home with mommy.
>>
File: HONK HONK.mp4 (2.93 MB, 512x640)
reposting this in case anyone missed:

>Wansisters, we've OFFICIALLY and natively escaped 5 second hell. I got up to 15 secs and can't notice the seam, here's the info:

>How?

- the new Wan Context Windows (manual) node

>Workflow

- https://files.catbox.moe/aw54aq.json

Here's a video example, hopefully some of you anons will get better results (yyyyeeeeess, I know, lightx2v fried the first frame, kek)
>>
File: hy_generation_test2_5.jpg (202 KB, 2048x2048)
Was supposed to be Sailor Mars. No idea how it became... this.
>>
>>106542790
Can you believe it, the problem isn't that you *can't* do 15 seconds, the problem is that no one does variations of the same movements for 15 seconds. You can tell it's just doing the same prompt on repeat.
>>
File: hy_generation_test2_6.jpg (287 KB, 2048x2048)
Still not much luck deviating from the unistyle
>>
File: hy_generation_test2_7.jpg (390 KB, 2048x2048)
I think that despite the higher resolution vae/output, the actual level of detail is lower than what we already get from the ~1MP models.
>>
>>106542790
Clown girl ai gf when?
>>
>>106542812
That's a general digital illustration, you should browse ArtStation to see what real artists do. Most artists don't do 1000 hour hyper-detail projects.
>>
what sampler/schedulers do you all use for wan
>>
>>106542825
I'm in the media industry and quite familiar. In the case of that test, the goal of the prompt was a high level of detail.
>>
>>106542847
Well "high level of detail" isn't what any model is captioned with, but you would know that as an expert.
>>
>>106542780
>well known secret the chinks just use ai to train the ai
kek, nothing is more slopped than western models, with Flux dev being the most slopped of all due to being entirely trained on synthetic data, as in nothing but output from Flux Pro

The only unslopped base model since SD1.5 is Chroma; even SDXL used a lot of synthetic training data
>>
>>106542790
What does this new node even do?
>>
>>106542790
slow motion
>>
>>106542801
True, it's hit or miss and yes, a lot of the gens do tend to repeat. The plus side is, it's a step in the right direction away from the hacky workflows. Plus vramlets like me can load 240+ frames without OOMing. Reminds me of animatediff days.

>>106542856

>These nodes let you sample in sliding context windows instead of all at once, opening up new workflows for long sequences. Currently, only manual control is supported, and some WAN models still need tuning, but this lays the groundwork for more advanced scheduling and custom nodes.

Source: https://blog.comfy.org/p/comfyui-now-supports-qwen-image-controlnet?open=false#%C2%A7context-window-support
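Roughly, the idea is to split the frame axis into overlapping windows and sample each one separately, blending the shared frames to hide the seams. A minimal sketch of that splitting (the helper and the flush-to-end behavior are my assumptions, not the node's actual implementation):

```python
def context_windows(num_frames: int, window: int, overlap: int) -> list[tuple[int, int]]:
    """Split a frame range into overlapping sampling windows, the basic move
    behind sliding-context video sampling: each window is denoised on its own,
    and the overlapping frames are blended between neighbors."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than the window")
    stride = window - overlap
    windows = []
    start = 0
    while start + window < num_frames:
        windows.append((start, start + window))
        start += stride
    # flush the last window to the end so no trailing frames are dropped
    windows.append((max(0, num_frames - window), num_frames))
    return windows
```

Each consecutive pair of windows shares `overlap` frames; that shared region is where the blend keeps the motion continuous.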
>>
>>106542790
Okay but, can you make her do a long sequence of things within that timeframe or is it going to loop like that? It's also in slow motion.
>>
File: 1751759780757968.mp4 (257 KB, 512x512)
>>
>>106542895
The right direction would be allowing first frame + X frames (which can be the last frames from another video) with a new prompt. The problem never was "the videos aren't long enough", or rather, that's a naive understanding of it. The real problem is that we need to control the video over long sequences, and a single prompt doesn't cut it, so you need a sliding window and you need to be able to interpolate the prompt across it.
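A sketch of that prompt-scheduling idea, as keyframed prompts picked per window. Everything here is hypothetical; a real implementation would interpolate the text-encoder conditioning tensors rather than switch strings:

```python
def prompt_for_window(window_start: int, keyframes: dict[int, str]) -> str:
    """Return the prompt governing a sampling window: the keyframe with the
    largest frame index that is not past the window's first frame."""
    eligible = [frame for frame in keyframes if frame <= window_start]
    if not eligible:
        raise ValueError("need a keyframe at or before the window start")
    return keyframes[max(eligible)]
```

With `{0: "she waves", 80: "she sits down"}`, windows starting at frames 0 and 65 keep "she waves", while a window starting at frame 130 switches to "she sits down".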
>>
File: what do you WANt.jpg (24 KB, 474x365)
>>106542897
Fuck knows. Try it out yourself: the workflow's there, or load the new context nodes after updating comfy if you haven't. I only mentioned it because I didn't see anyone talking about it.
>>
>>106542935
Message the devs, brother.
>>
>>106542095
>>106542163
>>106542434
>>106542496
Stop touching the mix with those latex gloves, you're ruining it dumb bitch
(Also still fucking around with VibeVoice)
https://vocaroo.com/1cbgLlwQ15FK
>>
>>106542934
>256 color VGA
The good old days
>>
>>106542948
KEK
>>
File: file.png (31 KB, 933x260)
holy shit ani is actually onto something with this sdcpp stuff
>>
File: 1746481131113230.jpg (941 KB, 1416x2120)
>>
Hunyuan did this without even asking
>>
>>106542999
(the nip, not the censoring)
>>
whats the verdict on the new chink model? total slop? better or worse than qwen?
>>
File: 1731168167664444.jpg (934 KB, 1416x2120)
>>
>>106542999
this the image model? wonder if it's fully uncensored like their video model
>>
Name a bigger scam than fp16 fast. You can't.
>>
File: hy_generation_test3_8.jpg (368 KB, 2560x1600)
>>106543022
Yes, the new Hunyuan Image.

>>106543010
Many examples in this thread. Slop level is similar to Qwen.
>>
>>106542999
looks like a gay man with long hair
>>
>>106542790
i bet if you sped it up so it's no longer slow motion it would be 5 seconds
>>
>>106542806
This image belongs to Anime Diffusion Thread.
>>
The stupid post above me is going to get like three (You)s
>>
>>106541117
show some respect to your king...

I think there are like 4 people here with 6000s. I'm working on a C210 build with 4 in it :3
>>
File: 1754305612483354.mp4 (329 KB, 512x512)
>>
>one more seed
>i know the next seed will be better
>>
anyone have that one 3 sampler workflow for wan?
>>
>>106543065
CFG skimmers
>>
>>106543170
I still think fp16 fast is worse. It absolutely butchers quality.
>>
File: 1744604216990248.mp4 (499 KB, 512x512)
>>
File: file.png (10 KB, 1410x103)
>update ComfyUI
>suddenly basic image generation takes up 100% VRAM
can anybody think of a fix or do I need to reinstall from scratch?
>>
File: 1743755857442206.mp4 (567 KB, 512x512)
>>106543317
>>
File: 1.png (33 KB, 231x98)
don't prompt large eyes on noob
>>
>>106543334
download more ram
>>
>>106543334
Just change the git head to the last commit that didn't do that?
>>
>>106543334
do a basic, separate clean install and test it. shouldnt take that long.

or use a backup of you comfy when it worked, you DO keep a backup, right?
>>
File: ComfyUI_00264_.png (1.25 MB, 1280x800)
>>106543340
I used to love playing Beach Babe Killer when I was a kid
>>
File: wan22_light21_00661.mp4 (301 KB, 464x464)
>>106542434
>>
>>106543367
>you DO keep a backup, right
>what is git reflog
>>
>>106543400
i know what it is but i just like to start shit while i wait for my prompt to finish
>>
>>106543414
I also like to start shit in this thread while my gens are genning.
>>
Are there any small local LLMs for prompt generation? Either booru or boomer prompts
>>
Maybe I'm fucking retarded, but what words do I have to give Chroma for it to make a room look like this? I tried

>a dark room with barely any light
>dimly lit room
>dark ambience
>horror theme
>television is the only source of light in the room

And they're still bright or fully lit.
>>
File: Screenshot_215.png (28 KB, 308x165)
>just use the 2.1 4step lora on 2.2 bro!
>get this
>>
>>106543455
yes, multiple. one is even called promptgen IIRC, then dantaggen/tipo and so on

i am not very convinced of their usefulness as most of them don't have a particularly good grasp on what tags the models know or the creativity isn't necessarily very good

wildcard lists seem to perform better to me
>>
>>106543490
Looks like you're using the wrong vae to me.
>>
>>106543501
Makes sense. Maybe editing models are more suitable for my purposes actually.
>>
>>106543505
this is a screenshot of the sampler preview
>>
File: hy_generation_test3_4.jpg (233 KB, 2560x1600)
>>106543455
Qwen and probably Hunyuan use Qwen 7B VL, so it would probably be the best. Hunyuan actually rewrites your prompt by default using it, but it enslopifies it.
>>
File: wan22_light21_00662.mp4 (283 KB, 464x640)
>>106543490
works on my machine
>>
there must be more to life than this
>>
>>106543533
you're here forever, chud
>>
>>106543512
Looks like you're using the wrong vae to me.
>>
>>106543533
>than this
than what?
>>
File: AnimateDiff_00315.mp4 (2.32 MB, 480x720)
>>106542314
>>
>>106542314
I don't think I could convince myself to buy another 24gb card knowing a 32gb card exists. Like my brain just won't let me.
>>
File: Rachel Nichols.jpg (533 KB, 1999x3000)
What u guys think? Is the slight lost of detail is worth the smoother fps?
Default 16 fps: https://files.catbox.moe/6dwlns.mp4
Interpolate 4x (64fps) https://files.catbox.moe/gsca58.mp4
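For reference, 4x interpolation synthesizes three new frames between every source pair. A linear-blend sketch of the timing math (real interpolators like RIFE estimate motion instead of cross-fading, so this only shows the positions and counts):

```python
def interp_weights(factor: int) -> list[float]:
    """Fractional positions of the frames a factor-x interpolator inserts
    between two consecutive source frames."""
    if factor < 2:
        raise ValueError("factor must be at least 2")
    return [i / factor for i in range(1, factor)]

def output_frames(source_frames: int, factor: int) -> int:
    """Frame count after interpolation: every gap gains factor - 1 frames."""
    return (source_frames - 1) * factor + 1
```

So a typical 81-frame, 16 fps gen becomes 321 frames at 64 fps, same duration, 4x the temporal samples.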
>>
Found something interesting about Hunyuan and artist names.

If it doesn't recognize the name at all:
It generates a character with the assumed ethnicity of the name: "masamune shirow" or "yoji shinkawa" produce asians and "greg rutkowski" and "ilya kuvshinov" produce euros (sameface within that group).

If it recognizes the name, apply generic style associated with the name. For example, "yoshitaka amano" produces the generic moe anime slop style shown previously. Pic related.

Classical painters like Michelangelo and Caravaggio produce the same generic renaissance-like style.

Make of this what you will.
>>
>>106543566
proooooooompting
>>
>>106543675
I make nothing of it because as far as I can tell you are the one human being on earth using this model.
>>
>>106543668
Definitely prefer default, looks much more real
>>
>>106543683
>is there le MORE to this in life
you could say that about any hobby or anything really. if you're depressed, that's your problem but i quite like prooooooooompting and hearing about the advancements in the tech
>>
File: ComfyUI_01569_.png (1.42 MB, 1024x1024)
>>
>>106543685
Comfy I think just added support so most were probably waiting for that.
>>
File: FluxKrea_Output_253622.jpg (2.68 MB, 1792x2304)
>>
So I've come into the need to use *gags a little* SDXL for some img2img stuff. Is there anything on comfy UI that would let me inpaint faces at a higher resolution?
>>
>>106543724
For you? No.
>>
>>106543722
i dont know why but for a while i thought krea was a closed source model
>>
>>106543728
Fine I'll just use the crop and stitch node.
>>
File: 1756845697849391.png (181 KB, 1832x725)
https://xcancel.com/bdsqlsz/status/1965593013687861603#m
the edit model will be released in 2 weeks (kek)
https://youtu.be/9v-33jcEDk4?t=19
>>
>>106543717
must be pretty bad if he didn't bother shilling it here
>>
>>106543765
i dont think it was confirmed anywhere that its an edit model
>>
>>106543791
https://xcancel.com/TencentHunyuan/status/1965433678261354563#m
>We’re just getting started. Stay tuned for our native multimodal image generation model coming soon.
>>
>>106543810
https://github.com/huggingface/transformers/pull/40771
Is that this one?
>- **Hybrid Attention**: Replaces standard attention with the combination of **Gated DeltaNet** and **Gated Attention**, enabling efficient context modeling.
>- **High-Sparsity MoE**: Achieves an extreme low activation ratio as 1:50 in MoE layers — drastically reducing FLOPs per token while preserving model capacity.
>- **Multi-Token Prediction(MTP)**: Boosts pretraining model performance, and accelerates inference.
>- **Other Optimizations**: Includes techniques such as **zero-centered and weight-decayed layernorm**, **Gated Attention**, and other stabilizing enhancements for robust training.
Or an unrelated new text-only model?
>>
File: hunyuan_image_test_7_1.jpg (251 KB, 2048x2048)
>>106543717
>>106543782
It looks kind of half-hearted, not sure if I should bother setting it up yet.

Meanwhile, here's the first hunyuan mutant I've gotten. I guess it's good that it took this long.
>>
>>106543840
no, it's a LLM from Alibaba, the edit model will be from Tencent
>>
>>106543840
Sorry, forget it, sperg-out on my part.
That's Chinese, but it's Qwen, kek.
What are Meta/Mistral etc. doing? China is killing it with local. I don't have much free time so I can't really keep up.
>>
File: 1739696168878945.jpg (730 KB, 2048x2048)
https://github.com/comfyanonymous/ComfyUI/pull/9792
it looks exactly like Qwen Edit, wtf
>>
>>106543853
>What are meta/mistral etc. doing?
meta had some interesting video models but they didn't release it because "too dangerous", and mistral doesn't do diffusion models, only LLMs
>>
>>106543860
take it slow bro
>>
>>106543860
Fuck you Cumfy.
>>
>>106543879
isnt meta going full closed weight soon
>>
>>106543860
it's supported already? nice.

> it looks exactly like Qwen Edit, wtf
what exactly?
>>
>>106543887
they already did, they don't know how to make good LLMs anyways so it's not a big loss lol
>>
>>106543740
They have their own closed source one that's just called "Krea 1", Flux Krea is the open source version of it.
>>
File: ComfyUI_00489_.png (1.32 MB, 1024x1024)
>>106543901
>what exactly?
it looks like the images comfy generated with qwen
>>
>>106543879
Thats right, I remember and found it.
https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/
I think they could have been the first for video+sound, months ahead. Now google does it.
I think thats part of why those google vids are popular on twitter.

But they aren't ashamed to have step-mom and granny "companions" on facebook.
No idea what they are thinking.

I doubt wang will fix anything, he's just Altman's former roommate from college.
>>
File: 1753242055972492.jpg (1.9 MB, 4096x2375)
>>106543901
>what exactly?
the style
>>
>>106543917
We've been saying the past few threads nigga.
>>
>>106543860
all chinese local models look the same. if it was good it would be api only. now add seedream api nodes
>>
>>106543922 >>106543917
one of them definitely has fluffier ears
>>
File: deathpenalty.png (2.5 MB, 1696x1296)
>>
File: file.jpg (1.37 MB, 3840x2560)
>>106543927
>if it was good it would be api only.
it's true, Seedream 4.0 is a good example of that. We're just getting those companies' failures: instead of putting them in the trash, they throw them to the local ecosystem so they can score some good boi points
https://xcancel.com/FAL/status/1965334053927752164#m
>>
>>106543922
Qwen Image looks better though, the one on the left is too bright, it'll be a pain to run (with the refiner shit), and the license is not Apache 2.0 (unlike Qwen Image). I don't see any reason to switch to HunyuanImage lol
>>
File: 1739700014591903.png (1.66 MB, 3288x1466)
https://github.com/nunchaku-tech/ComfyUI-nunchaku/blob/main/example_workflows/nunchaku-qwen-image-edit.json
https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit
Nunchaku Qwen image edit is out
>>
Should I even bother with image generation on my 12GB 4070? I've tried text generation and it works okay, but I'm not sure about images.
>>
>>106543945
at least qwen is apache 2.0. while hidream hunyuan and qwen might blend together, anything that shits on BFL is a win to me
>>
>>106543922
lol hunyuan is melting, look at the eyes. released later and they lost to qwen anyway
>>
>>106543961
Oh finally
ty 4 info
>>
Worth noting that the refiner model of Hunyuan isn't working yet. The reference implementation is broken and Comfy hasn't tried to fix it yet either. It's possible that the output quality could still improve. Unlikely to be any less sloppy, though.
>>
File: ComfyUI_I2V_00142.mp4 (3.54 MB, 1000x1000)
>>106543668
why not just 2x (32 fps)?
>>
>>106543980
damn catbox?
>>
>>106543971
This was foretold: >>104602822
Hunyuan cannot catch a break, everything they release is either bad or surpassed within a week.
>>
>>106543963
sure, it works, won't even be very much trouble with SDXL / Illustrious / Noob or Chroma for example
>>
is 16gb vram enough bros? i'm currently at 8, just want to make acceptable videos
>>
>>106543992
>it's the fate of hunyuan to be mogged
kek, true, Tencent is Alibaba's bitch, we'll laugh about it again when they release their giant 30b edit model that will be worse than Qwen Edit lmao
>>
>>106543997
wouldn't be a great investment. I'm at 10 and I'm holding off on making videos until I can get a 48GB+ card for a decent price
>>
File: wednesday.jpg (40 KB, 600x600)
>>106543975
>inb4 it's 50 steps on both models
>>
>>106543975
this shit is 22 times bigger than SD 1.5 (Hunyuan Image -> 17b, SD 1.5 -> 0.75b) and it has the same quality, lmao those fucking chinks
>>
>>106543997
keep saving until you can get a card with way more vram
>>
>>106543997
If the 5070TiS is really gonna have 24GB then i'd take that as a minimum upgrade path
>>
>>106543997
blvckpill trvthnvke but not even 24gb is enough
>>
>>106543997
yes and no. it probably means more offloading to system RAM via comfyui-multigpu, or a smaller gguf quant, OR fewer frames*resolution (not all at the same time if you don't want it, tho)

and of course the 5090 and 4090 crunch the tensors faster

but it can do video
>>
>>106543997
24gb is the absolute minimum for acceptable videos, wait until you have enough money so that you can buy a 3090
>>
>>106543765
Imagine being this actress and hired just for this
>>
>>106544022
>buy a 3090
lmao it's way too slow
>>
File: ComfyUI_17145.png (3.18 MB, 1200x1600)
>>106543455
I use Gemma 3 27b Q4, mainly because it has vision support.

>>106543997
24GB is like the bare minimum now. Start saving for that 6000 Pro!
>>
>>106542790
Tried slapping the node on high and low Wan 2.2 but shit ain't working
>>
>>106544009
a model is only as good as the training data. hmmmm i reallly wonder what was in the training data if it looks like sd1.5 slop. one can only guess…
>>
Pretty sad the only viable AI enthusiast card is $10K
>>
>>106544022
since comfyui-multigpu we've had a rather good way to offload a bunch to system RAM for a ~20% slowdown (or whatever it was, I guess it depends on settings)

yes, it's even slower and of course your compute isn't like a 5090's but it will make the videos anyhow and you could reduce resolution somewhat
>>
>>106544026
to be fair, this fatfuck should be happy she got a role at all lol
>>
>>106543993
Thanks!
>>
File: ComfyUI_00700_.png (1.09 MB, 784x1232)
>>106543961
oh i was just trying that out. not getting good results though but maybe that's an issue with qwen edit itself? i'm not sure
>>
>>106544062
>that's an issue with qwen edit itself?
it is, it zooms in randomly and there's no solution for that (even the multiply_of 112 doesn't really work), Alibaba said that they'll make a better version of it, so I'm waiting
>>
>>106543963
yeah, I was genning on a 8gb 2080
>>
So what is qwen image useful for? It seems for anything realistic it's BTFO by wan
>>
>>106544074
I never had it zoom in once in all the time I've been using it. No idea what people are doing that it apparently does that.
>>
File: BITCH.png (306 KB, 640x358)
>>106544097
>I never had it zoom in once in all the time I've been using it.
lucky you
>>
rtx5090 prices are insane in my country
>>
>>106544109
do a conversion into usd and share with the class
>>
>>106543765
Why is all the information on this model seemingly coming from some guy called blue dragon saint?
>>
VRAM is overrated.
We need faster nunchaku quants and nunchaku Lora nodes
>>
>>106544097
I don't believe you. I think you're just too blind to notice.
>>
>>106544112
Here are some converted to USD

Asus ROG-ASTRAL-LC-RTX5090-O32G - $4,324.54
Asus TUF-RTX5090-32G - $3,806.37

Gigabyte GV N5090AORUSX W-32GD - $3,467.61
Gigabyte GV N5090WF3OC-32GD - $3,116.17

ZOTAC RTX 5090 SOLID OC 32GB(W-3782) - $3,194.08
Zotac RTX5090 Arctic Storm AIO 32GB - $3,116.16
>>
>>106544147
I use an image compare node, would be hard not to notice.
>>
>>106544027
you can wait for the 5080 super I guess. the 5070 ti is about the same speed not counting fp4 optimizations
>>
>>106544131
that's a good question, that man knows a lot
>>
File: 1752638855308849.png (3.09 MB, 1263x1535)
https://xcancel.com/bdsqlsz/status/1965597326103421232#m
babe wake up, a new mid model got released
>>
>>106544196
>BAGAL
isn't that "BAGEL" instead?
>>
>>106544196
>Good
>Good
>Bad
GPT-4o is clearly better than both of those other models
>>
>>106544207
That's for resolution preservation
>>
File: sailors.png (293 KB, 716x617)
293 KB
293 KB PNG
>>106542796
Kek
>>
>>106544207
>resolution preservation
>>
>>106544207
Counterpoint! Then why did the chart label them as good? Are you retarded?
>>
>>106544207
nah, gpt4o is terrible, it changes the image and adds the yellow piss filter
>>
File: 1745638981677569.mp4 (819 KB, 512x512)
>>
File: 1755034182885160.png (177 KB, 1102x1230)
>>106544213
>>106544217
>>106544219
Holy shills
>>
>>106544215
Was the cat from Sailor Moon actually a person, or am I confusing that with Sabrina the Teenage Witch?
>>
>>106544227
>has to use google to understand what "resolution preservation" means
ngmi
>>
File: 1741252660926242.png (20 KB, 639x251)
>>106544220
Only for the last two. It clearly replaces the butterfly and the girl the best, and has the best looking truck. Aesthetically I'd say BAGAL wins the brick wall, and the cottage ones all look roughly similar from the collage.
>>106544234
It's not a real term, shill.
>>
File: 1511667108879.png (298 KB, 512x512)
So that Tensor RT only works if you "quant" the entire model like nunchaku?
>>
>>106544244
>It's not a real term, shill.
Are you fucking retarded? It's on the fucking image, it's a quote.
>>
File: 1738295931033426.png (1.36 MB, 2000x1600)
>>106544244
at the end of the day, we're just arguing about which mid model is the best, nano banana is destroying everything at the end of the day
>>
>>106544256
It's a term not defined in the image, nor anywhere else. It's not a real thing.
>>
>>106544196
>NOOOO YOU CANT ALTER THE RESOLUTION WE WIN WE WIN
fucking lol "papers"
>>
>>106544267
wow you are insanely retarded
>>
>>106544271
of course an edit model isn't supposed to alter the resolution, if you never asked for a zoom in, it shouldn't zoom in, period
>>
>>106544267
The model looks like absolute shit, no wonder they don't even compare to Qwen, but even more shocking is you not knowing how language works.
>>
>>106544282
>increases your resolution a little bit
>Yeah this model is dogshit
>>
>>106544289
yep, that's right, don't settle for mediocrity anon
>>
>>106544279
If you use a 1024x1024 image in a GPT, you will get a 1024x1024 image out. I'd call that perfect "preservation" if that's what you're complaining about.
>>106544286
>bro you're supposed to just throw out jargon and people are supposed to assume the literal meaning of the words put together, that's how language works
>t. has never worked in the sciences
>>
>>106544227
>resolution preservation
>>
>>106544267
>>106544299
look at the border of the image, gpt4o's image is a different size from the original
>>
>>106544299
>shitty synthetically trained chink model that is forgotten before anyone ever loaded it into a workflow
>muh sciences!
Shaking hands typed this post
>>
>>106544321
Why are your hands shaking?
>>
>>106543065
>>106543204
>Name a bigger scam than fp16 fast
Should I not be genning wan videos with --fast? I haven't seen any "quality butchering" or issues with output quality that would have made me think it was --fast rather than a bad prompt/seed/model limitation
>>
>>106544337
Take an image you genned with fp16 fast, re-gen without it, and have your mind blown.
>>
>>106544337
Oh it's bad bro. Look at their eyes and fine details between movement on fast vs normal fp16.
>>
>>106543533
>there must be more to life than this
There will be higher highs, but if you have figured out a fundamental truth about yourself and your values and priorities with AI you probably shouldn't ignore that
>>
If I use Distorch in comfyUI for Wan, how much ram am I supposed to offload.
With 16gb vram, and gguf 10gb models.
It's a high and a low 10gb model right, so it's 20gb, so you'll need to move at least 4gbs off?
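Back-of-envelope version of that arithmetic (the 4 GB activation/VAE overhead is a guess on my part, not measured, and the offloader only keeps one of the two models resident at a time):

```python
# Rough DisTorch/MultiGPU offload estimate. All numbers are assumptions:
vram_gb = 16
model_gb = 10      # each Wan 2.2 gguf (high noise / low noise)
overhead_gb = 4    # guessed budget for latents, VAE, text encoder, etc.

# If both models had to sit in VRAM at once (the worst-case assumption):
both_resident = max(0, 2 * model_gb + overhead_gb - vram_gb)
# In practice the idle model gets swapped to system RAM, so only one counts:
one_resident = max(0, model_gb + overhead_gb - vram_gb)

print(both_resident, one_resident)  # 8 0
```

so under those guessed numbers you'd only "need" the offload if both models were pinned at once.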
>>
using kobold and sillytavern and GLM-4.5-Air-UD-IQ2_XXS.

I want the AI to edit my code and improve the SEO of a text I sent it.
It seems the AI is censored :(

Also, copy-pasting the text and code gets cut off, do I increase the tokens??
>>
>>106544402
you only need to use one of the 10gb models at a time, it'll offload to ram when not in use
>>
>>106544409
you misfired into the wrong general broski
>>
>>106544402
just play with it until you hit 15gb used
>>
>>106544421
where post then?
>>
>>106544454
>>>/g/lmg
>>
File: hyimage.jpg (519 KB, 2048x2048)
>>106544454
the text model thread, lmg
>>
>>106544348
nta but i just tested this out after having --fast enabled for over a year on comfy and while subtle, there's definitely an improvement on images.
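If anyone wonders why it's subtle but real: --fast trades accumulation precision for speed. A toy sketch (plain numpy, nothing to do with Comfy's actual kernels, just illustrating the failure mode of low-precision accumulation):

```python
import numpy as np

# Summing many small terms: true sum is 1.0. In fp16, once the running total
# is large enough, each new term is smaller than half an fp16 ulp and gets
# rounded away entirely, so the accumulator stalls.
terms = [0.0001] * 10_000

acc16 = np.float16(0.0)
for t in terms:
    acc16 = np.float16(acc16 + np.float16(t))  # round to fp16 every step

acc32 = np.float32(0.0)
for t in terms:
    acc32 = np.float32(acc32 + np.float32(t))  # fp32 keeps the small terms

print(float(acc16), float(acc32))  # fp16 stalls far below 1.0, fp32 is ~1.0
```

real kernels accumulate dot products rather than a flat sum, but the mechanism for the lost fine detail is the same.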
>>
Is there a reason why Wan I2V needs a shift of 8 yet T2V works best with a shift of 1?
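For context, shift just remaps the sigma schedule toward the high-noise end. The function below is the SD3-style time shift as commonly implemented by ComfyUI's ModelSamplingSD3-type nodes (worth double-checking against your Comfy version):

```python
def time_shift(sigma: float, shift: float) -> float:
    # SD3-style discrete flow shift: shift=1 is the identity; larger shift
    # pushes sigmas upward, so more of the step budget is spent on coarse
    # structure. Why I2V prefers that is a guess, not an official answer:
    # conditioning on a real frame may leave less fine detail to denoise.
    return shift * sigma / (1 + (shift - 1) * sigma)

print(time_shift(0.5, 1.0), time_shift(0.5, 8.0))  # 0.5 vs ~0.89
```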
>>
>He leaves --fast off
Turn that shit on. Make the gen fast, dude. 4 seconds less on a 5 minute wait bro.
>>
>>106543997
>is 16gb vram enough bros? i'm currently at 8, just want to make acceptable videos
None of the anons who replied to you actually have a 16GB card, so here's a reply from someone who actually BOUGHT a 16GB card for (video) AI
>>
>>106544460
Why do you hate hlky and Julien?
>>
>>106544525
A 16gb Blackwell is enough to cope and generate videos for now. I wouldn't buy a 4080 super. I wouldn't buy anything above MSRP. I wouldn't buy a 5080 for anything ever. I would 200% buy a 5090 at MSRP if I could do that instead of buying a 16gb card (4 months ago when I bought it the 5090 was still basically not real for less than 1000 over MSRP)


My biggest regret with going with a 16gb card: I am locked out of WAN Lora training. I consider this bittersweet because the loras I'd be interested in training would be very illegal

My biggest happiness going with a 16gb card: I actually "knew myself" and recognized that I would go through a honeymoon phase of genning videos and then get mostly burned out and bored after a couple of months, and I mostly have, and I have saved like $1500 as a result of that. And of course during that time I was able to generate content that doesn't exist anywhere else and was tailored specifically to my tastes and has gotten me over 100 great cooms
>>
>>106544547
I would prefer if you'd off yourself, pedo tranny.
>>
>>106544559
Once we have two minute long videos with audio I won't have a need to go outside and rape children at the park anymore don't worry
>>
>>106544402
Set it to 0 and it will take what it needs.
>>
>>106544490
I think fast is one of the more destructive speedups. More so than torch compile.
Sage attention being the best
>>
>>106544597
Sage attention is magic but torch compile is basically no quality loss. Any time someone posts a video comparison between compile and no compile it has the opposite effect where I am incredibly impressed with the free 30% speedup. Might be different for imagegen though
>>
>>106544654
This is gonna sound dumb, but on a 3090 using ggufs I notice like no speed up at all. is it even doing anything?
>>
>>106544022
Quants + 480p + ram offloading will let you output video with very little.
>>
>>106544725
Those concessions add up very quickly.
>>
>>106544694
Oh yeah it might not do anything for GGUFs I don't remember. But I'm pretty sure back in the wan 2.1 days with the op wan re ntry workflow everyone was using GGUFs with compile anyways
>>
>re entry is spam filtered
Sugma balls mods

>>106544725
I don't consider 480 "acceptable video". At that resolution, physics/reality takes a hit especially when you try to do stuff beyond a static shot like zooming in/out

Same issue with quants below q6. And if he's discussing a new GPU I bet he's on 16gb of ram and will have to upgrade his ram anyways.

I actually think that's the big thing stopping little zoomies still on their 2018 DDR3 rigs from participating. Even if they can convince their parents to get them a GPU, they'll probably need to upgrade literally everything else maybe even PSU all because they have only 16gb of ram if even that
>>
>>106544725
>480p
lol, lmao even, 720p is the bare minimum to get a decent video
>>
Do I need to reinstall all the cuda stuff if I swap to a new gpu?
>>
>>106544762
Obviously 720p is ideal, but the model supports 480p natively. If upgrading is out of their means for the time being, they can make it work with compromises.
>>
>>106544725
480p is cope. you arent a REAL video genner if you cant output crisp 720p with BARE MINIMUM Q8
>>
Attempted the fabled 3 sampler config with T2V instead of I2V and the output is unfortunately pretty shit. Sad.
>>
>>106544762
>MOOOM I NEED 2000 DOLLARS SO I CAN GENERATE PORN. IT'S REALLY BAD JANKY AI PORN THAT NO ONE FAPS TO CAPPED AT 8 SECONDS

Who gives a fuck. I have a 5090 because this is a fun hobby that sucks up time and constantly gives me shit to tinker with. But no one needs this shit in their life. It won't make most people happy. Spending 2k just to make porn is insane. Spending 2k is like nothing if this is all you've been doing for the 2-3 years since AI came out.
>>
>>106544821
>the model supports 480p natively
The model supports all resolutions very well, to the point where I think aspect ratio is more important to think about with regards to what you're trying to generate than resolution. Given the fact that official resources like the prompt guide have typos etc it's probably best to not worry about "native/official support" for anything regarding Chinese AI models

Only 12GB cards like the 3060 or base 5070 would be considered "480p only" cards and none are worth getting for AI video.

>>106544796
>>106544836
>720p is the bare minimum to get a decent video
At least one of you is baiting but I would seriously say the ABSOLUTE BARE MINIMUM is 540p to minimize issues like fingers/toes or background elements popping in/out when you zoom in or out.

>>106544884
>IT'S REALLY BAD JANKY AI PORN THAT NO ONE FAPS TO
Skill issue (have better fetishes)
Spending 2k on porn is not insane when it's porn that cannot exist in any other form (no one is gonna put a dogs penis in your highschool crushes mouth but you)
>>
>>106544873
3 samplers is cope, at least for t2v. 2 samplers with a balance of 2.2 and 2.1 lightning loras is the best way to have a good color balance while avoiding slow motion hell
>>
>>106544940
>Spending 2k on porn is not insane when it's porn that cannot exist in any other form
Normal people don't spend 2k on insane niche porn.
>>
>>106544956
They absolutely do all the time lol. You think non-normalfags pay for porn more than normies do?
>>
>>106544950
>2 samplers with a balance of 2.2 and 2.1 lightning loras
Any recommended weights for each?
>>
>>106544977
Yes. That is exactly what I think.
>>
>>106544992
0 on high, 4 steps. 1 on low.
>>
Wait what. Forge doesn't work with an rtx 5090??
>>
>>106545002
heh, get FUCKED. had the same thing happen to me but i figured it out. just download neoforge and save yourself the trouble.
>>
>>106544993
Ah ok, I bet you're that same anon who thinks every major city isn't falling apart due to non white animals right now so we may just live in different realities
>>
>>106545033
>every major city isn't falling apart due to non white animals
those hands were SHAKING typing this
>>
>>106545012
God damn it.

Well I'm installing it now. It's fun to use chatgpt to help me.
>>
File: 1736777642981642.png (2.8 MB, 2345x1314)
https://github.com/ModelTC/Qwen-Image-Lightning?tab=readme-ov-file#-comparison-between-v1x-and-v2x
https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main
>Compared to V1.0, V2.0 produces images with reduced over-saturation, resulting in improved skin texture and more natural-looking visuals.
ok why not, but I'd like them to do the same thing for the 8 steps one too
>>
>>106542328
Lol
>>
>>106545080
yeah, i was pretty upset too when it happened. i just switched to comfy. im just recommending neo to save you the trouble, especially if you're using chatgpt to set things up; no offense meant, there's nothing wrong with learning however you can.
>>
>>106545012
shaking
>>
>>106545047
If the hands of the monkey were shaking the Ukrainian refugee would still be alive but alas
>>
>>106545104
>one isolated incident reflects the motives of the whole
broski has no object permanence
>>
>stuck in the Wan 2.1 mines because the only lora i use hasn't been ported to 2.2
FUCK
>>
>>106542709
There are quite a few anons here that make $10k+/mo. Not me, but they are here
>>
>>106545108
>broski
I accept your concession
>>
>>106545113
You can use 2.1 loras on wan 2.2 mostly fine, just use it either only on the high noise or on both samplers
>>
>>106545154
>no rebuttal
LOL
>>
>LOL
go back tourist
>>
File: NeoForgeGODS.png (11 KB, 1403x219)
THIS IS NOT A DRILL!!
NeoForge now supports Qwen!
https://github.com/Haoming02/sd-webui-forge-classic/tree/qwen
NeoForge compatible models: SDXL, Flux, Chroma, Qwen, WAN

>>
>>106543533
https://youtube.com/playlist?list=PLFV_pTSMduaURY4rVeOeWDCIWcgdAW7PZ&si=L0PWAu3uc-xkho0i
The spirit of man
>>
>>106545159
well, it works but the penis comes out like a mangled mass of flesh so it's kind of unusable in 2.2
>>
>>106545172
at least they're not 8 months late (Flux) compared to ComfyUi this time kek
>>
>>106544956
Tell that to every artist commissioner
>>
>>106545172
>no API support
it's over
>>
>>106544956
the more niche, the higher the price. why do you think artists who draw furries net insane profits? though also to be fair i feel like the more niche the less likely said commissioner is "normal"
>>
>>106545199
>>106545199
>>106545199
>>106545199
>>106545199
>>
>>106544229
I honestly have no idea. I never watched the show. I've just seen enough memes over the years to form a vague idea.
>>
>>106544993
normies spend more on porn than anyone. The majority of money made in smut (like 90% of it) is camsites, and there's this whale culture where men will spend tons of money to 'own the room', go private, etc. to have the girl to themselves. Hunter Biden shit, ya know? A lot of them use girls as therapists too. Spending money on porn is rare, but hiring women directly isn't.

>>106544940
Don't worry, AI is great for femdom and more psychological porn. I'm enjoying the shit out of it. Once Wan 2.2 works with something like infinitetalker my electricity bill is gonna go through the roof with that, VibeVoice, and GLM Air. I don't think we're going to see believable local AI girlfriends for a while, but you can definitely have a penpal of sorts.
>>
>>106545306
>normies spend more on porn than anyone
how is it possible to be this detached from reality? your computer screen isnt the real world
>>
>>106545325
go ask ai how most money is spent instead of arguing with me. Also, I tried going outside just now and asking my neighbor how much he has spent on porn. he looked at me strangely and shut the door. Real world did not deliver.
>>
>>106544545
i don't?


