/g/ - Technology

Never Converge Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106540158

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://rentry.org/wan22ldgguide
https://github.com/Wan-Video
https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: Yay.gif (496 KB, 640x640)
Made a gif of this >>106540916 for anyone that is interested. Just looping in celebration.
>>
Cursed thread of foeboat
>>
File: 1_00097_.mp4 (453 KB, 640x640)
>>106542161
Thanks for making that, the cup part was silly.
>>
File: hy_generation_test_0.jpg (329 KB, 2048x2048)
I did a few more tests with hunyuan image. Each one with the "reprompt" enabled and then disabled. The initial prompts I used are captions generated from the original illustrations by Qwen 7B VL.

spoiler: sloppy slop but maybe the higher resolution and vae will make it good for upscaling output from other models.
>>
>>106542182
That's the reason I edited it. The best bit was at the start.
>>
>>106542161
I'll always celebrate Futaba and 4chan.
>>
blessed thread of anime
>>
>>106542163
Ahaha based ty anon.
>>
>>106542184
>3dpd background with anime face
This model is so over.
>>
The reprompt didn't change much here
>>
>>106542182
catbox?
>>
>>106542184
>real ass shoes
>>
File: 00000_721664149.png (2.53 MB, 1152x2016)
>>106542182
You're welcome. I downloaded it from Civit, I have no idea who she is, I don't like anime, kek.

Remember to post anime here! It's important for /ldg/ unity!
>>
>>106542192
i just love me pixar girls so much. i know everyone got it out of their system back with sd1.4, i'm a late bloomer.
>>
File: hy_generation_test_1.jpg (274 KB, 2048x2048)
The reference for this prompt was a knights of sidonia manga cover
>>
>>106542206
y-you don't know yotsuba, the mascot of 4chan?
>>
>>106542196
>>106542087
"The girl wiggles her breasts before she smiles and blinks at the camera."
>>
No reprompt. The reprompter deleted information about the character / franchise name, very helpful.
>>
>>106542097
that works i guess but the 4step loras make it shit the bed
>>
File: hy_generation_test_2.jpg (317 KB, 2048x2048)
Midjourney-flavored slop?
>>
What's the best model for horror?
>>
Why did the quality of this thread suddenly drop so low?
>>
No reprompt, followed the original prompt better.
>>
>>106542184
They're made to look like slop on purpose. Being open source, they intentionally make it look unrealistic
>>
>>106542239
640x640
>>
File: 1_00100_.mp4 (447 KB, 640x800)
>>
>>106542184
Does it know artists?
>>
File: hy_generation_test_3.jpg (445 KB, 2048x2048)
Last one, reference was a technical diagram for a Soviet rocket. Wondering if it can do fine detail lines/text with the high res vae.
>>
>>106542251
I tried a few so far and got nothing. Actually couldn't get it to deviate from its base generic moe slop style at all for anything anime-like.
>>
File: v4.jpg (386 KB, 844x1200)
>>106542223
Which language version is this? Because Vol4 is this.
>>
>chroma-unlocked-v50-annealed
what the fuck is annealed in this context?
>>
>>106542272
some schizo bake experiment. avoid
>>
>>106542272
dont even bother. stick to wan and qwen
>>
File: 1_00102_.mp4 (597 KB, 640x800)
>>106542195
>>
>>106542272
Why are you using v50 at all? Stick to Base or HD.
>>
>>106542217
No, no idea, never watched it.
But that doesn't matter, keep posting it!
>>
>>106542288
>>106542290
uh, i was already downloading it kek. damn, that bad? im gonna try it
>>
>>106542272
Use either Chroma base or Chroma HD. The numbered versions 49, 50 and 50A are abortions. You can also get 48 as a benchmark.
>>
>never watched it
>>
File: 1_00103_.mp4 (692 KB, 640x800)
>>106542241
>>
>>106542293
>>106542299
I like testing models to see the differences on fixed seeds. That's why. I downloaded base and HD too so i'll keep those.

Gonna try Qwen and the others later.
>>
>>106542135
Now that Nvidia will finally be offering 24 GB at a somewhat not completely insane price, what is the state of the current open source video models? Can they finally support start and end frames? Or is it still just prompt based?
>>
>try comfyui native t2v
>blurry and noisy
gonna have to bite the bullet on that wrapper huh
>>
>>106542248
stay mad vramlet
>>
File: 1376452934921.jpg (785 KB, 1134x1600)
>>106542266
>>
>>106542184
>>106542195
Crazy how one is SEXO! SEXO! SEXO! while the other is Ehhh, Yeah. I guess I would.
>>
File: 1_00104_.mp4 (1.75 MB, 640x896)
>>106542294
>>
>>106542314
i'm doing start and end frames right now but it's for looping pron
>>
>>106542314
Why buy a 5080 super when you can just buy a 5090?
>>
>>106542310
I don't blame you since lode never explains literally anything ever. If I remember right, annealed was a merge that one random autist requested on his discord server and has next to no use case.
>>
>>106542314
>not completely insane price
You do realize it's going to be vastly more expensive than the 4090 still. You won't get it for MSRP.
>>
File: 1733668812356.webm (2.43 MB, 848x480)
Videogen peaked with HunVid.
>>
File: 1733674062968.webm (1.64 MB, 960x544)
>>106542347
>>
File: 1733745391308.webm (2.69 MB, 1280x720)
>>106542357
>>
>>106542340
I can't imagine spending tens of thousands and not even documenting anything. Like if he wants this to be a base model to be trained, how about some basic training settings for a full finetune?
>>
Am I tripping or does Qwen not look much different than Noob/Illustrious semi-realistic generations?
>>
>>106542383
qwen looks like coherent sd 1.5
>>
>>106542383
get your eyes checked
>>
File: 1729289276497737.jpg (770 KB, 1416x2120)
>>
>>106542366
prompt for this one?
>>
>>106542383
it was probably trained on slopped aibooru outputs so...
>>
File: 1756135024080396.jpg (872 KB, 1416x2120)
>>
File: 1_00106_.mp4 (957 KB, 640x928)
>>106542192
>>
>>106542368
Lode is a super autist so I don't really think much of him not being straightforward. What does surprise me is how *nobody* in his circle has gone out of their way to help out the community and get some basic guides out for the model.
>>
>>106542334
>>106542345
The 5070 Ti Super is rumored to be sub 800 bucks. Better bang for your buck than a 5090 or 4090.
>>
After training a few loras and using it pretty often... chroma is just too hit or miss for me to want to sit here and gen constantly. The variance of quality when it comes to seed is actually insane. Super frustrating to deal with.
>>
>>106542444
what part of "You won't get it for MSRP" do you not understand
>>
>>106542424
I don't know. It's from anon https://desuarchive.org/g/thread/103457674/#q103458870
>>
>>106542444
>bang for your buck
If you're pinching pennies you're in the wrong hobby
>>
>>106542237
depends on what you mean by horror, but any good model should play nice with "horror \(theme\)"
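The backslashes matter: in A1111/Comfy-style prompt syntax, bare parentheses are attention-weighting, so literal parens in booru tags like that one have to be escaped. A trivial sketch (function name is made up):

```python
def escape_prompt_parens(tag: str) -> str:
    """Escape literal parentheses so the prompt parser doesn't read
    them as attention weights (A1111/Comfy-style syntax)."""
    return tag.replace("(", r"\(").replace(")", r"\)")

print(escape_prompt_parens("horror (theme)"))  # horror \(theme\)
```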
>>
>>106542444
How fucking poor are you that you can't just get a 5090? You could probably save up in a few months if you really want it. I agree with the other guy, wrong hobby to be a fucking miser.
>>
File: 1396157476318.png (705 KB, 3304x3317)
How do I run pseudo hiresfix/upscale/detailer or whatever you wanna call it, on chroma? The external hiresfix script doesn't work on flux architecture, and the flux upscale controlnet doesn't seem to do anything on Chroma. If it can be done with just UltimateSD upscale, then what settings? All I currently get are enhanced image artifacts, not polishing.
>>
File: 1_00109_.mp4 (793 KB, 640x832)
>>106542192
>>106542211
>>
>>106542482
holy shit, I was just about to ask that.
>>
File: 1_00110_.mp4 (455 KB, 640x640)
>>106542482
>>
>>106542444
Not sure why you're getting so much negative attention. It's the best price/performance ratio. The 5000 series have been gathering dust for over a year at this point at MSRP so past the initial rush I imagine the same will be true of these.
>>
>>106542522
see
>>106542449
>>
>>106542445
Chroma really needs a long, detailed prompt to get consistent results with. The less detail you give it, the wackier the output is going to be.
>>
>>106542482
vae decode. upscale image with upscaler node. vae encode, sampler at low denoise
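As a rough mental model of the "low denoise" part (a sketch of the usual img2img convention, not Comfy's exact scheduler code): at denoise d on an N-step schedule, the re-encoded latent is only partially noised and the sampler runs roughly N*d of the steps.

```python
import math

def second_pass_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of sampler steps actually run on a low-denoise
    second pass: the upscaled latent is noised to `denoise` strength,
    then denoised over just the tail of the schedule."""
    return max(1, math.ceil(total_steps * denoise))

print(second_pass_steps(20, 0.35))  # 7 of 20 steps run at 0.35 denoise
```

Which is why very low denoise only polishes and very high denoise repaints the image.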
>>
>>106542528
I already mentioned that in my post. 5000 series hasn't exactly been flying off the shelves and I don't expect the revised models to either considering we're in a recession.
>>
>>106542496
Reminds me of those old esurance commercials. Also
>she won’t stop yapping
Kek, it’s almost charming at this point
>>
>>106542445
it made me go back to slopmixes on sdxl.
the prompt adherence can be superior, but if i get another random art style change even though it has been described in detail in the prompt, i'm just going to delete chroma and shitpost here about it
>>
>>106542535
Not latent upscale?
>>
>>106542347
>Videogen peaked with HunVid.
It didn't, but I am excited to go back to legacy video models with a GPU from 2030 that can make a 5 second video in 15 seconds in fp16 at 720p and see what could have been if only we had the compute at the consumer level

>>106542544
5090 still sells out like hotcakes. I expect the super to as well because it is above 16gb of vram
>>
>>106542469
post your 4x 6000 pro sis, you're not a retarded poorfag, right? lool
>>
>>106542583
I guess that depends on your country. Here it's never gone out of stock, hence my perspective.
>>
>>106542583
>Videogen peaked with HunVid.
NTA, but I think it would be more correct to say that Hunyuan peaked with Hunvid.
>>
>>106542586
There's a big difference between a 5070, a 5090 and a 6000 Pro. But yeah bro have fun genning on SDXL or whatever you will do on that shitty card. But I get it, you're poor, you don't have disposable income, so even $800 is a bad choice.
>>
how the fuck do i load gguf's with kijai's wrapper
>>
>>106542606
>me, NTA, but an sdxl genner on a 5070ti just catching random strays for no reason
Damn anon you don’t have to do us like that
>>
>>106542586
You got told to buy a 5090 and you skip to demanding a x4 6000 Blackwell setup, you're retarded. A 5090 is the best bang for your buck. No need to settle for a 5070 LMAO.
>>
>>106542611
You just load the model in the model loader node.
>>
fucking around with infinite talker and it's cool but still has janky lipsync issues because it's on outdated wan 2.1. Got it to work with a janky 2.2 allinone porn model and it was obviously better with lipsyncing even through all the noise and jank.

I'm assuming all the comfyui nerds are making 2.1 workflows for a reason though. What is it waiting on?
>>
>>106542637
No he wants a card that's already obsolete and can't even run modern models of 2025 let alone models in the next 2 years.
>>
>>106542535
>sampler at low denoise
how low? i keep getting these black scanlines on the upscale.
>>
>>106542637
>>106542606
>poorfag cope
Lol. Imagine not having like 5-7k$ after like a month of work, poorfag children.
>>
>>106542706
broski is larping
>>
>>106542706
I don't know why you seethe like this
>>
File: hy_generation_test2_0.jpg (490 KB, 2048x2048)
More hunyuan slop!
>>
File: hy_generation_test2_1.jpg (384 KB, 2048x2048)
Hunyuan seems to really like this inset photo frame composition. Not sure what triggers it
>>
File: hy_generation_test2_2.jpg (347 KB, 2048x2048)
Was aiming for a flatter shading style here, but at least it went with the limited palette.
>>
>>106542733
I want to know how two seemingly unrelated companies managed to produce the exact same slopped anime style.
>>
File: hy_generation_test2_3.jpg (304 KB, 2048x2048)
>>106542760
They really may be training on each other's outputs + the same prepared datasets. I think that at the least they're doing their deranged "aesthetic tuning" pass with very similar targets.
>>
>>106542760
A general model is going to be reduced to the average of the dataset, ultimately what the consensus of what "anime" means.
>>
>>106542710
>>106542709
It's ok you're a poorfag who larps as having money for spending 2k$, you can just find another hobby lol
>>
>>106542767
Aka synthetic data, well known secret the chinks just use ai to train the ai
>>
File: hy_generation_test2_4.jpg (312 KB, 2048x2048)
This one is interesting because the prompt actually avoided including the name "asuka" or anything that specific, while still trying to describe her.
>>
>>106542774
>no proof
>no terminal of your 4x6000PRO
LOL
>>
>>106542780
He's a tourist.
>>
>>106542774
Do you actually think it's difficult to have more than $10,000 in a bank account? Get a job anon. Even a normal incel can work 40 hours a week and make $1000/week and keep all that money living at home with mommy.
>>
File: HONK HONK.mp4 (2.93 MB, 512x640)
reposting this in case anyone missed:

>Wansisters, we've OFFICIALLY and natively escaped 5 second hell. I got up to 15 secs and can't notice the seam, here's the info:

>How?

- the new Wan Context Windows (manual) node

>Workflow

- https://files.catbox.moe/aw54aq.json

Here's a video example; hopefully some of you anons will get better results (yyyyeeeeess, I know, lightx2v fried the first frame, kek)
>>
File: hy_generation_test2_5.jpg (202 KB, 2048x2048)
Was supposed to be Sailor Mars. No idea how it became... this.
>>
>>106542790
Can you believe it, the problem isn't you *can't* do 15 seconds, the problem is no one does variations of the same movements for 15 seconds. You can tell it's just doing the same prompt on repeat.
>>
File: hy_generation_test2_6.jpg (287 KB, 2048x2048)
Still not much luck deviating from the unistyle
>>
File: hy_generation_test2_7.jpg (390 KB, 2048x2048)
I think that despite the higher resolution vae/output, the actual level of detail is lower than what we already get from the ~1MP models.
>>
>>106542790
Clown girl ai gf when?
>>
>>106542812
That's a general digital illustration, you should browse ArtStation to see what real artists do. Most artists don't do 1000 hour hyper-detail projects.
>>
what sampler/schedulers do you all use for wan
>>
>>106542825
I'm in the media industry and quite familiar. In the case of that test, the goal of the prompt was a high level of detail.
>>
>>106542847
Well "high level of detail" isn't what any model is captioned with, but you would know that as an expert.
>>
>>106542780
>well known secret the chinks just use ai to train the ai
kek, nothing is more slopped than western models, with Flux dev being the most slopped of all due to being entirely trained on synthetic data, as in nothing but output from Flux Pro

Only unslopped base model since SD1.5 is Chroma, even SDXL was doing a lot of synthetic training data
>>
>>106542790
What does this new node even do?
>>
>>106542790
slow motion
>>
>>106542801
True, it's hit or miss, but yes, a lot of the gens do tend to repeat. The plus side is, it's a step in the right direction away from the hacky workflows. Plus vramlets like me can load 240+ frames without OOMing. Reminds me of animatediff days.

>>106542856

>These nodes let you sample in sliding context windows instead of all at once, opening up new workflows for long sequences. Currently, only manual control is supported, and some WAN models still need tuning, but this lays the groundwork for more advanced scheduling and custom nodes.

Source: https://blog.comfy.org/p/comfyui-now-supports-qwen-image-controlnet?open=false#%C2%A7context-window-support
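Conceptually the node slices the frame range into overlapping windows and samples each in turn, blending in the overlap. A toy sketch of just the windowing (function and parameter names made up; the real node also handles latent blending in the overlap region):

```python
def context_windows(num_frames: int, window: int, overlap: int) -> list[range]:
    """Overlapping sampling windows over a frame sequence: each window
    shares `overlap` frames with the previous one so motion stays coherent
    across the seams."""
    step = window - overlap
    starts = range(0, max(num_frames - overlap, 1), step)
    return [range(s, min(s + window, num_frames)) for s in starts]

# 241 frames, 81-frame windows, 16-frame overlap
for w in context_windows(241, 81, 16):
    print(w.start, w.stop)  # -> 0 81 / 65 146 / 130 211 / 195 241
```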
>>
>>106542790
Okay but, can you make her do a long sequence of things within that timeframe or is it going to loop like that? It's also in slow motion.
>>
File: 1751759780757968.mp4 (257 KB, 512x512)
>>
>>106542895
The right direction would be allowing first frame + X frames, where those X frames can be the last frames of another video, with a new prompt. The problem was never "the videos aren't long enough", or rather that's a naive reading of it; the real problem is controlling the video over long sequences. A single prompt doesn't cut it, so you need a sliding window and a way to interpolate the prompt across the sliding window.
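The "interpolate the prompt across the sliding window" part could be as simple as a per-frame crossfade between two conditionings. A timing-only sketch (names made up; wiring the weights into actual conditioning nodes is left out):

```python
def prompt_blend_weights(num_frames: int, switch_frame: int, fade: int) -> list[float]:
    """Per-frame weight for prompt B (1 - w goes to prompt A):
    hold A, linearly crossfade over `fade` frames centered on
    switch_frame, then hold B."""
    weights = []
    for f in range(num_frames):
        t = (f - (switch_frame - fade // 2)) / fade
        weights.append(min(1.0, max(0.0, t)))
    return weights

w = prompt_blend_weights(160, 80, 20)
print(w[0], w[80], w[159])  # 0.0 0.5 1.0
```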
>>
File: what do you WANt.jpg (24 KB, 474x365)
>>106542897
Fuck knows. Try it out yourself; the workflow's there, or load the new context nodes after updating comfy if you haven't. I only mentioned it because I didn't see anyone talking about it.
>>
>>106542935
Message the devs, brother.
>>
>>106542095
>>106542163
>>106542434
>>106542496
Stop touching the mix with those latex gloves, you're ruining it dumb bitch
(Also still fucking around with VibeVoice)
https://vocaroo.com/1cbgLlwQ15FK
>>
>>106542934
>256 color VGA
The good old days
>>
>>106542948
KEK
>>
File: file.png (31 KB, 933x260)
holy shit ani is actually onto something with this sdcpp stuff
>>
File: 1746481131113230.jpg (941 KB, 1416x2120)
>>
Hunyuan did this without even asking
>>
>>106542999
(the nip, not the censoring)
>>
whats the verdict on the new chink model? total slop? better or worse than qwen?
>>
File: 1731168167664444.jpg (934 KB, 1416x2120)
>>
>>106542999
this the image model? wonder if it's fully uncensored like their video model
>>
Name a bigger scam than fp16 fast. You can't.
>>
File: hy_generation_test3_8.jpg (368 KB, 2560x1600)
>>106543022
Yes, the new Hunyuan Image.

>>106543010
Many examples in this thread. Slop level is similar to Qwen.
>>
>>106542999
looks like a gay man with long hair
>>
>>106542790
i bet if you sped it up so it's no longer slow motion it would be 5 seconds
>>
>>106542806
This image belongs in the Anime Diffusion Thread.
>>
The stupid post above me is going to get like three (You)s
>>
>>106541117
show some respect to your king...

I think there are like 4 people here with 6000s. I'm working on a C210 build with 4 in it :3
>>
File: 1754305612483354.mp4 (329 KB, 512x512)
>>
>one more seed
>i know the next seed will be better
>>
anyone have that one 3 sampler workflow for wan?
>>
>>106543065
CFG skimmers
>>
>>106543170
I still think fp16 fast is worse. It absolutely butchers quality.
>>
File: 1744604216990248.mp4 (499 KB, 512x512)
>>
File: file.png (10 KB, 1410x103)
>update ComfyUI
>suddenly basic image generation takes up 100% VRAM
can anybody think of a fix or do I need to reinstall from scratch?
>>
File: 1743755857442206.mp4 (567 KB, 512x512)
>>106543317
>>
File: 1.png (33 KB, 231x98)
don't prompt large eyes on noob
>>
>>106543334
download more ram
>>
>>106543334
Just change the git head to the last commit that didn't do that?
>>
>>106543334
do a basic, separate clean install and test it. shouldn't take that long.

or use a backup of your comfy from when it worked. you DO keep a backup, right?
>>
File: ComfyUI_00264_.png (1.25 MB, 1280x800)
>>106543340
I used to love playing Beach Babe Killer when I was a kid
>>
File: wan22_light21_00661.mp4 (301 KB, 464x464)
>>106542434
>>
>>106543367
>you DO keep a backup, right
>what is git reflog
>>
>>106543400
i know what it is but i just like to start shit while i wait for my prompt to finish
>>
>>106543414
I also like to start shit in this thread while my gens are genning.
>>
Are there any small local LLMs for prompt generation? Either booru or boomer prompts
>>
Maybe i'm fucking retarded, but what words do I have to give Chroma for it to make a room look like this? I tried

>a dark room with barely any light
>dimly lit room
>dark ambience
>horror theme
>television is the only source of light in the room

And they're still bright or fully lit.
>>
File: Screenshot_215.png (28 KB, 308x165)
>just use the 2.1 4step lora on 2.2 bro!
>get this
>>
>>106543455
yes, multiple. one is even called promptgen IIRC, then dantaggen/tipo and so on

i am not very convinced of their usefulness, as most of them don't have a particularly good grasp on what tags the models know, and the creativity isn't necessarily very good

wildcard lists seem to perform better to me
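For anyone who hasn't tried them, a wildcard list is just a named pool of strings spliced into the prompt at gen time. A minimal version (the __name__ token syntax varies per extension; this one is just illustrative):

```python
import random
import re

def expand_wildcards(prompt: str, lists: dict[str, list[str]], seed: int = 0) -> str:
    """Replace each __name__ token with a random entry from the
    matching wildcard list, the way wildcard extensions do."""
    rng = random.Random(seed)
    return re.sub(r"__(\w+)__", lambda m: rng.choice(lists[m.group(1)]), prompt)

lists = {"style": ["watercolor", "oil painting"], "time": ["dawn", "dusk"]}
print(expand_wildcards("1girl, __style__, __time__", lists, seed=1))
```

Fixing the seed makes a batch reproducible; varying it gives the per-gen variety the lists are for.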
>>
>>106543490
Looks like you're using the wrong vae to me.
>>
>>106543501
Makes sense. Maybe editing models are more suitable for my purposes actually.
>>
>>106543505
this is a screenshot of the sampler preview
>>
File: hy_generation_test3_4.jpg (233 KB, 2560x1600)
>>106543455
Qwen and probably Hunyuan use Qwen 7B VL, so it would probably be the best. Hunyuan actually rewrites your prompt by default using it, but it enslopifies it.
>>
File: wan22_light21_00662.mp4 (283 KB, 464x640)
>>106543490
works on my machine
>>
there must be more to life than this
>>
>>106543533
you're here forever, chud
>>
>>106543512
Looks like you're using the wrong vae to me.
>>
>>106543533
>than this
than what?
>>
File: AnimateDiff_00315.mp4 (2.32 MB, 480x720)
>>106542314
>>
>>106542314
I don't think I could convince myself to buy another 24gb card knowing a 32gb card exists. Like my brain just won't let me.
>>
File: Rachel Nichols.jpg (533 KB, 1999x3000)
What do u guys think? Is the slight loss of detail worth the smoother fps?
Default 16 fps: https://files.catbox.moe/6dwlns.mp4
Interpolate 4x (64fps) https://files.catbox.moe/gsca58.mp4
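For reference, 4x interpolation synthesizes three in-between frames per original pair, so an n-frame 16fps clip becomes 4(n-1)+1 frames at 64fps. A toy sketch of the timing using linear blending (real interpolators like RIFE predict motion instead of blending; values here stand in for frames):

```python
def interpolate_frames(frames: list[float], factor: int) -> list[float]:
    """Insert factor-1 linearly blended frames between each original
    pair, multiplying the effective frame rate by `factor`."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(frames[-1])
    return out

clip = [0.0, 1.0, 2.0]                   # stand-in for 3 frames
print(len(interpolate_frames(clip, 4)))  # 9
```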
>>
Found something interesting about Hunyuan and artist names.

If it doesn't recognize the name at all:
It generates a character with the assumed ethnicity of the name. "masamune shirow" or "yoji shinkawa" produce asians, while "greg rutkowski" and "ilya kuvshinov" produce euros (sameface within each group).

If it recognizes the name, it applies the generic style associated with it. For example, "yoshitaka amano" produces the generic moe anime slop style shown previously. Pic related.

Classical painters like Michelangelo and Caravaggio produce the same generic renaissance-like style.

Make of this what you will.
>>
>>106543566
proooooooompting
>>
>>106543675
I make nothing of it because as far as I can tell you are the one human being on earth using this model.
>>
>>106543668
Definitely prefer default, looks much more real
>>
>>106543683
>is there le MORE to this in life
you could say that about any hobby or anything really. if you're depressed, that's your problem but i quite like prooooooooompting and hearing about the advancements in the tech
>>
File: ComfyUI_01569_.png (1.42 MB, 1024x1024)
>>
>>106543685
I think Comfy just added support, so most were probably waiting for that.
>>
File: FluxKrea_Output_253622.jpg (2.68 MB, 1792x2304)
>>
So I've come into the need to use *gags a little* SDXL for some img2img stuff. Is there anything on comfy UI that would let me inpaint faces at a higher resolution?
>>
>>106543724
For you? No.
>>
>>106543722
i dont know why but for a while i thought krea was a closed source model
>>
>>106543728
Fine I'll just use the crop and stitch node.
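For what it's worth, the crop half of crop-and-stitch is just box math: pad the detected face box, clamp to the image, inpaint that crop at higher resolution, then paste it back. A sketch of the box step (function name and numbers made up):

```python
def padded_crop_box(face, pad, img_w, img_h):
    """Expand a face box (x0, y0, x1, y1) by `pad` pixels and clamp it to
    the image, the way crop-and-stitch inpainting picks its working region
    before upscaling, inpainting, and stitching back."""
    x0, y0, x1, y1 = face
    return (max(0, x0 - pad), max(0, y0 - pad),
            min(img_w, x1 + pad), min(img_h, y1 + pad))

print(padded_crop_box((100, 50, 200, 150), 32, 1024, 1024))  # (68, 18, 232, 182)
```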


