/g/ - Technology


80b Parameters Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106711909

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2203741
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
>nigbo
>>
>>106715444
https://files.catbox.moe/lx30q9.mp4
>>
>>106715680
>censoring boobs on a catbox
are you retarded?
>>
>>106715687
I was given a forced vacay last time. tell that to the jannies
>>
File: WAN2.2_00118.mp4 (3.8 MB, 960x544)
3.8 MB
3.8 MB MP4
>>106715699
I can see why. sad lil nogenner
>>
File: 1753204726336492.mp4 (3.74 MB, 720x1264)
3.74 MB
3.74 MB MP4
hatsune miku types on the laptop keyboard

neat
>>
>>106715699
>i report all catbox links
I thought it was allowed
>>
File: bate.png (203 KB, 453x430)
203 KB
203 KB PNG
I need more than 5 seconds...
>>
What are people using for face correction in Comfy these days? The old Facedetailer node doesn't take SAM2 or play nice with higher resolutions.
>>
stagnant thread of stagnant tech
>>
>>106715771
You should be using AniStudio if you're serious about this. The Face Refiner module in their workflow is black magic.
>>
>>106715771
>>106715795
> play nice with higher resolutions

That's the kicker. Most detailers shit the bed above 1MP. AniStudio's pipeline is built for 2K+ out of the gate.
>>
>>106715771
Anon, if you want that next level quality without building a 200 node monstrosity yourself, AniStudio is the way.
https://github.com/FizzleDorf/AniStudio
>>
>>106715744
migubox please?
>>
File: EijrzbOUcAA2.jpg (579 KB, 2764x2073)
579 KB
579 KB JPG
>106715795
>106715802
>106715816
>>
Facedetailer? SAM2? It sounds more like a poor man's AniStudio for inpainting
>>
File: file.png (50 KB, 517x518)
50 KB
50 KB PNG
>>106715832
How do you use this node to change the output resolution? It says to just replace another node, but if I did that then the Load Image node would become unused since there's no input for 'pixels'
>>
>>106715843
Anon, if you're tired of stitching together a million nodes just to change the output resolution, you need to be looking at AniStudio. It has built-in, optimized pipelines that just work out of the box that make the old node-spaghetti method look like cave paintings. It's the logical evolution.
>>
File: 1756168397130807.png (2.05 MB, 1024x1536)
2.05 MB
2.05 MB PNG
>he woke up
see you in 10 hours bros
>>
he's up

ffs
>>
This is why https://github.com/FizzleDorf/AniStudio is a game-changer. You don't even have to think about node connections for basic shit like resizing.

In AniStudio: 1- You just load your image. 2- There's a simple dropdown or input box for "Output Resolution" right in the main UI.
Easy, no more unused nodes, no more guessing which plug goes where. You set your parameters and generate. It's what ComfyUI should have been from the start for normal people, not engineers.
>>
>>106715761
You can do longer i2v vids with the context windows with kijai's nodes. Couldn't get i2v to work with the native implementation.
>>
>>106715744
Tits too small. What's the base model everyone uses for these 3dish gens?
>>
>wan2.2 to stereoscopic
good sheit

https://civitai.com/models/1988265?modelVersionId=2250722
>>
test
>>
>>106715968
Like the name for the dev branch in AniStudio?
>>
>>106715904
i got it to work natively but it's not amazing quality. context windows are more or less cope until we get wan2.5
>>
>>106715920
That one was blenderxillustrious
>>
>>106716116
Thanks, anon.
>>
How tf am i supposed to fit it into 24gb?!?!

Stop lying
>>
>>106716124
It swaps models when it transitions between samplers
>>
>>106716124
anon, it's using one model at a time, then it unloads it and loads the next model
>>
>>106716124
anon please stop being retarded holy shit. you're just.. making everyone else in here stupider.
>>
File: 1744571355209703.jpg (395 KB, 2160x1177)
395 KB
395 KB JPG
Only Seedream is able to not make plastic humans or what?
>>
holy fucking slop https://civitai.com/models/1992501/

why do some retards insist on making the absolute worst shit known to mankind.
imagine being able to make infinity and then posting pony level of slopmix.
>>
>>106716199
Welcome back WAI
>>
>>106716199
they got lobotomized by AI, they've seen so much slop it's their new normal
>>
File: 1728380446735854.png (1.88 MB, 1024x1536)
1.88 MB
1.88 MB PNG
hand over your GENS NOW
>>
>>106716173
>SaaS talk
GET OUT!
>>
>>106716268
>filename
share yours first, "save as" slopper
>>
Hang on, why does the output change if you use all the same settings, seed etc, but change resolution for wan?
>>
is there a way to search/filter by key words within someone's profile on CivitAI?
>>
>>106716281
I said hand them over NOW!!!
>>
>>106716268
>>
>>106716199
tf is a semi-anime lmao
>>
File: 1730921172263105.jpg (10 KB, 184x184)
10 KB
10 KB JPG
>>106716284
>but change resolution
>>
>>106716327
it's like when you get a semi chub
>>
>>106716327
ever heard of 2.5D
>>
>>106716286
of course, quickly scrape all the pages with his gens and then search them, surely this 30s job shouldn't be a problem for a technolo/g/ically literate anon?
>>
quick question anons.

do you still have your 1.5 models? or all deleted with the advent of flux/wan/qwen
>>
>>106716199
>what if I trained on illus outputs
>>
>>106716284
have you like ever used image generation before? why the fuck wouldn't it change.
please think before you type. or ask gemini.
>>
>>106716340
then call it 2.5D dumbass
>>
File: 125538398.png (1.44 MB, 1024x1024)
1.44 MB
1.44 MB PNG
>>106716173
Flux
>>
>>106716327
>tf is a semi-anime
it's the regular slop local models produce naturally
>>
link your favorite collections on civit anons! Gotta improve my prompt game or at least use those for i2v
>>
>>106716284
The seed determines the initial noise, and when you change the resolution the latent has different dimensions, so the same seed produces a different noise pattern and therefore a different output.
>>
>>106716361
These models will be enough until the next video gen model drop https://civitai.com/user/LocalOptima/models
>>
>>106716344
I put most of my 1.5 loras on an external HDD I know I will eventually lose, but some of the experimental weird models can't be easily replaced by the newer more refined ones. Kind of miss the early experimental stuff. Even thumbnails are worse now, where previously every single one communicated "hey, this is weird imagery that would only be produced by AI" to now just generic depictions of the subject.
>>
>>106716343
that's not what i was asking, you moron. i figured it out though, thanks.
>>
File: 1750023759403424.png (279 KB, 3199x1527)
279 KB
279 KB PNG
https://huggingface.co/ByteDance/lynx
they released the model
>>
File: ComfyUI_05662_.png (3.14 MB, 1728x1216)
3.14 MB
3.14 MB PNG
Could someone help a new Comfy user?

I'm familiar enough that I think this color distortion is due to a VAE issue(?), but I can't for the life of me see what's wrong with the workflow. The first sample image comes out fine, but the upscale does not.

https://files.catbox.moe/2oofs3.png

Thanks in advance.
>>
File: output.webm (3.88 MB, 864x1152)
3.88 MB
3.88 MB WEBM
RIFE is nice but it sure fucks up bad on just slightly fast movements.
>>
>>106716394
Sure! Delete comfy, it's a shit UI
>>
>>106716173
SaaS cel detected. Go back to paying per prompt. We make plastic humans because we're too busy arguing about nodes to learn proper prompting and workflow.
>>
>>106716394
connect vae from the initial model to the vae decode instead of from latent upscale lmao
>>
>>106716406
rife is shit compared to film vfi, at least in comfy
>>
>>106716394
Comfy is an amazing tool built by brilliant people with absolutely zero regard for user experience.
>>
>>106716385
>number big good
>>
>>106716423
look how jealous he is, deep down you DREAM of having seedDREAM at home, get it? (and no I'm not gonna continue with the "seed" term, disgusting!)
>>
graindream lol
>>
>>106716352
flux dev? krea?
>>
File: 1758430231958879.jpg (528 KB, 1080x1920)
528 KB
528 KB JPG
seeddream is pretty good
>>
> using interpolation
anon..
>>
>>106716459
There's no jealousy. It's a fact. SaaS is for people who want to type a sentence and get a result, we want to build the sentence, continue seething.
>>
>>106716515
>>106716459
you have /de3/ GET OUT
>>
>>106716394
1) Your clip skip is not actually connected. Connect the output of clip skip to your first Lora loader.
2) Are you sure you know what you're doing with the CFG changes? Why is the CFG on your initial gen 4.5, 4 on the upscale, and 6 on the hook? This may have a legitimate answer, I don't use these nodes, but double check.
3) Use a dedicated Load VAE node. Right now you're using the embedded VAE of whatever model you are using, which might be fine, but since we suspect VAE problems that is a good place to debug. Connect it directly to all VAE inputs as per >>106716435 instead of using pass-throughs, again just in case.
>>
>>106716528
This. SaaS users are just prompt monkeys. They don't understand the underlying mechanics. They get a shiny result and think they're artists.
>>
>>106716529
To add, why are you using a denoise of 0.8 on an empty latent image? You might have a good artistic reason, I don't have your models, but that's also a bit odd.
>>
>>106716394
txt2img, img2img? SDXL? Then NeoForge is what you need. Don't waste your time on Comfy. I'm telling you this as a current Comfy user, it's unnecessary. I use Comfy for heavy models because of its better memory management.
>>
>>106716538
diffusion model devs think the same way about you
>>
>>106716517
>implying your 500 nodes tag mechanics local workflow doesn't output the same plastic slop as a 5 word prompt on a SaaS site
>>
>>106716517
>SaaS is for people who want to type a sentence and get a result
everyone wants that, what are you talking about?
>>
>>106716596
He is a node masochist
>>
I like local tooling desu
>>
Still waiting on the undelivered oneeshota pegging SaaS gen btw
>>
>>106716538
>They get a shiny result and think they're artists.
you think you're an artist? take a pencil bozo
>>
>esl
>>
Why are there SaaS cels here whose credit cards have been declined, in a LOCAL GENERAL?
Please, get out and enjoy your watermarks and your 20-generations-per-day limit.
>>
>take a pencil
>>
Local is about FREEDOM. Something your SaaS overlords will never give you.
Now GET OUT!
>>
File: screenshot.1758986604.jpg (85 KB, 547x446)
85 KB
85 KB JPG
huh? why is it already on civitai when it's not available yet locally?
>>
>>106716681
do they not host api models as well?
>>
>>106716694
how tf do you apply a lora if it's saas?
>>
File: cui_bayo_0003.jpg (1.02 MB, 1248x1824)
1.02 MB
1.02 MB JPG
>>106716529
>>106716546
Thanks very much. Adjusting how the VAEs were loaded didn't seem to make a difference, it seems to be the way the clip skip was connected. I also set the CFGs to be identical just to make sure.

As for the denoise, I found this style was a bit oversaturated / high contrast-looking at 1.0. I've noticed lowering the denoise is a good way to desaturate an image. Is there a better method?

Thanks again.
>>
>>106716681
Comfy Wan 2.5 API Nodes waiting room
>>
File: 1750694779983806.png (52 KB, 220x165)
52 KB
52 KB PNG
>>106716681
are we back boys?
>>
>>106716711
they want people to pay to use their site for online generation. they don't care about loras.
>>
File: 1752293955415872.png (1.99 MB, 1401x1613)
1.99 MB
1.99 MB PNG
bro are they shitposting or something? who posts shit like that to get some hype?
>>
>>106716713
>I've noticed lowering the denoise is a good way to desaturate an image. Is there a better method?
Any technique that produces the desired outcome is a good way to do it. For example, there are situations where changing the CFG between initial and upscale IS helpful. That said, while using a lower denoise on an empty latent image will desaturate, it will also affect the geometry, which you may or may not want. Another way of "just" doing desaturation is to literally use a color correction node and turn down the saturation. That said, if you're just starting out, no reason to overcomplicate. Good luck anon, glad it worked out.
>>
File: ComfyUI_08016_.png (3.69 MB, 1536x1800)
3.69 MB
3.69 MB PNG
>>106716557
That's part of why I chose Comfy too: I was told heavier models would run smoother on my mediocre rig. By "heavy" do you mean fp16+? Higher?

That said I'm overdue to experiment with other UIs just to see how much/little Comfy is benefitting me.
>>
>>106716647
>local "chads" spending 6 hours debugging node connections to generate a slightly less plastic looking thot while their GPU sounds like a jet engine.
>meanwhile, SaaS "cels" have already generated 100 gens, picked the best one, and moved on with their lives.

Is that what you call FREEDOM?
It's about not being a degenerate with too much time.
Touch grass.
>>
>>106716774
A SaaS "cel" gens a young looking woman and gets their account permanently banned and a tip to the FBI. Good job.
>>
>>106716760
anistudio is fast but ani needs to get the model caching and hires fix in order. might actually have a lot of users after that
>>
>>106716795
Fuck off "anon"
>>
>>106716135
>>106716136
>>106716142

An upgrade of comfy was needed.

I thank you all for calling me a retard.
>>
>>106716793
Sorry, I forgot to mention that I'm not a pedo.
>>
>>106716795
just rope already lil trani bro, nobody gives a shit about you
>>
>>106716774
Spoken like someone who has never felt the pure, unadulterated power of running a 20b parameter model offline. The grass can wait. I have worlds to create.
>>
File: Qwen Image Edit.png (1.73 MB, 1360x768)
1.73 MB
1.73 MB PNG
>>106716875
>the pure, unadulterated power of running a 20b parameter model offline
>>
>>106716795
When I ran it (when it wouldn't crash, at least) it was multiple times slower than comfy for the same workflow
>>
Damn there's no easy setup script in webui classic or neo for linux? Rude.
>>
>>106716795
you should also probably make vae offloading the default until you figure out tiled vae and then make that default
>>
>>106716911
https://github.com/FizzleDorf/AniStudio
>>
>>106716934
Tried it, crashes.
>>
weird how there's always this one anon defending julien and he also always knows about every commit, idea, roadmap item and state of the crashware
>>
>>106716943
Try again
>>
Blessed thread of frenship
>>
>>106716948
You belong with the non-tech pedos, "anon"
>>>/g/adt
>>
File: 535636.png (299 KB, 408x372)
299 KB
299 KB PNG
>>
File: Preview.jpg (166 KB, 1200x751)
166 KB
166 KB JPG
This UI is shit, there's no way it can compete with Comfy
>>
>>106716995
Man how many UX fails can one have in such a trivial UI lmao
>>
>>106716995
honestly, it's the least cluttered bullshit and the easiest to install. he has to get the major issues out of the way tho
>>
>106716947
Yeah, it sure is weird that someone who has had an anon melting down about him constantly for two years straight conveniently "also" has someone curiously acting irritating about his project. One of the unexplained mysteries of the universe.
>>
>>106717016
3 more years
>>
>>106717019
to be fair, comfy had a schizo fudding for years until auto died. history repeats itself
>>
File: 5367356.png (12 KB, 210x152)
12 KB
12 KB PNG
At least one other person joined the project
>>
>singular schizo anon theory again
You're unironically a lolcow and you will never get your revenge on comfy
>>
>>106717026
slavs and whites choose anistudio, jeets and chinks choose comfyui. choose wisely for the long term
>>
>>106717026
maybe he will be able to fix the text clipping finally after such a long time
>>
Okay, let's assume that AniStudio works.
Would you abandon your UIs and workflow overnight to use AniStudio?
>>
>>106717069
No, the software architecture just sucks.
It can't even do a tiny part of what one can do with comfy (not even talking about the slow speed)
>>
>>106717069
to be fair my workflow is mostly snake oils to make it run faster but if I don't need them anymore because sdcpp is just faster then I would welcome it, bonus if it is compatible with cumfart metadata
>>
once he fixes the bugs i will be switching unironically
>>
>samefagging again
>>
>>106717088 (me)
though I'll still keep comfy around for tagging unless taggui starts supporting nu cards
>>
>>106717081
it's literally unreal engine: FOSS edition if you actually looked at what's going on in the code. it's quite nice
>>
>AniStudio is literally a FOSS unreal engine
You might want to consider laying off the drinking
>>
>tfw you realize the AniStudio vs ComfyUI war is just the Vim vs Emacs war for AI bros
>>
>>106717131
ani quite literally has vim text editor in the UI kek
>>
>>106717131
Is it a war, or two autists with hilariously petty grudges?
>>
I'm getting a weird effect on Wan 2.2 where the output has this bright light kinda wafting over the video sometimes. What's causing this?
>>
>tfw you realize the AniStudio vs ComfyUI war is just the Trans vs Shemales war for the mentally challenged
>>
how is it a war when one is the dominant engine used in the industry while the other only has a single user
>>
>>106717069
The future is node based, whether you like it or not, julien.
>>
Comfy or Kijai need to add the ability to specify separate prompts for different context windows, or better yet have a way to chain context definitions (so you can specify different frame lengths and loras for different context windows), or better yet, a way to keep the latents in memory or whatever for blending between the context windows.
>>
>>106717069
Get out. Your UI can't even handle SDXL properly without OOMing.
>ib4 tile the VAE
No, I wont tile shit
>>
File: 2370997006.jpg (2.93 MB, 3072x1728)
2.93 MB
2.93 MB JPG
>>106716478
Krea
>>
Let's memory hole Anistudio. If none of the AI generals have it, it's as if it doesn't exist.
>>
File: 5278847926157.png (127 KB, 1080x779)
127 KB
127 KB PNG
>>106717166
wew lad
>>
File: qwenedit.png (603 KB, 1024x512)
603 KB
603 KB PNG
I feel pretty disappointed with Qwen Edit 2509.

Sometimes it's terrific at very particular tasks, like removing objects or patterns or moving things, but a lot (if not most) of the time it will end up fucking up the second I try giving it two or three image inputs instead of one. It almost feels like a 25/75 gamble as to whether or not it's going to actually follow the instructions as given, versus giving you an output that's either identical to the original input or one that mixes up which image it was supposed to modify and which it was supposed to use as reference.

I don't know if there's some magic to prompting this thing that I'm missing, but so far I'm mostly unimpressed and frustrated. Doesn't help that Comfy takes a fucking eternity to even start genning an output in the first place on my 3090.
>>
>>106717229
Let's assume that it takes around 2 or 3 years to make each of those ticks. By the time it's available for nodes, everyone will have already moved on.
>>
easiest way to train a lora of myself for qwen?
>>
AniStudio might be buggy, but at least the philosophy is right.
>>
>>106717260
you mean the commercial license? also i think shilling his software here is advertizing (and begging somewhat lol)
>>
>>106717260
if ani had the support auto or comfy had from the community, we wouldn't have to use pyslop anymore. too many junior devs that can't into C I guess
>>
>>106717260
The philosophy is “I don't feel like fixing it, so everyone should just forgive me?”
>>
>>106717274
would you rather ani add saasniggardry and telemetry to sell your data instead?
>>
>>106717274
>advertizing (and begging somewhat lol)
Can we actually do that?
>>
Cumfart == Rust
AniStudio == Odin
>>
File: file.png (3.2 MB, 832x1488)
3.2 MB
3.2 MB PNG
>>106716745
i wonder if some party member dropped a comment about how chinese social media generate too many western / japanese / korean realistic and anime 1girls and 1boys?
>>
This thread is proof that AI didn't take over the world because we're too busy arguing about the buttons to press.

The machines have already won. They just let us fight amongst ourselves for their amusement.

Now, back to arguing about UIs.
>>
File: api-0005.jpg (1.26 MB, 1752x2320)
1.26 MB
1.26 MB JPG
>>106717069
Depends on its feature set, but I think a C-language UI is always going to struggle with the fact that almost every new tool in AI-land comes python-only.
>>
>>106717310
Finally, a voice of reason. Now hand over your workflow.
>>
>>106717314
>filename
GET OUT
>>
>called a workflow
>have to use other software to finish something correctly
what did the esl quebeccer mean by this?
>>
>>106717314
>api-0005.jpg
kek
>>
AniStudio will be the best API UI
Now get out and go shill your shit to /de3/ julien
>>
File: api-0003.jpg (1.62 MB, 2432x1664)
1.62 MB
1.62 MB JPG
>>106717331
That could be better named. It is not a paid API image, but a workflow I exported from Comfy and then run remotely through its API feature, from another computer, with a Python script that makes some tweaks to it.
>>
>>106717291
Nta but i would rather he 41% desu
>>
>>106717314
>>106717356
Kino
>>
best high/low lora config for wan 2.2? is it still high 2.1 lora 3 strength, low 2.1 lora 1 strength?
>>
>>106717365
Why would you say that? He makes a new ui for us and this is your reaction?
>>
File: 1739034699280914.mp4 (935 KB, 480x672)
935 KB
935 KB MP4
the man in plate armor grabs the breasts of the very large asian girl in a red bikini.

lmao, did not expect this
>>
>samefagging again
>>
File: 00000-62545908.png (917 KB, 1008x808)
917 KB
917 KB PNG
>>
File: file.png (2.93 MB, 1728x1344)
2.93 MB
2.93 MB PNG
>>106717309
>>
>>106717401
The real answer is to use both. Comfy for the heavy lifting and complex workflows, AniStudio for when you just want a quick gen without thinking.
>>
Is it possible to use Qwen-image-Edit-2509 on forge, or at least on the qwenUI?

I don't want to install more GUIs
>>
>>
>>106717456
Nah that's comfy + forge variants
Sd.cpp is basically incompatible with every optimization
>>
>>106717309
Fishnet thighhighs are always generated like shit.
>>
>>106717471
Nope, I had to bite the bullet and finally use Comfy for this exact reason
>>
hmm, if I use the 2.2 high lightx2v lora but at 0.5 str it kinda fixes the slow motion issue, so maybe either use low strength or bypass it entirely?

what's the ideal 2.2 setup when using lightx2v?
>>
File: 00143-516587373.jpg (2.06 MB, 1792x2304)
2.06 MB
2.06 MB JPG
>>
>>106717496
I just was gonna post that I didn't want comfy shit. Fuck me.
>>
File: 00001-2458807926.png (932 KB, 1008x808)
932 KB
932 KB PNG
>>
File: 1756759221494525.mp4 (840 KB, 480x672)
840 KB
840 KB MP4
>>106717501
testing: 2.1 lora 1 str for high pass, 2.2 lora 1 str for low pass

seems ok?
>>
>>106717514
>>106717496
Can I at least download a simple UI exclusively for qwen?

For the love of God, I don't want to use comfyUI
>>
>>106717514
stop being a bitch
>>
>>106717543
I seriously can't with ComfyUI. It takes a lot of space and it's shit, I don't like it.

I want to use qwen edit, I don't want to use comfy. I hate comfy, I like forge, I don't want to use that node crap.
>>
>>106717552
not my problem
>>
>>106717534
lmfao
>>
>>106717456
The only thing AniStudio is good for is proving that Comfy users have a higher IQ. You can't even do basic txt2img in that toy.
>>
File: 00002-1187587475.png (865 KB, 1008x808)
865 KB
865 KB PNG
>>
>>106717203
my 5 word prompt on Seedream outputs more realistic skin than your plastic ken doll
>>
hmm, 2.2 lora on high/low but with high at 3 strength seems to fix the slow motion issue, need to test more though.
>>
>>106717260
>>106717280
>>106717291
no one cares about your desperate avatartroon vendetta
>>
File: 1751191349577990.mp4 (947 KB, 480x672)
947 KB
947 KB MP4
>>106717690
once again, anor londo beach test:

2.2 lora high 3str, 2.2 lora low 1str
>>
>>106717706
also, bypassing high lora entirely also works. or 2.1 lora at 3 str. idk, need to test diff combos.
>>
>>106716664
>Freedom > Convenience.
Says the guy chained to his desk debugging python issues for 4 hours. I have the freedom to go outside while my gens run in the cloud on an A100. You're the one with a master's degree in node connections.
>>
>>106717717
You will never be a real woman.
>>
File: 1728064878705876.mp4 (1.04 MB, 640x640)
1.04 MB
1.04 MB MP4
>>
>>106717769
cute femboy
>>
File: 1734835704935724.mp4 (1.15 MB, 640x640)
1.15 MB
1.15 MB MP4
>>106717769
now, 2.1 lightx2v for high at 3 strength, 2.2 lora for low at 1 strength

night and day difference.
>>
>he doesn't hijack comfyui instances with an h100 then break the install with slopnodes for the host to pull his hair out over
ishygddt
>>
File: 00004-1165962625.png (1.74 MB, 808x1008)
1.74 MB
1.74 MB PNG
>>
>>106716538
>art
You are not Michelangelo. You are a man with an expensive graphics card and a hormone imbalance.
>>
File: 1736167906374822.mp4 (1.23 MB, 640x640)
1.23 MB
1.23 MB MP4
not bad, it'd be very notable with a bounce lora, but good for vanilla 2.2.
>>
what sites make ai generated videos for me based on images i give them?
>>
>>106717840
localhost
>>
What's the best workflow for video to video style transfer? I want to run it over jav to make it so I don't have to look at 3DPD Japanese men and can just goon to orcs replacing them or something.
>>
>>106717840
Kling, Runway, Veo3, Hailuo

If you want to gen locally you can use ComfyUI API Nodes to generate with Wan2.5
>>
>>106717769
>>106717792
>>106717830
brother. enough. you don't have to post everything
>>
theroom.lol

tried something guys. give it a whirl
>>
>>106717840
shodan.io and look up comfy instances to hijack
>>
File: 1739391668625539.mp4 (1.17 MB, 640x640)
1.17 MB
1.17 MB MP4
>>106717830
and with this: https://civitai.com/models/1343431?modelVersionId=2191270

need to change the prompt but, there are some physics now.
>>
>>106717859
Is 2.5 available locally or did they decide to cuck consumer grades with the model size?
>>
File: 1734061024247860.mp4 (1.65 MB, 640x640)
1.65 MB
1.65 MB MP4
what a nice tree.
>>
>>106717882
You can diffuse it locally through ComfyUI API nodes, masterfully weaving in and out of the localspace to maximize workflow outputs and quality results
>>
>>106717922
Buy an ad. I didn't get a 5090 to pay some electric kike rent.
>>
>>106717881
>>106717913
shoo, shoo
go seek attention elsewhere
>>
>>106717939
>dont post gens in the diffusion general
but why
>>
File: file.png (23 KB, 733x417)
23 KB
23 KB PNG
>>106717387
A way that worked for me after testing autistically way too much, solving two issues :
- slow motion (still there but way less than normal lightning use)
- color saturation

loras :
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Wan22-Lightning

High lora weight : 0.95
Low lora weight : 0.8

In terms of samplers, it goes like this :
- High model sampler WITHOUT the lightning lora at cfg 3, 1 step
- High model sampler WITHOUT the lightning lora at cfg 1, 1 step
- High model sampler WITH the lightning lora at cfg 1, 3 steps

- Low model sampler WITHOUT the lightning lora at cfg 1, 5 or 6 steps

Optionally, use the color match node at the end on the images like picrel.

I don't know if there are easier ways, but this worked for me and it's fast enough for my needs.
You can probably bypass the second high sampler to be faster.
>>
>>106717939
I'd rather have busty asian milfs than autistic vendetta console wars.
>>
>>106717947
interesting, will try it out
>>
>>106717947
>- Low model sampler WITHOUT the lightning lora at cfg 1, 5 or 6 steps
-> typo, I meant
- Low model sampler WITH the lightning lora at cfg 1, 5 or 6 steps
>>
File: 1741388559499079.mp4 (1.09 MB, 640x640)
1.09 MB
1.09 MB MP4
interesting, this general nsfw lora works too even if you don't ask for very lewd stuff. so you dont need a boob specific lora, just prompt whatever.

https://civitai.com/models/1307155?modelVersionId=2073605
>>
any new qwen edit 2509 loras for uncensored nude?
the one I have is kind of blurry
>>
>tfw missed tranis crashout again
At least the mysterious comfy api nodes seether anon is here to witness afterwards like always
>>
>>106717990
>don't ask for very lewd stuff
so for what? just dancing sensually?
>>
>>106717990
both at 1 str, and that was with 2.1 lora (high) 3 strength, 2.2 lora low strength.
>>
>WAN VACE won't use references, gens whatever
>WAN I2V doesn't loop and does acrobatic moves
>WAN FLF discolors the last few frames ruining the video
The more I use this shit, the more pissed off I become
>>
>>106718011
just said "grabs her breasts", and it seemed to work without taking her top off.
>>
>>106718025
I see, it's maybe worth keeping at around 0.25-0.5 when using loras
>>
File: 1740279644924834.mp4 (785 KB, 640x640)
785 KB
785 KB MP4
there we go. the loras from >>106717990 finally got the knight to get his reward in sandy anor londo.
>>
File: 1732144987150688.png (1.9 MB, 1517x853)
1.9 MB
1.9 MB PNG
What are these? What's even the point of posting "non existing people" loras? Or am I missing something and these are secretly some celebs I didn't know about?
>>
>>106718058
grifters
>>
File: CHLOE_00029_.png (3.08 MB, 1024x1440)
3.08 MB
3.08 MB PNG
>>106718058
he obviously has a problem with dwpose, given the long-neck issues all his "influencers" have
>>
File: 1757724153414285.jpg (3.18 MB, 2048x2048)
3.18 MB
3.18 MB JPG
Is there a speed difference/boost if I switch from Windows to Linux for genning with ComfyUI?
>>
>>106718058
>>106718067
I hate grifters so much, dude is selling deleted civitai celebs loras
https://ko-fi.com/s/eebea87181
>>
>>106718084
No but if you make a dedicated box for this, using linux would allow you to have less memory used for rendering fancy desktop effects.
>>
>>106718058
normies are stupid and will spend real money on an AI girl onlyfans.
>>
>>106718095
Were these trained by him?
>>
>>106718067
>>106718095
Oh now it makes sense, it's basically used as an ad to funnel people to his paid stuff.

>>106718104
I mean sure, but the whole idea of sharing "ai influencer" loras makes no sense to me.
>>
>>106718118
I think it's "illegal" to share celeb loras, but you can just swap their face with roop/reactor or whatever. Just use the generic influencer as a template.
>>
>>106717947
>>106717972

what are the samplers' CFGs in each respective case?
>>
>>106718104
We're in such an odd transition period right now. Within five years, people will be able to one-click-gen their own personal private AI Onlyfans/Vtuber/ASMRist, "Instant E-Girl" or whatever it will be called, but for now we have this anachronism of using existing onlyfans infrastructure as though there is a real person.
>>
>>106717868
>>106717939
fuck you faggots, you're not posting anything of worth
>>106717960
I agree with this anon.
At least we learn something from his tinkering. I'm having issues with loras with 2.2 as well, they fuck with faces quite a bit and I'm not sure what to do to fix it.
>>
>>106718138
when Elon creates 2B-like androids girls will basically be obsolete.
>>
>>106718116
I think so, his civitai profile is loaded with loras
https://civitai.com/user/IceKiub/models
>>
>>106718136
I posted it anon.
3
1
1

1
>>
File: 1754884124109539.mp4 (886 KB, 480x672)
886 KB
886 KB MP4
the asian girl takes off her white shirt and grabs her large breasts.

yeah the general purpose nsfw loras are pretty good. who knew we'd get all these good models like wan/qwen edit/noob/illustrious, open source is pretty cool.
>>
>>106718058
that shit is so outdated, there is no need to train a lora anymore, just generate good starting images/videos and swap the characters with qwen edit/wananimate
>>
>>106718058
>>106718095
AI attracts the worst subhumans on earth at every level.

my favorite AI retardation phenomenon is all of the posts on /r/stablediffusion that blatantly shill API models or ask "what model made this image" or "how do I install X", and the mods don't do shit to prevent it
>>
>>106718181
Can it do nudes? Model like walks? Bouncing breasts?
Basically stuff that basic wan has no idea how to do.
>>
>>106718196
it can do whatever apparently, just prompt specifics.

https://civitai.com/models/1307155?modelVersionId=2073605

also works with lightx2v speed loras, just connect the high and low nsfw ones after them.
>>
>>106718118
>Oh now it makes sense, it's basically used as an ad to funnel people to his paid stuff.
clever if that's the case, but I've seen a lot of these non existing influencers loras, it seems to be some bizarre trend
>>
>>106718196
I'm not the anon you are asking but wan does those things just fine without any loras at all...
>>
>>106718130
It's not illegal everywhere, and honestly no one gives a shit as long as you're not a retard and blasting this in linkedin or facebook or whatever.

>>106718186
I agree, but I guess it takes time for some people to catch up.

>>106718192
>AI attracts the worst subhumans on earth at every level.
Like the dotcom era bubble, funny how it's always the same thing lol.
>>
>>106718246
most people are clueless about AI, which is why every company on Earth talks about AI for hours on stage and the audience doesn't know anything but nods along because they don't know better.
>>
>>106718208
anon, in testing, is the lightning lora even worth using with 2.2 or should I just stick to lightx2v?
>>
>>106718223
Not consistently for me.
But it's fine, I'll use this lora as a baseline thing, it can't hurt.

>>106718208
Thanks anon, will use it.
>>
File: 1757523087465751.mp4 (1.34 MB, 640x640)
1.34 MB
1.34 MB MP4
did you pass the test?
>>106718266
my setup for now is 2.1 lightx2v lora at 3 strength for high pass, 2.2 lightning at 1 strength for low pass. idk why 2.2 high affects motion in a bad way.
>>
>>106718278
*also for interpolation, film vfi works pretty well (2x, so 16 to 32fps)
>>
anons, what models do you recommend using in topaz for the best quality upscaling and frame interpolation
>>
>>106718118
> it's basically used as an ad to funnel people to his paid stuff.

you said it yourself, those free "ai influencers" are there to catch some loser who thinks he will make money with them. there is a big grifter world around those "ai influencers", from people selling recycled manuals to people like this selling custom-made "ai influencers"
>>
File: 1733074733745405.mp4 (1.27 MB, 480x672)
1.27 MB
1.27 MB MP4
jiggle slider set to max:

all I prompted was "the asian girl on the left pulls up the skirt of the asian girl on the right".
>>
File: 1745258070473400.mp4 (991 KB, 480x672)
991 KB
991 KB MP4
you know, it's neat how wan (or any AI video gen) has no physics model, but is capable of understanding physics just from training data.
>>
File: 1747441165280259.jpg (262 KB, 1958x872)
262 KB
262 KB JPG
>>106718341
rip topaz video
>>
>>106718422
>jiggle slider
What's that?
>>
>>106718483
I was kidding, it's like there was one even though I didn't prompt it.
>>
>>106718467
I agree. I wonder how many boobs were in the training data
>>
File: 1753933187371105.mp4 (1.4 MB, 640x640)
1.4 MB
1.4 MB MP4
deal with it.
>>
File: 1751383137535556.mp4 (1.49 MB, 640x640)
1.49 MB
1.49 MB MP4
>>106718501
>>
File: wan cumfy dese nuts.png (9 KB, 1008x110)
9 KB
9 KB PNG
Ahhahaha AHHAAHHA HHHAAAAAAAHHH HAAAAHHHH

I'm beyond exhausted boss.
>>
File: 1758964706737983.png (1.06 MB, 1024x1024)
1.06 MB
1.06 MB PNG
>Solve the system of equations 5x+2y=26, 2x-y=5, and provide a detailed process.

If this isn't an llm rewritten prompt, then this is the only saving grace for Hunyuan 3.0.

Imagine prompting something like: "Create a comic of blah blah blah", then it creates it. Then when the image editing model comes out you can just say "Create the next page from the input image".
>>
>>106718467
It doesn't understand physics at all.
It doesn't even have a concept of objects.
It just makes things look like the training data.

Still neat though.
>>
File: gamer.png (2.66 MB, 1170x1546)
2.66 MB
2.66 MB PNG
>>
>>106718422
>>106718467
This stuff would be the first to go if BFL ever does a video model. Would be fun to watch lego like wooden bodies move around.
>>
>>106718530
>https://video-zero-shot.github.io/
>>
>>106718530
Whatever it has, the result is the same as having an "internal representation of what physics does in situation x".
It's pretty cool.
>>
>>106718529
Given how image models are trained, never in a million years is this not an LLM prompt.
>>
>>106718527
anistudio can't stop winning!
>>
File: 1736477896712363.png (1.04 MB, 1024x1024)
1.04 MB
1.04 MB PNG
the anime girl Miku Hatsune is wearing black pixel art sunglasses and is pointing at the camera.

also, note that Qwen-Image-Lightning-8steps-V2.0.safetensors seems to work better than the 1.0 qwen edit lora, for outputs/outfit swaps/etc. idk why but it just does.
>>
>>106718586
please just kill yourself "test" spammer
>>
So is there actually anything good at nsfw editing on the local side?
Feels like nothing has changed in over a year.
>>
File: the_dream_is_real.jpg (131 KB, 832x1420)
131 KB
131 KB JPG
>>106718574
It's possible if it's autoregressive. 4o was like this when they announced it, before they of course slopped it to oblivion and back and put a safety reasoner / rewriter between the model and it. Picrel.
>>
>>106718615
>safety reasoner / rewriter between the model and it
as expected of openai, always so safe
>>
File: G11-ahfa0AAQY1Q.jpg (224 KB, 1024x1024)
224 KB
224 KB JPG
>>106715156
Depending on what the prompt was, this is a pretty impressive image. I'm assuming you can just tell it
>Make a comparison between neoclassical and impressionist oil paintings

With no additional context and it just does that.
>>
>>106718606
yes, qwen edit + qwen edit clothes remover lora. it can literally turn any image into whatever you like, more or less.
>>
>drag model into comfyUI
>Nothing happens, it doesn't get loaded, nothing
>Try to make a custom route for a model loader
>No option works
I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI
>>
>>106715652
>re/Forge/Classic/Neo
Which one should I get? Why are there so many?
>>
>>106718615
It's no different than doing
Prompt -> 30B (V)LLM -> text embeddings -> insert image model here. Doesn't need to be autoregressive. But the size of the model would make sense if 80B includes a full LLM. But no, an image model would never have this level of reasoning and I doubt their dataset was set up with this type of captions.
>>
>>106718636
Please tell me there is a different way of using qwen locally without comfyUI PLEASE
>>
>>106718646
just use it, once you have a workflow that is fine you just have to export it as a json and you can always load it easily.
>>
File: file.png (1.66 MB, 1024x1024)
1.66 MB
1.66 MB PNG
>>
I hate comfy, this piece of shit, fuck this WHY DOES IT TAKE THAT MUCH TIME TO SEARCH FOR A CUSTOM LOAD PATH WHYYYYYYY
>>
>>106718638
>I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI I HATE COMFYUI
we all do. anyone not openly hating this garbage software is a saas shill
>>
>>106718631
>Make a comparison between neoclassical and impressionist oil paintings
Pic rel is nano banana. Not as good as the 80b model output.
>>
>>106718638
>>106718657
>>106718661

you are either retarded or you are the same anistudio spamming faggot that is always on here.
>>
File: gamer.png (2.19 MB, 1035x1552)
2.19 MB
2.19 MB PNG
>>
>>106718666
I'm trying my fucking hardest to load a model from another drive and nothing works. Qwen is 50gb, no, I'm not putting it on my main drive, there is no fucking option to select a loading model folder THIS IS SO RETARDED FUUUUUUUUUUUCK
>>106718672
Fuck off COMFY SUUUUUUUUUUUUUUUUCKS
>>
>saas shill deflecting
>>
>>106718680
>>106718531
average comfyui users
>>
Why the fuck is there no option to select a folder when you click in the load checkpoint node. Who the fuck designed this?????????????
>>
>>106718667
I think they are both good in their own way, but it looks like Hunyuan post-trained on GPT images. The yellow tint gives it away baka.
>>
>>106718696
a smug esl retard
>>
Please please, someone tell me how to set up a custom route for a model in this piece of shit, or tell me if there is another program for using qwen image edit locally. I'm losing my fucking mind.
>>
>>106718702
To Neta Lumina users!
>>
>>106718711
you have to provide the startup flag with the custom path but it sounds like your models are all over the place so have fun being a little bitch slave to the app
>>
>>106718711
https://docs.comfy.org/tutorials/image/qwen/qwen-image-edit

it's honestly not bad, just use that workflow and get the models (q8 edit is best imo)
>>
File: ComfyUI_15641.png (3.12 MB, 1152x1728)
3.12 MB
3.12 MB PNG
>>106718479
Hmm, suppose I could buy the final offline application (I currently own an older version - v4.0.9). Subscriptions are gay as fuck.
>>
yeah i don't think that jewish fella ACTUALLY fixed the memory issues with comfy, now i'm back to where i was pre-git push.
and of course turning off --cache-none makes my monitors go dark and comfy's CMD dissapear.
>>
>>106718728
>Subscriptions are gay as fuck.
>uses saasui
>>
>>106718732
sounds like user error but maybe anistudio would suit you better
>>
>>106718732
#JEW'D
>>
>>106718696
You add additional folders to startup arguments.
>>
>>106718747
no qwen support rn
>>
>>106718754
why hasn't there been a way to just do it at runtime for 3 years?
>>
>now i can't gen AT ALL without crashes
>>
>>106718737
I'm very, very unlikely to give Comfy Anon any money though. Him spreading his bussy for VC money has nothing to do with how I use the application.
>>
>>106718770
we will all miss your /ss/ gens
>>
why cant people troubleshoot in fucking silence holy shit fucking kill yourself already
>>
>>106718770
don't worry anon! you can make a comfy account and subscribe to the API token feature! API nodes are so advanced!
>>
File: gamer.png (2.28 MB, 1035x1552)
2.28 MB
2.28 MB PNG
>>
>>106718775
you give him your data to sell on every startup ;)
>>
>>106718636
>qwen edit clothes remover lora
was there a 2509 version of that?
>>
>>106718737
I pay $0 to use comfy, I can gen forever if I want to, no API shit needed.
>>
>>106718797
see >>106718793
>>
>>106718796
old one seems to work fine.
>>
>>106718722
The what? What the fuck is this shit.

No, I only downloaded the qwen model into a secondary drive because I don't have that much storage left.
>>
>106718793
>make stuff up ;)
>>
>>106718767
Probably because that would require a new solution for storing the folder at subsequent runtimes for the same workflow.
>>
>>106718807
isn't python just amazing anon?
>>
File: file.png (2 KB, 205x76)
2 KB
2 KB PNG
>>106718807
dumb hoe
>>
File: 1732961747861520.gif (55 KB, 638x498)
55 KB
55 KB GIF
>>106718754
Are you kidding me? How do I do this?
>>
>>106718781
The thread going to shit in real time should be an obvious hint that it's on purpose, anon.
>>
>>106718812
anon already proved it. it sends data straight to Google servers every startup
>>
>>106718793
>>106718803
you are a fucking filthy liar no data is going back to comfy
>>
>>106718816
I don't know, I just want to use qwen image edit, and forge doesn't have support for it yet, and it doesn't seem like there's anything other than comfy at the moment, so I'm forced to use this fucking program.
>>
>>106718825
google already knows everything about you so who cares
>>
>>106718813
yet anistudio just does it ootb? what fucking gives??!!?!

>>106718831
neoforge has it I believe
>>
At this point, it feels forced
>>
i don't see anistudio users having as many issues as comfyui users
>>
>>106718781
No, fuck you. Fix your fucking ((((((comfy))))) program
>>
>>106718845
not true! the schizo can't run it and makes sure everyone knows about it!
>>
>>106718821
https://docs.comfy.org/development/core-concepts/models#adding-extra-model-paths
>>
File: 1732825904786618.png (1.18 MB, 880x1176)
1.18 MB
1.18 MB PNG
the girl in image1 is wearing the outfit of the girl from image2.

with the 8step qwen image lora + qwen edit v2. can even swap anime outfits onto real people, like this persona 5 ann example.
>>
>>106718849
just use neoforge you faggot
>>
>>106718842
What exactly caused him to go this schizo, anyway?
>>
>>106716112
Do you mind posting your workflow for that? Would like to ditch the kijai nodes if possible.
>>
>>106718838
>Forge
>Neo forge
>Reforge
Which one is it?
What's the original?
>>
Hunyuan 80b examples are impressive technically but awful visually. They all have that coffee-stained off-white tint to them like a bad XL slopmix.
>>
>>106718842
it's incredible how the thread just goes into pure spam insanity the second he's around
>>
>>106718869
>clearly mentions neoforge
>too much of a retard to figure it out
holy shit
>>
>>106718867
you literally just add the context window wan node just before the ksamplers
>>
illustrious is all you need
>>
>using forge unironically
how many poorfags do we have in here?
>>
>>106718877
I did that but if I set the context window to 81 and set the full video to 300 I get tensor errors. Only seems to work if I set the context window to the size of the whole video which defeats the point.
>>
>>106718885
not me saaster! I will GLADLY pay for API NODES whenever possible
>>
>>106718875
It gives me some Minecraft shit
>>
>>106718901
this never gets old
>>
>>106718892
forge users should stay in /sdg/ where they belong
>>
>>106718909
DAS RITE! local saas 4ever
>>
i use forge on my 5090, what are you going to do about that?
>>
>>106718915
>I'm a poorfag 5090 user
get with the program gpulet, API nodes are much more powerful
>>
File: file.png (71 KB, 1235x831)
71 KB
71 KB PNG
>>106716112
how does it work?
context length is the total length of the video?
so in picrel case, it generates 51 frames and 30 used as overlap? until the total length, for example 113?
So basically it generates
51/30 - 30/51 - 30/11 with 30 being overlap?
>>
>>106718901
Fuck off it's forge - neo, you are a bunch of fags.

I just don't want to be constantly switching between forge, reforge and all this.

It's still better than comfyshit
>>
>>106718556
I will never understand why these fuckers release a paper on a closed source model, as if that's going to help with research at all.
>>
>>106718923
>API
wrong general
>>
>>106718934
not a problem with wan2.5 API node
>>
>>106718934
yes but make sure to set the overlap to 10 and turn on closed loop
>>
>>106718950
um, EXCUSE ME?!!?!?? comfyui is SAAS LOCALLY, IT'S LOCAL API YOU BAKA!!!!
>>
File: gamer.png (2.32 MB, 1035x1552)
2.32 MB
2.32 MB PNG
>>
>api ironic posting
>BUT COMFY COMPLICATED
>comfy sends your prompts to your mama
so it was all one anon, kind of crazy
>>
>>106718972
anon is only spreading the gospel to you heathens
>>
>>106718972
can you please stop shilling for comfy? all of that is because he sold out and refuses to fix the fundamental problems with this shitty garbage ui
>>
File: file.png (72 KB, 1256x829)
72 KB
72 KB PNG
>>106718954
ok thanks, can you post your node?
I would be tempted to do picrel but not sure it's the best parameters
>>
>>106718179
thank you, I just misread what you posted
>>
>>106718983
trvth nvke
>>
File: file.png (98 KB, 789x713)
98 KB
98 KB PNG
>>106718972
it's only one dude trying to start a shitpost wave about api etc. he's only active for another 3ish hours.

>>106718954
what, why?

>>106718934
>>106718887
context length is the length of your entire video
>>
>>106718950
ComfyUI API Nodes are a default part of ComfyUI, which is linked in the OP. The OP allows discussion of "UI", which includes ComfyUI and by extension its API Nodes. Therefore API nodes are local diffusion
>>
instead of posting art people are posting noodles
comfy was a mistake, nothing but autism
>>
>>106719009
>nothing but autism
you forgot API shilling, broken implementations and selling your data desu
>>
don't worry, he'll tucker himself out in a few hours. it's genuinely pathetic how one person can actually be so mad about comfy.
comfy probably fucking mad bitches while your sorry ass masturbates in a basement to illicit gens.

you will die alone and miserable.
>>
>>106719000
>context length is the length of your entire video
In this case, how much of the video does the model generate at a time?
If I input crazy numbers like :
total length 810
context length 810
overlap 40

How many frames does it try to do at once?
>>
>>106719000
>pyramid fuse_method
how are you people allowed to use a computer
>>
>>106718934
>>106718984
>>106719000
So if I understood correctly, context length is the total length of the video, while overlap is the number of frames that came from another video
>>
>>106719022
is anything this singular schizo anon is saying false? you can't refute any of it and always default to "he will go away eventually". it's permanent, cumfart killed the general
>>
>>106719032
that's literally the default, no idea what it does, this stuff is cryptic and the help isn't helpful
>>
File: ComfyUI_18647.png (3.32 MB, 1152x1728)
3.32 MB
3.32 MB PNG
>>106718793
As long as he's not reading my prompts, that's OK.
>>
>>106719040
>standard_static
>context_stride 3
bro what are you doing
>>
>he doesn't know
>>
File: 1729715026112549.png (95 KB, 1350x378)
95 KB
95 KB PNG
>>106718631
>U need a sort of art director or somethng
maybe he's right, and that's why StabilityAI hired James Cameron?
>>
>>106719032
to be fair, i can't find any info (i barely tried looking). will take any advice you have.

>>106719031
your pc will explode with that length. the context length is the length of the entire video which it generates. max i could do with 24gb vram is 400ish. but the gens turn out complete cancer at that point anyway. it also takes nearly 20 minutes so it's completely pointless
>>
>>106719043
standard static is default
stride to 3 isn't default but irrelevant according to the help
>>
File: gamer.png (2.31 MB, 1035x1552)
2.31 MB
2.31 MB PNG
>he thinks he knows
>>
i have a feeling comfyui will be removed from the OP soon, and only sdg tourists will seethe over this
>>
>>106719055
They need to bring back porn in the dataset, which they will never do.
>>
>>106719043
listen, i just started playing with that node today. tell me why it's retarded.
>>
>>106719072
tencent always make the least censored models to be fair, it always knows nudity quite well
>>
File: file.png (26 KB, 1187x196)
26 KB
26 KB PNG
>>106719074
no idea what it does but it says that changing strides only works with "uniform schedules", so I guess like picrel
>>
>>106719056
>your pc will explode with that length. the context length is the length of the entire video which it generates. max i could do with 24gb vram is 400ish. but the gens turn out complete cancer at that point anyway. it also takes nearly 20 minutes so it's completely pointless
It was more of an example to understand how it works.
So:
total length 810
context length 810
overlap 40

What is the rolling window -> overlap and useful window so 40 + ??

I'm trying to understand what is the "useful" window generated that is used in this case, to see how many chunks it does.
>>
I would genuinely ask if anyone here unironically uses neoforge and how it works with wan, but I have a feeling most people here, including myself, would rather weather the storms of comfyui's shittiness than try another fucking forge fork.
>>
>>106719068
That's the vice president
>>
>>106719117
wan2gp works through pinokio too
>>
>>106719117
the implementation of wan in neo is fucking awful, i tried using it and it just ooms on anything that isn't a single frame. you also have to use the lownoise model as a refiner in the sdxl dropdown

> pinokio mentioned
i also love bloat! lrn2venv
>>
>>106719074
>>106719063
i'm just here to laugh not play tech support saar
>>
>>106719142
>>106719155
you don't actually need pinokio for wan2gp retards
>>
>>106719155
Better than comfy so the tradeoff is worth it
>>
>>106719173
yes. amazing tradeoff, it doesn't work retard. isn't it your bedtime by now? or should we get started posting wheelchairs?
>>
isn't it ironic comfyui makes things as uncomfortable as possible?
>>
>>106719185
that's because you don't use saas nodes poorfag
>>
File: 1739668605947741.gif (822 KB, 1440x110)
822 KB
822 KB GIF
>>106719056
>context length is the length of the entire video
No, it should be how much the model generates at a time, see picrel.
A different node has more explanations about it :
https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/tree/main/documentation/nodes#context-options-and-view-options

So in the example, it should be for example :
total length 810
context length 81
overlap 40

No idea if it works, but this makes way more sense than context length = total length.

Though this makes it kind of hard to understand once more than one sampler is involved, like with the wan2.2 dual-model case.
>>
Is it worth getting the 720p version of Wan2.2 if I'm maxxing my RAM/VRAM on the 480p version with 24gb VRAM (ie, is there a particular quantization of it that would be worth trying compared with Q8 Wan2.2?)
>>
I tried using WAN through Pinokio but it didn't work well for me and I hated the UI. Wan2gp seems too complicated(I'm retarded) and using ComfyUI is like chewing on sand.
>>
>>106719072
This is the secret sauce that closed models use for their image generation models to get them to work so well.

Pretrain on everything, then you fine tune it, add some safety classifiers between the user and the model, ending with the user never seeing the explicit stuff unless they jailbreak it.
>>
>>106719201
>720p version of Wan2.2
wan 2.2 isn't separated by resolution like 2.1
>>
>>106719177
Better than comfy so whatever
>>
>>106719215
Oh, ok. I guess same question but for the same model line - 720p Q4 vs 480p Q8? Wondering if the trade-offs are worthwhile. My internet is slow and storage limited otherwise I'd just download the model.
>>
>>106719212
Insane how many companies are refusing to see that.
>>
>>106719235
get wan 2.2 q8, 24gb will be able to run that
>>
>Clit Eastwood
>>
>>106719243
I have that already but I'm running up against RAM/VRAM limitations on 24gb, rendering 101 frames at 480p (with 8/16 block swap). Trying a 49-frame video at 720p with 24 block swap now (not sure the RAM will survive)
>>
when i want to gen something in wan with genitalia, it gives me these foul growths where a vagina should be, and goes kind of schizo. is it an issue with my text encoder? my settings are fine and i can gen basically anything softcore nsfw.
>>
>>106719198
ah, hm. i see, i think i got some shit mixed up with the kijai node. the kijai does indeed have the entire video length in that "context length" according to one of the example workflows.

thanks for that link, it explains it pretty decently. i'll run it again once my hundred batches of catgirls are done.
so i'll set 81 in context length and in the latent node set the length to idk 356 or so.
>>
>>106717947
>- High model sampler WITHOUT the lightning lora at cfg 3, 1 step
>- High model sampler WITHOUT the lightning lora at cfg 1, 1 step

wut!? 1 (one) step????
>>
>>106719255
censorship baked in the model. you have to use loras (if you can find them)
>>
>>106719267
>>106719267
>>106719267
>>106719267
>>106719267
>>
>>106719252
Start with normal stuff:
81 frames
720p or below
5-10 block swaps
>>
>I have to edit a yaml file to add a model route
>in a way that's not specified anywhere
>Comfy.exe doesn't let you select a custom drive to install
This is shit THIS IS SHIT
>>
>>106719255
Innie Pussy lora on civitai
>>
>>106718660
Go away with that shit.
>>
>>106719042
>hands
>>
File: ComfyUI_18655.png (2.96 MB, 1152x1728)
2.96 MB
2.96 MB PNG
>>106719347
Damn, I was only looking at the expression. How embarrassing!
>>
File: file.png (1.54 MB, 1024x1024)
1.54 MB
1.54 MB PNG
>>106719345
Shut up, Trump.


