Eternal Kingdom of Heaven Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106873109

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2203741
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>106879198>>106879200>>106879203Fucking CivitAI man. Im having fun generating HiRes image for my blonde loli and suddenly i got cucked. FUCKFUCKFUCKFUCK
Why is there no rentry for Qwen Image Edit?
>>106879231get the cheapest 5090 you can buy, or the cheapest 4090 if you can find one, and enjoy unlimited cunny at home
>>106879231If you're that poor then buy a 16GB AMD card as befits your station.
>>106879266
>AMD
>AI stuff
Lol you sure??
>>106879266That's cruel, anon.
>Took 10-20 minutes per image to generate a 960x540 (upscaled to 1920x1080) image with 20 steps.
i- is this normal for an 8gb GTX 1080 or what??
bait
>>106879266
16GB is shit no matter what. If he's poor he should look at a used 3090 or 7900 XTX, or used professional or server cards.
Blessed thread of frenship
>>106879402
>7900 xtx
Shitpost
>>106879447it can run all the latest models like qwen and wan in comfyui, it's a perfectly viable card.
>>106879354what model are you genning with? for SD1.5 that sounds like cpu only speed
a 3090 (preferably 2) is the bare minimum nowadays
>>106879470its slower than a 5070 (non-ti) for all AI tasks
>my landscape in collage
NICE, THANK YOU SO MUCH!
>>106879471I think its Illustrious Model with Hatsune Miku pic in it. Im using Automatic1111 UI.
> genned on a 4GB AMD card
>>106879231LOCALGENERALFAGOT!
>>106879500how many hours
>>106879503Its related since im [spoiler]forced[/spoiler] to use a Local UI now
>>106879480
FUD. it takes roughly the same time as a 3090 to do a 20 step 1024^2 chroma gen. And a 5070 12GB is laughable, larger models like Qwen will run like dogshit.
you read some shitty benchmarks conducted by people who don't know how to use AI. they simply fucked up their AMD installs, probably benchmarking on Windows too.
>>106879554never used auto1111
>>106879554Check cuda.
>>106879554what the hell is this?
>>106879519
9001
>>106879600what models
Im going to kill myself
>>106879500ok but why did you have to include the trashy tattoo
>>106879500jesus, my dick.
was jesus a mushroom?
>>106879620
the asian lady is wan t2v, then wan i2v for animation.
this one is flux then wan i2v to animate. both are about 30 mins on a 3090 to gen.
>Nunchaku requires insightface which only works with numpy 1.26, but other custom nodes require a higher numpy version to work
sigh
>>106879243No no, the 4090D or the 5000 Pro is the right choice. 32GB is not enough for many LoRA training tasks, and it's borderline for some inference tasks. 48GB gives you some future-proofing.
So just getting into wan. I have a 4090. I loaded the template from comfy for the 14b i2v wan 2.2 and... it works, I guess. It uses the lightning loras and scaled fp8 models. Takes around 4-5 minutes for 720p, and looks to be offloading to RAM. What's the best way to use the hardware I've got? The lightning loras and native workflow? I've seen people talking about kijai's nodes. I looked at the gguf models and they're about the same size as the fp8 scaled models from the native workflow. I'm a complete newb when it comes to wan, if you have any tips I'll gladly take em
>>106879748
q8 gguf is better than fp8, use that instead.
native workflow is better than kijai.
be aware that using lightning loras removes the negative prompt (cfg 1) and greatly diminishes the quality of the motion (dumbed down).
use this sampler instead of the dual ksamplers:
https://github.com/stduhpf/ComfyUI-WanMoeKSampler/tree/master
>>106879648Can I have your GPU
>>106879818Based thank you kind anon. 3/4 gens I've made so far have done a weird ghosting thing, what's with that?
>>106873991
Oh, I just woke up. That's two LoRAs; one for Gwengoolie and another for big boobs. It looks like that because of the Gwen LoRA; it overtrained on the studio background and gives everything a plasticky look. I need to go back and edit her into different backgrounds (which should be easy now with QIE) and redo it.
>>106879748
Drop down to 1152x*. I upscale and interpolate with TVAI before downscaling back to 1280 for upload, so there's no benefit (for me!) to genning at the higher resolution.
>>106879872What GPU you use and can i work under your CEO dad ?
>only blue board ai
>filled with a** and t***
the internet already has infinite porn, there is no real use case for ai
>>106879925Not my specific scenarios.
>>106879925Use case for complaining about use cases???
>>106879872
2 loras in the same run, or made first image and then reused that image again in a new run?
NVIDIA DGX Spark support for ComfyUI!
ComfyUI continues to cook and improve every day!
https://x.com/ComfyUI/status/1977886707245797707
>>106880034LFG!
>>106879913
>Asus ROG Strix OC 4090
It's nice because I can limit it to 400W and it's still a hair faster (literally just one!) than a stock 4090... and also, my dad is retired.
>>106879949
Same run.
Ok I am doing 5 gens with Chroma 2k and the gen is running really slow for me, anyone know what it could be? is this the famous OOOOMING? how can I prevent that?
im getting better results in WAN I2V with no prompt than using a prompt.
should I be referencing the subject?
DONT BUY GPU, RTX 6090 WILL BRING THE 5090 DOWN
I WANT TO BELIEVE
>>106880134good.
>>106880095
>With the WAN 2.2 Image-to-Video formula, the source image already establishes the subject, scene, and style. Therefore, your prompt should focus on describing the desired motion and camera movement.
Shouldn't need to unless it's not properly recognizing something.
if this AI sh*t is so next level then try making a picture of a dog wearing sunglasses on a skateboard
go ahead I'll wait ;)
get better bait
I'm reading on github that an RTX3060 user took 7.5 hours to generate an 11s 720p video, oh fuck
maybe the future will be training models on 4bit (meaning we'll be able to go for bigger models on 24gb vram cards), and guess what, it's Nvidia that'll save us from Nvidia lmaooo >>106880242
>>106880252Too good to be true, at least for general application.There is bound to be a catch the p-hacked arxiv paper isn't telling, as always.
>>106880180should I literally prompt something like "camera moves" or just "moving"
>>106880249
the human eye can't see past 480i anyways, it's just diminishing returns after that point
if anyone's bored, make some NES style pics. I need inspo for my game and sometimes there is cool stuff in the details. My computer's too potato.
NES games that don't exist, 'this and that but it's a NES game', 'kid icarus underground forest region NES', I dunno.
>>106879924how do you get this style of panties? do they have a specific name?
>>106880334here on the cutting edge of ai, at the very brim of innovation, we ask the most complex of questions
>>106880319Not your personal army
>>106880347panties are fascinating, so indeed
>>106880233Nice
>>106880334The prompt only specifies "white panties, bow panties." The latent upscale helpfully added the lace on its own. But latent upscale likes adding details anyway, so you'll probably get that consistently too if you specify lace panties.
>>106880305
here's the official prompt guide from alibaba: https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y
scroll down to Basic Camera Movement and Advanced Camera Movement, they've got prompt and video examples of the types of camera movements.
>>106880237enjoy
>>106880334
low leg panties
https://civitai.com/models/933778/adorable-low-panties
I finally got around to trying openpose because I wanted to do something with it but it appears to be kind of worthless. Am I just doing something wrong? I fiddled with all the sliders and nothing is making it consistent. I guess one of the other controlnets would be better for stuff like this?
Does anyone have advice for doing character turnarounds and such?
>>106880377thanks anon
'388Heh. Nice try, but no.
is WAN 2.2 even worth trying out with 16gb vram? is it possible to train LORAs with that amount? i've seen some impressive img2vid and txt2vid but i dont know if i should even bother
>>106880390oh nice find
>>106880237The technology isn't quite there yet, unfortunately.
>>106880397>Does anyone have advice for doing character turnarounds and such?Openpose on its own is not great for this. You probably also want a dedicated turnaround Lora (or wait six months when we have 1click image-to-model workflows).
>>106880471>(or wait six months when we have 1click image-to-model workflows).What's that about?
>>106880397openpose got significantly worse for sdxl onward
>>106880416
No problem. To double check, I tried a no-frills upscale instead of my usual modifications. Adding "lace panties" consistently produces that pattern. The original gen probably automatically used lowleg panties as >>106880390 says due to the retro artstyle lora, but prompting it explicitly also ensures that the base gen has the correct cut.
>>106880347
We're doing that exact underwear scene from Weird Science. I can't wait for the adaptation twenty years from now.
>>106880397qwen image edit can do it.
>>106880388
1.5 sucks. do 1.4. I deleted 1.5 yet again. 1.4 stays on the drive. It's SOVL
>106880435you're on your own if you don't like it, shitposter
>>106880549
It just so happens that I've been meaning to try it, where do I get 1.4 and whatever GUI works with it?
Civitai has one 1.4 pic on it, and it's pretty close to looking like a 1.5 pic.
the VRAM optimizer goat is back
https://github.com/comfyanonymous/ComfyUI/pull/10308
>>106880590let's see it in stable-diffusion.cpp.
https://civitai.com/articles/20211
STOP NOTICING
is wanimate good?
>>106880631Did you use a refiner? (serious question)
>>106880478Obviously I can't predict the future, but multiple companies have tried their hand at creating tools that take a gen and automatically turn it into a fully textured and rigged 3D model. None is practical quite yet (to my knowledge, happy to be corrected on this point) so six months is a placeholder, but people are actively working on it for obvious reasons. If you can turn a gen into a textured and rigged 3D model, then turnarounds and other poses etc "come for free."
>>106880520yeah it's pretty nice, I got tired of getting the most plain ugliest panties, so might as well use the right terminology to get better looking ones
>>106880639imo the way is um what do you call those ai that take control of your keyboard and mouse and use apps? One that uses Blender, based on references you supply sounds like the way to go.
Holy fuck changing the UI makes night and day difference.
>>106880649If you haven't already, download the danbooru autocomplete or another tagging database. Lots of specific panty types that models are likely to know.
>>106880665what do you mean
>>106880634no?
>grok imagine cannot do a woman carrying a man in a fireman/bridal carry
Local can still win!
>https://www.illustrious-xl.ai/sponsor
Have you heard of the Korean salaryman with two (2) active braincells?
>>106880515
That makes sense. I thought it was supposed to be rock solid for posing so I was confused with it being so inconsistent.
>>106880542
Don't you need a lot of vram for that? I use this sometimes for small edits but I'd want to do this locally. Also in this test it wasn't great. Usable though, I suppose. https://huggingface.co/spaces/multimodalart/Qwen-Image-Edit-Fast
>>106880639
I thought we would get 3D generation before video and boy was I wrong lol.
>>106880746Doesn't look like her. idk. funny, admittedly.
>>106880729
Im the guy with the poorfag GPU. Im using ComfyUI now and i can generate a 1440x1440 image with 1.5x upscaling with my old ass GTX 1080 in about 5 minutes per image. It uses every bit of my RAM and VRAM but its way quicker now
>>106880764
>Im using ComfyUI now
i dont understand. what were you using before that was slow?
wake me up the day local will be able to do this kino >>>/wsg/5998181
>black and white and (red:133742069) all over
Qwen is playing coy and ignoring my token weights
>>106880771I have no idea. I just deleted it. last time i used it before CivitAI incident was like 3 years ago
>>106880717oh yeah forgot about that node, I'll install it, good idea
>>106880397
Google Nano Banana and Qwen Edit 2509 made openpose redundant for all but the most obtuse use cases. When I try to use openpose, the perspective/proportions are fucked up and don't match naturally. Straight genning is superior to trying to fuck around with the angles/proportions.
what is the point
>>106880397
If you're using sdxl or derivatives like noob/illustrious/etc, openpose is total shite. You'll get much better results with the sdxl controlnet and the invert or depth_anything_v2 modules. Openpose must have been great for 1.4 or something cause you still see it recommended tons but it just doesn't work.
>>106880881>>106880948Thanks guys.
>>106880947I guess the yellow buzz (the one that allows you NSFW) is more expensive?
>>106879234Rentry anon is long dead and no ones picked up the torch
>>106880948I cannot tell if this was real or if I just imagined it, but wasn't there something about some of the SDXL controlnets being trained with the wrong channels, and if you switched them around to be "wrong" in the interface (and therefore correcting the training error) they worked better (but not yet SD1.5 tier)? Does anyone else remember that?
>>106880940
>>106880940Damn I love that hair. Anime hair in a semi-realistic style just triggers something amazing. Also, booba.
>wan21 had a good cumshot lora>wan22 cumshot loras all make the futa spit cum out her mouth
>>106880965>I guess the yellow buzz (the one that allows you NSFW) is more expensive?Looks like it gives compensation as blue instead yellow now. At least majority of it.
For a character lora, is this enough for tagging?
>>106880986No idea. Every module seems to work fine except openpose ones. Generally I just stick to the two I mentioned though, cleanest results.
>>106880986
It was something about green and blue color channels being switched, but even if you use a channel switching node, openpose is often hit or miss.
>>106880986>Does anyone else remember that?It was some simple change for the code that required you to switch around some limb colors for controlnet pose. Not 100% sure but I think it made pose work with SDXL. NoobAI should have their own CN pose model that just works with Illustrious too.
>>106881060>>106881077>>106881094Ah, thanks, that was it.
Any grifter news? I want to griftMAXXX
>>106880947Cuck currency
>>106881013
Brooooo it only took 7 Minute to generate 2K Image (using hires fix). Fuck Civit AI i think my old shit GPU can generate Loli Miku now. Why the fuck i only know this until now ??
>>106881170Paedophile thread is over there >>>/g/adt
>>106881007
>>106880134naruto is always appreciated
>>106881046
Which captioner is this?
>>106880782
I don't think Qwen's text encoder supports that. Same story with t5.
https://futurism.com/future-society/openai-chatgpt-smut Is this the end for all other models?
>>106881217https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
>>106881240I don't trust anything that OpenAI says, especially after the enforced ChatGPT-5 safety router and other garbage. Call me when they have an Ani stripdancing on screen who suggests sexy prompts on her own volition. Then and only then I will believe.
>>106881046IMO you can disable the first one and prepend the characters name to every caption. I can't remember the last time JoyCap mentioned the resolution by default desu.
>>106881240Very nice that special needs people get to write their own articles these days.
>>106881240
>trust me bro, in 2 weeks my models will be uncucked
he said this shit for years loool
>>106881278Modern reporting is such garbage it has gotten me to retroactively question all of recorded history. These people are barely even conscious, but can we be sure that's a purely modern phenomenon?
>>106881287>it has gotten me to retroactively question all of recorded history.don't forget that only the winners write history so yeah...
Can anyone tell me why the hell this is the case? This is genuinely driving me nuts.
On illustrious and noob, I can do second pass upscaling with low denoising (typically between 0.25 and 0.4 denoising strength), using karras schedulers. It works fine.
I prefer to use this instead of adetailer or GAN models that just sharpen the image because I like the additional details second pass can add to the higher res image, while sometimes fixing the minor errors in the initial image.
However, I just can't fucking get this to work on base SDXL or Flux. It's always blurry as shit compared to what I get on illustrious.
So why the hell is this the case? I can buy that it's just a different model thing with Flux, maybe, but shouldn't it also work on SDXL? JUST WHY DOESN'T IT WORK????
Regardless, does anyone know how to get it to work on these models without cranking denoising high? (I very much prefer it to stay low because I hate the deformed hallucinations high denoising likes to add when upscaling)
To illustrate my point further, here are two catboxes with an upscaling workflow. Same prompt, samplers, seed, etc. Just the models are different. It upscaled fine on Noob while SDXL is blurry as shit.
https://litter.catbox.moe/qr2lfzmixpygkd5s.png
https://litter.catbox.moe/gyuw51hozgtp42tt.png
>>106881287
>can we be sure that's a purely modern phenomenon?
https://www.amazon.com/Looking-Back-Spanish-George-Orwell/dp/6257120985
Can recommend, easy read
>>106881179
>>106881314Can you just summarize it for me with a bot?
babe wake up, bytedance released finetunes of the flux text encoder (that seems to act like PuLiD)
https://huggingface.co/ByteDance/FaceCLIP
>>106881340>bytedance releasedimagine if it was "bytedance released Seedream 4.0" :(
>>106881303Different models handle this low denoise thing differently. It's not something you can really avoid in the way you're hoping. However, there is a way to make this work.>I very much prefer it to stay low because I hate the deformed hallucinations high denoising likes to add when upscalingOne way to solve this is by adding a controlnet to the second pass. Depth sometimes, lineart sometimes. You want it to end early (~0.6-0.8) so that the second model can fill in additional details, and you can reduce the strength (~0.7-0.8) so that compositional errors get fixed.
>>106881314I'll probably check it out, thanks anon. I remember liking his writing style a lot from 1984.
What model makes the best [spoiler]plants[/spoiler]?
>>106881370>I remember liking his writing style a lot from 1984.1984 is a great book, but a bit verbose imo, I prefer Camus he's more straightforward, I guess I just don't like fluff lol
>>106881240big doubt. they nuked all the celebrities, and censored even more things. unless they revert to the uncensored version, it's over.
>>106881340Is it drop-in, or will we have to wait for updates for support?
closed source ComfyUI when?
>>106881390not soon enough
>>106881340
also, the github was taken down?
pickle files are known to be a bit risky.
>>106881340you do notice world largest fuel tank
>>106881340
>The GitHub code repository returns with a 404
Wake me up when this shit hits ComfyUI, until then, I sleep.
>get up much earlier than usual
>start genning while eating breakfast
>it's like 3 times faster than yesterday at the same settings
???
>>106881340
>they're still using flux
that's brutal, Qwen Image exists lol
>>106881415
That's usually a sign your GPU is starting to fail. It's giving out its swansong for you.
>>106881340bytedance always release poop for free. fuck them for real
>>106881240No, they won't. Even if they did, I've never in a million years generate NSFW content on someone else's server(cloud/api). You will be profiled. Your usage will be studied, monitored and analyzed extensively. Like having sex in an interrogation room.
we already have all the models we need why do you yearn for more
>>106881422
this, they're the epitome of companies that only release failed experiments and keep their successful ones to themselves lol
>>106881421Bro, it's like a 2month old 5090, pls no.
>>106881439they're all subpar compared to government backed ones
>>106881415you forgot something. Like you're on schnell or something. No free lunch.
>>106881415
>first thing anon does when he wakes up is gen
addiction
>>106879818
>native workflow is better than kijai
according to??
>>106881415Put your PC on better airflow. Stop putting your PC outside, Clean your PC. If anything fails RMA your GPU
>>106881446
Good enough for me, and that's all that matters. I'll take NSFW subpar models over censored, locked down normie shit any day.
>>106881340Is this for the edit model? what are the faces in the corner?
>>106881474
flux isn't an edit model, it's like PuLiD: you reproduce the human by giving it its face, and desu this is a really outdated method, edit models can do that on their own now
>>106881240>Government issued gooning
>>106881464
native has always been better than the kijai meme, it's faster, it uses less vram and it allows you to run gguf models
>>106881483>flux isn't an edit modelI meant flux kontext.
there is no future for local bros. we are doomed to use wan, qwen, chroma, illustrious foreveeeeeeeeeer
>>106881415Do you powerlimit your gpu? If so, check your settings. I've had an occurrence where my gens were faster for some reason and I found out it was because MSI Afterburner had defaulted back to stock settings.
>>106881496Did they ever release Wan 2.5 or what ? If it does then its truly over. Fuck Jews and Fuck Payment Processors
>>106881496and it sucks for wan, I want to get rid of that 2 models MoE meme, fuck this shit
>>106881496and kontext.
>>106881503>Did they ever release Wan 2.5 or what ?they won't release that model, maybe there's a tiny chance they'll do it if they make wan 3.0 and that one is competitive with sora 2 but I doubt it A LOT
>>106881352Interesting. I use the first pass image as the controlnet then, right?
>>106881514Welp. Its O V E R then. I guess its time to draw again
God i hope Wan2.5 or 3.0 will be API only
>>106881464
Yeah, I'm not finding that to be the case either. The default workflow has stopped producing anything but noise after one or two gens multiple times now (with zero changes to settings) and I have to fully restart Comfy to get it to work again.
>>106881492
While Kijai may be doing a hundred stupid things in his workflows (custom everything), it really has consistency going for it. It just works!
Where did all the good genners go
>>106881516
You got it.
1) Gen the original image.
2) Feed this base gen into a depth or lineart preprocessor or both, depending on the style you're going for.
3) Use these as your controlnet, applied to the second sampler only. That means you can crank up the denoise (and resolution, or both) a little more, though this obviously comes with more change.
You can also take advantage of the different ways models deal with low denoises by changing the model on your second sampler, instead of reusing the first. For example, imagine genning your base with SDXL, then using your noob to denoise the higher resolution version. Etc etc. Just experiment.
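The recipe in that post boils down to a handful of numbers; a toy sketch in plain Python (the helper name and dict layout are invented here, only the value ranges come from the post):

```python
# Toy sketch of the two-pass hires settings described above.
# second_pass_settings and the dict layout are made up for illustration;
# the numeric ranges (strength ~0.7-0.8, end ~0.6-0.8) come from the post.

def second_pass_settings(denoise=0.45, cn_strength=0.75, cn_end=0.7):
    """Controlnet applies to the second (hires) sampler only: it ends
    early so the model can still invent detail, at reduced strength so
    compositional errors can get fixed."""
    assert 0.0 < cn_strength <= 1.0 and 0.0 < cn_end < 1.0
    return {
        "pass": 2,                    # never on the base gen
        "denoise": denoise,           # can go higher than usual thanks to the CN
        "controlnet": {
            "preprocessors": ["depth", "lineart"],
            "strength": cn_strength,  # ~0.7-0.8
            "start": 0.0,
            "end": cn_end,            # ~0.6-0.8: stop guiding before the last steps
        },
    }

print(second_pass_settings()["controlnet"]["end"])  # 0.7
```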
>>106881340what's the point? edit models can do this shit with humans and anime characters
Is there a way to inpaint faces automatically with wan 2.2?
>>106881543>The default workflow has stopped producing anything but noiseYou're clearly doing something wrong. I've done hundreds of wan gens from native without any issues. Kijai's workflows are overly convoluted and verbose.
>>106881563Sometimes it means that python's messed up.
>>106881563
>Kijai's workflows are overly convoluted and verbose.
not only that but he coded this shit in a way that makes his nodes incompatible with the rest of the ComfyUI ecosystem, it sucks ass
>>106881562there is
>>106881553I appreciate the info. Results may be pretty good. We'll see. It's basically dev.
>>106881591TELL ME
>>106881496
>there is no future for local bros. we are doomed to use wan, qwen, chroma, illustrious foreveeeeeeeeeer
I won't doom that hard, but you have to understand that the current SOTA local models we have are really decent, so it's getting harder and harder for companies to surpass that. It's not 2022 anymore; you can't just release a poop (SD1.5) and be considered the second coming of jesus christ, you have to put some effort in now to get the localkeks' attention, and for some companies it's just too much work. If they can make great models they'll just try the API route; at this point it's just too expensive and time consuming to give their secret sauce to everyone
>>106881604same as sdxl
>>106881584
yeah thats what irritates me the most. I cant just swap out a node to use a kijai based workflow, I have to redesign the entire thing. if I see someone use kijai for a wan workflow i just don't bother.
dont get me wrong, he has some awesome nodes that I use, but his wan wrapper pipeline is cancer.
>>106881623SDXL doesn't do video.
>turn on vae tiling
>oom
For wan, does entering the loras in the text prompts also work? I just noticed it popping up as I was typing.
>>106881682maybe you went for values bigger than the output itself, it has to be lower
>>106881546we had good genners? when?
>>106881717It doesn't confine the grid within the image resolution, but expands? Classic.
>>106881740
if you have a 1024x1024 image but decided to go for a 2048 tiled grid you're not using that node well. maybe that's not something that you did, but the grid size has to be lower than the image itself (duh)
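The size complaint above is just arithmetic; a toy sketch, assuming a naive non-overlapping tiler (real tiled VAE decode overlaps tiles to hide seams, which this ignores):

```python
import math

def tile_grid(width, height, tile):
    """How many tiles a width x height image decodes in, clamping the
    tile size so it can never exceed the image itself."""
    tile = min(tile, width, height)  # a 2048 tile on a 1024 image is just the whole image
    return math.ceil(width / tile) * math.ceil(height / tile)

print(tile_grid(1024, 1024, 512))   # 4: tiling actually splits the work
print(tile_grid(1024, 1024, 2048))  # 1: "tiling" degenerates to one full-size decode
```

With a tile at least as large as the image, you pay the memory cost of a full decode anyway, which is why turning tiling on with an oversized grid can still OOM.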
>>106881303Try to sample again to remove blurriness.
>>106881303Keep in mind that sdxl isn't going to respond as well when you're resizing to a larger size. Also, mess with your CFG. In general, the lower you can keep it the better but you need to test.
>>106881787Does this work for video as well?
>>106881787Have you gotten FaceCLIP to work?
>>106881880>mask
>>106881880Dude, I'm calling the cops.
>>106881880>deletedAnon's going to jail
>>106881787
I guess I can try some more but it didn't really help. Setting denoising as high as your image would also defeat the point.
>>106881801
>Keep in mind that sdxl isn't going to respond as well when you're resizing to a larger size.
Yes, at high denoising it is going to be rough. At low denoising it should be fine if I can find a way to get rid of the blur.
>Also, mess with your CFG. In general, the lower you can keep it the better but you need to test.
Well, this was CFG 3
>>106881880What's supposed to be illegal about this?
it's been confirmed it was a shock collar lol
https://www.youtube.com/watch?v=zxUlZEdPStc
ok
Saars.
>>106882209
wtf is this picture doing on fucking civitai of all places lmao
>>106882209
>river clean
AI slop
>>106856257
This is cool anon, I shared it on twitter
>>106882209
how did you get that picture of me
Guys wake the fuck up, they improved the lightning I2V version, fucking finally!
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
i love how the noodles only succeed to connect half the time
>>106882209I'm proud of you
>>106882209i don't get
>>106882243The fuck is a moe distill. And why isn't it v2.
>>106882243
>lora key not loaded:
bruh, it's not compatible with ComfyUI AGAIN, when will they learn?
>>106882342
>The fuck is a moe distill.
Wan 2.2 is a MoE model: it's actually a 28b model but uses half of its weights (the HIGH model) for one specific task (the first, noisier steps) and the other half (the LOW model) for another (the last steps).
distill means it's meant to work at cfg 1 and low step counts (4 steps for those lightning loras)
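The high/low handoff that post describes can be sketched in a few lines; the boundary value and noise schedule below are illustrative, not Wan 2.2's actual constants:

```python
# Toy sketch of a two-expert (MoE-style) sampler: the HIGH model takes
# the noisy early steps, the LOW model the cleaner late steps.
# boundary and the schedule values are made up for illustration.

def assign_experts(sigmas, boundary):
    """Return which expert handles each step of a decreasing noise schedule."""
    return ["high" if s >= boundary else "low" for s in sigmas]

schedule = [1.0, 0.9, 0.5, 0.2]  # made-up 4-step distilled run
print(assign_experts(schedule, boundary=0.875))  # ['high', 'high', 'low', 'low']
```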
Nunchaku Qwen lora?
>>106882243>>106882343waiting for kijai's version
>>106882376the [insert asian slur] are holding the PRs hostage. There are TWO (2) implementations waiting to be merged.
Remember this?
https://www.reddit.com/r/StableDiffusion/comments/1o488hl/cancel_a_comfyui_run_instantly_with_this_custom/
Comfy just added an official commit that makes the interruption instant, was about fucking time
https://github.com/comfyanonymous/ComfyUI/pull/10301/files
>>106882397thanks for the heads up, I'll try them out
>>106882430poggers
>>106882430he was waiting for somebody else to do it?
>>106882430I remember anon's post about it an hour before the plebbit repost, yes.
>>106882452>he was waiting for somebody else to do it?that's his motto, if no one complains, he won't be bothered to do it
>>106882430I thought he'd refuse to ever add that out of pure spite.
>>106882482yeah same, that's surprising (in a good way)
why isn't there an android app for comfyui
>>106882546if there would be an app it would be iOS only
>>106882479
everyone complained and he said no. someone else does it and gets praised, so he does it by stealing the code. it's that simple
>>106882594
all the code does is raise an exception in all the forward passes if you press "interrupt", like you don't need to be a genius to figure this shit out. I guess he thought it wasn't important and realized how important it actually was when that reddit post appeared
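That mechanism is plain cooperative cancellation; a minimal sketch with illustrative names (not ComfyUI's actual API):

```python
import threading

class Interrupted(Exception):
    pass

interrupt_flag = threading.Event()  # set from the UI's "interrupt" button

def check_interrupt():
    # called at the top of every forward pass; raising here unwinds the
    # whole sampling loop immediately instead of finishing the current run
    if interrupt_flag.is_set():
        raise Interrupted

def sample(steps):
    done = 0
    for _ in range(steps):
        check_interrupt()
        done += 1  # stand-in for one denoising forward pass
    return done

interrupt_flag.set()
try:
    sample(20)
except Interrupted:
    print("run cancelled instantly")
```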
>>106882243>>106882343I tried to convert the loras with the musubi-tuner but it still says it's not compatible when running it on ComfyUi, weird
I'm trying to follow this guide: https://rentry.org/wan22ldgguide and everything works, but I'm getting this error on the WanVideoSampler:
ValueError("type fp8e4nv not supported in this architecture. The supported fp8 dtypes are ('fp8e4b15', 'fp8e5')")
Which makes no sense because the WanVideo Model Loader doesn't error out when I set the quantization to fp8_e4m3fn_scaled and it loads fine.
>>106882676the setting has to match the model you downloaded. if you have a 30 series download e5m2
>>106882676
>he has 30 series
LOL
is kontext good?
new wan 2.2 i2v loras dropped:
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
>>106882985
too late anon >>106882243
and it doesn't seem to be working well at 4 steps, all I got is some blurry shit
>>106882987
apparently 12 works well (6/6 or whatever)
ill have to test myself though
>>106882720>>106882883Thanks anons, that was it.
>>106882987>>106882985>lora key not loaded
>>106883028>he fell for it
>lightx2v (repo owner): This model focuses on training for high noise. For low noise, using the LoRA from Wan2.1 directly still yields good results.
huh
>>106883175gonna try the new 2.2 for high and 2.1 on low and see what happens.
>>106883186
the man puts down his banana on the desk in his office, and drinks some coffee from a mug.
oh man, hallucination generator.
I got the most unhinged video with the new loras. The guy literally pushed his dick through the back of the woman's head and came on her breasts.
>>106883210that's exactly what I got as well, ghosting shit
>>106883210
that was 3 high, 3 low (new lora). next test: 2 steps high (2.2 new), 8 steps low (2.1).
what the fuck is going on.
>>106883243next i'll try 2.1 lora high, and 2.2 (new) low noise, default 3/3 steps.
>>106883210That's like 90% of my Wan gens now. I don't know what changed, but with or without low step LoRAs the output looks like an after school special's first hit of weed.
>>106882985
it has a readme now
>This approach allows the model to generate videos with significantly fewer inference steps (4 steps, 2 steps for high noise and 2 steps for low noise) and without classifier-free guidance
right... I did that and the results are horrible
>>106883274>>106883028
>>106883254okay, high noise one is doing something fucked, now it's at least functional.
in a few hours or whatever someone will figure out what the issue is, something must be fundamentally different in their sampler setup or whatever.
2.1 high, 2.2 (new) low
the man runs down the street in new york.
lmao
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/discussions/2
maybe they will clarify there; they posted one thing (it's high-noise focused, or whatever)
>>106883274
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/discussions/3#68ee0c1746daa22bb8c791d0
>I tried the ksampler, and it gave me drunk/blurry/oversaturated results, tried 8 different configurations, it's a mess, the MoEKSampler fixed it.
what the fuck? what difference could that make? it's just a sampler node
>>106883323
he's a normie retard with 0 understanding of how stuff works. the MoEKSampler just decides which model to use for which step, that's it. I think for 4 steps it does 1 high / 3 low, but it's been a while since I used wan at all; I'm back to the 1girl grind
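if that's right, the routing really is trivial. rough sketch of the idea (the boundary value here is illustrative, not necessarily Wan's exact number, and this is my own toy version, not the node's code):

```python
# Sketch of the MoE-sampler idea described above: each step is routed to
# the high- or low-noise expert by comparing its starting sigma against a
# boundary. The 0.875 boundary is illustrative, not Wan's exact value.
def route_steps(sigmas, boundary=0.875):
    # sigmas: per-step starting noise levels, descending from 1.0 toward 0
    return ["high" if s >= boundary else "low" for s in sigmas]

print(route_steps([1.0, 0.94, 0.63, 0.33]))  # ['high', 'high', 'low', 'low']
```

so the high/low split you get depends entirely on where your scheduler's sigmas land relative to the boundary, which is why shift and step count change it.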
so you need a new node? it's so fresh, the model card isn't even deployed :D
UPDATE: it's updated! though a working workflow would be much appreciated!
KSampler for Wan 2.2 MoE for ComfyUI is required! (by author: stduhpf)
In ComfyUI use the Custom Nodes Manager to install it.
Afterwards, use these settings by u/ucren: https://imgur.com/a/iuYsmUu
Sigma Shift: can be 3.0 to 5.0, depending on how much motion you want.
>>106883345
>https://imgur.com/a/iuYsmUu
>just go for 12 steps bro
fuck that, the previous loras worked fine at 4 steps lmao
>>106883345KSampler for Wan 2.2 MoE is in node manager, gonna test and see wtf it does.
>>106880034>>106880049kys
so it works with one node instead of two. now we see the results/difference.
>>106883375
>12 steps
if it doesn't work at 4 it's completely useless lol
>>106883345
it still doesn't work; the ghosting is gone but it doesn't respond to prompts.
prompt was:
>the man puts on a pair of glasses
>>106883398I know. I just wanna test to see if this shit works with it or if it's still drunk-o-vision.
>>106883405For comparison, using the previous wan i2v lora.
the man runs down the street in new york.
okay, it's clear at least and not drunk vision; need to test more/fewer steps.
>>106883420how many steps was that one?
>>106883428
12, trying 8 now. the sampler wants to switch at 3 steps apparently, but idk since I just started trying it. with 8 steps it's doing 3/5.
>>106883432
*also that's with the other sampler, "KSampler for Wan 2.2 MoE"
can't say if it's better or worse, but at least this one had no drunk vision.
there is discussion on the huggingface apparently:
>I don't mean to be rude but there's so much misinformation it's baffling. You don't need no useless custom nodes. Seems people don't even bother reading simple instructions anymore. You MUST:
>use EULER sampler;
>use SIMPLE scheduler;
>use shift 5;
>use 2 steps for HIGH;
>use 2 steps for LOW;
>possibly use official settings (480x832/720x1280) (81 frames) (16fps) but some slight variations are possible and more or less working.
>SHIFT 5 => SIMPLE scheduler => 2/2 steps => sigmas used during distillation. You can very easily verify this by yourself inside ComfyUI.
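the shift-5 / 2+2 claim in that quote can actually be sanity-checked with the flow-matching shift formula sigma' = shift * sigma / (1 + (shift - 1) * sigma) over a uniform ramp. this is my own sketch assuming that's the schedule these models use, not code pulled from the repo:

```python
# Sketch (my assumption about the schedule, not from the repo): warp a
# uniform 1..0 sigma ramp with the flow-matching shift formula
#   sigma' = shift * sigma / (1 + (shift - 1) * sigma)
# With shift=5 and 4 steps, the first two step sigmas land above ~0.9,
# which matches the quoted "2 steps HIGH / 2 steps LOW" split.
def shifted_sigmas(steps: int, shift: float):
    uniform = [1 - i / steps for i in range(steps + 1)]
    return [shift * s / (1 + (shift - 1) * s) for s in uniform]

print([round(s, 4) for s in shifted_sigmas(4, 5)])
# [1.0, 0.9375, 0.8333, 0.625, 0.0]
```

lower the shift and the third sigma drops under the high/low boundary sooner, i.e. fewer high-noise steps; that's presumably why the sampler node exposes sigma shift as the motion knob.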
>>106883485that's what we tried, and we got ghosting shit
>>106883493kijai will fix it
>>106883503>kijai will fix itKJBoss is here, I'll test out his lorahttps://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v
>>106883530my hero
the anime girl eats a slice of pizza.
wtf (this isn't with the kijai one)
>>106883530I can confirm it's working again with Kijai's format
>kijai lora didn't give the key error
so now we see what it can do
>>106883530i've been gone for 5 days, how is this lora different from the other 2.2 light loras?
based kijai>>106883549now im hungry
>>106879714If that takes you 30 minutes you need to fix your flow bro
>>106883566the previous 2.2 I2V lora had a big motion problem, it had slo mo shit, let's hope that one doesn't
3/3 steps, new lora high, 2.1 lora low (only the high one is out so far for the kijai version), comfy template workflow/settings.
the anime girl eats a slice of pizza
we are so back. now I will test with the old 2.2 low lora.
>>106883414
>>106883405
now it's working
>>106883581>2.1 lora lowwhere do you find that one?
with 2.2 lightning low, and 2.2 high (new kijai one): seems good also? so this new 2.2 high (kijai) fixes the old 2.2 high lora which fucked motion, I guess. the 2.2 lightning low lora seems to work fine with it.
>>106883599
you just use the regular 2.1 i2v lora; before, you used one for both.
>>106883608
>you just use the regular 2.1 i2v lora
I don't have it, that's why I'm asking lol
>>106883611https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/blob/main/loras/Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
>>106883614thanks
wanbros, dare I say we're so back?
>>106883530
it's really annoying that every time they release a lora, its format never works on ComfyUI; you'd think they'd have learned to nail that by now
>>106883626well this new lora apparently fixes what 2.2 loras fucked up (motion), kijai one is working + no ghosting bs. and 2.2 low (old one) seems to work fine. the high one caused all the slow mo before.
>>106883630
their qwen loras work fine; I'm not sure what the issue is with wan that they always need to be fixed by kijai. it was like that last time too, I think we had to run them at 0.3 strength
the anime girl eats a slice of pizza
2.2 high lora (kijai, new), 2.2 lightning low noise lora, strength 1 for both:
yep, I think we're back.
>>106883649
the anime girl picks up a bucket of popcorn and eats some popcorn from it.
low noise one from https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1
high noise from https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v
I still need to test whether 2.1 is better than 2.2 low noise though.
>>106883649>>106883626we're so back is not even real
>>106883661not sure what these vids are supposed to prove
>>106883673
the new kijai lora works and fixes the ghosting bullshit. it also has normal motion, unlike the old 2.2 lora.
you know it's funny I said kijai will fix it and who knew he uploaded a lora around the same time. mvp for fixing this shit.
the anime girl walks to the left through a door, and closes it.
not exactly, but we got more motion now.
>>106883691KJGod is always here to save us anon, remember that!
>>106883699
the anime girl runs to the left through a door in the white room, and closes it.
a few more specifics and poof: HOLY SHIT WE GOT MOTION
based lightx2v + kijai. setup: 2.2 lora (kijai) high, 2.2 lightning low noise.
>>106883744
*new lora for the high noise model, old 2.2 lightning low for the low noise model. strength 1 for both.
wan 2.5 came early anons (not really, but this lora fixes the 2.2 issues)
now we get the 2.2 motion instead of 2.1 (the loras weren't working properly for 2.2 before).
Kijai, 32 minutes ago:
>Something is off about the LoRA version there when used in ComfyUI; the full model does work, so I extracted a LoRA from that which at least gives similar results to the full model:
>https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22_Lightx2v/Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16.safetensors
the japanese girl turns around and runs far away on the beach.
now I'm gonna try the new lora for low too. but it works well.
>>106883777
same lora for high/low (kijai)
yep, we're back
>>106883788
>same lora for high/low
what do you mean? the new high lora can be applied to both the high and low models and it'll work?
>>106883800
well, it sure seems to work in this test. high worked great, and it seems to work great with low too (and the motion is fine).
interpolated, but it's very good:
I'll use a new image for more tests.
>>106883808
update:
Kijai, a minute ago:
>Just on the high noise; they didn't release any new low noise LoRA, since the old 2.1 lightx2v distil LoRA works fine on the low noise model.
The new kijai light lora just seems to hang on my system.
>>106883818
and yes, the 2.2 (new) + 2.1 distil lora combo does in fact work, as you can see.
is the old 2.1 low lora better in any way than the 2.2 4step low lora?
>>106883846here's a more fun example to test.
>>106883016Oh I see a nipple, goodbye anon!
>>106883882kijai said 2.1 distil lora works fine, so try that first
a large dog punches the man with glasses in the face, and a huge lightning strike hits the man, who falls to the ground.
lmao, just in time for horror month.
>>106883894why is he wearing a bra
>>106883969thats why he's so upset I guess
>>106883969Manssiere!
>>106883951
the man holds up a black remote and presses a button, causing a huge lightning strike to hit the dog behind him, who bursts into flames.
holy shit. wan 2.2 is back.
>>106883999
>>106883990take your schizo back
hm, the RIFE VFI node for interpolation is faster than FILM VFI; maybe not the same quality, but it's good for fast gens.
>>106884119
back where? because, personally, to me that looks like an avatartranny that finally found the cesspit its kind belongs in
>>106884204>>106883549>>106883581>>106883608>>106883649>>106883661>>106883699>>106883744>>106883760Explain this retard then
>>106883951
>>106884226a new lora came out and im testing it, meds.
>>106884226looks like an anon testing stuff out, not an ostracized faggot begging for the attention its parents never gave it
>>106884235>>>/g/sdg/
>>106884231kek wan was made for this stuff
>miku apologistsno wonder your general is always headed for page 10
image source is qwen image edit + miku and scam altman (2 sources)
the anime girl on the left fires the black pistol, and the man on the right falls to the ground. she gives a thumbs up gesture.
we are SO back. the motion is way better now; using the 2.1 loras was faster, but you got 2.1 motion, not the 2.2 no-lora motion.
>>106884278>image unrelated
>>106884278better fate for scam:
Migu faggots get the rope
>>106882133nice
>>106884317anime website
this new lora is great, the motion of the doggo is even better now.
DORYA!
new>>106884374>>106884374>>106884374>>106884374