You Have No Taste Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106550941

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Cursed thread of foeboat
So was the tencent flux dev tune proven a scam?
>They thought Tencent was gonna save local
>>106553794You dropped >>106553804
>>106553814Looks like they were right, funny how Krea was DOA without ever managing to get any audience while a 10 minute tune on a distilled model managed to unslop it so much
>>106553688i'm using the all-in-one installer from https://github.com/deepbeepmeep/Wan2GP
i don't think it has the regular workflow things like comfyui. i tried using comfy first but i kept getting the same vram errors all the time. this one just worked for me. even though the guide said 8gb vram was workable and i've seen other anons gen good stuff on 8gb vram too
>>106553814The krea one looks pretty bad. Not saying SRPO nails it but I prefer it over the rest there.
>>106553814
>remove the slop without touching performance
Good proof of concept to me!
>>106553853lol thanks
>>106553814what the fuck is this prompt?
>>106553840How much ram do you have?
Is there a big difference in how the motions turn out between gguf, fp8 and fp16?
Trying to work out a workflow that lets me spam some prompts to see what sticks and then take it up to the fp16.
>>106553907I didn't find the motion gets mangled, it's more the image quality that suffers when quanting. the distills kill motion
>>106553895
64gb and for page file i think it's set at auto or 32gb.
Posting more random Hunyuan Image tests (slop). The refiner is active with 50 steps base model / 4 steps refiner (comfy does not support the refiner yet). I don't think I've gotten anything particularly impressive, especially considering how massive the model is (64gb of vram). It's extremely similar to Qwen. The refiner does improve fine details, but it also adds dithering artifacts to some textures. Even though I don't think the output is great, I'll keep posting it in case anyone notices anything useful, since it's difficult to run the full model.
I'm still thinking the main use case for this model may be for upscaling output from other ones. Maybe using the refiner model exclusively.
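Since comfy doesn't support the refiner yet, the two-stage handoff is easiest to picture in diffusers terms. A rough sketch below using the SDXL base/refiner pipelines as a stand-in, this is NOT HunyuanImage's own inference API, it only illustrates the pattern of running most steps on the base model and a small final slice on the refiner:

```python
# Base -> refiner handoff, SDXL used as a stand-in example (not HunyuanImage's API).
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16
).to("cuda")

prompt = "a girl standing in a field, photo"
# the base model does most of the denoising and hands off latents
latents = base(prompt=prompt, num_inference_steps=50,
               denoising_end=0.9, output_type="latent").images
# the refiner only finishes the last slice of the schedule on those latents
image = refiner(prompt=prompt, num_inference_steps=50,
                denoising_start=0.9, image=latents).images[0]
image.save("refined.png")
```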
>>106553935Post more using it as a refiner on noob gens, curious how well it actually works
>>106553935The girls look like SEXO but it's too sloppy for me. Imagine if that was realistic.
>>106553878
>without touching performance
that's bullshit, look at the shirt, the details are destroyed, and it was the same last thread, you should see the images
>>106553814Ok now try Chroma which shits on all 3.
>>106553940Yeah I agree this is probably the way. I still need to figure out how to set up their inference code for it but I will when I can (or comfy implements it first).
>>106553814
>removes the slop
>destroys the detail
you can't save flux, can you? looks like nothing is free if you want to touch distilled models
>>106553928I'll stick to fp8 then.
>i... it's bad because it was a DISTILLED model!!!
No, the technique is shit.
>>106553814Is there a reason why no one gives a fuck about Krea? I thought it was a successful attempt to unslop flux no?
>>106553969
>destroys the detail
There doesn't exist a model that has the "detail" you're looking for. Blurring a background is just a way to hide a model's inherent limitations in representing accurate details.
When I tried generating images at 1536x2560 (one of the recommended resolutions) half of them came out totally mangled like these. Also note the creepy face thing.
>>106553983
>There doesn't exist a model that has the "detail" you're looking for.
Flux dev, the point is that SRPO has worse details than the model it's supposed to save >>106551018
>>106553993
>the point is that SRPO has worse details than the model it's supposed to save
Do we have any guarantee that we aren't just running the model wrong?
>>106554001
>Do we have any guarantee that we aren't just running the model wrong?
it doesn't seem to ask for something specific, I hope I'm wrong and in reality the model can do better than that
https://github.com/Tencent-Hunyuan/SRPO?tab=readme-ov-file#inference
The oral insertion wan 2.2 lora can only do large dicks.
>>106554009No, those are average. Yours is just ludicrously small so your baseline is off.
>>106554012kek
Somehow one area Hunyuan actually seems rather good at is generating "imperfect" humans
>>106554012
>>106554016>"imperfect" humansyeah I've seen that before, but they don't look more natural, those "women" look more like troons, that's not a good thing at all
>>106554009these are large to you?>>106553159>>106553121
>>106554016
>>106554033
how long until we can gen blowjobs in the material world?
>>106554016>>106554033>>106554041that's probably the sloppiest model I've ever seen, why didn't they use their own SRPO method to unslop their own model?
>>106554052Probably a different team
fp8 = 57 seconds
fp16 = 280 seconds
5090 bros, you're about the same?
>>106554016
IMO it's just barely less "perfect" than the norm
Lotta buttchin tho
>>106554060Indeed
>>106553935i don't really like the 1girls you posted but the details on the clothes look pretty good. is it good at these, perhaps?
>>106553934That should be enough despite the low vram just with offloading. Do you remember what workflow you used?
Where/how do I start learning how to make use of nodes that I want?
Pic related (colormatch/reference image), I see stuff in other workflows that I need, but have no idea how to plug it in etc.
https://xcancel.com/TencentHunyuan/status/1966029651350028701#m
>Thanks for all the love and feedback on HunyuanImage 2.1!
and this is exactly why we only get slopped shit, those "people" suck the dick of those companies because they release free shit when they should instead make it clear that we're getting tired of the sloppy skin
You get what you tolerate.
>>106553838
I swear you're the same anon who constantly lies about non-existent BFL "takedowns" of NSFW loras
>>106553810
SRPO has MORE bokeh than the original Dev and worse details almost all of the time so I'd say so yeah
>>106553851SRPO is pixelated and blurred to shit there, IDK how anyone can prefer it, especially her face is way more fucked in SRPO than the other two
>>106554107
either the author documented the nodes or not
if not, and you can't figure it out from the labels, there's the python code or learning by trial and error
>>106554107Like what exactly?
>>106554120>>106554123Krea was finetuned on the undistilled version of flux, they had an unfair advantage
>>106554120
>I swear you're the same anon who constantly lies about non-existent BFL "takedowns" of NSFW loras
This is like when visa and mastercard said they didn't literally, actually, really, "force" steam to remove thousands and thousands of games. No need to play dumb.
>>106554130this
>>106553814i don't see anything here other than that it seems likely that none of these had a good nsfw finetune
>>106553981if you look at the CivitAI gallery people are definitely using Krea, a "dead" model is more like HiDream or SD 3.5 IMO which got WAY less images posted at a far slower pace over time
>>106554078I think it can do some fabrics and patterns better than other models although there are still some notable artifacts. Maybe the 32x vae showing its strength.
>>106554139yeah, idk, no one talks about that model either on reddit or here
>>106554105
yeah it was the one linked in the 2.1 guide. rentry.org/wan21kjguide
> /ldg/ Comfy I2V 480p workflow: ldg_cc_i2v_14b_480p.json
i used 2.1 since that's the one an anon was using on /b/ to gen blowjobs. now i'm using 2.2 with the all in one installer and videos finally gen but really fuzzy.
>>106553935Flux chin
>>106554156I mean both the subreddit and this thread are basically dominated by vidya these days, WAN in particular (for both t2v and t2i)
Hunyuan image sucks, not only is it slopped but they can't seem to get the eyes right, I would believe it if they said it was just a failed attempt
>>106554168that depends, we got spammed with Kontext and Qwen Image Edit renders here, when a model is good and fun, people tend to spam this shit, and yeah, Wan is consistently fun so it'll always remain relevant
Hunyuan kpop clones
>>106554168Did you really try to generate at 640?
>>106554171a lot of gens in this thread also aren't even labelled as being from a specific model though, and are posted by people who don't say anything at all
>>106554126>>106554127
A lot of github reading then. That image color match was pretty straightforward, just plug it in between two nodes. And it actually worked, it got rid of the light changing for a FFLF loop.
I just want to learn the tips and tricks I see in various workflows to make my own. Next one I'll try to figure out is the unload model node so it stops bricking my pc when it offloads into ram when my vram should handle it.
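If you do end up reading node source to figure out what a custom node does, most of them follow the same small skeleton. A minimal sketch below: the node itself (BrightnessOffset) is made up for illustration, but the INPUT_TYPES / RETURN_TYPES / FUNCTION / NODE_CLASS_MAPPINGS structure is ComfyUI's standard custom-node convention, so once you recognize it you can read most node packs.

```python
# Minimal anatomy of a ComfyUI custom node (hypothetical example node).
class BrightnessOffset:
    @classmethod
    def INPUT_TYPES(cls):
        # "required" inputs become sockets/widgets on the node in the graph
        return {
            "required": {
                "image": ("IMAGE",),
                "offset": ("FLOAT", {"default": 0.0, "min": -1.0, "max": 1.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("IMAGE",)   # output socket types
    FUNCTION = "apply"          # method comfy calls when the node executes
    CATEGORY = "image/adjust"   # where it shows up in the add-node menu

    def apply(self, image, offset):
        # IMAGE tensors are [batch, height, width, channels] floats in 0..1
        return (image.add(offset).clamp(0.0, 1.0),)

# comfy discovers nodes through these dicts at the package level
NODE_CLASS_MAPPINGS = {"BrightnessOffset": BrightnessOffset}
NODE_DISPLAY_NAME_MAPPINGS = {"BrightnessOffset": "Brightness Offset (example)"}
```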
>>106554179
He took an image from someone on reddit who uses a shit workflow on a tiny resolution with no refiner.
So, ignore it.
>>106553814
I still believe that SRPO method is valid, they need to try it on an undistilled model, like Qwen Image, and they will do it actually
https://github.com/Tencent-Hunyuan/SRPO/issues/2#issuecomment-3274994781
I love the chinks because they just tell it how it is
https://github.com/Tencent-Hunyuan/SRPO/issues/2#issuecomment-3278296462
>Personally, Flux became obsolete around the time of Dream 2.1, roughly half a year ago. Flux was simply a subpar model—its prompt following was abysmal, rendering it unusable, and its Chinese-style output was nothing short of garbage.
>>106554208
>defending the refiner
Qwen Image doesn't need a refiner and it looks better than the slop HunyuanImage is producing, don't settle for trash anon
>>106554220I'm not defending it, just stating a fact. From my tests I don't really like Hunyuan, so far. It's got some NSFW concepts and that's about all it's got going for it over Qwen at this point.
>>106554241fair enough
>>106554220>>106554241Qwen looks almost exactly the same as Hunyuan, but Hunyuan will output higher resolution (with the refiner being a significant part of that). And it does seem to have a lot of NSFW content. Enough that it sometimes appears unintentionally, like with a lot of the popular tuned 1.5/XL models.
>>106554257
>Hunyuan will output higher resolution
Qwen Image has no issue going for 2k images as well though
>>106554257
>Qwen looks almost exactly the same as Hunyuan
I'd disagree. Once the concepts get more niche or complex, it harshly deviates from any kind of photorealism, at least with some of the prompts I tried, compared to Qwen which still sticks to a photoreal (albeit very Qwen-like) look.
>Hunyuan will output higher resolution
I gen Qwen at 2496x1392 and I imagine going higher would be no problem at all, even if there's no real reason to as I'm upscaling afterwards anyway.
The NSFW concepts are, for me, the only thing that sets it apart. But with the other shortcomings (which may or may not be salvageable) I don't really feel like using the model much at this point.
But it's still early, most gens I made were from an undercooked workflow or through the diffusers python library directly.
Once the nodes are out I'm certainly going to test it more (and, of course, gen 100 plots nobody but me cares about).
>>106554293
>The NSFW concepts are, for me, the only thing that sets it apart.
I would agree with you but I don't want to load a second model just to "refine" things, refiners are a meme and we must get rid of this shit
Good thing I never had to learn how to use nodes.
Thanks Haoming
>>106554143it does look pretty detailed to me. not bad.
>>106554143Looks like she stayed in the pool too long lol
>>106554345It looks fucking awful. Like how a child draws wrinkles. The whole model is garbage. Evidenced by the fact they open sourced it. It's just qwen that got too fat and bloated.
>>106554331
How many weeks are we giving this one before they have an autistic breakdown and dump their fork?
I give three.
>>106554365
>It's just qwen that got too fat and bloated.
this, I still don't get why they released it at all, do they have a humiliation fetish? it's the second time they're getting humiliated by Alibaba
>>106554367
Haoming has been working with a low profile for some time now. I started following him when Panchovix retired again, and without saying anything the man has advanced a lot and continues to advance.
He is not an attention whore like Panchovix, so I think he will be more stable and sane.
>>106554409Four weeks then.
>>106554414I think with Qwen image, he wouldn't need to keep adding models until a major new release. Maybe adding Neta Lumina would be good for the anime ecosystem.
will i get a speed increase in neoforge if i only use illustrious in forge? or is this one of those "don't bother unless you use the big boys like wan and chroma" situations?
>>106554282
Sure, but Qwen is trained for 1328 and you're limited in how far you can push that before it degrades too much. Hunyuan starts at 2048, so you can go past that too as long as you have the vram.
>>106554293
I'm curious about what cases gave you very different results with Qwen vs. Hunyuan. The prompts I've tried on both gave me very close results. Could you give me a prompt to test?
>>106554309
If the refiner can be used effectively on images from other models, it could end up being really useful. Don't count on it though
man i have no idea why my gens look like this..
https://files.catbox.moe/1qe7qk.mp4
https://files.catbox.moe/di2om8.mp4
https://files.catbox.moe/yijxri.mp4
>>106554436Yes, it's made for that model at its core
>>106553814
The srpo is indeed worse than krea and flux and is very similar to chroma: it seems to have mangled secondary concepts and has no clue how to resolve them (so yes, chroma is unironically better than srpo, because while it's spatially retarded, at least it can do nudes)
But we knew it already: one should not attempt to finetune flux. There are better architectures that won't fight you every step of the way, making you lose money and waste compute. Dead architectures should be obsoleted asap, and this is just another piece of evidence.
As for the algorithm itself, I guess we'll just have to wait and see.
I get you're happy about forge and all, but there is nothing forge can offer me to convince me to stop using comfy UI.
bros is there a node that takes multiple inputs and outputs the first valid one?
or something to achieve fast switching of models/values? I'm stuck at picrel currently
>>106554477
>we knew it already: one should not attempt to finetune flux.
this, you can't save distilled models, it worked on krea because they got access to the undistilled one
>>106554478I'm using comfy, but as soon as I see neoforge adding new features at acceptable speed, I'll jump back.
>>106554210his stuff is exactly what I am looking for though.
>>106554477
>As for the algorithm itself, I guess we'll just have to wait and see.
they're gonna try on Qwen Image after that one, I hope it'll be successful
>>106554474nice, i'll give it a go later, i saw that screenshot of haoming saying you shouldn't update or some shit. not sure if it's a good idea to install it right now if that's the case
Guys I saw a screenshot of haoming telling me to drink my own piss. What kind of cup should I use?
>>106554501Mine, be my guest. You could at least reply to me if you're going to be sassy.
Honestly don't know who owns the A1111 license but if Haoming actually gets off his ass and works on his UI properly he could easily btfo Comfy. A1111's interface is literally the definition of comfy.
And Comfy is clearly built for API/service shit and plug and play workflows. Try doing any actual post production and you'll want to kys, it's not made for it.
>>106554521
>he could easily btfo Comfy.
You clearly have no idea why people use comfy and assume they are forced into using it.
>>106554483found the node. lazy bros, we won!
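For anyone wondering what that kind of switch node does under the hood, the logic is tiny. A hypothetical sketch (names made up, real node packs differ, and many use the wildcard "*" type to accept any connection):

```python
# Hypothetical "first valid input" switch node: all inputs optional,
# return whichever is connected / non-None first.
class FirstValidSwitch:
    @classmethod
    def INPUT_TYPES(cls):
        # "*" is the wildcard type many custom packs use to accept anything
        return {
            "optional": {
                "input_a": ("*",),
                "input_b": ("*",),
                "input_c": ("*",),
            }
        }

    RETURN_TYPES = ("*",)
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, input_a=None, input_b=None, input_c=None):
        # unconnected optional inputs simply aren't passed, so they default to None
        for candidate in (input_a, input_b, input_c):
            if candidate is not None:
                return (candidate,)
        raise ValueError("FirstValidSwitch: no input connected")
```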
Sometimes Hunyuan will change the apparent medium a little bit! Only a little, though.
>>106554549Part of what makes Bakemonogatari so attractive is its art style.
>>106554549holy slop
>>106554530Why are you responding like an angry woman? You didn't develop your point of view.
>>106554530
>they are forced into using it.
We were, because until now all auto forks were abandonware and barely got any new features. If they pick up pace, I'll jump ship right away.
Are the interpolated frames, in comfy, counted toward the vram buildup or is that process handled after the models have been dumped?
>>106554615Because the idea that model support is the only reason people use comfyui is a complete misunderstanding of why it's popular. If you've ever had to do something more complicated than hit the generate button you'd understand this.
>>106554644After. Interpolation has nothing to do with the model itself.
Looks like the pixel space model version 0.1 was published a few hours ago:
https://huggingface.co/lodestones/Chroma1-Radiance
Did anyone manage to use it?
>>106554681Very nice.
>>106554712
>Did anyone manage to use it?
it's a pain to have to change forks just to use this shit, what is Comfy waiting for to implement it?
I imagine you with your arms crossed and huffing. Express your ideas. Don't leave things unsaid, you sound like my wife. Comfy has always been about a NEED versus WANT issue. Hobbyists versus actual people with real demands and problems
>Do you WANT to create a video of Hatsune Miku dancing in Dubai, near a pink elephant with an apple on its head?
Use Comfy
>Do you NEED to complete a project, share it, and coordinate with over 40 people around the globe, who may or may not be using the same operating system or even a PC? Or who haven't had a good GPU in a long time?
Use DreamStudio, use Invoke
>>106554712
>Chroma
He is not laundering money? At this point it makes no sense
>>106554770Did you WANT to be a low IQ retard or did you NEED to in order to get paid or get attention?
>>106554775
>laundering money?
he only managed to get $25,000 and had to pay $150,000 to finish the finetune, he's pretty bad at laundering money
>>106554741
it looks like a bunch of code, people will have to review it I suppose
https://github.com/comfyanonymous/ComfyUI/pull/9682
>>106554712nice, no gguf yet though
>>106554834
if you have 24gb of vram you can run the bf16 without issues, and if you don't you can use this new node that allows you to offload a bit to the cpu for safetensors models
https://github.com/pollockjj/ComfyUI-MultiGPU
>>106554712Is it better at pixel art or am I confusing things here?
>>106554847Did he fix the ooms with distorch2?
>>106554870no, it's not using VAE with the pixnerd method
https://xcancel.com/bdsqlsz/status/1966034419183124527#m
Seedream 4.0 is confirmed to be a 26b model
>>106554879seed is open ?
>>106554886it's not, it's to show how big API models are. it means we can actually reach their level, if a 26b model can do that, we can reach it
anons, I have a get last frame node in my wf so that I can use that to make a longer FFLF video, but the colors/lighting are sometimes really off. Is there any fix for that?
babe wake up, another edit model got released, by bytedance this time
https://github.com/bytedance/OneReward
>>106554847thx, i'll keep this one in mind
>>106554912>The following contains a code snippet illustrating how to use the model to generate images based on text prompts and input mask>input mask
>>106554912
https://huggingface.co/bytedance-research/OneReward
>Finally, based on FLUX Fill [dev], we are thrilled to release FLUX.1-Fill-dev-OneReward, which outperforms closed-source FLUX Fill [Pro] in inpainting and outpainting tasks, serving as a powerful new baseline for future research in unified image editing.
>>106554912
>edit model
from what i see this is basically a regular model specialized to help in inpainting, it's not an "edit model" any more than default flux is
>>106553814
There are a lot of issues with clothing I haven't figured out how to get around yet, I had "well-worn" bleed into the clothing with my prompt constantly until I negged it away... BUT! SRPO really likes my EQ VAE trained LoRA, it nails the face 99.99% of the time now (gonna have to train her body back in). Flux Lines are unfortunately back, but I haven't gone back to an old CFG workflow yet (still using my current NAG wf).
All in all, I'm finding it to be pretty solid. Much better than Krea so far.
>>106554780
How much did it cost to create SD 1.5? He might've got a deal making his own.
>>106554210lol faggot has a meltie because I didn't spend half an hour per gen on shitposts
>>106554870as the other anon said, it's about trying to remove the VAE altogether, not a pixel art finetune but generating in pixel space rather than latent space
just started messing with onetrainer, do i just point this to a safetensor checkpoint file or do i need anything else?
>>106555060No, IIRC, you need all the files from HuggingFace for it to work (scheduler/text encoder/blah blah) and put them into a folder, then it'll work. If your preferred checkpoint doesn't have a HuggingFace repository, then you're fucked.
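If the base model does have a HuggingFace repo, one way to pull the whole thing (scheduler, text encoder, transformer/unet, vae, etc.) into one local folder is huggingface_hub's snapshot_download; the repo id and paths below are just examples, swap in whatever your checkpoint's base model is:

```python
# Download the full diffusers-format repo into a folder a trainer can be pointed at.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",  # example repo, use your base model
    local_dir="models/sdxl-base",                        # folder to point the trainer at
)
print("downloaded to", local_dir)
```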
it's over for amerigoyboomers
Qwen lora
https://civitai.com/models/1923241?modelVersionId=2203783
>>106555104
>>106555104>>106555108
>>106555108
>>106555114
>>106555120
>>106555104>>106555108>>106555114>>106555120>>106555127
bot or new turboautist, call it
>>106555129Indeed, only the newest turboautists care about realism
>>106555144
>realism
Socialist Realism is the only true art form.
>>106555144>Qwen>realism
This new dpmpp_2m_sde_heun sampler is great
>>106555218
>heun sampler is great
This is the same as saying more steps is great.
>>106555216
>tranimeretard is blind
Many such cases
>prompt wan for point of view
>over the shoulder shot
every time
>using wai illustrious
>my backgrounds are shit
How do I generate my anime smut with decent backgrounds?
>>106555098
>If your preferred checkpoint doesn't have a HuggingFace repository, then you're fucked.
that's fucking gay
>>106554879
>26b
That's really good to hear. Even if they're holding out on us now, we will get there.
>>106555278By not using wai illustrious
>>106555224
>This is the same as saying more steps is great.
>4090 at 1328 and no sageattn or --fast
>dpmpp_2m_sde_heun 2.50s/it
>dpmpp_2m_sde_heun_gpu 2.37s/it
>euler 2.30s/it
Well it's not double the time. picrel is gpu version. euler had a deformity
how long should it take to gen on 8gb vram with wan2.2 i2v? i have 64 gigs of regular ram too. I was thinking it'd be 5-10 minutes per 5 second clip, not hours..
>>106555399
>8vram
>wan2.2
>I was thinking it'd be 5-10 minutes
bahahahahahAHAHAHAHAHAHAHA
Turn off "cpu", turn on "cuda".
By --fast do you mean also the --fast fp16_accumulation or are these separate?
>8vram
>Wan 2.2
So that's what having severe mental retardation looks like
>--fast [FAST ...] Enable some untested and potentially quality optimizations.
>--fast with no arguments everything.
>You can pass a list specific if you only want to enable specific.
>Current valid optimizations: fp16_accumulation cublas_ops autotune
>--fast with no arguments *enables everything.
F
>>106555408
I mean people made it seem that way
>Just try the lora out
>Just play around with the settings.
That made me think it'd be fast enough so you can just play around, gen, and see what works, and 1 gen alone can't be enough to tell if something is good or not, right? So hours is the appropriate time it should be taking? if so that's all i got to know.
>>106555451
Hmm I don't see that option. closest thing to it I think might be this? I'm using Wan2GP since the comfyui setup did not want to work for me no matter what options i put, i kept getting a not enough vram error when i was using the smallest model available.
>>106555457
An anon on /b/ was doing fills on bj requests and he'd hammer out a ton and it wasn't hours in between, he was using a 2070 and wan 2.1. Just trying to do some simple bj gens
>cumfartui (comfyui) qwen image still doesn't work with sageattention
>when generating images in a batch they still don't save their own individual seeds in metadata
>there isn't a universal .safetensors and .gguf node so you have to add or switch them manually
>the default load image node doesn't auto-propagate its dimensions into nodes that require them so you have to change them manually
>wherever there are resolution height and width options there isn't a button to instantly swap them in nodes
>ram and vram allocation still a disaster
>takes 30s + tip to start the project without any updates off a gen 4 ssd
Anything more that I missed when it comes to the shit daily cumfartui usage experience?
>>106555488
>>cumfartui (comfyui) qwen image still doesnt work with sageattention
it does, if you go for this
Posting a few more different Hunyuan Image outputs before I go to sleep
>>106555496megasadge that it doesn't work with svdquants :(
>>106555104>>106555108>>106555114>>106555120>>106555127saaaar
>>106555518It added Chinese flag-looking patches to their suits by itself!
>>106555527Nothing too notable about environments so far. Not really seeing any higher level of detail yet
>>106555541
So you need an external node from kijai for something that is enabled through cli arguments in the main project that, when only enabled through those arguments without connecting that node, breaks every qwen image workflow completely by genning a black image, but only halfway through.
Meaning it doesn't actually work natively at all while still breaking the generation, instead of at the very least skipping even trying to use sage.
>>106555533
>>106555542
>>106555549I prefer when it's in node form, at least you can disable it in real time. with a cli command, if you want to change it you have to restart comfyui
>>106555533The text is so fucked compared to qwen. Is this with the refiner or still standalone?
>>106555546
>>106555555Refiner is active, 50 steps main model / 4 steps refiner. Text is really bad overall.
>>106555556A node that you have to connect for every workflow, meaning it's best to keep it as an option in some quick settings worst case then, but as long as it works for a specific model there's not much reason to disable it
>>106555570
>buzz lightyear meme
>this illus checkpoint is the best of them all!
>>106555570Only 4? What if you set it to half and half like Wan?
>>106555555Grats on the 5s btw
>>106555496Still doesn't work for me
Can I inpaint video? Her eyes get fucked up from the motion.
>>106555581
>that bridge tower
Welcome back sdxl
>random cyrillic
Considering the model looks like a product of industrial espionage based on qwen, it's so much worse than it, lmao.
>>106555580Good question, the 4 steps are hard coded in their inference scripts but still easy to change. I'll test it later. 50 for the main model is their recommended default which I've been sticking to to give them the benefit of the doubt.
waiting 5 hours for this...
https://files.catbox.moe/0viwoz.mp4
fucking kill me.
Bicycles are really hard for humans to draw properly, similar to hands, so this isn't a bad result
>>106555606
Generic average sci-fi content
>>106555611my paralysis demon about to suck my soul out
>>106555555checked
Slop unistyle. Even so, the clarity is nice.
>>106555356What is a good model then?
>>106555606Why don't u test with lower resolution and length then increase it once u got the settings right?
>>106555606SOVL
Not sure what's going on here
Generic western style?
One more extra sloppy slop. Hope these have been informative. Going to try experiments with the refiner and running it on preexisting images later.
>>106555631
yeah that's what ima try now, i put the lowest settings on as much as i could. it only took 12 minutes this time!
https://files.catbox.moe/yn79qx.mp4
now this is something i can play around with even tho the result is beyond bad LMAO
>>106555368How many steps?
How do you prompt this shot?
>>106555676top kek
>>106555681
e621: three-quarter portrait
danbooru: cowboy shot
>>106555681cowboy shot
You wouldn't let this happen to you, right, Anon?
>>106555681>>106555687>>106555690Also "cropped legs"
B-Bigma status? 2 more weeks right?
>>106555664yeah sadly they look so sloppy. maybe playing around with the 2nd pass values will help, but I'm not getting my hopes up too high
>>106555710Would having a chinese modded 4090 48GB make her happier or angrier?
>>106555718>>106555690>>106555687Thanks guys
>>106555606
5 min on 3090
https://files.catbox.moe/z1cgvs.mp4
https://files.catbox.moe/srj0zh.mp4
>>106555749fuck so good!! im on a 3070.. once i figure this out I'm definitely upgrading my gpu. this is on 2.2?
>>106555765yes https://civitai.com/models/1818841/wan-22-workflow-t2v-i2v-t2i-kijai-wrapper
>>106555749Lora for the first one?
There's a guy on simpcity who makes some detailed looking vids too, I think he talked about his 2.2 workflow but I didn't look through to see what's the thing specifically adding this crisp detail
https://saint2.su/embed/PTvCHZn0Obi
https://saint2.su/embed/r6WlcShlOeS
https://simpcity.cr/members/prison-mike.1645756/
>locked out of pc for like 7 minutes because vram got filled
Thanks.
>>106555793
https://civitai.com/models/1923528/sex-fov-slider-wan-22
>>106555773I should just try and fix the comfyui version then.. i had it installed but i always got vram errors even on the lowest settings. currently im using this thing and it doesn't take workflows
anyone here train loras for wan 2.2? considering renting something on runpod to make a couple