Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106661715

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Cookies & Cream
>>106666599pretty cool collage
For those who may have missed it, maybe we'll get something better than Wan:
https://byteaigc.github.io/Lynx/
Thanks for the help with the multiple sampler method to split up the strengths of the lora for each step. Sadly it doesn't seem to work.
I'm trying to fix the color shift with the other wrapper wf.
>>106666683I downloaded one video and it's 16fps, it's probably a finetune of Wan 2.2
>>106666683i love how those numbers just do not mean anything at all.
>>106666755
>Built on an open-source Diffusion Transformer (DiT) foundation model
it's 100% wan2.2. but the somewhat interesting part is their ID adapter. that aside i see nothing else of value.
>>106666800what's with the celery
>>106666878
you don't know Hatsune Miku's leek? how young are you? lol
https://www.youtube.com/watch?v=6ZWwqTnqxdk
>>106666890I don't speak japanese, do you?
>>106666912
>do you?
I do know how to search for the translated lyrics on the internet, yes.
Blessed thread of frenship
>>106666890>Hatsune Miku's leekLeekspin was orihime from bleach mikunigger
>>106666683>>106666755While the demos do look pretty good, seems to still have the same 5 second cap....sigh...
If I've got the hardware, are the fp32 options worth it over fp16 precision in comfyui nodes? Can't tell much difference.
>>106666979
https://www.youtube.com/watch?v=ekdKIKfY6Ng
damn I miss that era of youtube, all YouTube recommendations were as kino as this
cute kote
>>106667005likely a finetune of Wan so yeah, with all the drawbacks associated with it
>>106667011isn't bf16 better than fp32?
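(quick sketch for the precision question above; plain PyTorch, nothing ComfyUI-specific. bf16 keeps fp32's exponent range but has far fewer mantissa bits, so it isn't "better" than fp32, just cheaper and less overflow-prone than fp16. whether fp32 weights are worth it in comfy is mostly a VRAM question, the numeric difference at inference is tiny.)

import torch

# compare the three dtypes people argue about: machine epsilon (precision) and max value (range)
for dt in (torch.float32, torch.float16, torch.bfloat16):
    info = torch.finfo(dt)
    print(f"{str(dt):>15}  eps={info.eps:.2e}  max={info.max:.2e}")

# float32:  eps ~1.19e-07, max ~3.40e+38
# float16:  eps ~9.77e-04, max ~6.55e+04  (narrow range, can overflow)
# bfloat16: eps ~7.81e-03, max ~3.39e+38  (fp32 range, less precision)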
nunchaku qwen image edit plus when???????????
>>106667003corpo slop you'd see them hang in your office to remind you not to think for yourself and waste your life on the company while they rape you.10/10 would kms again
Is the state of unpozzed video gen good enough that there's a thread someplace with horrible degenerate images of heterosexual coupling (and not just "female with liquid overflowing")? Where? Is there a place to try my hand at it for free?
make a plastic anime figure of the cyan hair anime girl on a round pedestal. (qwen edit)
>>106667130
>free
yes, simply buy a 5090 and you can gen it at home for free.
>>106667152oops, didnt set upscale on the compare image nodes to lanczos. now it's better:
>>106667153dont need a 5090 for wan. 12 or 16gb is enough.
>>106667084>againwhat are you a fucking cat?
>>106667189sure, if you want to wait forever just for 5 seconds.
>>106667203with a 4080 I can get a clip in like 100-120 seconds with lightx2v loras.
>>106666591
>having good success with the 2.1 lora at 3 strength for high, and 2.2 lora at 1 for low. only 2.2 high seems to affect motion in a bad way
nigga we been known this
>>106667175with both:
>>106665018Catbox?
put the image on a coffee cup, that is placed on a table in a coffee shop.
the edit models are so neat. and you can use this stuff with wan or whatever. you can inpaint and do stuff like that but it'd be very hard to do this type of stuff without qwen edit/kontext.
>>106667279Woah that's crazy I had no idea this is entirely new information that no one's mentioned before thank you for sharing anon
>>106667288
yes, my bad for discussing diffusion models in the diffusion general. point remains, these are really good tools because no amount of denoise % or controlnets would be able to do what these can do.
>>106667003I would use that in a powerpoint.
How many GB vram do you need to train an SDXL LoRA?
>>106667320
Much like the memes you choose to edit, the discussion you bring is old and stale kek
>>106667250Can it make a mamama apimiku style mimukawa miku?
>>106667394kek
replace the anime girl with Miku Hatsune. Change the text "DOROTHY: SERENDIPITY" to "MIKU: HATSUNE".
nice
>>106667445kys
>>106667455no ty sir
>>106667445pretty good, but hatsune miku isn't the hardest to change, most models know her well
>>106667481youre expecting anon to do something interesting? he would never
>>106667481you could also remove the model entirely and then swap them in with photoshop. what's neat about the edit models is they can edit or remove stuff but respect layers. can't do that with inpainting at high denoise levels.
https://huggingface.co/Qwen/Qwen-Image-Edit-2509
it's up
Incredible groundbreaking developments from the miku poster
>>106667506
Enhanced Single-image Consistency: For single-image inputs, Qwen-Image-Edit-2509 significantly improves editing consistency, specifically in the following areas:
Improved Person Editing Consistency: Better preservation of facial identity, supporting various portrait styles and pose transformations;
Improved Product Editing Consistency: Better preservation of product identity, supporting product poster editing;
Improved Text Editing Consistency: In addition to modifying text content, it also supports editing text fonts, colors, and materials
sounds good
>>106667506
>This September, we are pleased to introduce Qwen-Image-Edit-2509
>Multi-image Editing Support: For multi-image inputs, Qwen-Image-Edit-2509 builds upon the Qwen-Image-Edit architecture and is further trained via image concatenation to enable multi-image editing. It supports various combinations such as "person + person," "person + product," and "person + scene." Optimal performance is currently achieved with 1 to 3 input images.
let's fucking go dude, no more image stitching cope anymore
>>106667506thank you chinks, you really are our saviors
>>106667506any fp8/q8 yet?
>>106667506
https://huggingface.co/spaces/Qwen/Qwen-Image-Edit-2509
https://huggingface.co/spaces/akhaliq/Qwen-Image-Edit-2509
there's demos in here
>>106667537
it's only been out for less than an hour, unlikely
>>106667506
already? lmao, qwen actually speedrunning to agi in 3 years...
>>106667445>>106667279>>106667152Can you try with the updated model >>106667506
>>106667506
>and it's not over
we'll get wan 2.5 in a few days, Alibaba is better than Santa Claus lmao
>>106667574
>https://huggingface.co/spaces/akhaliq/Qwen-Image-Edit-2509
wtf is this shit ;_;
>>106667574dont have a quant or fp8 download, I assume it has to be converted from this batch of files.
>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit.
what? so they are updating it monthly or what?
>>106667601China does what Scam Altman dont
>>106667506
sadly it does look like just an incremental improvement
>>106667601
>what? so they are updating it monthly or what?
I think they've already done that with their LLMs
at what point will you wake up and stop getting excited for chinese slop
>>106667506NUNCHAKU NEXT TARGET
>>106667611
sure, but is this just a further monthly finetune or the actual "edit plus" they were talking about?
>>106667622>NUNCHAKUgive up bro lol
>>106667619slop? wan is better than any western video model and is open source. qwen/qwen edit are free. OpenAI want you to pay $1000 a month for 5 prompts a day.
>>106667506
if that one doesn't zoom in randomly, all we have to do is SRPO this shit and we'll be back
that's nice and all but let me know when they are brave enough to include aroused genitals in the dataset
ok yeah sure
So are the lightning loras supposed to be 2 high 2 low or 4 high 4 low?
>>106667619never, we can save it!
>>106667611I still would welcome a leak of dalle 3. It has a weird vibe no other model gets right.
>>106667506
>The girl from Image 2 is sunbathing on the lounge chair in Image 1
>The girl from Image 2 is drinking coffee on the sofa in Image 1
excellent, that's exactly what I wanted from an edit model
>>106667678scale is way off in the coffee shop image
>>106667640
wan is the only non-slopped decent model from xi, it's an outlier. need i remind you of the dozen failed image models
>>106667692that was a bad idea to use the ratio of the 2nd image, it should've been the ratio of the 1st image
>>106667506
>60GB
do I need 60gb vram to run this?
>>106667619
Qwen Image is less slopped than Flux and has the apache 2.0 licence, if only it wasn't so big it would be finetuned by another suspiciously richfag
>>106667704
it's the same size as Qwen Image, so a 24gb vram card will suffice (Q8 QIE + Q8 text encoder)
>>106667704where'd you get that number? I'm seeing 40, and that's fp16 so fp8 will be 20
>>106667712I use q8 image/edit on 16gb (4080) without issue, not even using multigpu node.
It's shit. You will cope for a week saying how it's better than nano banana until you too finally admit it's shit.
can you actually run the bf16 qie with blockswap on a 24gb card?
>>106667745
who legit cares if it's better or not than nano banana? this shit is free and wildly uncensored compared to any paid non-local model, you fat cunt.
>>106667546
>>106667745
>You will cope for a week saying how it's better than nano banana
no one will say that lol, I don't expect QIE to beat nano banana anytime soon
Is Wan 4:3 resolutions only or can it do 9:16 (ie iphone), 1:1, etc? I mostly gen between 4:5 and 9:16 so 4:3 doesn't work for me...
>>106667760the eyes are sus for poor Ryan, but I like the skin texture though
>>106667767this is a dumb question.
>>106667767just try it?
>>106667767you can do any size, smaller is good for fast gens (ie: 640x640 vs 832, etc)
the model has been out one hour now
where quants
why aren't all loras migrated to the new model already?
>>106667767If only there was perhaps a guide written that included this very information.
>>106667506not bad at all
Qwen Image Edit PSA
Always add:
"without changing anything else about the image"
at the end of your prompts if you want to preserve anything at all from the original image
Also here's a great workflow for the old Qwen Image Edit model
https://files.catbox.moe/6wcz4m.png
>>106667839your advice is deprecated anon kek >>106667506
>>106667821Can you catbox those two images so I can try it with the old model? And paste the prompt too if you can
>>106667856I found it on reddit so I can't help you with that
>>106667853
No, I'm giving it specifically because the new version also needs that same phrase appended to preserve things properly, and because I see people using bad workflows for the old model and concluding it's bad when the new one is just an incremental improvement.
The new model still doesn't keep the exact same resolution, and it obviously still has the same VAE quality loss since it's still not pixelspace.
>>106667872
once lodestones finishes his radiance/pixelspace model we will likely see more models adopt it.
all pissy trolling aside, it really keeps images nicer not having to run them through a vae. this will be important for edit models, where you currently can't iterate because the quality gets raped by the vae.
do people still use guidance that makes generation take 2x as long if you use negatives? haven't done SD in a while
quants where?
>>106667642
>if that one doesn't zoom in randomly
it does, look at 33 sec
https://xcancel.com/Ali_TongyiLab/status/1970194603161854214#m
>>106667821looks kinda bad, you can see how it completely rejects the blue jacket's texture when outpainting. looks like a 512x512 crop pasted on top
>>106667642
srpo is a meme lil bro, let it go
>>106667906oh yeah nice catch
not uploaded yet but seems like the first quants are here https://huggingface.co/calcuis/qwen-image-edit-plus-gguf
>>106667916we'll still have to wait for comfy to implement the multi image process too
no qwen image edit plus nsfw finetune yet?
>>106667885
>once lodestones finishes his radiance/pixelspace model we will likely see more models adopt it.
yep, for edit models it'll be mandatory to go for pixel space, maybe QIE will be the first to do it, who knows
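(a minimal sketch of the VAE round-trip problem being discussed, using the small SD VAE from diffusers purely as an example; "input.png" is a placeholder path and the iteration count is arbitrary. each encode/decode pass loses a bit more detail, which is why pixel-space edit models are attractive for iterative editing.)

import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

img = Image.open("input.png").convert("RGB").resize((512, 512))
orig = torch.from_numpy(np.array(img)).float().div(127.5).sub(1.0).permute(2, 0, 1).unsqueeze(0)

x = orig.clone()
with torch.no_grad():
    for i in range(5):  # pretend each loop is one more edit pass through a latent-space model
        latents = vae.encode(x).latent_dist.sample()
        x = vae.decode(latents).sample.clamp(-1.0, 1.0)
        # error vs the original grows each round trip = cumulative VAE loss
        print(f"pass {i + 1}: MSE vs original = {torch.mean((x - orig) ** 2).item():.5f}")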
>>106667932This is gonna be brutally heavy on vram and ram tho
>>106667506now that can be interesting to experiment with
>>106667938I still think 20b is overkill, if they manage to keep the quality with 13-14b + pixel space we could manage to run this shit
>>106667886
Yes, using CFG above 1 (which is what makes negatives work) still needs a second model pass per step, so it inevitably makes things slower.
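(toy illustration of why negatives cost roughly 2x: real CFG needs two model evaluations per step, one for the positive prompt and one for the negative/empty prompt. the "model" below is a stand-in stub, not any actual diffusion network.)

import torch

def model(x, t, cond):
    # stand-in for one forward pass of a diffusion model
    return x * 0.01 * t + cond.mean()

x = torch.randn(4, 8)
t = 10
positive = torch.tensor([1.0, 2.0])
negative = torch.tensor([0.0, 0.0])
cfg_scale = 5.0

eps_cond = model(x, t, positive)    # pass 1: conditional
eps_uncond = model(x, t, negative)  # pass 2: unconditional/negative, the extra cost
eps_pred = eps_uncond + cfg_scale * (eps_cond - eps_uncond)
print(eps_pred.shape)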
>>106667782it takes me like 10 minutes every time I "just try" something with video, I'd like to not waste hours discovering the handful of things everyone else already knows. Besides I was hoping someone might know some info about how it was trained and whether it was intended to support such resolutions or not
>>106667506it says it supports ControlNet, holy shit
Here's the new vs old Qwen Image Edit model comparison with the Will Smith example posted above.
We need SRPO and no VAE quality loss like the Chroma Radiance model gets with the new pixelspace research; this is just an incremental improvement that isn't much different, as the old model is already pretty good depending on the prompt and workflow.
>>106668015I like the improvement, the face is more accurate and the skin texture is not as slopped as before
>>106668015
Old version workflow: https://files.catbox.moe/r0kyif.png
Same workflow as posted at >>106667839 >>106667821
>>106668015
old version looks so much better ahahah.
can you do some more comparisons? i'm out of gpu time on hf
>>106667601
>what? so they are updating it monthly or what?
lmao, are they really gonna upload a new version of QIE each month? sounds crazy, I guess they realized that the training wasn't over and the loss curve hadn't flattened yet
>>106668015 >>106668028
Also keep in mind that the deployment parameters of models matter a lot, so we need to wait for the best workflow to be created for a more like-for-like comparison.
For example, with this comparison, on the old model I added
"Don't change anything about their heads at all, keeping their faces and heads exactly as they are."
to the prompt and still got the same image you see in picrel as I get without that sentence, meaning the old model can't copy the original images like-for-like the way the new one can. Even though the images are low quality themselves, it can be a showcase of the new model following the prompt better, which is important.
https://huggingface.co/calcuis/qwen-image-edit-plus-gguf/blob/main/qwen2.5-vl-7b-test-q4_0.gguf
what's this
I just bought a 5060 Ti (16 GB) instead of a 5070 Ti.
Not worth 2x the price; still a massive upgrade from my 3060
>>106668134the text encoder? it's probably the same text encoder as the previous Qwen Image model
>>106668137Waste of money. Should've waited until you had more and bought a 4090 or 5090. I can't imagine doing video gens on 16GB.
well I guess q8/other models will be up later today some time.
>>106668151wan q8 works absolutely fine on a 4080 (16gb). the only thing you have to consider is not making the dimensions *too* large cause that needs more vram.
>>106668134 >>106668149
if that's the same text encoder he's wasting his time, there are already ggufs of it
>>106668137nice!
>>106667786imo there's specific resolutions that wan works best with, and i will continue to stick to wan 2.1 resolutions, which is 1280x720 high res, and 480x832 for low res.
>>106668137Should've waited for the super cards. A 3090 is faster than a 5060 ti and you're now stuck with 16gb.
>Sarrs... a second model is released this week.
Wtf, I didn't get to fuck around with Wan animate completely yet. We're eating too good.
>>106668161you cant do 720p + future wan models may not do the whole high/low split thing again. if they don't, you'll be forced to use a lower quant like with wan2.1
>>106668181what the hell even is wan animate? is it like vace?
>>106668151
waste of money, by getting +70% faster gens? not really. Not very interested in video
>>106668168
yeah! looking forward to it.
>>106668175
I considered waiting for them, but when they're coming is not confirmed - and they're hardly going to be anywhere close to MSRP anyway
>>106667506
>Multi-image Editing Support: For multi-image inputs, Qwen-Image-Edit-2509 builds upon the Qwen-Image-Edit architecture and is further trained via image concatenation
what makes it different from the image concatenation cope we used to do on the previous QIE?
>>106668216
>waste of money
it's literally not. vram is absolute king.
>Not very interested in video
it's beneficial for training loras too and future proofing for the latest models, but you do you. 100% wasted.
>>106668212
https://humanaigc.github.io/wan-animate/
>tl;dr
Character replacer for videogen.
>>106668236whos the girl anyway?
>>106668234>vram is absolute kingwould you take a 96GB GTX 680 over a 16GB RTX 4080?
>>106668236Seems VACE is a component within Wan Animate, so essentially it's doing the same thing except better I guess.
>>106668258No because it's GDDR5 vram not GDDR7.
>>106668234I have had no issue training LoRAs with hundreds of images. Not wasted for me.
>>106668286...would you take the GTX if it was GDDR7?
>>106668236that face seems grafted on her, scary
>>106668234
>it's literally not. vram is absolute king.
I've seen no difference in my gen speed (well, 5%, aka nothing) while sending parts of the model to ram.
Compute >> vram when you have enough vram to do the compute and ram can host the model itself.
I will run a couple of tests of the new QIE with the stock inference code if you post image pairs and a prompt.
I tried it a few times and the result wasn't too good. It retained random bits of the source. It's also unclear how to specifically refer to image 1 and image 2 in the prompt.
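(for reference, a hedged guess at what "stock inference" with multi-image input might look like; it assumes the HF repo ships a diffusers pipeline config so DiffusionPipeline resolves it, that a list of PIL images is how multi-image input is passed, and that "image 1"/"image 2" follow the list order. all of that is assumption, not confirmed by the model card.)

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
).to("cuda")

# assumed to map to "image 1" and "image 2" in the prompt, in list order
images = [load_image("girl.png"), load_image("armor.png")]
prompt = "the anime girl from image 1 wears the armor from image 2"

out = pipe(image=images, prompt=prompt, num_inference_steps=40).images[0]
out.save("edited.png")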
>>106668243
Meme Dance from some vtuber, real dancer is Yaorenmao https://www.youtube.com/watch?v=db8FRxYM97Y
gacha character replacement as a test
https://nikke-goddess-of-victory-international.fandom.com/wiki/Liter#Cute_Sunflower
>>106667506aint no gaht damn way kek guess i'm not running this shit locally?
>>106668286>>106668295the silence says everything
>>106668355what were you trying to combine?
>>106668386It looks so much better in the original.
>>106666599>>106663722Is bottom-left Chroma Radiant?
>>106668423
Up to (you)s to mess around and find out what works. Wan animate has only been out for a couple of days. It took a while for Wan2.1/2.2 to get good too.
So, Flux SRPO and Chroma share the same grubby output, how do you Chroma guys mitigate the problem? Upping the resolution certainly helps, but detail is still pretty poor the further into the image you look.
Also, these Flux lines are killing me... I forgot how bad they were when you can't get them to go away.
>>106668422
A basic miku and this space marine
>"the girl is wearing the space marine's armor. she is not wearing the helmet."
>>106668452
it has stealth metadata which says
Model hash: ea349eeae8, Model: noobaiXLNAIXL_vPred10Version, Hashes: {"LORA:noob-ft-1536x-extract.safetensors": "62ab3e5fbe", "LORA:96_chadmix_vpred10_1a.safetensors": "3e4c2efab7", "LORA:96_chiaroscuro_vpred10_1a-000022.safetensors": "29061f3419", "model": "ea349eeae8"}
>>106668236Still looks like shit
>>106668468Nice, thanks
>>106668493
go for: "the anime girl from image 1 wears the armor from image 2"
>>106667250>childlike innocence
>>106668466lots of steps and second pass
>>106668493It's definitely a better result with that prompt, but it's also trying too hard to preserve the original
Is there a Comfy native workflow for Wan Animate yet?
>>106668634
It's open source so there's a delay, there's a lot of saas stuff for the comfy man to implement first
https://xcancel.com/LodestoneE621/status/1968687032605065528#m
still no comfyui nodes for this?
>>106668654Is this possible for text models
>>106668654I'll believe it when i see it i tell's ya.
>https://github.com/comfyanonymous/ComfyUI/pull/9979
Someone from the LTX team fixed the ComfyUI memory leaks but the devs are probably getting mangled by their VCs and cba to push it to main.
>>106668685I will spare this fella from my daily jew jokes henceforth, holy fuck cumfy stop fucking around and push this shit. what the fuck are their priorities?
>>106668685based, it's important to have this shit when using wan 2.2 (with all the unloading/reloading shit)
>>106668706That guy is almost cartoon level jew stereotype lol. Based af though if his implementation holds up, I will kneel to him
>>106668706just copy/paste the file yourself
>>106668742>That guy is almost cartoon level jew stereotype lol.I thought he was arab lol
>>106668737
bro i literally cannot fucking run wan 2.2 at all on any quants because of this issue, it'll get through the first pass then rape my computer because it's not cleaning its memory properly. meanwhile i've been steadily perfecting my 2.1 workflow and i can keep running for another 3 or so gens before i get an OOM.
>>106668742
someone in here lurking probably has a better eye for all those desert phenotypes, he's probably not like "pure jew" but still incredibly based. >>106668753 this is what i mean kek
https://huggingface.co/calcuis/qwen-image-edit-plus-gguf/blob/main/qwen-image-edit-plus-q2_k_s.gguf
this guy is uploading q2 first...
>>106668773even if we have the weights we can't really test the multiple image shit, comfy has to implement this
>>106668773haha. aahh. i'm not even gonna try this model.
worth it
>>106667506
https://imgsli.com/NDE3MjYy
still has the zoom issue, fuck
>>106668793What was it?
god I fucking love China FUCK
>>106668851
1280x720 80 step porn clip. came out perfect
>>106668872>80 step
>>106668753both are semite so yeah
are you serious? https://github.com/comfyanonymous/ComfyUI/pull/9979/files
just 3 lines of code were needed to fix the retarded memory leaks?
fucking useless.
>>106668849OH COME ON
>>106668793 >>106668872
SIX HOURS?
And I was already thinking I was crazy back then when my gens were like 2h each before any optimizations.
You've got to share it, catbox!
>6 hours for 5 seconds
the absolute state of video gen'ers
>>106668973would you have found it?
https://github.com/comfyanonymous/ComfyUI/pull/9986
damn this motherfucker is fast, now I'll wait for the gguf quants
>>106668973>nocoder
>>106669018
haha funny how fast they can just implement new SOTA models, meanwhile >>106668973
funny that innit bruv
>>106669018dude, sometimes the solution is in the details, there's like millions of lines of code on his project, good luck finding the one that caused the issue
chat i'm going to manually apply the mem fixes
>    if loaded_model.model.is_clone(current_loaded_models[i].model):
>        to_unload = [i] + to_unload
>    for i in to_unload:
>-       current_loaded_models.pop(i).model.detach(unpatch_all=False)
>+       model_to_unload = current_loaded_models.pop(i)
>+       model_to_unload.model.detach(unpatch_all=False)
>+       model_to_unload.model_finalizer.detach()
>    total_memory_required = {}
>    for loaded_model in models_to_load:
you faggots were just crying every thread that there WEREN'T memory leak issues, now it's "w-wwell aschkully muh six million lines of code"
never forget the six gorillion lines
>>106669039and yet you weren't the one who proposed a solution to this as a PR, brother
>>106669039comfy fanboys are pretty cancer unfortunately. Why the fuck does a UI even have fanboys lol
>>106669055ooooo i'm taking your hulk hogan slam jam no-u-aroo and no-u-aroo'ing you right back brother, why didn't you come up with the wrestlemania 1989 solution to this PR hogan? yeeaaahhh that's what i thought YEAH.
>>106669039>you faggots were just crying every thread that there WEREN'T memory leak issuesno one said that wtf
lodestonesissies, /lmg/ is making fun of our project :(>>106669029>>106669087
>>106669082Pretty much look through all the github issue page, the reddit page, and heck even previous threads while back. Anybody who said about the memory leak are met with >works on my machine lol
gib q8 qwen edit update
https://www.reddit.com/r/StableDiffusion/comments/1nnyur4/mushroomian_psycho_wan22_animate/kek, wan animate seems fun to play around with
>>106668973Acktually it was just one "model_to_unload.model_finalizer.detach()"
>>106669106It does work on my machine tho. Idk if you niggas are trying to run it with 16GB ram and cry when it crashes.
>>106669113this
What's the alternative to cumfy?
>gradio
>ani
>>106669097> siss> ourpiss off
>>106669082 >>106669113
>blames KJ's nodes
>blames wrappers in general
>blames "its your workflow bro"
>blames "its your ram bro lol"
>now there's a pr for it
>well we never said any of those things
it's enough to BRING ME TO THE BOILIN' POINT!
>>106669131
We got a new schizo in town, but it's a WWE schizo so I'll let it pass, that's funni
>>106669139
>pointing out observable thread behavior (and even things outside 4chan) is schizo behavior
y'know what buddy, just for that, i'm done with the WWE bit. no fun allowed.
RIP WWE Schizo
2025 - 2025
https://huggingface.co/calcuis/qwen-image-edit-plus-gguf/blob/main/qwen-image-edit-plus-q3_k_l.gguf
q3! they are slowly getting to q8...
>>106669170
why did this retard start with the small quants? reeeeee
just ask grok to find the leak it's got all the context window
UPDATE
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF/tree/main
not up yet but I use this repo for ggufs all the time, should be there soon.
>>106669113
same. never had any issues with memory leaks using wan2.2. have done 100+ gens in a single session. zero problems.
i do however have obvious memory leak/lag with my SDXL workflow for some odd reason. I have to refresh the page after 30 or so gens because the ui becomes unresponsive. no idea why. i can't figure out why it's happening.
>>106669173indian website sir
>>106669177just like that kid with the joker profile pic who asked chatgpt to make triton 4x faster and somehow tricked everyone on reddit into downloading his slop then disappeared after someone called him out
>>106669178Retard here, 2509 is the same as plus, right?
>>106669194pretty sure yes.
>>106669170>>106669178>>106669194>>106669201
>>106669210
the fact most redditors just upvoted the shit out of it and praised it proves that there are too many retards that are dumb.
this is why grifters thrive. too many people with absolutely zero braincells who throw money at shit.
>>106669210Did he earn any money from that?
>>106669210>too many retards that are dumbcool tautology, guess i'm one of them.
claude is awesome for making general purpose custom nodes though
Every time I hear something about memory issues it's some poorfag retard trying to blockswap with kijai
>>106669212no, i don't think there was a patreon linked etc.
Hello, any idea how this was made?
https://www.youtube.com/watch?v=yi8ffoNrj9k&t=6s
Especially Maria, I think, looks great, and if this was done without any LoRAs it's a step forward in simplified workflows
>>106669210or the fact almost no one actually tested it themselves to see if it even offered a performance boost.
>>106669237lol what
>>106669237
>take a sprite
>use Qwen Image Edit -> make the image realistic
>I2V with Wan
>>106669237Feed the 2D into qwen image edit to get the 3d image, then feed the 3d image into a wan image to video pipeline. No character loras needed but maybe use a style lora and good prompting
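(rough sketch of that two-stage pipeline in diffusers rather than comfy, just to make the flow concrete; the repo ids, whether the pipeline classes resolve for these exact checkpoints, and the parameter values are all assumptions, not tested.)

import torch
from diffusers import DiffusionPipeline, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# stage 1: restyle the 2D sprite into a realistic still with the edit model
edit = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
).to("cuda")
sprite = load_image("sprite.png")
still = edit(image=sprite, prompt="make the image realistic", num_inference_steps=40).images[0]

# stage 2: animate the realistic still with Wan image-to-video
i2v = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
frames = i2v(image=still, prompt="the character dances in place", num_frames=81).frames[0]
export_to_video(frames, "out.mp4", fps=16)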