Six Gorillion Lines Edition
Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106666599

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
someone like me shouldn't have the power to make these images in less than a minute at hires
>>106669779
not bad
Qwen Image Edit PSA
Always add:
"without changing anything else about the image"
at the end of your prompts if you want to preserve anything at all from the original image
Also here's a great workflow for the old Qwen Image Edit model
https://files.catbox.moe/6wcz4m.png
>>106669789You dropped >>106669779
>>106669789blessed thread of friendzoned ;3
>>106669808saving this b4 janjan delet due to nippies
the destiny of every ShillAI employee:
>>106669829cleanerlike Microsoft's asshole after Sam licks it
https://huggingface.co/dannygroove666/Qwen-Image-Edit-2509_fp8_e4m3fn.safetensors/tree/main
there's the fp8 version
ookay ookay bub we get it
Can someone give me a quick rundown in this "AniStudio" thing?
>>106669859It's the best frontend for generating everything that's hated only by one schizo transphobe itt.
>>106669823
yknow what, you might be a tripfag, and a kinda annoying one, but knowing you're also banned from civitai makes you a brother.
>>106669827
eh they're clothed it should be fine. i had to debate whether to post 99% of the wedgie gens i did because they seemed to toe the line.
of course if i'm wrong, well. whoop. lol.
>singular schizo theory
>>106669891>brotherwe are all brothers, on earth, in christ's love.
With the new qwen edit local finally reached 1/4 of Seedreams power. Impressive
>>106669891What base model are you using for these? I'm impressed by the expressiveness.
>>106669871Thanks. I really like to try it. Do you have the link perhaps?
>>106669912
>we are all brothers, on earth, in christ's love.
hard as fuck man a-fuckin-men.
>>106669920
wai-nsfw v140 with an expression lora, "one piece funny face".
and 20 hires steps + adetailer for the face
>>106669811>>106669773where do i put this shit nerds???? I AM ALMOST DONE NO THANKS TO YOU
>>106669930Of course! https://github.com/FizzleDorf/AniStudio/
>>106669952you turbo nigger retard, read it, it literally tells you the folder paths right there
>>106669952learn to read the op next time, jamal
>>106669952>comfy> in 2025
I spoke to John Qwen and he said wan going forward will be API only.
another try with new QIE
>prompt = "the anime girl from image 2 is standing in the foreground of image 1, looking back."
It's also not a very good result. Maybe I'm asking too much. I'm supposed to be doing work so I can't spend more time looking for better pictures to test, but I'll try a few more later.
>>106669952you're almost there! be sure to post the vids you make :3>>106669942<3
anistudio doesn't have this problem btw
>>106669952>wan 2.1
>>106669985
When I posted some images from the last thread I was getting victim blamed for following the instructions on the website. It's a step up, but it's not perfect, that's for sure.
qwen edit is a great pepe generator btw
>>106669985it completly changed the background, it's so bad
>>106669789
wait what? how do you combine a video and pictures? is that some new format?
>>106669912pedo christcuck
>>106670064a video is a set of pictures, so it's just repeating the same picture over and over during the video
>>106670088
>>106670038
Wan Animate is good at 3D to 3D. Trying to use 2D reference to 3D or vice versa gave bad results.
>>106670120dunno why this in particular made me wheeze laugh but here i am, giggling again.
>>106670157dead on arrival
>>106670157How about 2D to 2D?
>>106670174Don't have any 2D reference video on hand. I'll try to find something to test.
>>106669952
can i skip the 30gb one and just put in my own checkpoint that i have already tested? this shit is taking too long, i dont know how you guys have patience for this shit
https://huggingface.co/calcuis/qwen-image-edit-plus-gguf/blob/main/qwen-image-edit-plus-q4_0.gguf
slowly getting to q8, might just try this for now
>>106670161I Fairr,stockng wis die thary linge!
Wan2.5 should have similar requirements to 2.2 right? Already pushing the limit of my 4090 here
>>106670120brainlet, you do know videos can contain still frames yes?
>>106670202
>might just try this for now
why not go for fp8 instead >>106669844
>>106670246Nobody knows yet, but I expect it to be a Veo 3 competitor.
>>106670261404, it's gone.
>>106670276I screenshotted this image and sent it to oxford dictionary so they can use it in their picture dictionary under the word "cope"
>>106670277
still here
https://huggingface.co/dannygroove666/Qwen-Image-Edit-2509/tree/main
>>106670276lol
>>106670297wtf the other one is 40gb
>>106670325well yeah it's a 20b fp16 model
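The 40gb figure is just parameter count times bytes per weight; a rough sketch (decimal GB, ignoring quantization overhead and non-weight tensors):

```python
def model_size_gb(params_billion, bits_per_weight):
    """Back-of-envelope checkpoint size: params (billions) * bits / 8 -> GB."""
    return params_billion * bits_per_weight / 8

print(model_size_gb(20, 16))   # 40.0 -- the 20B fp16 file above
print(model_size_gb(20, 8))    # 20.0 -- fp8 / Q8
print(model_size_gb(20, 4.5))  # 11.25 -- Q4-ish GGUF quants
```

Real files land a bit off these numbers because quant formats store scales per block and the text encoder/VAE are usually separate.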
>wan2.5 is releasing mid 2026 and people haven't even fully migrated to wan2.2 yet
imagine taking a break from ai for a few months. you'll be so behind you may as well be starting new
So there is no way to know the trigger word for civitaiarchive loras unless it is one of the few archived with that knowledge included, right? I wish they included civit links instead of just saying deleted; maybe I could have tried the web archive or something.
>kijai nodes
>"simple" i2v wf
absolute disgrace
>>106670346civitarchive is fully vibecoded and has been abandoned for months. don't expect it to work.
>>106670157it's fucking better using dance on i2v. animate is so bad
>>106670364Do you know a non-abandoned alternative?
>>106670355it's really not that complicated
>>106670375
maybe not complicated, but there are far too many moving parts that are just so unneeded, and it looks messy. if native had context windows for wan i would delete all his nodes
>>106670372My local backup
>>106670235WHEEETEEEZE
>>106669789
Catbox on the second from the left on the top please?
Is there any way to use an image as reference to get the same style/outfit on new gens?
>>106670334>wan2.5 is releasing mid 2026Niggas here were saying the 24th of this month
>>106670346pretty much. if the lora has no metadata then it is impossible to know.
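If a .safetensors lora does carry metadata, you can check for it yourself without extra tooling: the file starts with an 8-byte little-endian length followed by a JSON header, and trainers like kohya stash training info (e.g. `ss_tag_frequency`) under its `__metadata__` key. A minimal sketch, assuming the standard safetensors layout (the `ss_*` keys are trainer conventions and may simply be absent):

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors header ({} if none)."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # little-endian u64
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

If the dict comes back empty, the uploader stripped it and you're out of luck, as the anon above says.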
>>106670157Workflow?
>>106670408
it's supposed to be tomorrow or in 2 days yeah lol
https://xcancel.com/bdsqlsz/status/1969650994192794103#m
>>106670334>wan2.5 is releasing mid 2026lel, are you retarded ?
Is the new qwen censored or am I good to download and edit boobs onto everything?
>>106670426artist? really cool
>>106670421Kijai example workflow. https://github.com/kijai/ComfyUI-WanVideoWrapper/tree/main/example_workflows
>>106670446have a real deep think
>>106665054
is there a reason to resize the i2v source image before genning? doesn't it get resized at the end into whichever frame size you choose anyway?
>>106670435If I open that link and it's the blue dragon I'm going to flip my shit.
>>106670447It's Nakayama Tooru
>>106670455Do you need a nasa pc to run it?
>>106670465
>is there a reason to resize the i2v source image before genning?
well if you don't have much vram it's good to go for a small image, and it's slower if the image is big too
>>106670435who the fuck is that? why should i trust that person? how do they know the release date?
>>106670458Haven't touched image gen in a year or so, I have no idea if you're referencing something or not. Was just looking for a model to get back into it with and hoping to use something uncensored.
>>106670472thank you anon
>>106670481lurk more, he's a guy that gets invited by all the big AI companies in China, when he announces something, it always happens
>>106670494So if it's not out in 2 days, I'm free to call you a retard, correct?
>>106670455Probably doesn't recognize what to do with the anime eyes there but movement seems to translate better than 3D ones.
>>106670501good luck with that, he's right since he announced Wan 2.1
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF/tree/mainmodels are popping up!
>>106670478fucking this
>>106670550>Q4_1oh come on, why do they never start with the biggest quant :(
>>106670550whats the difference between qwen-image-edit-plus and Qwen-Image-Edit-2509
People will post the fakest posts with the biggest aplomb.
>>106670563what's the difference?
>>106670598Q8>Q4
>>106670598>what's the difference?
how can I make the change stronger?
>>106670597no fucking way I just saw this and was about to post this here.
>>106670597fucking lmao.the dumb cunt could have just asked how2faceswap but had to add a fake backstory.
>>106670202
>>106670550
I don't want to be ungrateful but what's with this rollout... most people use either Q4_K_M or Q8, why not start with those?
>>106670616is it the old or the new qwen image edit?
>>106670627old
>>106670613The milage this comparison's gotten is insane.
>>106670593bump
>>106670632try the new one then >>106670297
So how do we give multiple references to the new Qwen edit plus in comfyui?
>>106670617It was such a retarded post.
>>106670638Need a new node or update to the existing one most likely.
>>106670638
you go for that new TextEncodeQwenImageEditPlus node
https://github.com/comfyanonymous/ComfyUI/pull/9986
>>106670613Thanks.
>>106670661Hello sir, I make six figure and want to explore wifes bob and vageen on only fans. How to put wifes face on bob and vageen?
Did you know wan can do pullups? Maybe not that interesting.
>>106669952
>>106670189
downloaded all that bullshit, comfy just CLOSES, what the fuck is this shit?
"got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load CLIPVisionModelProjection
loaded completely 98657.8 1208.09842069125 True
Using scaled fp8: fp8 matrix mult: False, scale input: False
C:\ComfyUI_windows_portable>pause
Press any key to continue . . ."
fag shit
still the old model, waiting for the new one (q8 ideally)
>>106670674well since your such a python genius, why don't you figure it out by yourself?
>windows >portable install LMAO
>>106670685I can do pullups too. Should I post a vid as well?
>>106670685Keeps the style consistent, that said it would be interesting if you prompted for a change in expression to see how it handles
>>106665874
need this with a little bit smaller hands touching her, those look like they belong to an 8-foot giant that doesn't fit in that little space they are both in and is using noclip to fit anyway, kek
>>106670691
>>106669952
>>106670700HALP
>>106670738that no longer looks cool at all, scatfag
>accidentally genned a man i am attracted to
>>106670745kek, he's right you know- we shouldn't make time for fag shit, when it could just WERK.
I have never once had an issue with comfy UI since using my own install with my own environment. Portable is a trap and nightmare to maintain.
>>106670745absolutely uncivilized vermin
>>106670751>When you see another fucking Protoss go Nexus first on cross spawn
>>106670745This is how Furkgod got his start as Turkey's leading computer engineer
>>106670478whats the problem? They all work fine on my 5090.
>>106670777I found out furk blocked me the other day. I'm actually kind of sad.
>>106670777>Turkey's leading computer engineerHe cannot be stopped
in a workflow, how do you know exactly which node follows another? or does order not matter as long as all needed nodes are connected to each other?
"stashing current changes
nothing to stash
creating backup branch: backup_branch_2025-09-22_17_27_57
checking out master branch
pulling."
what??????????????
>>106670793it means that it's updating and there's no errors so far
>>106670793Oh shit, RIP, it's over for you...
>>106670793Oh fuck, rip out your SSD now before it spreads.
>>106670791
The freedom to connect shit also causes confusion. Anyway, just drag the nodes out and it will give you some options. Failing that, search by double-clicking an empty field and see what fits.
>>106670791
Are you new to this? Just look at example workflows until it clicks.
Typically:
UNET > some optional model patching > sampler
Text encoder > positive and negative prompts > sampler
UNET and text encoder can both be loaded independently or from a checkpoint
Then the sampler will output processed latents
To decode those you load a VAE (again either independently or from a checkpoint)
And that gets saved as an image. (For videos, there might be more post-processing like interpolation)
I don't feel like typing more detail but hopefully this helps.
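The ordering above can be sketched with stand-in objects (this is not the real ComfyUI API, just the shape of the dataflow the graph enforces):

```python
# Stub standing in for the UNET / CLIP / VAE nodes; it only records the
# order in which the graph's stages run.
class Stub:
    def __init__(self, log):
        self.log = log

    def encode(self, text):                  # text encoder -> conditioning
        self.log.append(f"encode:{text}")
        return f"cond({text})"

    def sample(self, latent, cond, uncond):  # KSampler stage -> latents
        self.log.append("sample")
        return latent

    def decode(self, latent):                # VAE decode: latent -> image
        self.log.append("decode")
        return f"image({latent})"


def run_graph(unet, clip, vae, positive, negative, latent):
    cond = clip.encode(positive)      # prompts feed the sampler...
    uncond = clip.encode(negative)
    out = unet.sample(latent, cond, uncond)  # ...which outputs latents...
    return vae.decode(out)            # ...that the VAE turns into an image
```

The point being: exact node order only matters along each wire; two branches (positive and negative prompt) can be built in any order as long as both reach the sampler.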
>press any key to continue
>press power button on my pc
>it turns off
i HATE comfy
>>106665040which nodeset is this from?
>>106670857Buttons aren't keys!
>>106670860
nvm found it: https://github.com/BigStationW/ComfyUi-Scale-Image-to-Total-Pixels-Advanced
goated node
>>106670157And porn?
how to make it photorealistic saars pls hlp
rentry wan guide
>never fucking worked or loaded
cool game
>>106670923>how to make it photorealisticare you using the new qwen image edit model?
>>106670916Wan animate is basically useless for all but a narrow range of clips without excessive movement and lighting that won't make the character look like a bad green screen. It's a cool concept but in the end it's kind of grabo.
>>106670935yes
>>106670355Kijai should fork comfy and call it kijAI just for the lolz, 90% of the users would migrate
>>106670881
>>106670939just right, "make the image realistic"
>>106670923
you can't add that many steps at once. it is far better to do one prompt at a time, i.e. place the char where you want and then do whatever else to them, you mong promptlet
I don't understand Kijai's obsession with block swapping whatever the native comfy nodes use is so much more efficient
>>106670857>Press exe>Send bitcoin to this address to get access to your computerComfy fuck man, this shit is the worst
AI "slop" gets a bad name because of the inherent reddit clout-chasing nature of platforms like CivitAI and X. All we see floating to the top of everyone's feeds is generic, safe, obviously-AI gens that get hundreds and thousands of upvotes. But if you go to the model pages with sparse, recent submissions where it isn't a clusterfuck of people buying/farming clout and forcing their way to the top of the feeds with their bland vanilla mediocre gens, you'll see some amazing stuff. Real, actual art, usually with a controversial or violent streak and ZERO upvotes. I make sure to give them at least 1 and follow these obscure local genners
>>106670955what do you even mean??
>>106670984>denoise 0.88wtf are you doing? don't touch that, put it back at 1
>>106670984please stop. it's evident you're just trolling now.
>>106670984>make the image a realisticesl-kun>fp8>fp8lol
>>106670993You've been saying everyone who has issues with the model is a troll. They can't all be trolls.
>>106671000to be fair there's not better than fp8 for the new qwen image edit right now
>>106671000my bad I rushed to get the result ready , shit takes forever
>>106671001what? this is the first time i've said anything to anyone using qwen.the guy is very obviously fucking with us.
>>106671006fuck off
Heads up, new qwen edit prefers left (1st image) and right (2nd image) over first and second image.
it is even worse, maybe its something to do with the resolution of the initial img?
>>106671011
stop fucking with the default settings and then being surprised why it doesn't work you stupid bastard.
>>106671013
you know this from testing or do they mention this somewhere
>>106671024try to remove the cfgnorm and the modelsamplingauraflox
>>106671025mm ok let me change everything back to default
>>106671025Testing and the fact the hugging face space examples explicitly use that language.
>>106671024
I think you need to add this TextEncodeQwenImageEditPlus node for the new qwen model
https://github.com/comfyanonymous/ComfyUI/pull/9986
What's the best way to extract this as pose for wan animate?
still not there
>>106671060
how do I install that node?
>>106671088
>update comfy
>double click on the empty space and search for "TextEncodeQwenImageEditPlus"
"When loading the graph, the following node types were not found. This may also happen if your installed version is lower and that node type can't be found."
nothing in comfy is working. i updated but every workflow i grab just says im missing all this shitload of fuck
Can anyone explain why loading a LORA at zero weight or not loading it at all result in different images? x should equal x + (y * 0), no?
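A likely explanation (hedged, since the details depend on how the frontend patches weights): in exact arithmetic x + 0·y is x, and IEEE floats actually honor that for finite y, so the zero-strength term itself is a no-op. What changes is the execution path: loading the lora patches/casts the weights and reorders floating-point operations, and float math is not associative, so any reordering can shift the result by an ulp here and there, which a 20+ step sampler amplifies into a visibly different image:

```python
# Same three terms, different grouping, different result:
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0  -- the 1.0 is absorbed by -1e16 first

# ...while the zero-strength term itself really is harmless:
x, y = 0.5, 123.456
print(x + 0.0 * y == x)  # True
```

So "lora at strength 0" and "no lora" only produce identical images if the patched model takes a byte-identical compute path, which it generally doesn't.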
>>106671094How do I do that???
>>106671086model?
"Prompt execution failed
TypeError: Failed to fetch"
>>106671122Flux Krea
Real retard hours
>>106671088
>>106671105
you don't seem to need the new node (unless you want to go for multiple images), it seems to be working for your single image, and yes, the result isn't great, that's Qwen Image Edit, not Nano Banana lol
>>106671139stop trolling, anon.
>>106671086>>106671135Hot damn I may have made a mistake in disregarding Krea. I just don't want to go back to flux.
NON-LOCALFAGS ON SUICIDE WATCH
Why does it always keep coming back to flux?
>>106670550Mine look deep fried
>>106671176I like it personally. Yes it is an overglorified LORA rather than a true novel checkpoint, yes I am a coping VRAMlet who can't into Qwen, but it can make comfy images.
wan 2.1 = 31gb
wan 2.1 (distilled) = 18gb
i guess i have to retry tomorrow
>>106671102
By weight I meant strength, my bad. In case it wasn't clear.
>>106671079it can't handle the spin jump
>>106670983>inherent reddit clout-chasing nature of platforms like CivitAI and X"Reddit" needs to be a noun here and not a gibberish adjective, sweetie. Clout-chasing predates civilization altogether.
>>106671245GEK!
>>106671241>>106671245How?
>>106671249Ever heard of Cloutius Maximus?
he's zapping to the extreme!
>>106671095I get this sometimes, I restart PC and it works again
>>106671245how do u make the output exactly the same length as input length?
>>106671245why stop there, just make a m2f transformation. Then we can all pluck out our eyes.
>>106671241Pose strength 1.0
can someone just post their 2.1 workflow? or link to one
>>106671325lmaooooooo
>>106671325KEK
>>106671316
This is some high tech shit right here. I would kill to have had such technology back in the 2000s, yet everyone is bitching about it. Go figure.
>>106671327https://civitai.com/models/1818841/wan-22-workflow-t2v-i2v-t2i-kijai-wrapper
Does the new qwen use the same text encoder as the last one?
in kijai nodes, how can you set the lora strength per context window? say your first prompt shouldnt use the lora but the next one should.
>>106670613poorjeets itt will run at q4, say there's "no noticeable quality reduction", and then proceed to call the model plastic slop. anyone who runs a model quanted should not be allowed to critique the model
https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF/tree/main
I dont think it's working, or it might need a comfy/node update.
>>106671458>anyone who runs a model quanted should not be allowed to critique the modeltrue that
>>106671464The other guy posting quants doesn't work either, its probably that the gguf node needs updating
>>106671474
Both Q4 and distill lora, double whammy quality hit. I think people forget how bad distill loras make a model lol
>>106670613that chart is like 6 months old
>>106671474And? Quanting leaves a mark on every model.
>>106671474and? the quants are still the same
saaaar is same quality and fasterrr use speed lora lightning saaaaar
>>106671474AND? that changes nothing.
>>106670613
>>106671487
>>106671488
>>106671504
quants don't work like that. There's a reason you always see this example: because if you actually try to replicate the experiment you'll see that the differences are negligible
>>106671516
>>106671516
>differences are negligible
Qwen text capability got fairly brutalized going to Q6 in what I tested. Image quality was roughly the same.
>there are unironic 3060 poorfags who think that q2 outputs the same as fp16
i just hate black miku isall
>>106671516
>the differences are negligible
for the text encoder it's a bad idea to go under fp8
>>106670613>holding a smartphone on her left hand>and a multicolored ball on her right hand>she has a red t-shirt >neonsHoly ESL
>>106669930AniStudio is actually really good. Been busy but I'll have new builds coming up in a week or so.
>>106671535Not talking about the text encoder at all, I use T5 at FP16. DiT models can be taken to Q4 with very little differences
>>106671558>DiT models can be taken to Q4 with very little differencesThis is bullshit, you have a counterexample right in front of your face and you continue to lie. VRAMlets are delusional.
>>106671573>you have a counterexample right in front of your faceWhere?
>>106671516
Larger models can cope with it better, and some variants like nf4 and nunchaku can provide better quality than run-of-the-mill Q4, but it is absolute cope to say the differences are negligible. This is probably (You) bait but whatever.
>>106671535
Honestly I wouldn't go below fp16 unless you are scraping the bottom of the barrel for gen speed up. Text encoding is the shorter part of the gen process, and the time gain/degradation ratio is really inefficient if you are doing multiple seeds with the same prompt.
>>106671577>Where?>>106670613
>>106671585Like I said there's a reason that's the only example you see. Next time instead of relying on internet slander you should just run the experiment youself
>>106671601>Next time instead of relying on internet slander you should just run the experiment youselfI'm literally the guy who created that example lol
>>106671604Then put the images in a catbox and I'll run it for 10 seeds on 10 random prompts sourced from a 3rd party dataset.
the reason he can't provide a counterexample is because q4 is the only way he can run the model in the first place KEK
>>106671613you do it, you're the one who claimed it was an exception and that the rule is low variance, you have the burden of proof
>>106671079Code/name?
how the fuck do I set the output video length the same as input??
>>106671621First post your images so we can see that you didn't fake the experiment
>>106671631
here's the workflow for fp16 and Q4_0
>fp16
https://files.catbox.moe/xx2kvr.png
>Q4_0
https://files.catbox.moe/fn58gx.png
>>106671628
It's processed in batches determined by the "frame_window_size" on the WanAnimate Video Embeds node. If there's a slight overrun it's going to fill in the remaining frames.
If you have a 150 frame video, set it to 50 for 3 batches of 50 frames.
If you set it to 70 it's going to still do 3 batches, the last 40 frames being random bullshit.
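The frame counts above work out if adjacent windows overlap slightly (150 frames in three 70-frame windows leaves 40 filler frames when the windows share about 10 frames at each seam). A hypothetical helper to sanity-check a window size before genning; the function name and the 10-frame overlap are assumptions for illustration, not the node's actual code:

```python
import math

def window_plan(total_frames, window, overlap=0):
    """How many windows a clip needs, and how many filler frames pad the
    last one. Assumes fixed-size windows advancing by (window - overlap)."""
    step = window - overlap
    n = 1 + max(0, math.ceil((total_frames - window) / step))
    covered = window + (n - 1) * step
    return n, covered - total_frames

print(window_plan(150, 50))      # (3, 0)  -- divides evenly, no filler
print(window_plan(150, 70, 10))  # (3, 40) -- 40 padded frames, as above
```

Picking a window size that divides the clip evenly (given the overlap) avoids the garbage tail entirely.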
Maybe I'm just crazy, but the new edit model works better without CFG?
this is by far the best version of chroma imo
https://civitai.com/models/1956921/chroma-dc-2k?modelVersionId=2214897
>>106671725
usually, when someone wants to shill his product, he adds an image to show the capability of the model; what you did is lazy lol
>>106671458I've always done Q4 and nobody says my gens are plastic
>>106671738was just stating found it far far better than the HD one
>>106671770how much better? show an example?
>>106671777She reminds me of Greta-anon...
is there a way to see if sage attention is working while genning?
>>106671741
yeah im not sure what anons are on about with plastic. quants fuck up prompt comprehension, but visually that's it
>>10667177469%
>>106671770Everything is better than the HD one.
>>106671796I was expecting 420%, lame
>>106671725
>>106671770
If it is producing the iconic chroma deformed anatomy that is worse than SD 1.5 noticeably less often, I can be interested.
>>106671658This one belongs to Anime Diffusion Thread: sorry for posting my image here.
>>106671741any workflow for a vramlet?
What do I prompt
>>106671853WW2 propaganda
>>106671738He links to a civitai page with images, like dude
We've already been there
https://files.catbox.moe/qphnpf.jpg
>>106670613can you add nunchaku to this
bros I was so fucking used to qie nunchaku... the wait will kill me
>>106671963kino
Never used nunchaku. does flux loras work with nunchaku model? I'd guess no they don't
>>106671981No loras work with nunchaku lol
>>106671770Don't want to be that anon but as far back as v50 people were posting on HF, here and on leddit that the model was behaving weirdly. It's much easier to make slopped looking images on it and it will randomly blur outputs for no apparent reason. It's why lode caved and made v48 the "Base" model when he realized the HD training didn't work correctly.
>>106671622MMR-AK090 Ami Sasano
which model or whatever should I install if i just want a quick in and out transform image to landscape task?
>>106671983this makes flux pretty useless with nunchakumight try qwen then
>>106671983>No loras work with nunchakuwait seriously? bruh...
>>106671970
>>106672028Idk if Flux has support but qwen definitely doesn't.
>>106671622>>106671993desu, I only care about BBW Javs. In fact. I need BBWs. My favorite is Reo Fujisawa.
>>106671981
>>106671983
>>106672028
>>106672042
>>106672034
Flux nunchaku quants support loras, without major slowdowns. Note that the quality varies though, especially when using more than one.
5090 nigga hereanyone got a wan 2.2 t2v and i2v comfyui workflow optimized for my level of card? Also anyone got advice on what model and encoder I should use specifically? I feel like I usually see workflows optimized for cheaper cards
>>106672042
https://github.com/nunchaku-tech/ComfyUI-nunchaku?tab=readme-ov-file
Qwen Image is mentioned here. I haven't installed this yet so my knowledge stops here.
>>106671983bullshit, I made this one earlier today while I was testing nunchaku
>>106671837
I decided to give it a try but saw that it is fp16 only. If you decide to shill this more later, please make a Q8 beforehand, thanks.
>>106672108Niggahttps://huggingface.co/silveroxides/Chroma-Misc-Models/tree/main
what was the prompt to add two images together in qie plus? like the one from the twitter post that i can't find right now
her boobs got smaller
https://xcancel.com/ImperfectEngel/status/1970330695047561339#m
when you compare the 2 images, there's this fucking yellow tint, I thought they stopped training on 4o, c'mon chinks...
>>106672125Well how would I know, it wasn't linked there.Alright thanks anyway.Might make a post about what I think of it later.
>>106672125Well you might be in the know.Do you know what's up with the "T2-SL4" variant?
Less than 24h for wan 2.5, my dick is hard and ready, please infinite gens or at least 10sec+
>>106672140user error. Put balloons under your shirt and get to makin content bucko. The future of porn depends on fat sweaty old men shakin their butts covered in ridiculous prosthetics.
>>106672175>Less than 24h for wan 2.5um source?
Lmao should have backed up my Comfyui before updating everything. WanImageToVideo node maxes out at 24GB VRAM and either takes 5 minutes or hangs indefinitely now.
>>106672200A tweet
>>106672200https://www.timeanddate.com/worldclock/china/beijing
still have to generate the pose/depth controlnet ourselves right? not sure i follow what they are saying.
>>106672240>>106672240>>106672240>>106672240
>>106672175
>please infinite gens or at least 10sec+
it'll be like Qwen Image Edit Plus, a small improvement; not a lot of time passed between Wan 2.2 and 2.5
>>106672243>a small improvementNope, confirmed tons of important improvements if anything I have a suspicion that it may be closed/api only
>>106672243Wan 2.1 to 2.2 was a big improvement
>>106672250>confirmed tons of important improvementsname literally any
>>106672254>Wan 2.1 to 2.2 was a big improvement9 months separate wan 2.1 and wan 2.2 though
>>106672215>>106672216i was cereal
>>106672337
https://xcancel.com/bdsqlsz/status/1969650994192794103#m
>>106672344I want to believe but "new open-source video model" doesn't necessarily mean wan 2.5
>>106671842Could do one, sure. I'll post the reply in the next thread. I'm on 12gb so if you're at 8 or less you can go with a smaller gguf or smaller text encoder, or even smaller gen size
>>106672344
probably isn't wan 2.5 if it's a "new" open source model; 2.5 would be an "updated" open source model. or im just overthinking it and the ESL just wants to keep it simple
>>106672368
>>106672443
bdsqlsz did go to a conference for wan 2.2 before it released, and there was a discord screenshot about 2.5 that a gatekeeping dickhead eventually posted on here like 4 threads ago. Either way, we'll know for definite if it is or not soon
>>106672518>gatekeeping dickhead eventually posted on here about 2.5 like 4 threads agowow i missed that. cool. well those are some decent potential hints.
I'm starting to wonder if the image source being AI made is what's giving the FFLF i2v so much colorshift. The shift also only happens with FFLF; with just a first frame there's no colorshift. With or without the Color Match node, the shift still happens. For these loops I make with old reaction images, the shift doesn't happen.
I'm on 24gb vram and 32gb ram, and when I gen my ram usage is at 99% most of the time. Would I get a sizeable speed increase by upgrading the ram to 64gb?
>>106673068 If you're running WAN 2.2, then yeah. It stores models in DRAM when they're not loaded into VRAM, so you're almost certainly paging each time the high or low ksampler starts.
>>106673068I'm in the same boat and my memory sits at about 55GB. Go a lot bigger if you can.
>>106673175
yeah, definitely paging hard
>>106673183
higher than 55, or you mean higher than even 64?
>>106673231Higher than 64GB (which is what I currently have). Go for 96GB, 128GB or 192GB (or even 256GB if you're a baller) instead. 64GB is just the new minimum and will be more of a side-grade for you.
I am in Japan now. most anistudio work while I'm here will just be cmake and splitting things off into shared libs. sorry I haven't been active on the repo recently but I'll be back at it. wish me luck with softbank fundraising!
>Somebody else here found that you need to update your ComfyUI and replace your text encode nodes with TextEncodeQwenImageEditPlus. I'm testing it and it seems to be working.
for edit v2
>>106673385
the anime girl is waving hello.
works; unless you change the node to the new one it will be random noise. also, the node has 3 image inputs so it should be easy to do multi input stuff.
with: Qwen-Image-Edit-2509_fp8_e4m3fn.safetensors
>>106673415
test 2: connect another load image node to image 2
the teal hair anime girl is shaking hands with the pink hair anime girl.
amazing, no more image stitch bullshit or concatenate jank, it just works with an image node.
>>106673424the teal hair anime girl is sitting at a table in a coffee shop with the pink hair anime girl.
>>106673430
the two japanese women are standing in an empty classroom in Japan.
source images: anri and anri (diff pic)
>>106673441
the japanese woman is standing in a japanese hot spring wearing a white bikini.
for a cropped photo it did really good desu
>>106673451
and yes, qwen-image-edit-remove_clothes.safetensors still works if you want to do that.
https://files.catbox.moe/y5y946.png
Wan 2.5
>https://x.com/Alibaba_Wan/status/1970405877778915523
inb4 x.com, just type xcancel
>>106673776nigga you DID type it and then you deleted it for some retarded reason