Discussion of Free and Open Source Diffusion Models
Prev: >>107820534
https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>Z Image Turbo
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>WanX
https://github.com/Wan-Video/Wan2.2
https://comfyanonymous.github.io/ComfyUI_examples/wan22/

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
Blessed thread of frenship
>>107823802benchod
>>107823785Thanks, have more daughterwife
Enjoying your base you fucking retards?
>>107823836
I'm a healthy male with a normal sex drive, wbu?
>>107823819that's a man
Going to post a gen soon, make sure you have it on the next collage, thanks
1 girl bros...
Apparently someone on Plebbit found a way for LTX to do porn by copying and pasting a Wan2.2 GIF into the I2V workflow? It copies the motion and gives it proper audio.
>>107823847Based
From my experiments with vision-enabled LLMs, I can say that from best to worst it's:
Qwen3 VL 30B-A3B
Qwen3 VL 8B
Gemma 3 12B
GLM 4.6V Flash
Strangely, the dedicated caption model (joycaption beta) shits the bed with 'complicated' instructions in the system prompt. I'd have to test it with something simpler (just ask it to use natural language or booru prompts, and to include camera position and all that retarded stuff)
>>107823819qwen edit?
>>107823993zit
Enjoying your image and video gens you fucking legends?
>>107823993what the hell, it's not anime -> realistic lora?
https://files.catbox.moe/d69gk6.png
>>107823924dude discovered v2v
>>107823992
>GLM 4.6V Flash
I don't know about the Flash version, but I'm testing GLM 4.6V in Q8 and it's very good so far; it's even uncensored as long as you ask it properly and add a prefill.
Only issue is that it's huge, but once I lock in a proper description prompt I'm definitely using it for most realistic and even some anime.
>>107823785Why is there a vtuber in the OP?
>>107824006meant for >>107823994
https://files.catbox.moe/ykfjxo.png
>>107824031
I'm using captioning models to convert images to a natural language description, then touching them up (if needed) and sending them to the actual model to gen an image.
>>107824025
I'm running these as part of my workflow and I don't want to wait a long time to load a big model to gen a caption.
>running at Q8
how much ram you got bro lol, q8 needs like 140gb~
anons, how many steps do you recommend and use with res2s to get something good before diminishing returns in ltx-2?
>wan2gp doesn't recognize the loras folder
it's over
So... I am planning to get an RTX 6000 Pro in a month or so, but... I am a bit confused about which model(s) would be closest to working like Gemini's nanobanana offline for anime girl edits. Would anyone know what would be the closest? i.e. image + prompt input, with the prompt either changing the clothing or pose? I've only ever worked with stuff like wai-nsfw-illustrious models that did tags and have little experience with proper prompt models...
For audio
>>>/wsg/6069549
>>107824053have you genned any vids with wan2gp yet? if so hows the speed vs comfykek?
>>107824077
>have you genned any vids with wan2gp yet?
yes
>if so hows the speed vs comfykek?
well, it actually works on my 3090 Ti, so it's faster than infinity.
For a 5 second video:
# Dev 19B
- Profile 1 - 6m 21s
- Profile 4 - 6m 25s
# Dev 19B Distilled
- Profile 1 - 1m 27s
- Profile 4 - 6m 08s
>>107824044
>I'm running these as part of my workflow and I don't want to wait a long time to load a big model to gen a caption
I'm thinking about either captioning everything in advance, or just using my second card (3090) with zimage turbo since it's such a light model.
>how much ram you got bro lol, q8 need like 140gb
When loaded it's relatively usable speed-wise; basically I'm using 115GB out of 128GB just to load the Q8_K_XL, plus 15GB on the 5090 itself, with MoE mode and 16k context on llamacpp.
I could go for the lower Q8 since I didn't see a difference (the lower one uses 10GB less ram), but in both cases it's within my system's capabilities, so why the hell not go big.
>>107823916
>>107824091funny how ads used to look like this
>>107824099hot sexy women? in my ads? no I'd rather have something RELATABLE and SAFE
>>107824059Qwen image edit variants for editing images. It's nowhere close to nbp though.
>>107824142it is if you use a 50b gemma
>>107824059
>planning to get a rtx6000 pro in a month or so
>anime girl edits
you don't need a 6000pro for that, a 5090 is enough
>>107824123Go block an ICE vehicle.
>>107824161that doesn't sound very SAFE of you anon
https://files.catbox.moe/wqjzdz.png
>>107823836>suddenly
>>107824091>>>/wsg/6069557
>>107824044>I'm using captioning models to convert images to a natural language descriptionwhich one? can you share the workflow?
>>107824183yeah
>>107824179https://files.catbox.moe/fnrel6.png
>>107824189"INSERT COIN" pixels are completely different from the image behind it.
>>107824123>oh nyoooo my bobas and veganas, do not redeem
>>107824142
>>107824147
thank you, will note that down
>>107824160
I currently do have a 5090 comp and a 2x 3090 comp, but the ones I've tried from civit.ai and huggingface have been... a bit rough for me I guess.
>>107824196this is so good
>>107824196Is that flux?
>>107824196Why are the top tears faceted?
What's the current best webui for retards? I used automatic's back in the day.
>>107824211
>I currently do have a 5090 comp and a 2x 3090 comp, but the ones i've tried from civit.ai and huggingface have been.. a bit rough for me I guess.
you won't get that much more inference speed with a 6000 pro, I also wanted it but in the end I got 2x5090 and I'm pretty happy with them
>>107824207
qwen 2512
>>107824209
prob cuz the prompt had "crystal-clear teardrops" kek
>>107824211comfyui
>>107824183
Qwen3 VL 8B mostly; I have some private stuff/nodes in my wf currently.
I suggest using:
https://github.com/sebagallo/comfyui-sg-llama-cpp
for in-workflow usage, requires installing a python library in your venv
OR
https://github.com/BigStationW/ComfyUI-Prompt-Rewriter
this uses your system's llama.cpp
Otherwise just run your llama.cpp in router mode and use a generic OAI client to call the model, plus some custom REST nodes to force the router to unload models after you finish genning (it's what I'm currently doing)
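If you'd rather skip custom nodes entirely, the "generic OAI client" route is just a normal HTTP call. Rough sketch only, assuming llama-server is already running a vision GGUF with its --mmproj on the default port; the port, filename, and instruction text are placeholders, not anything from that anon's actual workflow:
[code]
# caption an image via a local llama-server (llama.cpp) OpenAI-compatible endpoint
import base64
import requests

def caption(image_path, instruction="Describe this image in natural language."):
    # encode the image as a data URL the chat-completions API accepts
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    payload = {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "max_tokens": 512,
        "temperature": 0.6,
    }
    r = requests.post("http://127.0.0.1:8080/v1/chat/completions", json=payload, timeout=300)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(caption("gen.png"))
[/code]
The returned caption can then go straight into your positive prompt node; unloading afterwards depends on whatever your server setup exposes, so that part isn't sketched here.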
>>107824220wtf
>>107824211Forge Neo if you want to avoid comfy
>>107824222what about? https://github.com/stavsap/comfyui-ollama
https://huggingface.co/SG161222/SPARK.Chroma_v1
Spark chroma guy seems to be giving it another go.
I don't think rewriting prompts and a 25% larger dataset will give drastically different results from the last attempt, but we will see.
>>107824243
ollama is for faggots so i dont use it (it uses llama.cpp anyway). It's maybe easier to set up compared to all the other llmao stuff I guess.
>>107824243which chroma are you using?
>>107824220>>>/wsg/6069568
>>107824250Piggybacking on this, I need a good chroma model and WF recommendation. The last one I used did not adhere to prompting at all and looked melted.
>>107824049
You can test it yourself with chained samplers. You can do like 30 steps in the first sampler, connect it to vae decode and save image, then connect it to a second sampler to do another 10 or 5 steps, and then a third one to do more steps, etc., and have them all connected to vae decode and save image nodes so you can compare and see how many steps are worth it.
>>107824221
>>107824235
thank you
I think I'll go for forge, I remember being too stupid to use comfy properly
>>107824260piggybacking on this too, there are hundreds of chroma models, help
>>107824258jesus christ
>>107824059Imagine buying a $10,000 card to make shitty qwen slop edits that you can do with a $400 card.
>>107824264forge is good, if you do ever want to use comfy because of a use case not covered by forge, the comfy 1girl guide in the op should be helpful
>>107824260
I use spark.chroma, and I like it a lot for porn
https://files.catbox.moe/j6d4vy.png
>>107824273I mean my wife died and I got some insurance money, so I'm trying to cope with it
>>107824282that's not porn
>>107824282This one? https://huggingface.co/SG161222/SPARK.Chroma_preview/tree/main
>>107824282Does it hold up with NSFW with character loras?
>>107824250
None at the moment. I am planning to test this:
https://civitai.com/models/2086389?modelVersionId=2517681
Not an endorsement. No idea if it's shit or not. But it's on my to-do list.
>>107824310Why would you do that?
>>107824301
the porn i gen with it is too niche
>>107824305
yes
>>107824310
degrades anatomy, but yes
ltx t2v is fucking slow and censored. also, many of my prompts give me 1940s-looking videos... what's the fucking point of gemma then. wtf
>>107824275
I'll keep that in mind. Thanks.
https://files.catbox.moe/t4p0fp.png
>>107824189tried to recreate your prompt with qwen 2512
>>107824432oh my god
>>107824444z-baggyeyesmage
>>107824449theyre full of poisonous cum
>>107824432
Very cool! Much more realistic than mine
https://files.catbox.moe/lgj4jt.png
>>107824462
at least make it believable next time
0/10
>>107824462Very cool. Feels like a The Last Guardian cutscene
Official LTX2 workflow uses 0.6 image strength in i2v, while comfy's uses it at full strength. I'm 95% sure that comfy is a hack and the original creators know better, but MAYBE he's onto something?
>>107823916
skin for z image looks too noisy and static. why do people say it's superior to the photorealism of qwen image 2512?
>>107824499kijai said 1.0 is more better
>>107824502chroma is as slow as qwen and better
>>107824502i think people genuinely don't see the noise for some reason, they've looked at too many shitty smartphone photos with sensor noise and think it looks natural
best vidya to play while genning?
>>107824537>playing gameswrong board champ
>>107824537https://www.shoutoutuk.org/game/story.html
More like Z-image Never
>>107824502You can overcome zit noise somewhat by using a steeper noise schedule/raising shift.
>>107824537Project diablo 2 by far
>>107824537Baldurs Gate 1-2 uses almost no GPU power.
https://files.catbox.moe/k3xez7.png
Is there any reason to use the full ltx model + distill lora versus just the distilled model directly?
>>107824499>>107824503You mean the LTXVImgToVideoInplace node? What does it even do?
>>107824462sick
Anybody use turbo diffusion with wan? They said it's a 200x speed up but I'm wondering if the quality suffers.
>>107824671Prepares latents based on the original video? Same as WanImageToVideo
https://streamable.com/xdfcx6
I need a workflow for this shit, now.
>>107824688There's no such thing as lossless optimization
>>107824710Obviously but is the trade off worth it or is it a 200x reduction in quality too?
>>107824706fucking kek
>>107824706
This is genuine shit. Why in god's name is anyone trying to make this work instead of just going after existing porn?
https://files.catbox.moe/daktmo.png
>>107824706
this is why we should wait for ltx2 loras instead of trying to make it do something it has no idea how to do
>>107824727looks like someone cant handle inter-dimensional porn
What's everyone using for audio for existing videos? Is there some gradio thing I can install that doesn't require comfy?
>mmaudio - hit or miss and doesn't produce audio for the full video
>hunyuan foley - doesn't seem to work on the latest versions of comfy (i tried everything, only solution is to set up a separate older version)
>ltxv2 - probably the best option but I have yet to try it, then again it requires the latest comfy which will break existing nodes
>>107824243
Did he say when it'll be finished? Spark preview is amazing, some of the best lighting I've seen a model produce. Also check out uncanny if you haven't:
https://huggingface.co/dawncreates/UnCanny-Photorealism-Chroma-GGUF/tree/main
>>107824706
>>107824327I generated a woman taking her clothes off in under 4 minutes on a 3090 Ti. Use wan2gp
>>107824753He's trying to make a production version; the first version that he baked wasn't up to his standards, so he's trying again
>>107824782benchod
>>107824798not gonna spoonfeed you, currynigger
>>107824833What the fuck are you doing here, /pol/ maggot?
https://files.catbox.moe/2zfkh4.png
>>107824753wan2gp might have something so I'd say peruse the docs there
someone explain the "chinese culture" meme to me
>>107825018
>if you're promised a model but it never release
>thaaat is chi-nese cul-ture~
>random anime man says "two more weeks!"
>thaaat is chi-nese cul-ture~
>open source is all you need
>be careful to watch the rug from under your feet
>you'll get wan2.5 if you say pretty please
>thaaat is chi-nese~
>thaaat is chi-nese~
>thaaat is chi-nese cul-ture~
>>107825018
this board is inundated with racist trolls
>>107825140give base chink
>>107825140shalom m'lady
>>107825118>comfortable migu:)
>>107824921
Not bad, but I really hate the mixing of pixelated and smooth lines.
>>107825259rare stocking color, it's nice
>>107825133Why are they such fags about releasing 2.5 anyway when 2.6 is out already?
>>107824791What model saar?
>>107825306The non-upscaled one is more coherent re pixelization
>no one is training LTX2 loras
>prompt following is dogshit, half of the time you get powerpoint slideshow
>not actually faster than Wan if you want quality
is the model DOA?
>>107825133
>>if you're promised a model but it never release
>>thaaat is chi-nese cul-ture~
>>random anime man says "two more weeks!"
>>thaaat is chi-nese cul-ture~
>>open source is all you need
>>be careful to watch the rug from under your feet
>>you'll get wan2.5 if you say pretty please
>>thaaat is chi-nese~
>>thaaat is chi-nese~
>>thaaat is chi-nese cul-ture~
https://voca.ro/11sA543nw26m
>>107825371I wish there was a model that could do actual pixels.
>>107825389
me too, fren
https://files.catbox.moe/x6h89q.png
>>107825371a little better, but there is still light bloom and intrapixel dithering
>>107825389
Use imagemagick to downscale, then run upscale. ezpz
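Same idea in Pillow if you don't want to shell out to imagemagick — just a sketch, the grid size and filenames are made up: box-filter down to a coarse grid, then nearest-neighbor back up so the pixels stay hard-edged.
[code]
# downscale-then-upscale for fake pixel art (Pillow equivalent of the imagemagick trick)
from PIL import Image

def pixelate(src, dst, grid=96, out_size=768):
    img = Image.open(src).convert("RGB")
    small = img.resize((grid, grid), Image.Resampling.BOX)               # average blocks down
    big = small.resize((out_size, out_size), Image.Resampling.NEAREST)   # crisp squares back up
    big.save(dst)

pixelate("gen.png", "gen_pixel.png")
[/code]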
>>107825182aw fuck my lora is accidentally jewish
>>107825389hello
>>107825385lmfaoooo, saved
>>107825352too big for you
>>107825385acestep?
>>107825437you're a big modeluuuu
>>107825471
forgot catbox
https://files.catbox.moe/mfy4c4.png
>>107825482
do you do commissions?
>using the kijai workflow for ltx-2
>notice the hidden negatives prompt
>expect it to be empty like usual
>it's actually this
wtf, is all of this necessary?
Gooner sisters we are saved
https://www.reddit.com/r/StableDiffusion/comments/1q94nlk/wow_i_accidentally_discovered_that_the_native/
>>107825538where did you get that from?
Been trying to gen Amelias and got one that looks eerily similar to one of my cousins (if she had different hair/fashion sense)
Like this is scary close, almost exactly what she looks like
>>107825559https://streamable.com/xdfcx6
>>107825526I don't. I do it only for the love of the game
These threads are insufferably retarded. They have as much to do with technology as /v/ posting an SFM rip of Tracer wearing a thong bikini.
Nothing has changed since the last time I posted this.
I don't know how long it's going to be before this finally gets booted off and put onto /hr/. I don't hate "AI slop" just because it's a trendy new thing and a trendy new thing to hate, it really is just absolutely braindead and completely devoid of anything redeeming.
>>107825583meltie alert
>>107825385What did you use to make this?
>>107825583>redeemingGood morning saar
>>107825583How about making an /ai/ board, retard?
>>107825555
https://files.catbox.moe/f9fvjr.json
I googled for it and it linked to a r*ddit post so maybe it's just the retarded poster who did it. Don't know if it even actually is kijai's workflow but it works anyway. Just wondering if the negatives were fucking up my videos this entire time.
>>107825583>calls things braindead>is too braindead to know how to use filters
>>107825628I'm too busy doing real programming to bother learning such shit, keep playing with your ai crayons
>>107825643 (( (Crayola( ((>
>>107825385
https://files.catbox.moe/wi3i8y.mp4
genned in 1m27s on a 5090
>>107825552
>Guys I just invented Video to Image
t. Reddit
next they'll invent 1girl
>>107825643
>i'm too busy to copy and paste into a textbox
no wonder jeets and ai are taking over your job
>>107825697weird image choice
>>107825719puto
I think ltx is here to stay, people are slowly figuring out how to uncuck it, genning in seconds instead of minutes is too good
>>107825697
And we have this shit instead. I feel very depressed localbros. We were jewed, jeeted, etc. What can we do now, is it OVER?
>>107825725That's just what I've been working on and I wanted a full body character in front of a green screen.
>>107825756What*>Juche Manse
>>107825004QIE 2511?
Any idea how to fix this?
Can I give vision to models that don't have it natively?
Has anyone been experimenting with samplers/schedulers on LTX? To my surprise, Euler Ancestral seems to produce sharper results than Euler, usually it's been the opposite. Also I never seem to get any good results with the 2nd upscaled pass, everything looks so plastic and sloppy
>>107825775yeah
>>107825783res4s for me
>>107825773that's what carrots actually look like
>>107825775You can simulate vision with a captioning model. It's what we used to do before multimodal existed.
>>107825804
>carrots have little tentacles
retard
>>107825740It also has more uses than just the videos. I feel like its ability to add audio to silent clips is a little underrated, though using actual text2speech for voices is still better because LTX voices grate my ears
>>107825583those threads had a lot of actual tech discussion before ran's recent melties started. lots of people left since then but we might get anons back after z base release
>>107825812you and ran belong to each other ani
>>107825773
seems like latent noise spikes that the model is filling in. Maybe try a less aggressive denoise setting?
>>107825804it does. You have to carefully remove the outer layer.
>>107825854
>lmg
>using cloud model
grim
>>107825862lmg?
>>107825552
>>107825580
The first example is awful but the second one shows promise, though god I hate dirty talk so fucking much. Moans should be enough.
https://streamable.com/aynxj6
>>107825875lmao
>>107825768
No, that was made using controlnet. QIE isn't good for character swaps. It's very good for concepting though.
https://www.reddit.com/r/StableDiffusion/comments/1q94nlk/wow_i_accidentally_discovered_that_the_native/
apparently you can use videos, or gifs, as input, so video to video. haven't tested it though but other people say it works.
>>107825875The dirty talk is the good part. Without that it's not nearly as funny
>>107825889read the thread before you post monkey
>>107825896was the insult necessary?
maybe it's just placebo, but it's worth using NAG with dedicated audio and video clip encoders
>>107825915
>being called a cute animal is an insult
idiot
>training the same lora with different settings for the Xth time because i'm not satisfied with the results
please release base...
what timestep shift do you guys use when training ZIT loras
>>107825783
res2s is the one the LTX team uses, and it looks good
>>107825791
awfully slow?
>>107825905why does your stuff look good
>>107825915what
>>107825896
>Racism
You arready rost.
>>107825889use this node instead of load image:
>>107825953thanks benchod
Why does ZIT often correct in the wrong direction?
For instance:
>ball gag lora
>prompt ball gag in detail
>in her mouth is a red ball gag. black leather straps go from the ball on the girl's mouth around the girl's head.
Then in the preview I'd see behind the blur that it is actually putting the gag inside the mouth. But as the generation progresses, just before the last steps, the ball more and more appears outside. After VAE it then looks like the ball is on her mouth and not in her mouth.
>ball gag lora at 0.6 down to 0.35 doesn't make a difference.
>>107825953
also set the frame load cap or you get this error:
>ValueError: Invalid number of frames: Encode input must have 1 + 8 * x frames (e.g., 1, 9, 17, ...). Please check your input.
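If you're pulling frame counts from arbitrary clips, the arithmetic in that error is easy to snap to; tiny sketch, the helper name is just mine:
[code]
# snap an arbitrary frame count down to the nearest valid 1 + 8*x value
def valid_frames(n):
    return max(1, (n - 1) // 8 * 8 + 1)

print([valid_frames(n) for n in (10, 17, 30, 81)])  # [9, 17, 25, 81]
[/code]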
>>107825953Can you load "frame 10 to 30" with it? It does look like it loads everything?
>>107825968yeah you can, it lets you limit number of frames, starting point, etc
>>107825973OK I'll have to check it then.
>>107825889
>you can load a video into a video generator
bruh if i had the video already i wouldn't need to generate it
>>107825982retard
>>107825961
>on the girl's mouth
Wait, I just saw that. But nonetheless... changed it to "in the girl's mouth" but that doesn't make a difference either.
>>107825982I have some very old gifs that can be fun to use, and some video scenes I'd like to make longer.
>>107825953
Is there a difference between using the AnimateDiff format and LTXV?
only lodestone can save us from chinese culture
lmao so it works, used a super low quality azumanga gif as input
the anime girl kicks her shoe in the air, which explodes high in the air.
https://files.catbox.moe/bz5bpu.mp4
>>107825982You can add audio to silent videos, or redub it
>>107826031
idk, this video node has image output, I assume other video nodes would work fine too
>>107826000
if i remember correctly, format just sets the resolution automatically
>>107826016
>zeta chroma cooking
>spark cooking
>uncanny photorealism seems to be regularly updated
>chroma radiance x0 (i cant get it to work despite following instructions, produces static, probably have to update comfy)
yeah we eatin good
>>107826031
works with mp4 too, still figuring out settings etc.
https://files.catbox.moe/6hpnj2.mp4
sometimes you actually do need a controlnet
kek, video input is good
https://files.catbox.moe/wc8ocs.mp4
>>107825385KINO
how the fuck do you add input video, what's the next node?
>>107825643
it takes less than 10 min to make a filter retard, holy low IQ moment
Is it ok to use Pony loras with WAI?
stupid question incoming. playing around with some tts stuff namely higgs audio as my first foray into ai. is it normal that i have to go through the conda setup every time I want to use it?
>>107826139no your computer will release VX nerve gas
>>107826098can't believe they beeped the swears, this model is so fucking cucked
>>107824511
>chroma is as slow
this is a poors problem. Chroma is not slow
>>107826157
think again. also, ltx knows trump natively:
https://files.catbox.moe/2lkemr.mp4
>Be ZiTstain
>Dial in your prompt for 1000 hours until you get a girl that looks kind of like your crush
>Prompt her in lots of lewd positions
>Not good enough, I'll train a LoRA with her FB photos
>Better but still not 100% there, good enough to wank to but misses the essence
>Be Chromite
>Humbly submit to the will of The Model
>"p-please give wh-what it pleaseth you to give"
>drops an exact likeness of your cousin
>>107826157
check this one: it is absolutely not censored.
https://files.catbox.moe/yjr8a6.mp4
>>107826174
>ltx knows trump natively
every single model on earth knows donald trump lol
>>107826174kek
>>107826191
not exactly, also it gets his mannerisms: nothing has been as close as ltx.
https://files.catbox.moe/brdlr6.mp4
>>107826218
>not exactly
yes exactly, and ltx2 doesn't know who miku is, can you believe that? unbelievable
>every single model on earth knows donald trump lol
Especially child pageant models
>>107826229we can fix that with loras, no worries. also i2v miku is fine.
>>107826236sorry moche, not touching your jerusalem powerpoint generator
lmao
the man says "why are you gay? are you trans ginga? you homo? faggot?", in a ugandan accent.
https://files.catbox.moe/8b13ln.mp4
how do i actually set number of steps in the sampler for ltx2? the default comfyui workflow is so retarded
>>107826266
so yeah, video input, 17 frames for frame load cap, works great. lmao, this could literally be artosis:
the man says "this piece of shit protoss! go kill yourself you stupid faggot! I hate you!"
https://files.catbox.moe/4ld0gp.mp4
Is it me or did the comfy dev never care to implement proper batching functionality? Things like decent loop/flow control, for example. I'm only finding it in shitty custom node packs (pic related) where it's highly unintuitive to do a basic for loop so I can make it cover an entire folder/list while reusing stuff generated earlier in the workflow, without re-running it needlessly when calling via the API.
What a dummy, made the perfect ecosystem for automation and didn't care to give proper love to the very basic functionality that would make use of it.
>>107826308
he didn't, but I guess there was a lot of work related to just getting support for models into comfyui, it's not like the only work is just on the webUI
and there is not a huge advantage for most people's workflow vs a +1 counter per iteration somewhere in custom nodes, which is also easier for most people to handle
>trump and floyd shit here every fucking day
>ldg fags call other people schizo
>trump and floyd shit here every fucking day
base in 2 more weeks
stay tuned
fucking retards
>>107826328
>there is not a huge advantage to most people's workflow
The whole point of using comfy is to cover the niche of doing intricate automated stuff that you couldn't do on simple UIs. Baffling that it can't do it decently the moment you want to start working with the basics of dynamic lists and iterations.
>>107826308
Conditions and loops are a must-have; it's surprising that even after so many years we still don't have such basic functionality in comfy. He could just mimic the way LabVIEW does them and everyone would be happy.
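Until real loops land, the usual workaround is to keep the graph dumb and put the loop outside of it via the HTTP API. Rough sketch only, assuming a workflow exported in API format; the node ids ("6" for the positive prompt, "9" for the save node) are placeholders from my own graph, yours will differ:
[code]
# drive ComfyUI over a folder of prompt files via its /prompt endpoint
import json, pathlib, requests

COMFY = "http://127.0.0.1:8188"
workflow = json.loads(pathlib.Path("workflow_api.json").read_text())

for txt in sorted(pathlib.Path("prompts").glob("*.txt")):
    workflow["6"]["inputs"]["text"] = txt.read_text().strip()   # positive prompt node
    workflow["9"]["inputs"]["filename_prefix"] = txt.stem       # save node filename
    r = requests.post(f"{COMFY}/prompt", json={"prompt": workflow})
    r.raise_for_status()
    print(txt.name, "->", r.json().get("prompt_id"))
[/code]
Anything already generated stays generated; only the nodes whose inputs change get re-executed, so this covers the "don't re-run stuff needlessly" part too.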
>>107826308
>comfyui-easy-use
>it's not easy to use
>>107826374
>is called comfy ui
>is not comfy to use
many such cases
>>107826336
>>ldg fags call other people schizo
fags call other fags schizos FIFY
>>107826362you could do some work with applicative functors and zippers too, but unfortunately this doesn't support real programming
how do you stop masturbating srsly
>>107826150
>to go through the conda setup
what do you mean? activating some virtual environment?
>>107826435cut off your penis
>>107826435eat corn flakes
>>107826435keep going. you will grow tired of everything eventually
>>107826435
>how do you stop masturbating srsly
I've been wanting to know the answer for a while desu, having sexual energy is a fucking curse
>>107826336Somehow I feel like you won't be particularly excited about my tens of sneed videos genned for testing purposes
>>107826528What's the result?
kek video input (set to 17 frames) is pretty good
https://files.catbox.moe/pp3dda.mp4
https://github.com/huggingface/diffusers/blob/6cfc83b4abc5b083fef56a18ec4700f48ba3aaba/docs/source/en/api/pipelines/glm_image.md
>Autoregressive generator: a 9B-parameter model initialized from GLM-4-9B-0414
>Diffusion Decoder: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space image decoding
a fucking 7b model to decode images? are they fucking serious?
>>107826541This is gonna take like an hour or so I don't know
>>107826544I only watched it in Japanese but this does absolutely sound like how I imagined the voice to sound speaking English.
why is it that wan2gp can do LTX-2 so well and comfy just OOMs and shits itself?
>>107826362Whatever you need is probably severely out of ComfyUI scope and you're better off just writing your own python script
>>107826601
comfy aggressively caches everything, try fiddling with launch options or put Clear VRAM Cache nodes where it OOMs
>>107826617
subbed anime is better 99.9% of the time
JP frieren and fern are great too, also new season in a week.
>>107826617but it won't even load the model. It's like wan2gp uses layer offloading way better, or something.
>>107826601
Put --cache-none in the startup arguments and all of a sudden Comfy will perform as fast as wangp without hogging all the memory trying to cache models. Might also try the ram node:
https://desuarchive.org/g/thread/107460114/#q107462678
https://desuarchive.org/g/thread/107460114/#q107462783
>>107826679Where do you store links like this?
>>107826679cache none is a meme, without cache it takes ages to reload the model
>>107826679
>--cache-none
won't I then have to load the model from disk every time I switch between models? Like if I was doing WAN, it would be removing the high noise model from vram and then loading the low noise model from disk, right?
>>107826708
Cuz I (gemini) made it, duh
>>107826717
>>107826728
Sure as hell beats going over memory limit and either getting OOM or taking forever to dump the model to the pagefile
>>107826601
Try --reserve-vram 2.0 to 4.0. At some point, the memory management in Comfy became retarded again.
>>107826728
Yes.
>>107826748
>Sure as hell beats going over memory limit and either getting OOM or taking forever to dump the model to the pagefile
but wan2gp just werks. There must be a way to get comfy to work the same.
>>107826754
>Try --reserve-vram 2.0 to 4.0. At some point, the memory management in Comfy became retarded again.
I'll give it a shot.
>>107826718
>>107825889I tried a few sexy time videos but most produced body horror. Some gens did work by keeping the prompt simple like "She continues to make the same motion". Would Abliterated gemma make it work better? Did anyone test that and compare? Or the prompt enhancer?
>>107825798
>>107825396
>>107826869lol love that style
>still waiting for z-image
>>107826150
No, that's not normal. I encountered a similar problem before. The fix involved fixing the path, but I forgot the exact details.
fresh >>107826985>>107826985>>107826985>>107826985
>>107826748
>>107826728
i think comfy is doing something retarded with ltx2, i have a 5090 and 96gb ram, and it will sometimes hit the VRAM limit and grind everything to a halt while i still have like 30gb of free RAM
>>107827243
I just tried
>--reserve-vram 4
and it fucking worked! we are in business!
>>107825643
>too busy for 10min filter
>but has time to bitch about his lack of filter
kek
>>107826435take the globohomo SSRI eberything pill
>>107826601I've used an 80GB GPU and even that tends to OOM. Something is fucky with the way ComfyUI manages memory.
>>107828145It seems like the `--reserve-vram 4` flag is working pretty good so far
I have an idiot question. I installed SD two days ago, but I can't get img2img to work. For example, I generated a girl, and I want to add an ankh tattoo on her neck, I paint the mask on that place, but the result has only smudging there, no tattoo whatsoever.
>>107828503pic unrelated?
>>107828503Denoise too low