Discussion of Free and Open Source Diffusion Models

Prev: >>107772643

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>Z Image Turbo
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>WanX
https://github.com/Wan-Video/Wan2.2
https://comfyanonymous.github.io/ComfyUI_examples/wan22/

>NetaYume
https://civitai.com/models/1790792?modelVersionId=2485296
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
kino
anyone remember when FLUX came out and some madman redditor set up a site where you could upload your images and it would train a lora for you?
i think it lasted 24 hours before it broke down but it was pretty neat
I can't stop generating risque images of the girl who used to be my highschool crush.
>>107778913indian behavior
>>107778821
Don't believe anything until you test it. Remember comfy shilled and hyped SD3 for MONTHS and it was fucking shit, he's a big corpo sellout now, of course he wants you to use comfyui! Nvidia also kept posting about TensorRT and it was also shit!
Continue crying
is this the new diffusion thread
>>107778862
https://huggingface.co/Lightricks/LTX-2
>19b
>"dev" -> distilled shit
D O A
>>107779005if no one ran the 18b russian model because it was too big, I don't see why they'll run this one
>>107779013this one has audio
>>107778927
>>107778954
>>107779017
bad actors at work already
my dreams were answered. a bloated 50gb model nobody can run
>>107778981I can't stand outerglow but other than that nice style
>>107779020the audio is complete garbage though, even worse than that Ovi shit
that's all the pokemon zit knows
>>107779005
I never thought this shit would be better in terms of quality than Wan 2.2, but I expected it to be smaller than 14b so it could replace it and force the Alibaba dev to release 2.5 or something. It's too big though; it's not gonna be successful enough to force that, unfortunately
>>107779133the drawing style is nice, is this from a lora?
>>107779140yeah its kinda grim
>>107779146
>its kinda grim
I have something to look forward to and cope with a bit though, it's something!
https://github.com/huggingface/transformers/pull/43100
>>107779145
Yeah, but it's a hard-to-use, questionable bake.
https://pomf2.lain.la/f/nqndhfhh.safetensors
Optional:
>a watercolor illustration of
>soft stylistic white glow around subjects
>>107779165hunyuan AR model was 80B, hopefully this is based on 4.6v flash, so hopes are also low...
>>107779184
>hopefully this is based on 4.6v flash
it is based on that 9b model, the code shows the layer numbers and shit and it's the exact same as glm 4.6 9b flash
>>107779191
>on the code it shows the layers numbers
hope rekindled!!!
Qwen REALLY tries hard to AI UPSCALE AND ENHANCE the image, even unprompted
>>107778862Are all these Z-Image?
>>107779228it's been trained only on "high quality 4k" stock images, so yeah it's biased towards that, a good model should be able to reproduce the "low quality" style of an image or a video like sora 2
Just moved from a 6GB 1660S to a 16GB 5070Ti.
What are the absolute best image and video models I can use on the 5070Ti? I also have 64GB RAM if it makes a difference.
>>107779005how retarded is that? they can finetune models, why didn't they further finetune the model to do those effects instead of going for fucking loras like your regular poor civitai jeet
>>107779276Does this require a lora?
>>107779286
ye
https://civitai.com/models/2176812/z-image-lucasarts-style-1990s-pc-adventure-games-z-image-lora
>>107779282at least they didn't merge those loras in their model like the retarded Qwen fucks with Qwen Image edit 2511 lmao
https://www.reddit.com/r/StableDiffusion/comments/1q59ygl/ltx2_is_out_20gb_in_fp4_27gb_in_fp8_distilled/
any billionaires with an rtx6000 to test it?
>>107779302
>20GB in FP4, 27GB in FP8
how is that possible, I thought fp8 was twice as big as fp4
>>107779269
>image
Z image turbo fp16, qwen edit 2511 and non-edit 2512 Q4 (that is if you want to fit everything into VRAM, otherwise you can try Q6 or even Q8)
>video
Wan 2.2 Q8, but 64gb ram won't be enough to keep both high and low noise models cached, so they'll keep swapping back and forth between gens
It's a big boy, much smoother than wan2.2 for sure, and the audio is fun
https://files.catbox.moe/kvmiem.mp4
>>107779320>how is that possibleShut the fuck up retard.
>>107779320
the audio part of it is an extra several GB, AND it contains the VAEs
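The 27 GB vs 20 GB gap is consistent with only part of the checkpoint being quantized. A back-of-the-envelope sketch; the 14/13 split it derives is inferred from the reported sizes, not an official figure:

```python
# If fp8_total = q + fixed and fp4_total = q/2 + fixed, where q is the
# quantized portion (at fp8 size) and fixed is everything kept at full
# precision (audio tower, VAEs, ...), the reported sizes pin down the split:
fp8_total, fp4_total = 27.0, 20.0      # GB, from the reddit post
q = 2 * (fp8_total - fp4_total)        # quantized portion at fp8 -> 14.0 GB
fixed = fp8_total - q                  # unquantized extras       -> 13.0 GB
print(q, fixed)
```

So fp4 isn't half of fp8 because the ~13 GB of unquantized extras doesn't shrink with the quant.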
>>107779330it's too big though, and the licence is far worse than Wan 2.2 Apache 2.0 licence
>ltx2Got OOM with any settings on rtx 3090
>>107779330is it possible to use this with 16gb and the rest loaded into ram?
>>107779339kill yourself
>>107779298
>>107779345
>>107779347
With 6 gorillion RAM and updated multigpu nodes maybe
>>107779355
its completely open unless you make more than 10 mill a year off of it, they have no claim over outputs at all. Just the usual dont use it to make csam and the like
>>107779347
wait for kijai I guess, FP8 barely fits on 24GB
>>107779355can they make a gguf of this stuff or nah
>>107779359yeah? you can gguf anything
>>107779358
>FP8 barely fits on 24GB
wanussy is more than 24gb and it fits
>>107779370Can you gguf me a fuck to give?
>>107779374
>wanussy is more than 24gb
not at all, Wan is 16gb at Q8. Yeah it can spike up fast once it's running, but at idle it's way easier to load
Its FAST btw, like absurdly fast
I have 2 3090s, please god let there be a way to use them to run this. Otherwise we are truly in the age of vramlet death.
>>107779384
how many minutes?
also if they can make a Q4 of this maybe it will work fine on 16GB
>>107779384how fast, give the steps, the resolution, the number of frames, your gpu, and the time you got
>>107779384God gave us clocks so we can use better words than "fast" how long for a generation?
>>107779374
for Q8 wan 2.2 I have zero issues on 16GB because part of it goes into RAM, and it's still fast with lightx2v loras at 4-6 steps
maybe this will be similar; if the quant is around 22gb it would also be fine, given sufficient RAM.
>>107779330
>4k res
>30 fps
>8 sec
how long did that take anon?
>>107779403It looks like ltx doesn't use ram at all.
Did they finally do it? Did ltx finally release a model that wasn't garbage?
>>107779417>It looks like ltx doesn't use ram at all.NTA but a model doesn't give two fucks whether some of its layers are on RAM or not.
>>107779410
>>107779421
>Israel saved us
imagine having people say that if the model is actually good lmao
>>107779430wtf? 20 seconds for 20 steps and 8 seconds? are you kidding me? what gpu are you using anon?
audio is very hit or miss it seemshttps://files.catbox.moe/67lkoi.mp4
>>107779430I like those numbers
>>107779298Awesome, thanks. Is there any way to run Z-Image on a 12GB card?
>>107779430what the fuck? I thought you were being hyperbolic.
>>107779430
>>107779455
I'm sure he's using the nvfp4 acceleration shit with his rtx 5090; people with normal gpus won't get that speed, don't dream lmao
>>107779448This is a thing I've noticed with a lot of audio and video models. The audio is often obnoxious.
>>107779454Definitely, I don't know the details though
https://files.catbox.moe/bfdj7a.mp4
>>107779454on Q8 definitely
Boob status?Can it do pussies?
>>107779467
the quality is definitely better than wan 2.2, and even if the sound is god awful you're not forced to use it. if it's as fast as claimed >>107779430 then maybe we have our new champion
>>107779467That's horrifying.
kekhttps://files.catbox.moe/d575qg.mp4
>>107779454i'm running the bf16 one on 8gb so yeah
>>107779487is it i2v? nyan~
>>107779430
>>107779479
>as it's claimed
Just so you know this nigga is using a 6000 96gb and probably using nvfp8
I want to make a 5 second 600-720p animation of this image on my 5070Ti. What's the best and fastest workflow like, and how long would it take?
The fp4 model should run on a dedicated 3090 with no display output right?
If you dont prompt for audio then audio is usually shit
https://files.catbox.moe/10ni5z.mp4
>>107779494
its both
>>107779495
>>107779467
Every time I open a video and I HEAR that it's grok, I instantly close it and get mad. It's an irrational anger I can't hold back, and I get the same from ltx2. The only good thing about getting audio out of the video gen is that you can dub it with a real voice after.
>>107779502
so it's a 3090 or a 4090, a gpu that's not using the nvfp4 acceleration shit, good to know
>>107779448>>107779467>>107779487can you guys generate something kino at least as a test? try profanity and violence, think classic tarantino.
>update comfy>workflow with batch 4 that worked before OOMs nowwow!
>24gb
why is the text encoder so fat bitch
It does up to 20 seconds and 4k res apparently but I cant fit that
>>107779519>think classic tarantino.so a lot of high kicks? he loves dem feet :^)
>>107779526this, my second gpu can only handle 12gb, Q8 will be too big, I guess I'll have to offload as well
>>107779499ok I'm sold, now give me the workflow and the files I have to download
A dev said to use --fp8_e4m3fn-unet for low vram btw
And 15 secs is the most I can fit:
https://files.catbox.moe/2p5vkc.mp4
>>107779005
>https://huggingface.co/Lightricks/LTX-2
wait a second, so it's a unified model that can do both t2v and i2v? holy shit FINALLY
>>107779547
>give me the workflow
https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows
apparently they also already have a 2.1 with better audio coming that they say they will release fully as well
>>107779573source?
>>107779554this is pretty good, I LOVE THE JEWS NOW
>>107779575
>>107779448all that matters is I can make floyd fent memes with audio
>>107779584
based, shitposting will go to another level. but fuck /g/, it doesn't allow videos with audio, so the only thing we'll do is spam catboxes :(
>>107779497
thats a 16GB card, get wan 2.2 Q8, the 4 step high/low lightx2v loras, 4-6 steps total and you are good to go
the comfyui template workflow is fine, just swap in the gguf loader node instead of fp8
>>107779554was this video i2v or t2v?
>ltx2 requires a comfy updateOh boy, time to make a backup so I can restore later after the inevitable broken install.
>>107779554Are you using comfy's workflow or ltx's workflow with their custom node?https://github.com/Lightricks/ComfyUI-LTXVideo
>>107779554>--fp8_e4m3fn-unetNTA, but you're saying run that argument when launching comfy?
>>107779330How does it handle anime styles?
>>107779644
>https://github.com/Lightricks/ComfyUI-LTXVideo
the workflow uses Impact Pack, be careful with that anons
https://www.reddit.com/r/comfyui/comments/1pgvdgo/impact_pack_is_trying_to_connect_to_youtube/
>>107778862will the rentry guide help me set up an ai thats like grok that will edit images to be naked?
>>107779662No
distilled fp8 is up
>>107779664sad. i dont really want to gen sloppa i just want to slop existing images of people nude
wtf do i do with this?
>>107779679
you go for that instead
https://huggingface.co/Comfy-Org/ltx-2/tree/main/split_files/text_encoders
https://huggingface.co/unsloth/gemma-3-12b-it-GGUF
>>107779685can't use gguf text encoder with comfy workflow
>>107779330
>no sleep for me >EVERRR-
kekd
>>107779706why not? replace the normal loader with the gguf loader
>>107779448The hiss and stereo imaging of the steps combined with the video makes it feel like a fever dream. I'm scared.
>>107779685
Open source market is so dead with the price hikes, good thing I did it before the fall.
Also, the upsamplers are 404 on the page, classic.
thx for beta testing guys
looking forward to the qrd later
>>107779711
shitposting is so back
https://files.catbox.moe/g9wbfp.mp4
>>107779732>alkweda
>>107779731show the screen of your workflow
Man you fuckers are so unhelpful.What GPU are you using? Are you using the FP8 or the FP4. If you're on a 24gb card, how the fuck are you running gemma without an oom?Suck my dick.
>>107779746
>If you're on a 24gb card, how the fuck are you running gemma without an oom?
you offload nigga
https://github.com/pollockjj/ComfyUI-MultiGPU
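Offloading just means part of the checkpoint lives in system RAM and gets streamed in as needed. A toy sketch of the split a loader has to make; the numbers and the `headroom_gb` default are illustrative, not the MultiGPU node's actual logic:

```python
def split_gb(model_gb: float, vram_gb: float, headroom_gb: float = 4.0) -> tuple[float, float]:
    """Split a checkpoint between GPU VRAM and system RAM, leaving headroom
    for activations and the latent. Returns (on_gpu_gb, offloaded_gb)."""
    on_gpu = max(0.0, min(model_gb, vram_gb - headroom_gb))
    return on_gpu, model_gb - on_gpu

# e.g. a 27 GB fp8 checkpoint on a 24 GB card:
print(split_gb(27.0, 24.0))
```

The offloaded chunk is why gens still work on smaller cards, just slower: those layers cross the PCIe bus every step.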
>>107779731Probably didn't set the clip type properly.
>>107779732
it's funny how badly it replicates the text in the subtitles while the audio is perfect lol
>>107779732
Man, is this what satisfies localfags nowadays?
>yeah man it looks fucking terrible but at least it's not cloud!!!
>>107779745Default comfy workflow for ltx2 i just switched text encoder node
>>107779754I can't believe you've reduced me to this level, but may I have a workflow?
>>107779771
desu it looks like shit on T2V, but it has a lot of potential if you start from a normal image and go for I2V
>>107779775
show a screen goddammit, maybe you haven't set the right parameters on your new node
>>107779771this is 8 steps. More steps looks MUCH better >>107779554
>>107779775>>107779780
>>107779771Hey look a fellow ball and chain lover, I also love when corpo model step on my balls and tell me I am bad bad boy
>>107779782>not going for the QAT goofsare all image genners this braindead?
>>107779782wait? It actually works like that?
The people saying just use the GGUF unet loader are actually talking out of their ass. It doesn't work like that.
>>107779820the multigpu node works
>>107779820>unetreading comprehension
>>107779834Fuck I meant clip, but my point still stands. None of you can even show a screenshot let alone share your workflow.
>>107779838are you retarded? have you ever used gguf or multigpu nodes?
>>107779847Yes I have. I do not understand what or how you expect it to work. It just says it doesn't recognize gemma 3. Why are you lying to my face?
If you are struggling that much then just wait for GGUFs then. Its a big model
>>107779880holy fuck why can't you just post a workflow to prove your shit works because the thing you are suggesting makes no sense in the default comfy UI workflow. You won't even provide a simple screenshot.
>>107779721It's ogre for 16gb vramlets, isn't it
hahahaha god I fucking hate Github so much, why do these nerds make it so complicated to download a file? Just give me an exe file for fuck sake, im not a coder/programmer I dont care about all that shit.
>>107779891
I'm not whoever you think you're talking to, but it works fine for me on 24GB like this
>>107779914Then you in the wrong hobby bro
>>107779923this stopped being an autistic nerd hobby with the release of Comfy, and its only going to get easier for people like me so get fucked. I'm not going anywhere scrub.
>>107779921I can't even run this workflow
https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/tree/main
gemma GGUFs work natively according to https://github.com/city96/ComfyUI-GGUF/issues/398
>>107779959have you never used comfy before? install missing nodes nigger. And make sure comfy is updated
here is I2V
https://files.catbox.moe/ii1h9i.mp4
>>107779988is it censored, can it do nudity?
>>107779988not bad at all, at least it doesn't have the slow mo shit of wan 2.2
>>107779988CUM!
GGUFs DO work, just use a GGUF in the gemma 3 model loader node in LTX's WF
>>107779928
>this stopped being an autistic nerd hobby with the release of Comfy
This doesn't make sense. Comfy is objectively one of the worst interfaces ever. The fact that you can't even mention that here nowadays without being called the thread-shitting schizo makes me consider that the thread shitter is actually some sort of false flag.
A1111 and forge have issues but they are much better from a user perspective. Comfy becoming the default ui is a fucking disaster. Comfy shills were universally hated for shilling this half-assed ui and nothing has changed since then; Comfy is as terrible as it used to be. You can have good node interfaces, just go look at Blender.
audio needs work lol
https://files.catbox.moe/un2i5o.mp4
>>107780019how the fuck did you manage to get a gguf on the list? I can only display safetensors with that node
>>107780035I put it in the text encoders folder in comfy?
>>107780035same.
>>107780035I do have GGUF node installed though, so maybe you need that? I have no idea but it works
>>107780054you're trolling lol
>>107780062I'm going with this. This is advanced trolling. He can brush away doubts by questioning their experience with comfy.
>>107780062
im not, its working
https://files.catbox.moe/y82cfx.mp4
>>107780018
OH HE'S COMING
https://www.youtube.com/watch?v=9f_SmIYiA6k
>>107780062
make sure to use latest comfy, force pull then pip install requirements; sometimes comfy can fuck up an update, cause it works for me
https://files.catbox.moe/mnw6kw.mp4
all with gguf clip
https://files.catbox.moe/mh7amb.mp4
>>107780069>>107780102>>107780114prove it, give a workflow so that we can see the gguf file inside that node
mods=gods as always
This is more steps and higher res:
https://files.catbox.moe/eye7ih.mp4
>>107780114>https://files.catbox.moe/mh7amb.mp4 not gonna lie this is really really good, way better than Wan 2.2
we are so back
lmao to the anons who said it's DoA, get clowned
>>107780120I do not believe you. Post the workflow or get the fuck out of here. How are you the only person where the gguf models show up in the model loader list?
>>107780120https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflowsall I did was put the gguf in the gemma folder in the text encoder folder. Again, I had GGUF nodes already installed so you might need them
can I run LTX2 on a 4090 and with 64GB RAM?or is this only for 5090 chads?
>>107780140did I stutter? share your own workflow
>>107780139Learn to make your own fucking workflows, retarded ass fucks I swear
>>107780153fp8 is 25 gb... I guess that can be offloaded, let's hope it's not too slow like that
>>107780154
it is EXACTLY that one, here
https://files.catbox.moe/g63jsc.json
>>107780168I'm sure this fucker has modified the json with a notepad lmaooo
>>107780176wtf is wrong with you?
Speaking of clip and ltx2.
>>107780180in what folder did you put that gguf
>>107780189
getting real tired of your shit, it should detect it anywhere in the text encoder folder, subfolders do not matter
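For anyone stuck on this: the loader lists files by extension under the text encoder folder, so a quick sanity check that your .gguf is actually somewhere comfy scans can save a lot of arguing. A sketch; the path is an assumption, point it at your own install:

```python
from pathlib import Path

def find_ggufs(root: Path) -> list[str]:
    """List .gguf files under a text encoder folder, subfolders included."""
    if not root.is_dir():
        return []  # wrong path -> nothing will ever show up in the node
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.gguf"))

# Hypothetical default location; adjust to your ComfyUI install.
print(find_ggufs(Path("ComfyUI/models/text_encoders")))
```

If this prints an empty list, the node can't see the file either: check the folder and that the extension really is `.gguf`.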
>>107780202He's not the only anon that doubts you.
>>107780207literally just use the WF and try installing gguf node, no idea what else to tell you
>>107780213>installing gguf nodeYou mean the one by city 96?
so 32GB of DDR4 RAM is $200 where I live, which I can actually shell out
I don't NEED DDR5 for my five-second goonsloppa right?
>>107780202>>107780213you're full of shit anon
>>107780222
are you an actual retard? read the node you are apparently trying to edit. Its GEMMA 3 MODEL LOADER, NOT AUDIO TEXT ENCODER LOADER
>>107780222
>>107780213You have some patience with the retards here lol. Just let them figure it out
>>107780213Second for full of shit.
>>107780232
it doesn't display gguf here either, retarded fuck
I believe debo is at work here.
>16gb gpu>64gb ramso what about multigpu + gguf?
>>107780246
you must be the fucking troll
This is you:
https://files.catbox.moe/0rug3x.mp4
>>107780259everytime this troll shares a video it doesn't have the workflow in it, curious innit?
>>107780253
try again retard, maybe try installing the GGUF node like I said
https://files.catbox.moe/smv3fv.mp4
>update comfy
>get new errors for qwen edit 2511
>KSampler: No backend can handle 'dequantize_per_tensor_fp8': eager: scale: dtype torch.bfloat16 not in {torch.float32}
what the fuck did they break now?
>>107780269give a video with the workflow inside anon :^)
https://blog.comfy.org/p/ltx-2-open-source-audio-video-ai?publication_id=3493400&post_id=183444839&isFreemail=true&r=5vt9k&triedRedirect=true
comfy WFs are out. Retards can go try that instead. GGUF gemma works btw
>>107780273
>update comfy from update folder
>error
>update again with "update comfyui" in the GUI
>works
then why the fuck doesn't the folder update do the same thing?
WTF IT CAN DO HIS VOICE!https://files.catbox.moe/bjfhky.mp4
>>107780310
however, the latest comfy seems faster, or maybe it's the new nvidia driver with dlss 4.5, idk. only 14 seconds with 4 steps with this model:
https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning/blob/main/qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors
>>107780286>GGUF gemma works btwNo. It fucking doesn't. Why are you all lying? I'm going nuts here.
>>107780325>only 14 seconds with 4 steps with this model:what was the time before?
https://www.nvidia.com/en-us/geforce/news/rtx-ai-video-generation-guide/
nvidia also just announced it, you need 4000+ or better yet 5000+ to have good speeds though
>>107780327I really don't know wtf to tell you but that it works. install ComfyUI-GGUF if you haven't. That is all I can think of. The only reason I tested it is cause the maker of the GGUF node said it worked and it worked
What timestep shift to use when training ZiT lora on Onetrainer? Default 1.0, or something other?
>>107780269tsk. fake context again.
>>107780331
20-25 maybe? it is definitely faster now, could be comfy optimizations, or the new driver in the nvidia app. prob comfy.
>>107780339
>the maker of the GGUF node said it worked
>>107779964
https://github.com/city96/ComfyUI-GGUF/issues/398
that's not the maker of the GGUF node, it's a fucking random that said it lmao
>>107780339
I've had that shit for years and it's all up to date. I cannot fathom why I am the only one whose nodes do not recognize gguf files.
>>107780348
16 seconds, very fast with latest comfy. it was already fast at 4 steps before but even quicker now.
the anime girl with teal hair is holding a baseball bat and is wearing a Los Angeles Dodgers baseball uniform with the number 39.
>>107780351someone on the discord linked to that and said it worked, I tried it and it worked for me as well
Either you're all lying or something is being overlooked.
>>107780360
better test with just og miku: super clean. I recommend 2511 edit, specifically the baked lightning lora comfyui version (you can still chain other loras and it works)
>>107780360there's enough mikuslop out there ffs can you gen some other character (not floyd)
https://files.catbox.moe/z9gdc0.mp4
>>107780383migu is a good test case, but sure
>>107780180works for me, the other anons must be trolling 100%
>>107780406
he / they were using the comfyui WF with an entirely different node: >>107780222
>>107780406nta but I also do not see shit in my model loader node. I have no idea why but it feels malicious at this point. Everything is up to date.
>>107780389Put the Doc's red vest on him.
>>107780202
Ignore that retard anon, I have a question. How is the nsfw wan clip model? I've never tried it because there were people complaining it no longer works, so I didn't bother with it. And that other anon is the reason this tech should be gatekept, because fuck retards.
>>107780411I think your setup might be broken then.
>>107780420honestly never noticed that much of a difference with it or not and did not test them side by side, I just started using it after I saw someone said it helped a bit with nsfw prompts
great native ltx2 workflow template comfy, thanks for your contribution
>>107780432man why is there so much skill issue today, maybe go back to genning on forge if this is too hard for you?
If you have a 5090 just use https://github.com/Lightricks/ComfyUI-LTXVideoits better
>>107780438i mean i know it's caused by VideoHelperSuite previews not being compatible but literally everyone has this node installed
https://files.catbox.moe/02p9ky.mp4
>>107780443
turn off preview in the settings
>>107780446well duh obviously. but i like having previews. and i'm not gonna uninstall VHS which will break my existing Wan workflows
should I install game ready or studio drivers??
https://files.catbox.moe/bono9u.mp4
This model doesn't work on sub 32gb and there is no way to get gguf working as of yet. Anyone telling you otherwise is a troll. As much is evident by the fact literally nowhere else is mentioning this.
To not generate any audio with ltx, do I just rip out all of the audio related nodes?
it crashes for me lool
>>107780469holy cope, learn how a computer works. post hands
>>107780472
24GB vramlets need not apply. Until someone releases ggufs I guess
>>107780452are you mentally challenged or something? Perhaps this hobby isn't for you.
>>107780479There is no point. It's obvious you're lying at this point.
>>107780481wtf is your problem? everything is working for me how is that "mentally challenged". i am just complaining about VHS previews being broken
>>107780486
i've been genning on my 4080 without problems, but keep telling yourself that im lying
retard
>>107780480>24GB vramlets need not apply.but he's running it on 24gb though >>107779430
apparently people got this working on 24GB by having claude vibecode in offloading:https://github.com/Lightricks/ComfyUI-LTXVideo
>>107780489Yeah but you don't need to uninstall, you could not use the node or disable it in the manager, it will be fixed within a few hours considering the model has past the first public quality test's.
I'm not crazy here right. The guy saying he's running this on 24gb or less is clearly taking the piss, right?
>>107780502>it will be fixed within a few hourslast commit on VHS is 3 weeks ago and taesd previews have been broken for weeks. i wouldn't get my hopes up
>>107780507it works on 4090 on Linux with 96GB ram at least
>>107780472>>107780497kek, so for the moment only the 5090 chads can run it? damn :(
>>107780442why is it better? i'm genning with the native workflow on a 5090 just fine
>>107780418
sure, was afk. im glad the new edit lets you reference nodes with image2/image3 cause it makes referencing stuff way easier.
the man in image1 is wearing the red vest of the man in image2.
LTX2 either has very good image coherence or it knows adolf hitler. It doesn't know göring though.
You can just use wan text encoderHoly my fucking balls it's fast with just rtx 3090
damn local is saved
>Decide to comb through the files to figure out the gguf issue>realize he is full of shit and the model loaders do literally nothing to accommodate ggufWhy did they go through so much effort to lie? It's confusing me.
>>107780472I had to increase the pagefile but holy fuck, come on dude
>>107780549
>50 posts early
>anti comfy shitpost 1st post
>no rentry warning
I shant be posting
>>107780544MASSIVE TIP USE EULER A!!!!the default one sucks ass!https://files.catbox.moe/dl4p73.mp4
>>107780549
>34 posts early
>this thread was cozy as fuck
>mods deleted your shitty bake >>107778962
yeah no
>>107780563You mean like this one? >>107778962 >>107779027
>>107780563>Announcing a report or 'sage'
add a watermark to this image with the text "LDG".
can remove watermarks but also add them (not that you should, but just to test)
for anyone who was having issues, activate your comfyui env and manually pip install -r requirements.txt for the ltxv2 nodes
>>107780508my uncle knows the guy, chill
>>107780596and make sure you have at least 96GB ram and / or a big pagefile. 64GB does not cut it
>>107780617even with that it won't be enough, with a 24gb vram card you OOM
>>107780617qrd
>>107780623it works on 4090
>>107780628proof?
>>107780631https://files.catbox.moe/xapucr.mp4
>>107780596
accelerate
diffusers
einops
huggingface_hub>=0.25.2
ninja~=1.11.1.4
transformers[timm]>=4.50.0
How are any of these going to lead to gguf models magically loading in the model loader?
>>107780633do a screen capture with obs with your task manager visible showing your 4090 loading and inferencing.
fyi the native i2v workflow template has a non-tiled VAE decode at the end which will OOM you even on RTX 5090 and 96GB RAM. you will need to switch it to the tiled one
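Tiled decode avoids that OOM by running the VAE over overlapping windows instead of the whole latent at once, trading a little extra compute on the overlaps for a much lower peak memory. The windowing idea in isolation, as an illustrative sketch (not comfy's actual implementation):

```python
def tile_coords(size: int, tile: int, overlap: int):
    """Yield (start, end) spans that cover `size` with `tile`-sized windows
    overlapping by `overlap` pixels, so seams can be blended afterwards."""
    step = tile - overlap
    start = 0
    while True:
        end = min(start + tile, size)
        yield (max(0, end - tile), end)  # clamp the last window to the edge
        if end == size:
            break
        start += step

# Cover a width of 10 with 4-wide tiles overlapping by 2:
print(list(tile_coords(10, 4, 2)))
```

The decoder then only ever holds one `tile x tile` patch of the output in memory, which is why the tiled node survives where the plain decode OOMs.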
1920x864 121 works on 4090 with FP8 model
>>107780549>bake nukedahah get fucked nigger
>>107780639>obsI use linuix and fuck off, this aint tech support
>>107780639alright fuck it, it is a 5090 I just mistyped, happy?
>>107780651Just chat until 310, we're set.
>>107780634but it works for me
>>107780650
>I use linuix
that's probably the reason, I'm on windows 10 and I can't do it, linux is known to be way more efficient at memory management
>>107780639people on the discord have it working on less
>>107780662im on windows and able to do it on my 3090
>>107780653Yes and no. Yes because I was right. No because I don't have a 5090.
>>107780665go back there
https://files.catbox.moe/46y2ar.mp4
>>107780670
smarter than you apparently
>>107780666I think I'll just wait for the ggufs at this point, at least with that you can offload manually and shit with MultiGPU
>>107780676wtf it's obvious the model has been trained on videos of him, it knows it has to do this one and it does it well
>>107780676but I can gen on 16gb, im not retarded like the other guy
>>107780676its video extension is fucking insane btw. It is GREAT at copying a voice. This is gonna get so many people in trouble
wtf
new>>107780632>>107780632>>107780632
>>107780689now that's a proper bake, thanks anon
>>107780688
for non-realistic try euler, I found it fixed the wonkiness
erm I can gen it on 8gb of vram? Skill issue.
>>107780689you linked the wrong previous thread (and that one is dead)
>>107780700I'm sorry. Can we make do with the early correction? >>107780703
Nobody here is running it on 24gb of vram btw. Don't let them troll you like that.
>>107780709
>>107779430
I was wondering why people seem to be struggling, and the only thing I can think of is that windows users are dealing with its horrible memory management while linuix chads are winning
>>107780707As the representative of all anons, I approve.
>>107780718they straight up state on the repo that the bare minimum LOW vram specs are 32gb of ram. I can't even get past the gemma model loading before it ooms on 24.
>>107780747>I can't even get past the gemma model loading before it ooms on 24.it should offload as needed, the clip should run on pretty much anything. Your comfy must be broke
>>107780747>not bits'n'byting the QAT gemma modelit's honestly cope on your side
>>107780753no senpai you dont understand, these retards use 131256312 custom nodes and loaders, and when shit OOMs they cry wolf.
>>107780747also wan fucking said 80GB was the minimum. They are always way over what you need. People with 24GB are using it
A screen shot of the model of your GPU and the console inferencing the model would at least put some doubt in my mind that this works on 24gb
Well, t2v is worthless it seems. SOTA i2v with audio is nice.
Sounds like a 24gb vram card plus 96gb ram is the minimum for a good time with ltx2.
Has anyone tried 16gb vram + 64gb ram? Probably gonna have to Q4 it, but I would be very happy if it somehow reasonably all worked at Q6
>>107781365
comfyui's implementation is broken, it does not properly offload. wait, or use https://github.com/maybleMyers/ltx
this thread is so fuckin confusing
>>107782044gemma is used as a text encoder retard
>>107780325
usually a 10s diff comes from changing the sampler/scheduler, that could be it
>>107779353in an alternate universe this would have been a 18+ game from back then