Discussion of Free and Open Source Diffusion Models

Prev: >>107780498

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>Z Image Turbo
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>WanX
https://github.com/Wan-Video/Wan2.2
https://comfyanonymous.github.io/ComfyUI_examples/wan22/

>NetaYume
https://civitai.com/models/1790792?modelVersionId=2485296
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>107780632ty bro
>>107780632actual previous >>107778862
Blessed thread of frenship
Morning. I can't waaait for LTX2 release!... I'm going to create so many obscure viya clips. Amusing only to me and maybe 10 other people! Hahaha!
JESUS CHRIST
https://files.catbox.moe/stp510.mp4 (nsfw, also ear rape)
>>107780709
>Nobody here is running it on 24gb of vram btw. Don't let them troll you like that.
I confirm, I can't run it on my 3090, I OOM. Will this model even be possible to run for vramlets? I think it's gonna be too big to be successful, and that makes me sad because it looks like a really competent model.
https://files.catbox.moe/46y2ar.mp4
anyone else having shit results with i2v? wtf is this shit
>The goblin is brewing cheese, stirring it quickly. The cheese explodes from the pot, covering the goblin and the whole room. The camera goes overhead to show the resulting mess.
>>107780723use Euler
>>107780716I can feel the cope, holy vramlet
>>107780729that is euler. it's the native comfy workflow which has euler preselected
>>107780721bro why
>>107780722
how much ram do u have? 128gb ram and 16gb vram and I can run it
maybe stop being poor?
>>107780723desu its very soulful
>>107780735hmm Im sure people will find better settings than his default ones as always, maybe just more steps for now, or more cfg?
>>107780737>16gb vram>maybe stop being poor?kek
>>107780744dpm 2m is better
>>107780723
I don't have any results because I only have 2x3090s and nobody gives a shit about people with more than one GPU.
>>107780766>bed next to wallyeah that's fake
>>107780732
What? What's that about? I have 32, is it not enough?
>>107780771anon this is the image gen thread
>>107780771my sons bed is next to the wall tho?
>>107780775proof?
>>107780771have you never been to europe? the average room here is like 15 square meters
This offload feature for ltx2, natively in comfy, is just the same shit we've had before right? Just updated for ltx2? Or is there some new magic shit?
>>107780781stacy is american
3d artists in shambles
>>107780749
dpm_2 sampler
>The goblin takes the purple potion from the desk on the right and pours it into the pot of cheese. The cheese expands into a giant purple foam in the shape of a frog.
>>107780793
>8.2 minutes
Just mail me the file via usb.
>>107780802
That is a lot better, prob would have managed the frog if you made it 10 seconds.
Oh, and apparently their official repo uses the distill lora at 0.6 weight, so comfy's 1.0 weight might be making it too fast / energetic.
>>107780812can you make it in less than 8 minutes you benchod
>>107780802you are asking too many things in too little time so the model doesn't have time to do them properly
https://files.catbox.moe/nxcklr.mp4
>>107780793
eh, it looks completely fucked up for everything with a defined shape such as windows. For the "bricks", yeah, you can put them anywhere on the wall in any shape and it'll look fine. Coincidentally, the model is 95% "bricks that can be in any shape and it'll look fine". The remaining 5% are kinda ass.
>>107780826yeah fair enough. OOMing on higher length though, maybe i will reduce resolution and test longer
>>107780833What gpu?
How come discord can figure it out?
>>1077808395090
>>107780833why not do f2f?
>>107780840if they don't give workflow then its a 6000
>>107780840>muh discordkys
>>107780847
>I can't possibly be retarded! Everyone everywhere must be lying for no reason!
https://files.catbox.moe/8be6eg.mp4
https://files.catbox.moe/nspwag.mp4
>>107780840soon, 16gb vram 64gb ram
12gb vram here it works, skill issue.
why does /g/ - technology have the worst diffusion generals
>>107780884we have the most tech illiterate anons on the board
>>107780858Nah dude, someone is definitely lying for shits and giggles here.
>>107780884its easy to bait tech illiterates
"the two women are happy and bounce as they dance around with each other."
>>107780899>People here on several different discords and reddit who have all posted gens must all be lying for shits and giggles!
>>107780902kino
>>1077808029 seconds
>>107780905Show me a person on reddit who's genning on 24gb of vram. I don't buy the discord line, but reddit should be easily provable.
>>107780908You can do 15 seconds at decent res on 5090, for 20 I had to reduce res
>>107780902
I think it heavily filters underage content. To the point of false positives.
>>107780884
Three extremely obsessed and dedicated schizos, plus several minor schizos and trolls. Other AI generals have trolls too, but typically an order of magnitude less deranged and disruptive.
https://files.catbox.moe/v9iw96.mp4
Is it normal for ZIT lora training time to massively increase with image size? I started trying it out with 512 and 768p image sets and it was pretty fast, faster than SDXL, but now with 1024 it's a lot slower. 12GB VRAMlet here btw
>>107780934
1024 will be 4x slower than 512 in general
also check that you're not hitting vram limits
>>107780934
of course. 2x the image size is 4x the pixels
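as a rough worked example: 512 x 512 = 262,144 pixels while 1024 x 1024 = 1,048,576 pixels, so every step pushes about 4x the data through the model, and for transformer backbones the attention cost grows even faster than the pixel count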
>>107780858>still no workflowfuck off
>>107780914
Yeah, I was first using a couple of scantily clad women and that's when the motion disappeared entirely, straight up broke, swapping to a new angle with no motion. So I changed to that image and it remained.
I know it's a tiny prompt, but it shouldn't produce zero motion. This shouldn't trigger some underage shit.
I was going to commend it earlier for having great prompt adherence.
>>107780944>>107780942thx
>>107780934>>107780942>>107780944why can't we train on small sizes and then upscale the lora?
>Teehee look! The gguf models of the text encoder work natively
>Teehee I got it running on my 3090 no problem idk maybe your comfy is just broken lol
Why the fuck aren't these people given a temp ban for being such a nuisance?
>>107780948
it's literally the built-in WF in comfy, no changes. Redownload comfy, you must have some conflicting node
>>107780955that is not ltx2 I can say right now, you are trolling
>>107780958lying should be a ban on discord
we now have a better local video model than cloud does
https://files.catbox.moe/me0iv6.mp4
>Needs 32GB vram.
I mean... I have the funds for a powerful gpu... I just don't think some funny video ideas I have are worth it. And I don't often play AAA games, so I don't need it for other shit. Eh...
>>107780908
>>107780981only if you got a 6000
>>107780983here, have the OG image
>>107780985I'm sure someone will implement proper offloading, like kijai
>>107780992and z base is coming out tomorrow
>>107780997kek
nvidia will save you vramlets with fp4 support next month
https://blogs.nvidia.com/blog/rtx-ai-garage-ces-2026-open-models-video-generation/?linkId=100000401205054
>>107780981it's too big, they have to go for reasonable sizes, 14b is really the limit when it comes to video models, I'm sure they can keep the quality at that size
>>107781007
yeah, people forget you need 20 loras to make videos work properly, so we need the extra vram
>>107780902
>>107780914
https://files.catbox.moe/ziwnai.mp4
https://files.catbox.moe/eckvtg.mp4
https://files.catbox.moe/3lmzz8.mp4
Am I onto something? Can someone else test this?
>>107780934
check if you're going overboard with vram
>>107781014>check if you're going overboard with vramyeah it's offloading if you mean that, but that's normal with 12GB right?
lol poor 4090 is overheating, this chungus is heavy
>>107781014try with anime >>107781016
>>107780914>false positives.that's a word I didn't expect to see on the local ecosystem desu, only API cucks were supposed to be filtered by false positives lol
>>107781040chinese culture
>>107781035
put a temp limit with afterburner anon :d
>>107781042
more like jewish culture lmao
>>107781044at this point everyone but us have made good video models
>>107781035Wtf, do you have a no airflow in your case or something?
>>107781054my case has poor airflow
>>107781035If any of my GPUs go over 75 degrees I begin sweating with the implications of what happens if they die.
it's for sure much, much better than wan 2.2, it's even a lot faster, and since it's a single model instead of the stupid 2-model setup it should be much easier to train for.
looks like 1280x704x241 is the max for 4090, 10 secs
>>107781075
>instead of the stupid 2 model setup
4 model setup
>>107780991useless bum didnt do anything
>>107780914
https://www.reddit.com/r/StableDiffusion/comments/1q5fn14/ltx_20_i2v_sucks/
the ledditors are also noticing it, if those false positives are too frequent this model is unusable
>>107781080shut up shut up shut up or fucking prove it.
>>107781075
>much much better than wan 2.2
qrd?
>>107781061
It sounds insane to run a 450W GPU in that case. Change your case, you can get one with decent airflow for less than 100 bucks. Even a 20-30 buck case will mog that shitty office case. Crazy to stick with it and gimp yourself after sinking 2k into the GPU.
>>107781097faster, better fidelity, knows a ton more, and of course the audio adds a ton
>>107781105but it don't work with hummies
>>107781105Oh and hard requires a 5090.
>>107781113
6000*
When will the chinks release a proper vram monster reee
>>107781113it works on 24GB on Linux at least. It EATS ram though. 128GB is almost not enough
>>107781075Can someone give us actual numbers like it/s on same resolution and number of frames with Wan 2.2 and LTX-2 instead of this just take my word bro BS?
>>107781128
1280 x 720 x 241 on 4090 is this >>107781035
I don't feel like loading wan to see a side by side, but I know it takes longer at full res.
Censorship for cunny killed the model, bad audio quality, too big for 24 Vram. This model is trash only China can save us. But Wan betrayed chuds.
apparently this is a new requirement if you use fp8 / fp4: https://pypi.org/project/comfy-kitchen/
If you don't have it, it will try to load at fp16. Maybe that is some people's issue
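If it's missing, something like
pip install comfy-kitchen
run inside the same venv/python that ComfyUI uses should pull it in; I haven't checked whether comfy pins a specific version, so treat that as a guess.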
>>107781138How did you get past the immediate oom when loading the text encoder?
>>107781145>bad audio qualityits better than most cloud models>>107780981
>>107781155
it doesn't? Though again, I have 128GB ram and it takes most of it, and I'm on linux. Maybe you are running out of ram
>>107781145
>we get this shit but no Z base
it's truly over
>>107781166
Just in corpo shit scenarios. For the rest it's shit, like webcam stuff, this model is too censored.
>>107781152I already have that package and it still OOMs on my 3090
>>107781182
every video model is censored, that's what loras are for. Wan was horrible at nudity, now it's the best nsfw video model
>>107781177
>it's truly over
don't lose hope anon, we have one more copium left
https://github.com/huggingface/transformers/pull/43100
>vramlets right nowhttps://files.catbox.moe/7w32kg.mp4
>>107781189a model that simply doesn't know the concept of nudity is not the same thing as a model that has been trained to refuse requests, loras won't fix anything, the censorship layers will still be there
>>107780957
you can train on small sizes and then finish off the last few steps at a higher resolution, which can sometimes get close to the same result.
with many base models you can also generate at higher resolutions despite training the lora at a lower resolution, though the lora may hurt fidelity somewhat (output isn't strictly locked to 512^2 just because you trained the LoRA at 512^2, since the base model still supports what it learned at higher resolutions).
>>107780884Since we can't freely express our libido in a blue board it gets converted into all sorts of odd behaviours.
>>107781197
the fuck are you on about, it can be trained like anything else, I2V gives it most of what it needs to know. Wan's dicks and vaginas were things of nightmares. You are legit just a vramlet coping
https://files.catbox.moe/oo2nb5.mp4
>>107781035everytime my 4090 goes full power I say "we gaan" and strap in
>>107780934
>12GB VRAMlet
You are probably spilling into RAM because you didn't disable sysmem fallback: https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion
>>107781189
Being censored on NSFW is one thing; lobotomizing the model to the point of getting this shit >>107781014 is another. That's what killed Stable Diffusion. It's more censored than the corpo Jew models.
>>107781173it jumped to 139gb for me on windows 10 lmaoo
>>107781220
>the fuck are you on about it can be trained it like anything else,
the llm fags are still trying to figure out how to remove the censorship layers of their models without killing them, not ready
>>107781234
nigger, that is him trolling or having some bug >>107781220
it does nudity fine with I2V, sex acts will have to be trained in like with wan. Wan's pussies and dicks were mutated messes
>>107781235yea so windows people are probably just running out of ram. I doubt most people have that much
>>107781243>that is him trollingno, even reddit is noticing it >>107781091
>>107781227Wouldn't that just make it impossible to train altogether? 12GB is not enough to fully load ZIT and encoder, isn't RAM offloading necessary?
>>107781250
you can get around it with enough ssd space so the pagefile can expand, the problem comes after: when it tries to load the model it simply OOMs on my 3090, as if there's no offloading mechanism or something
>>107781091
Vramlets rejoice! You can get it to work on 16GB vram with this
https://github.com/maybleMyers/ltx/blob/main/lt1.py
>>107781278forgot image
prompt master fag here, do you guys like the model? haven't tested it yet. hopefully it runs well on a 5090 + 64gb ram pc build.
here's some of my previous stuff
https://files.catbox.moe/w25bkt.mp4
https://files.catbox.moe/dplktk.mp4
https://files.catbox.moe/2gul5o.mp4
https://files.catbox.moe/9omlfc.mp4
https://files.catbox.moe/rh7r9n.mp4
https://files.catbox.moe/jcjkcx.mp4
>>107781283
>Gradio
Mods? This poster is giving me the ick.
>>107781278>>107781283it maxxed out at 50gb ram 8gb vram btw with those settings
>>107781286
>do you guys like the model?
only fags with a 5090 can run it on comfyui at the moment lol
>>107781243
I think you are just a kike shill... This is not the only report of this problem. Also, because the model is too big for the majority of people, it's too big to train usable LoRAs on. Wan 2.2 is the best NSFW model because there is a big market for it. But how many people have 32 GB of VRAM? 24 GB is the current standard. My problem is not VRAM; if the model is good enough, Kijai or someone else will eventually find a way to make 24 GB / 64 GB the minimum requirement; block swapping, offloading the audio part, etc. The problem is the censorship of cunny.
>>107781278
should i trust this or wait for the wan2gp update? i don't want to go through the hell of installing sageattention2 again.
>>107781302
>But how many people have 32 GB of VRAM?
there will be fewer and fewer of those people if the 5090 ends up costing 5000 dollars lmao
>>107781257
maybe, but using RAM makes it abysmally slow, if the speed is acceptable for you then yeah
>>107781312
They can charge whatever they want because their gaming market is basically a vestigial gesture of goodwill at this point. Any gaming supply they can't sell they can turn around and sell to a data center at a jacked-up price instantly. Vram is so liquid it may as well be printing money.
https://github.com/maybleMyers/ltx
proper offloading and fp8 support, because the comfyui implementation seems broken
>>107781324reluctantly going to try this in lieu of anything on comfy that isn't just a big fat nvidia paywall.
>>107781324>comfyui implementation seems broken
Does Wan2GP work with an rx 6950 xt on ubuntu or am i wasting my time if i try to get it to work?
add a repeating watermark over the entire image with the text "LDG".look at me, i'm getty images now. (can remove their watermarks with qwen edit too)
>>107781395flux edit better
>>107781283you can run it on as low as 8GB vram with this
>>107781283
You can run this on a macintosh btw
>>107781429fraid not, will never get apple a penny
kijai is saying a fix is to use the argument --reserve-vram 4 in comfy?
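If so, that's just a ComfyUI launch flag, e.g. something like
python main.py --reserve-vram 4
from the ComfyUI folder (adjust for your own launcher or .bat; the number is how many GB of VRAM comfy keeps free for the OS and other apps)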
>update to latest nvidia drivers
>ltx2 is 6 times slower
>>107781408Better how? Also, how's nudity on flux2?
>>107781457>kijai
>>107781511>>107781457Everything has always been about willpower, Kijai has lots of it.
>>107781511
he has led the charge in AI stuff for a long time and fixed tons of comfy's stuff so...
kijai is fixing it, hes got it down to under 20GB vram for fp8
he says use --reserve-vram 4 the dynamic latent estimate is off
>>107781457I don't OOM anymore but I got an error lol
>>107781551based, KJGod is always here to save us
>>107781551if he can fix the ram usage it would be cool as well, it's getting ridiculous >>107781235
>>107781551Where this nigger posting
>>107781551Sadly he cannot solve the cunny censorship
the speed is fucking insane, he is showing off realtime generations on a 4090
8/8 [00:07<00:00, 1.00it/s]
4090 speed, 7 seconds for this
https://files.catbox.moe/43ap5m.mp4
>>107781551
>>107781635
what black magic did they use to make it so fast? goddamn
>>107781635WTF?Is this FP8/NVFP4?
Where can I see him doing this?
>>107781648fp8, 8 step distill
and apparently the temporal glitches are fixed by using the temporal upscaler that comfy's WF did not use
>>107781635>still 5 secondsIt's over
are we really going to live in a world where it takes longer to generate a single image than a whole video? wtf is this dimension
>>107781666
? it does up to 20 seconds
He says update to nightly and update the frontend and templates packages too
>>107781575bruh...
>>107781705disable previews in settings. it's caused by VideoHelperSuite previews
now he's saying to stick the thing to the left of you up your ass.
>>107781711done. what next?
720P now only takes 3GB vram. Its not lighter than wan is
>>107781724
>not
*now*
now it's actually expanding my vram. My 3090 just turned into a H200
>>107781551how can someone be so powerful
this shit is magic. How is it almost real timehttps://files.catbox.moe/udonxw.mp4
Guys this is all sounding a little too good to be true.
>>107781745>it was a hat not his hair
>>107781575I'm getting this one too. But a new error is preferable to the same one over and over desu.
wtf guys it's using negative VRAM now my VRAM is growing now
he got fp8 working with this btw if people dont want to wait
The prompts... they're executing faster than real time. Generating before I even prompt them. I just saw a generation of my own death.
>>107781794link?
>>107781798the bandoco discord
>>107781709
thanks anon, now it's finally running, I got this speed on my 3090 (the result is garbage but it's probably because I didn't use the lightning lora or whatever)
>121 frames
>1280x720
>20/20 [01:38<00:00, 4.93s/it]
>>107781806yea use the distill model OR at least the distill lora. Distill model is better. 20 steps is not enough without
>>107781798
In ComfyUI\comfy\ldm\lightricks\embeddings_connector.py you replace
>hidden_states = torch.cat((hidden_states, learnable_registers[hidden_states.shape[1]:].unsqueeze(0).repeat(hidden_states.shape[0], 1, 1)), dim=1)
with
>hidden_states = torch.cat((hidden_states, learnable_registers[hidden_states.shape[1]:].unsqueeze(0).repeat(hidden_states.shape[0], 1, 1).to(hidden_states.device)), dim=1)
the only change is the .to(hidden_states.device) at the end, which moves the learnable registers onto the same device as hidden_states before the concat, so it doesn't die on a device mismatch when parts of the model are offloaded
>>107781812>just hack your comfy
>>107781822>just fix comfyuidevs dumb mistakes like usual
>>107781806now I know why it's so "fast", I didn't use the upscaler and I asked for 1280x720, it's making the video at 640x352 first then does the upscale, bruh that's fucking cheating c'mon
he says comfyuidev was not done. Previews, fp4, temporal upscaling, and the prompt enhancement stage are not in yet
>>107781812oh my god it's doing the steps.
>>107781837just wait 2 weeks
>update comfyui to stable
>install requirements
>still bugged
lmao, I can't even update comfy anymore. after I try to update it just auto refreshes the page and won't load. can't even find anything online. only way to use it is to dick around with a fresh install ffs
LTX 2.0 loras when?
:'( forge neo chads....
>>107781866we can't even run it yet fuck off
>>107781872lol lmao even
if you prompt ltx for a man singing about horses, it will always generate a video with indian men and a bollywood songwonder what the training data was..
>>107781872
with rivals like that, Comfy can make a thousand more fuck ups without being scared of being dethroned lmao
>>107781883proof?
>>107781883come on saar you have to share that video at least :v
>>107781883DOA
>>107781892
>>107781897
sry catbox wasn't loading for me. works now. here you go saars:
https://files.catbox.moe/d7l6uf.mp4
>>107781875it runs on 16GB vram now, maybe even 12GB, use --reserve-vram 3 and this fix >>107781812
>>107781911proof?
>>107781806>>107781833so it's just making low res videos and then upscales them? lmao what a scam
>>107781907AHAHAHAHAHAHHA
https://files.catbox.moe/fnp2b2.mp4RTX 3090 around 200 seconds
>>107781920https://discord.gg/zmj4ubBu
>>107781929wan does the same
>>107781934prove it by posting a video moron
>>107781935absolutely not, you can go for 720p natively
>>107781933
4090's native fp8 makes it like 5x+ faster than 3090 sadly
>>107781943
ltxv2 can do 2k upscaled to 4k, just increase it
>>107781952
>4090's native fp8 make it like 5x+ faster than 3090 sadly
I don't like fp8 though, the quality isn't on par, I prefer to go for Q8; even if it's slower it's more consistent
prompt:
>A group of men are singing about horses. They dance in a circle and praise the God of Horses.
https://files.catbox.moe/36n9fx.mp4
ok, hear me out. z-image turbo Turbo
so before I download this shit, fp8 distill is better/faster?
>90-100 seconds for fresh prompts
>45-60 seconds for regens
god damn I love my 5090
>>107781964lmaoo this is good
>>107781972yes, the non distill is just for training on later really
>>107781972yeah, don't make my mistake and go directly for the distill model >>107781806
>>107781964lol indians are going to have a blast with this model, too bad they can't run it
>Started download of LTX 2
>Sometime later switches to vpn to use civitai
>clear cache
>accidentally the whole LTX download
>pic mfw
I fucking hate my shit country and knacker parents that can't monitor their child's internet access god damn
>>107781995
>vpn to use civitai
sus
>>107781911--reserve-vram 1.0will be enough, setting it to 3 would be a waste.
>>107782000welcome to bongland. we're blocked from civitai
>>107782007chat is this real
>>107782000it block uk users twerp fuck off trolling, i bet its you shitposting
>>107782012
>>107782021lol
>Still no video of hot girls in small clothingExplain why I should care about this
>>107781964lmao. at least the motion is very good.
>>107782032anon i got bad news >>107780902the model is censored
>>107782037The fuck? It's literally been trained to ignore inputs that use women in even slightly tight clothes? The hell is that even possible?
>>107782037
Yeah, that's all I needed to know. I don't get why we're even discussing it then
>>107782045unless the woman looks like a butch lesbian it won't animate her
>>107782021
>>107782045i suppose they could've trained it on a billion synthetic pairs of hot_woman.jpg + still powerpoint zoom in slideshow
>>107782045
All these models are trained on huge image/video datasets paired with accompanying captions. You can teach a model to "ignore" certain inputs easily, e.g. for video:
- the dataset contains completely static videos of women in bikinis / nude women / whatever you want to censor, or videos that slightly zoom in but otherwise have no animation/movement
- the accompanying captions describe various actions
- the model learns that even if the user prompts an action, the output for input images similar to these should always be static
ez.
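Purely as a hypothetical sketch of what one of those pairs might look like (filenames and caption are made up, not from any real LTX dataset):
poisoned_pair = {
    "video": "bikini_woman_static_zoom_0001.mp4",  # hypothetical clip: no motion, just a slow zoom
    "caption": "a woman in a bikini dances energetically on the beach",  # caption still describes an action
}
# feed the model enough of these and it learns "input that looks like this -> near-still output",
# no matter what action the prompt asks for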
>>107782045kek, fuck gooners
>>107782045>>107782076this is a good thing, you shouldn't be able to gen bad things
>>107782076Only downside for them is it's piss easy to undo all of that with a simple LoRA.
>>107781976>>107781985so which is true
>>107781745That specific voice emphasis you hear in so many of these generated videos is the sameface of voice generation tho
>>107782081>this is a good thing, you shouldn't be able to gen bad thingsit's a jewish model all right >>107781044
did i download some special saar edition by accident?
>ltx 2 needing a bare minimum of 32gb of vram to run was a flat out lie, and without any quantization beyond the ones offered by the model maker it runs on 16gb of vram with 2 lines of code
What the fuck?
>>107782094they both said the same thing, that you should use distill for inference and the base model for training
>>107782104You people don't get it. Most humans on earth are indians, therefore any dataset that hasn't been sorted is going to contain millions of indians
So israeLTX has censored their model? kek, I'll pass
>>107782111is that whats going on here? Too lazy to read
>>107782104
>Automatically added a man to the 1girl prompt
Is this the most cucked model ever released?
>>107782104
you'd think a model made in jerusalem would be ok going for jewlywood but they scraped most of bollywood instead lmao
genning cunni is just too much fun
serious question. what does nsfw on ltx2 look like?
>>107782111that is fp16, fp8 runs on 24GB
>>107782131powerpoint
>>107782131it doesn't
>>107782136
really? I couldn't get it working on my 3090, but I saw in the prior thread an anon got it running on their 3090 with some witchcraft
>>107782131same as wan did when things first started, it does not know genitals. Gonna need to wait a few weeks / months for loras
>>107782124It looks like it's trained on Bollywood movies. That's the bias.
>>107782045
>>107780902
So this is the pozzening they were doing in December. Since this got an official Nvidia release/announcement, I bet they weren't happy with anons genning little girls in bikinis using cosmos video when it dropped on their site last year.
>>107782076
>>107782081
This will be abliterated away, I'm literally not even slightly dooming. If anything it fills me with inspiration, because overcoming this challenge will dab on both the censorship kikes AND the antis all at once. Literally you just need to figure out the pattern of "video that is just a static image" and excise that.
>>107782123>>107781812
>>107782151
bollywood has plenty of 1girl scenes
t. indian
What model is the current meta/hot thing for anime gen? WAI v16? NetaYume Lumina? Rouwei?
I have been out of the loop for a couple of months. Can animegen do context prompts well yet?
>>107782145
scroll up, kijai fixed a line of code that makes fp8 work, which you need to apply yourself if you don't want to wait for comfyuidev to wake up, AND you probably also need --reserve-vram 4
>>107782148lies, you could generate tits and make women move on video with wan, LTX1 sucked and I don't know why people expected anything better from this model lol
>>107782156Probably not the training data saar
>>107782154>he thinks video models can be ablitedlol
>>107782163thank you i wish nothing but the best in your life
>>107782165ltx2 can do "make woman move on video" the fuck are you on about? Nipples / pussies just need to get trained in
>>107782158z image turbo
>>107782145
>prior thread anon got it running on their 3090
It's true now. But the people in the previous thread were straight up maliciously trolling.
the fact that there is a home model that does audio opens up infinite meme potential, i2v is good but you need audio too.
supplying shitty vague prompts kind of reveals the underlying bias. i'm guessing bollywood + indian youtubers based on first experiments
>>107782131I just said he walks into the frame wtf
>>107782158Rouwei was never hot, it's a hidden gem. NetaYume was never hot but it's still very undertrained and slow too. WAI is probably it. Noobiething is too raw and slow and for now useless.
>>107782177
what women anon? women wearing burkas? or 3d animation shit? obviously you haven't been following this thread
>>107782187indians are the most populous nation on the planet and they all have phones. They probably have more videos than anyone else.
>>107782165
LTX1 sucked but it was incredibly fast to gen videos on, and they delivered in that regard with LTX2 as well. Remember that wan 2.2 wasn't the first version of wan either. And ltx has already said they're gonna fix the audio part at least.
>>107782173
Any model can be unpozzed. I didn't mean literally the abliteration method. A lora is good enough
Can it gen fat women brapping into buckets in lush fields full of mario fire flowers?
>>107782203
>women
no
>>107782192
>vramlets can't make their own cyclops pepes
it's over for them
>>107781014
the frozen still frame errors have always been an issue. the model can do children through img2video. never tested it with text2video. I hope the faggots didn't censor it with the open source release, because I tested through their api weeks ago.
https://files.catbox.moe/01nimi.mp4
https://files.catbox.moe/qt8m1a.mp4
https://files.catbox.moe/z4dffm.mp4
https://files.catbox.moe/ygq673.mp4
>>107782202
Anon, you obviously have a bad memory, because Wan 2.1 was the first open release and they killed BFL, Hunyuan, LTX and every open-source competitor with that model
>her face wearing a faint rose blush
>zimage: ima give her a Baboon's butt
every single time
>>107782226proof?
>>107782215>I hope the faggots didn't censor it with the open source release because i tested through their api weeks ago.welp this faggot is the reason why the model is censored
>A white european woman in a red dress, dancing in an empty ballroom
ok this model is completely saarmaxxed
>vramlets itt overdooming about a model they can't run
>>107782215
>I hope the faggots didn't censor it with the open source release because i
They literally said on discord that this is explicitly what they were doing and that's why the model was delayed to 2026
>>107782220
No actually, genmo mochi and hyvid1 came before wan
>he thinks the open source model is the same as their api model
lol
>>107782215this is ungodly cute
Maybe I'm trippin but t2v with Hatsune miku in the prompt does not give me Miku.
>Error running sage attention: Input tensors must be in dtype of torch.float16 or torch.bfloat16, using pytorch attention instead.
do you guys also get this on ltx2?
>>107782239
>They literally said on discord that this is explicitly what they were doing and that's why the model was delayed to 2026
based
I disabled the VHS nodes, but I'm still getting
>mat1 and mat2 shapes cannot be multiplied (466048x1 and 128x3)
do I have to completely disable the native latent previews?
when ready get bready
>>107782219
>>107782219
>>107782219
I'm using the qwen 2511 workflow and neither the 20-step nor the 4-step version is randomizing the seed, I can't even do batch gens, how can I fix that?
>>107782237
Text to video is dead on arrival. Someone needs to make an LTX2 workflow where the initial image is made in zTurbo and then given to LTX2 for I2V.
>>107782245
I accept your concession.
>>107782239what discord? never saw this on their official ltx discord channel. post link or screenshot.
this was a cozy bread ;)
Definitely something weird going on with i2v for me. It skips the load-to-ram process entirely, loads 3gb onto my vram, then just crashes without an error. Works fine for t2v otherwise.
Only flag I'm using is --reserve-vram 4. This is with the comfy native workflows; both the ltx-provided workflows get a dimension size error (I've already turned off previews) so I don't think it's that.
https://files.catbox.moe/br2l3t.mp4
Wow, you really can just pump out slop with this model.
>>107781897
>:v
Get the fuck out of here tourist zoomer brownoid
>>107780785if it's anything like the flux 2 offloading "feature" you're better off disabling it and using distorch2
>>107781189
i2v_14b_scaled with light loras isn't censored. Fuck me, I didn't expect that.