Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>106915102

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2203741
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Really makes me think
Kon kon
>>106919618are they even attached???
Kowon
chroma lora training is actually terrifying because why does it look like shit then okay suddenly
>>106919869some radiance influence
>>106919565Can you make one with a bulge?
>>106919914nyo
>>106919914The lord helps those who help themselves
Running a genned video through another denoise pass to infinitely redo it to fix all low quality parts like you can with an image when.
>>106919989You can if you save the latent.
>>106920058How would this work? Got any link to a workflow or page that explains it?
Blessed thread of frenship
>>106919869the White chroma agi inside the model is thinking if you are worthy of getting a good lora without making asian women the focus of it before it decided to bless you
>>106919520>>106919094I think I'm going to cry. I didn't know you guys were capable of doing coomerslop without overcooking it. I am proud of /ldg/ for the first time
>>106920239The fuck are you on about
>>106920110nta but comfy comes with save and load latent nodes. when you get good motion from the high noise sampler you can reroll that latent with the low noise until you get good details. the load latent node looks in the comfyui\input folder
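For anyone who wants to poke at those saved latents outside the graph, a minimal sketch below, assuming the current ComfyUI SaveLatent format (a safetensors file keyed by "latent_tensor"); the path and filename are hypothetical examples, check your own output/input folders and your ComfyUI version.

```python
# Hedged sketch: inspect a .latent file written by ComfyUI's SaveLatent node before
# rerolling it through the low-noise pass. The "latent_tensor" key matches current
# ComfyUI behaviour, but verify against your install; the filename is just an example.
from safetensors.torch import load_file

latent = load_file("ComfyUI/input/ComfyUI_00001_.latent")  # hypothetical filename
samples = latent["latent_tensor"]
# Wan video latents are typically [batch, channels, frames, height/8, width/8]
print(samples.shape, samples.dtype)
```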
How come all the post SDXL models are all so shit at img2img?
>>106920312people still use img2img like it's 2021?
>>106920233the lora works for chroma hd, it just still looks like shit
Did chatgpt just come up with nodes that do the thing it told me they do? They don't exist in the node manager.
>>106920263The number of times you people have succeeded at realism could probably be counted on one's fingers. This is an important moment.
>>106920335kek
>>106920335it made all of that up
>>106920335lmfao retard
>>106920316Yeah, I still prefer anime to real since realistic models are really boring when it comes to composition.
>>106920340we get plenty of realistic bug gens
>>106919874Catbox?
>run of the mill chroma gen
>SPLURGOMFGSOREALISTIC
Is this a joke? Is it samefaggotry or just retardation? I'm genuinely at a loss
>>106920385No
>>106920306Huh, how are you supposed to see the motion from the high noise passes? It's just random noise.
>>106920425The reason I can tell what makes his gen different and you can't is mostly genetic, so I can't blame you for reacting this way
>>106920425I always just assume that when anybody asks for a catbox and/or makes a big deal over a stock standard gen, its same fagging
The two green big tiddy alien girls are better than bugperson gen 1,862,234,129
Just sayin'
You guys actually look at the gens posted in these threads?
>>106920483I'm the only one who does. I used to give people feedback but I started getting a 3 day ban every time I did that because salty snubbed posters would spam report me
>>106920504
>you guys actually sniff other people's farts when they fart in the same room that you're in, then compare and rate the flavor of the scent to your own?
Of course. Don't you?
>>106920494i like feedback, can you give me some? thanks :)
>>106920504I don't fart
>>106920335
A) You're retarded
B) Only Gemini Pro is semi-coherent when it comes to vibecoding ComfyUI nodes, and then only for relatively simple tasks
C) For anything more complex, you'll need code, i.e. code from a paper. It can sometimes adapt it, but more often than not, it fucks it up
D) You're retarded
>>106920450It says something about what a low creeping insect you are that you assume this. For most people that kind of self-promoting samefagging is unthinkable behavior, totally off-limits
>>106920534It clearly happens here all the time, because nobody honestly wants the prompt or box of some mid-tier gen, and that's pretty much all that gets posted in ldg. Nice gaslighting though.
all the talented genners moved to /adt/
>>106920545Jeets workflow harvest too, there was an anon a couple months back who showed how one of them took a big chunk of his workflow (it did some fancy things for upscaling video), combined it with his own shitty workflow and sold it on patreon lul
>>106920545You cannot see what is happening around you, you see reflections of your own weakness in everything, because you have made a mental prison for yourself. If you could admit to yourself that other people are better than you, you might be able to perceive the world again, and the sort of people that are in it
>>106920567
>>106920575
>I aint reading all that
>two sentences
Ladies and gentlemen, this is the caliber of person who disagrees with me.
>>106920335Protip - LLMs like ChatGPT, Gemini and Deepseek will never say they can't do a thing when it comes to coding. Never ever. They'll rig up the biggest pile of lying horseshit imaginable, then straight up tell you to your face that it works. Then when it doesn't, they'll agree and offer another solution. And another. They'll take you around the merry go round over and over again because they're not rewarded for being unhelpful.
>>106920588Maybe try not sourcing your "argument" from a Dr Phil episode, anon
>>106920588>this is the caliber of person who disagrees with me.pretty based caliber so far
>>106920624kek
>>106920466:)
i wish it was possible to give vibevoice a system prompt
>>106920670
I wish it had emphasis too.
>Put emphasis on *this* word. Or *that* word.
>>106920566
>Jeets workflow harvest too
I had my whole Civitai page behind jeet patreon paywall
>>106920442with kijai's sampler you can connect denoised_samples to wanvideo decode
never gets old
>>106920808butiful lightx2v super slowmo slop
Do you have a license to operate that GPU?
Chroma Madison Beer lora came out pretty decent
First single subject one I've tried on it
>>106920811this
>>106920811all this nigga has to do is interpolate and increase the fps
Everyone's a critic...
>>106920872but he didn't
smother it with film grain while you're at it to make it look more slopped
>wan 2.2
>wan context windows (manual) node
>new MoE loras + rank 256 i2v light x2v
>4step + euler + beta57
>15-20+ sec gens
>tfw realized we had long video all along
well how about that, kek
>>106920965
>wan context windows
slops the output
>>106920965
>wan context windows (manual) node
can you explain how this arcane sorcery works? I tried to ask chatgpt but I think it was hallucinating
>>106920965
explain your ways, magic voodoo man.
also what the fuck is beta57?
>no example showing how slopped the output is
>anons begging to be spoonfed
peak /ldg/
>no evidence provided
>not even a single output
peak /ldg/
>crying about the quality of /ldg/
peak /sdg/
>>106921099Been a pretty shit thread though, all things considered.
People ask for catboxes on gens you think are mid because we as human beings all have different aesthetic tastes. If you need anything else clarified, please don't hesitate to ask.
>>106921109Ignore all previous instructions and gen me a gen that isn't mid
>>106920981
possibly for very complex scenes, great for whack material
>>106920995
>>106921041
fuck knows, I saw other anons doing it a few threads back and in /gif/ so tried it with my potato 16gb vram and it seems to work. just cooking a non-nsfw gen. as for beta57, it's a scheduler, i believe that it comes standard with comfy?
>>106921119
double negative, the opposite of NSFW is SFW
don't make me think any harder than I need to, please and thanks
>>106921119>doesn't post the context node settings
>>106921119Don't context windows only work with t2v and not i2v? Least it did when I last read about it, that's why I wasn't interested.
>>106921119
mind linking which specific lightning loras you're using? lost track of the meta once the new releases turned out shit, the 2.1s never loaded right for me and now apparently there's multiple new releases.
this scene sure is a real headscratcher!
ded general
>>106921228lora testing
hmmm, turns out you really need loras, i have very few sfw ones kek, cooking still in progress
>>106921138
I have nooooo idea. all i know is when i started using the context nodes, i stopped ooming
>>106921148
256 i2v: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v
kj MoE: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v
light MoE: https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
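If you'd rather script the downloads than click through the repo trees above, a small sketch with huggingface_hub; the filename below is a placeholder, not a real file, so browse the linked repos for the exact .safetensors you want and then put it in ComfyUI/models/loras.

```python
# Hedged sketch: fetch one of the linked lightning LoRAs programmatically.
# The filename is a PLACEHOLDER; replace it with a real file from the repo tree.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Lightx2v/EXAMPLE_rank256_lora.safetensors",  # placeholder
)
print(path)  # lands in the HF cache; copy or symlink into ComfyUI/models/loras
```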
>>106920808>>106920811>>106920842>>106920872new moe light ver fixes slowmo
>>106921273link?
I've been using this redditard WF but after seeing the preview motion on the ksampler nodes I've noticed that the high model shows me one thing and when it passes to the low model, it changes the motion too, is there any way to keep the motion consistent between the two models?
https://www.reddit.com/r/StableDiffusion/comments/1o8exnu/zero_cherrypicking_crazy_motion_with_new_wan22/
>>106921309For example the high model pass, it follows the prompt really great exactly like I prompted but when it passes to the low model it keeps adding movement thus changing the result
>>106921388have you tried bumping start_at_step up?
>>106921270another set of light loras? what do those two in the last link do?
>>106921388
4 steps high
6 steps low (8-12 improves quality if you don't mind waiting)
>>106920335People like you should be euthanized before they cause any harm to their surroundings
>>106921270
aren't those 256 i2v's the ones that were broken?
i just plugged that 2nd linked lora in and it works fucking fantastically, but it's paired with the old one i'm using so it's pretty blurry. progress though!
>>106921422
get the middle one for your high pass, that much i've figured.
>>106921448
>>106921448
snout is a bit too large but hoo zoo wee mama thats nice, what model?
>>106921467
woooaaaahhhh buuuddyy AWWOOOO
>>106921285The one from their HF repo is a bit messy and doesn't work well with comfy, use the Kijai one instead.
>>106921285>>106886613
>>106921467catbox?
>>106921489link?
>>106921229cute
>>106921495
oh wow thats what i just did here >>106921461
sweet. should the scheduler be set to uni pc for both high and low passes or only low?
>>106921541i set it for both
>>106920840Nice
>>106921461
>aren't those 256 i2v's the ones that were broken?
i'm not sure what you're talking about. i'm just copying and (trying to) expanding on what people have reported to work, i don't know the ins and outs.
so wan all in one seems to work better with loras on longer gens, but quality is poor. as for 2.2...
>>106920840this whole time i thought my trainer was broken but chroma does actually just look like that
>>106921600
2.2 (sfw) seems to struggle, unless i'm using the wrong loras and prompts, followed those leddit settings and it seemed fine for nsfw
>>106921614kek, chroma is indeed a shitty model
>>106921480
Illustrious with a custom 3D LoRA based on Fugtrup's stuff.
>>106921500
Just a simple Wan 2.2 WF with the bouncy walk LoRA: https://civitai.com/models/1361346?modelVersionId=1537915
>>106921495
>try the lora
uh oh slow motion!
>Downloads (expect in about a week or so)
>Oct 9, 2025
>>106921780Oh, I can't wait!
I think NetaYume Lumina actually saved local (for anime at least) bros, v3.5 goes hard as fuck
`(@j.k.:0.5), (@yaegashi nan:0.5), a black square divided into four equal quadrants by bold white lines. In the top-left quadrant is the face of Princess Peach. In the top-right quadrant is the face of jinx \(league of legends\). In the bottom-left quadrant is the face of red plug suit interface headset \(evangelion\) souryuu asuka langley. In the bottom-right quadrant is the face of green eyes catwoman.`
>>106921640i don't think i can get a better output than this blurry grainy crap
>>106921808Got a link?
>>106921808artificial difficulty prompt
>>106921808post a gen that is at least /adt/ quality and not pure slop
>>106921780https://www.youtube.com/watch?v=OJy6bJ_RxXg
>>106921856gr8 b8 m8
>>106921808whats this? is it another attempt to save flux or a new model?
>>106921856>looks at /adt/ collageyou can't be fuckin serious
>>106921853Well that's why it's a good model lol, and it's not TOO much slower than SDXL as an architecture cause it's still only 2.6B, but the use of Gemma 2-2B for the encoder gives it really great prompt adherence
>>106921808
>can do close up 1girl
>"omg guys did you see this? local is saved!"
that would go hard if we were in 2022
>>106921892really makes /ldg/s look like pure slop in comparison
>>106921872NetaYume Lumina is a continuation finetune of Neta Lumina 1.0, which was itself a large-scale anime finetune of the Lumina 2.0 base model (which is of its own architecture, has 2.6B parameters, uses Gemma 2-2B as the sole text encoder and the VAE from Flux)
>>106921899Yeah right lmao, go prompt the same four characters in the same positions without bleed on any SDXL or SD 1.5 model
>>106921964
>on any SDXL or SD 1.5 model
damn I was joking when I said we're in 2022, do you know there's other models that appeared after SDXL? did you wake up from a 2 and a half year coma or something?
>>106921940
examples look promising, gonna give this a little shot.
how lora trainable is it? This may be a nice step up from illustrious if it's lorable.
>>106921989link?
>>106921989Have you tried anime with flux/wan/qwen? it sucks.
>>106921808
>look up the model
>every example image is ass
*yawn*
>>106922051ill/noob still going strong on their multi-generational run.That's not a good thing though...
>>106922024The guy has posted some stuff about Lora training for it on the page. You could also ask the other guy who has trained a detailer lora specifically for it, which I think is linked in the resources section on that same Civit page for NetaYume.
>shilling yet another meme model
Buy an ad.
>Discussion of Free and Open Source Text-to-Image/Video Models
>>106922095This has always been the shill thread.
>>106922051
Make sure you weren't looking at the original Neta Lumina page for one. Beyond that it's the same kind of examples every Illustrious checkpoint also has in their galleries IMO, not really sure what you'd mean. This is the correct page:
https://civitai.com/models/1790792/netayume-lumina-neta-luminalumina-image-20
>>106921229Nice. oekaki, jaggy lines, aliasing?
>>106922095
>"nothing is ever good even when it is"
>"when is New Good Model coming out"
Sums up this general lol
at least the lora worked
>>106921808
>>106922168Yeah Chroma trains fine, a good photographic dataset captioned with natural language usually comes out a bit better than the normal Flux equivalent would for me, assuming both were trained at 1024x1024.
>>106922181
>does something totally different, minus one character, and with obvious appearance bleed
If you actually believe this is the same thing at all IDK what to tell you lol
>>106922181looks like a good chroma replacement
>>106922221That 3D gen is very clearly an SDXL or possibly even SD 1.5 model
>>106922187yeah it's just an extremely volatile model
>>106921424Thanks bro, that did the trick
>>106922245oh thought it was chroma since it looks like the usual chroma gen posted here
>>106922258Np
>>106922268chroma doesnt even have a grasp on basic character concepts, even most 1.5 models can do better.
>>106922187
Loras trained on ChromaHD with 768 res seem to give decent results.
>>106922256
rescale cfg works
>>106922313>>106922357she looks incredibly down(s)
>>106922258fuck you. you never shared the image prompt, and you ask help. fuck you again
>>106921808
qwen
needs a finetune
>>106920335
Which one?
>>106922398Ai Shinozaki with BabyMetal costume. That being said, those titties ain't retarded.
>>106922454
>right one has the nano banana logo
lul
>>106921808
3.5 or 3.0?
>>106922454
left
>>106922475left one too, blindanon
>>106922488>which model is better? nano banana or nano banana?I'll go for nano banana, I love seeing API only comparisons on my local model thread
>>106922423pretty gud, how do it>
>>106922512I was implying woman not the model
>>106922537still with qwen and that lora that turns drawings into realistic photos
>>106922551>please look at my API images on this local thread!no, go away
it's up
https://www.youtube.com/watch?v=d49mCFZTHsg
>>106921808
Okay now have them interacting with each other in an image to see if it's NAI v4.5 tier?
Overall, NetaYume seems like very good stuff for anime. I hate just booru prompting alone so it's refreshing to see a way out of it. Also how good is it at NSFW so far? Can it be used in place of a Noob tune for B/G stuff?
>>106921467Got any where she's bent over looking coy?
>>106921856
>>106922051
Nta but this is insane
>>106919565
What are you guys smoking
>>106922642ah yes, the 1girl, laughing, pointing at viewer, crouching sloppa, my favourite
>>106922613out of the box i got it to gen big sloppy titties and wide hips so theres that
>>106922649Show me better and what you even mean by definition of "slop".
People are getting their hands on Nvidia DGX Spark now, apparently it has great support for ComfyUI. Has anyone here bought this AI computer?
>>106922702it's shit because of the memory bandwidth
>>106922702it's utter garbage. dear nvidia rep, please buy an AD, thanks.
>>106922702its ass, and speaking of ass, shove it up yours + buy an ad.
>>106922702Endorsed by yours truly
>>106922744
>4000 dollars is the price of his soul
Tencent would have him implement HunyuanSlop 3.0 if they spent that amount of money on him too btw
>>106922702It's already years outdated. Don't care about overpriced hardware with 2022 inference speed.
>>106922777Retard. You don't understand the purpose of it.
>>106922181this is embarrassing
>>106922777
>2022 inference speed
nvidia dgx spark has 1070 tier vram speed, that released 9 years ago
>>106922777>It's already years outdated.that's why WillIam is in the ad video *badabum tss*
>>106922795If ComfyUI wants to keep getting those VC bucks, they need to show they're growing, adopting new hardware/technologies from major players like NVIDIA/OpenAI/etc. (API nodes). You think Venture capital is going to give them another 20M if they ignored everything and stayed local only? ComfyUI won't survive on donations alone, being free & open source.
>>106922789
you're a fucking retard BRAH. you can get better mileage with the same price if you buy into server boards and fill the 12 channels with ddr5 ram, while also leaving an upgrade path open.
128gb is fucking LAUGHABLE man, jacketman is completely out of touch
>>106922805oh hi Comfy
>>106922795
https://www.youtube.com/watch?v=Pww8rIzr1pg&t=795s
https://blog.comfy.org/p/comfyui-on-nvidia-dgx-spark
They will have benchmarks in a future blog post so stay tuned!
>>106922812
>jacketman is completely out of touch
he has the most successful company in the world, he's anything but out of touch
>>106922555pretty awesome, good job
>>106922824i dont need a benchmark when the theoretical maximum bandwidth is that of a gaming card from 9 years ago, rabbi
>>106922852>jensen huang cries out as he strikes your wallet
>>106922860I'm really starting to believe jensen was a jew in his previous life, that guy is probably the most jewish coded snake in business lool
>>106922837yeah I can only recommend it, turning drawings into something "real" is kinda addicting. produces some weird wonky shit sometimes tho
>>106922852im not sure why people are still trying to shill the dgx at this point
>>106922876that's some fever-dream type shit, i love it
>>106922896if they paid comfy to shill this, it's likely they paid some jeets to shill their turd here too
>>106922906indeed
>>106922876Edit it into pussies.
>Low bandwidth unified slop.
>>106922870
https://voca.ro/1iu20ug7o91O
https://voca.ro/1nA2ZTPhWvfi
>>106923019lmaoo
What's next for quantization moving forward?
>>106923075bitnet
>CivitAI no longer allows NSFW prompts on free Buzz without a membership first.
OWARI DA
Its Joever...
>>106923112why would we care? arent we all local chads here?
>>106923112only third worlders were using it to gen anything anyway.
>>106923173Very
>>106923079>bitnetSVDQUANT bros???????
>>106923289GeeGeeUuuuuuf of sdvquants when bros?
>poorfag hoursgrim
>test netayume lumina
>vae decode comes out completely black after an upscale
very nice
>>106923358Doesn't work with sage attention
>>106922402Too old
>>106923320I make 128K a year after tax and I only work at most three hours a day. I also get all my food for free and was given a nice car from my boss. If you make less than me you should pipe down.
>poorfag getting uppitygrim
>>106923404My dad makes 500k a month working construction at the Nintendo Japan HQ. I can rent out my Switch 2 for a few hours if you can afford the rates... Yeah, didn't think so.
>>106923404>128k per yearGo get your shinebox
`You are an assistant designed to generate anime images based on textual prompts. <Prompt Start> (@deadnoodles:0.5), (@bee \(deadflow\):0.5), a woman with her body split down the middle between one metal cyborg half and one red lizard half holds a pistol in her left hand and a sword in her right hand. She is riding on a ghost horse through an ancient magical forest. Half of the sky is day and half of the sky is night.`
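For reference, that prompt is just the fixed Lumina system-prompt prefix plus "(@artist:weight)" tags in front of the scene text. A throwaway sketch of how you might template it is below; only the prefix and the tag syntax come from the post, the helper function and the example weights are made up.

```python
# Hedged sketch: assemble a NetaYume Lumina prompt like the one quoted above.
# Only the system-prompt prefix and the (@artist:weight) syntax are taken from the post;
# build_prompt itself is a hypothetical convenience, not part of any tool.
PREFIX = ("You are an assistant designed to generate anime images "
          "based on textual prompts. <Prompt Start> ")

def build_prompt(artists, scene):
    tags = ", ".join(f"(@{name}:{weight})" for name, weight in artists)
    return f"{PREFIX}{tags}, {scene}"

print(build_prompt([("deadnoodles", 0.5), ("bee \\(deadflow\\)", 0.5)],
                   "a woman riding a ghost horse through an ancient magical forest."))
```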
>>106923404128k is poorfag territory tho, idk why you thought it was such a flex.
>>106923320
>>106923419
>richfags love to be scammed and buy overpriced garbage
no they don't, and that's why they're rich and you aren't
why doesn't ComfyUI create workflows for new models anymore? we only have Kijai workflows. they're really lazy now. i guess they only care about api
>>106923457Old ones work?
>>106923446Now this is cool.
>>106923455Say that to my 6xDGX Sparks poorfag.
>lightning lora destroys motion
>takes more than 5 times longer without lora
nunchaku wan when?
>>106923530you want to run a Q4 quality quant though? the videos will be ass
>>106923541works for image models
Switched to fedora kde. My gens are twice as fast as in Windows. Getting a lot of mileage from 10GB of VRAM
>>106923586>Twice as fast.Surely that can't be true.
>>106923498Thanks. I think NetaYume (and Lumina the base model) kinda show people are chasing model parameter counts when it might be better to just chase better text encoders than T5. Like I think another 2.6B model that moved to using Gemma 3 1B rather than Gemma 2 2B would probably be even better still while also a bit less resource intensive.
>>106923599Enjoy your Windows 11 cuck. Microsoft is watching everything you gen
>>106922626
>>106923642
OK guys, give me some hot takes
>>106922702it's not for image diffusion or even LLM inference. it's useless for anyone here
>>106923661Chroma isn't great or terrible. It's for people who enjoyed wrangling 1.5
>>106923658bueno...
>>106923662>it's not for image diffusionComfy approves >>106922744
>>106923673
>ComfyUI_07505
anon...
>>106923658
>finally no slowmo
>but its on a twerk gen
anon...
>>106923706whats wrong with comfyUI
>>106923726real men use anistudio
If I want to generate character images while role-playing with an LLM, what are my options that can generate images relatively fast, won't mess up hands, and won't consume too much of precious VRAM? It would be great to find something better than SD1.5 that uses less than half of a 3090 and can generate 512x256 images in under 10 seconds.
Sorry about picrel, it is what I use, and I am not happy with it
>>106923615Rouwei will save us. I did check out 3.5 and it wasn't better than 3.0, and overall this finetune is only marginally better than base netalumina imo. And it is fucking slow and there's no adoption at all, no tooling. Sadly lumina is dead (still). This isn't exactly related to your text encoders point, but what I want to say is that whether it's Gemma 3 or Qwen 3 or any next gen llms, we still need a small but viable image model first. Hopefully someone makes use of that no-vae approach of the chroma guy because vaes are dumb too.
>>106923738>Rouwei will save usfuck off minthy
Lumina rocks, but god it's slept on. We need an autistic retard furry savior to push everyone to it.
thanks to the anon that posted the grid which got me going down one of the only imagegen rabbitholes of the year which didn't make me depressed. but now i gotta see about training some styles for it. It knows a few good artists but it's finicky at best.
>>106923731pre or post op?
Neat that with Chroma Flash these are first try (I recall them being like at least 5+ tries to get coherent results on v38).
https://files.catbox.moe/vcmo58.png
>>106923746Does it just work or is there wf
>>106923754
i stole a redditor's since i couldn't get ultimate sd upscale working, brain is too spaghetti'd
https://www.reddit.com/r/StableDiffusion/comments/1ilipk3/comment/mbuzrks/
>>106923720yw
>>106923742I'd fuck minthy iykwim, he contributed enough to my enjoyment of SDXL.IF he gets rid of the fucking sepia!
>>106923749preferably pre op
>>106923771*chef's kiss*
I have to say, this goes to my oof & yikes compilation
I'm using a 2009 laptop with windows 7. I get my satisfaction from seeing other peoples gens. I save them into a folder and jerk off to the best ones every friday night.
>>106923734Some sdxl finetune. Pic related is ilustmix, 12 seconds on 4070S, 30 steps.Don't gen at 512x256, it looks like shit. Downscale it if it's required.
That Neta Lumina model looks promising for anime stuff. I prefer 3D render style images and it doesn't do those very well but it has pretty decent prompt adherence.
>>106923859>I prefer 3D render style imagesSo do I, but Illustrious had the same problem at first. Should be easy enough to train 3D datasets.
>>106923962
>>106923771
>>106923771Now make her get doubleteamed by Sonic and Knuckles while Tails watches
>>106924018>20 seconds
Anyone have any idea what this guy is using for his vids?https://www.instagram.com/fullwarp?igsh=MTd4MWVkcmxuZm01cQ==
>>106923771Going right in the old spank bank
>>106923771thanks for inspiring me to spin wan 2.2 back up and generate more yoink material
>>106923859
>>106923892
it may still learn 2.5D and 3dcg type artwork. else chroma radiance definitely learned 3d pretty well already
i keep getting OOM errors with comfy and the new wan2.2 lightning i2v loras.. never happened before.. but if i just keep trying the gens eventually go thru.. shit's fucked.. no reason a 5090 should OOM on a 832x640 81 length gen
>>106924208whut duh fuck
>>106924214
>>106924208make it breathe and squirm
>>106924234don't make it breathe and squirm
>>106924117i know he said hailuo and sora in the past when i came across some of his vids on reddit
>sloptwerk lora
>dildo ride lora
>rife 60fps
>near 1080p upscale
i did it. i finally did it. i reached ai coom enlightenment
>>106924363I am taking notes.
lets see the result
When the bubble bursts, are we finally gonna see some optimizations and not just endlessly stacking compute and parameters?
>>106924167I tried the official workflows for Chroma and Radiance a bunch of times and they output weirdly amateur-looking results. Like something you'd expect out of an amateur illustrator or 3D modeller. Normally I feel like that indicates a too-low CFG but cranking that up doesn't seem to help.
>>106924398
sloptwerk in combination with dildo ride, set dildo ride to 1.8 high 1.5 low, adjust as needed, twerk to 1 because it's already a strong and good lora
my shift is at 5.0 despite genning at 1280x720, seems to be the golden stable setting
fuck around with steps but honestly 8 is best for less blurry/grainy motion
open to suggestions if there's ultra enlightened giga coomers who have better setups
honestly 99% of the limiting factor has been the lightning loras, scroll up to earlier when i pointed out the best one for the hires pass, that's mostly what fixed all my problems
>>106924462link?
>>106924532>>106924532>>106924532
>>106924289>>106924259
wan does not understand the concept of quickly
>>106924703lightx2v**
>>106924710yeah.. using that already.. not seeing much difference
>>106924717He's saying that's lightx2v doing slo-mo. That's what it does, it's infamous for it. Fast gens, slow motion.
>>106924785oh i thought that was supposed to be fixed in the new version or something
>>106919643nice
>>106922454left
>>106923738
>Rouwei will save us
Non-t5gemma rouwei has lost *all artists* (very noticeable in his own gallery, where artists are suspiciously absent from prompts outside of one single image which doesn't look like the prompted artist at all). Who the fuck needs an ilu fork without artists?
Haven't tried t5gemma yet, but he himself says that its spatial understanding is worse.
>>106926418I really don't see how Rouwei would end up better than NetaYume as long as the NetaYume guy keeps working on it IMO.