Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106618946

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Blessed thread of frenship
>anistudio preview is a "Happy Birthday /ldg/" gen
>ldg does not include anistudio in op
thats cold desu
comfy should be dragged out on the street and shot
>>106625160i'll hug you so hard your spine will break faggot
>>106625169schizo rushes bakes to deter progress in the space. incredibly sad behavior. probably a python jeet that doesn't want the shitty language to die
Comfy won
>>106625169>>106625180if its added, like last time, some anon will rush to make a "real bake" ;_;
>>106625123YOUR DAYS ARE NUMBERED, NGREEDIA. I WILL OPT FOR THE CHINESE PRODUCT WHEN THEY FINISH HATCHING
>>106625198the reflections and the trees are pretty bad
>>106625196no bakes are real until anistudio is in the OP
best genner ITT? for me its blurjeeta
>>106625219nogenner takes the cake imo
Can you train a lora with a quantized model? Or do you need the full one
>>106625219Seedream anons (there are many)
Can't you just try to push your crapware into redditors instead?
What did ran do to ani?
can you keybind the "run workflow" button in comfy? i hate having to drag my mouse over to it every time
if only you knew anon...
penis penis penis penis penis
confy shoudl be draged and shod :DDD
i love ldg
>>106625251nevermind, im a fucking retard.
Anon once said there's little to no point in running a model you can't train yourself and I think about that often
>>106625293yeah but also consider he was a homosexual that took cock in the ass
>>106625293them's good words to keep in mind as ai further drifts into subscription bullshit
>>106625240ran is an insufferable retard to everyone
tried 2.2 lightning at 0.5 high, 1 low, was getting a fade effect for the first part

the anime girl looks at the camera and holds up a white plate with a hamburger in it.

behold the sentient osaka burger:
>>106625376looks like she got her order taken at good burger
>>106625369
please understand. most here do not care about any of these retards that have names. please keep all names to the containment general. you do not matter. they do not matter. shut the fuck up and post gens or die.
Lol so the retard spreading the fake rumors about a WAN 3.0 was actually right but it was wan 2.5 instead and is actually gonna get released "soon"
imagine how much better the field would be if anon simply refrained from being a faggot
Waiting for HunyuanEdit
>>106625376another iteration, not sure why it does a fade cut sometimes.
>>106625376
Try the lightx2v t2v 2.1 lora for high. You get much better movement. Some redditor said they tried the low noise lightning on the high noise for better movement, I dunno, I haven't had time to try it out: https://www.reddit.com/r/comfyui/comments/1nitok2/i_think_i_discovered_something_big_for_wan22_for/
Wan 2.2 is kinda a mess for anything speed-related.
Qwen SRPO waiting room
wan nunchaku and loras waiting room
>>106625376
>>106625429
Do not use any lora for the first two steps of the high model: just use 8 total steps, with cfg of 3.5, 3, 1, 1 and speed lora strength of 0, 0, 1, 1.
2.1 i2v lora for high, 2.2 lightning low noise lora for low:
pretty clean overall, even with a mix of 2.1 and 2.2
>>106625456how the hell do you do that
>>106625456how do you activate the lora later and change cfg based on steps?
>>106625468yap yap yap yap yap yap yap yap
>>106625456
Man I hope any new wan release gets rid of this high/low nonsense. I've just been using 2.1 because, kek
>inb4 medium noise
yep it's burger time
>>106625169whoa he referenced /ldg/, the people who hate him. guess we have to suck his cock now
>23 gens in and the loop is perfect
Good thing I'm used to the gacha aspect.
>>106625520So you've just completely missed out on the superior model? Lmao
>>106625399
>2.5
So it's just another minor improvement in coherence and prompt understanding? Please at least ditch the model template, jesus.
>>106625520wan 2.2 is better in every way anon
oddly enough, 2.1 high (1 str) and 2.2 low (1 str) seems to work great
why the fuck does 2.2 high kill motion?
>>106625556
>two model*
after manhandling flux, krea, chroma, wan and all that, i think i'm returning to sdxl just for the sheer speed at which i can create goon content, the realism models are honestly very much "good enough"
>>106625569woah, buddy, pal, brother of mine in christ who is god almighty at least fix your fucking hands before giving me a boner with that pretty face
>>106625369Any examples?
almost perfect...but still better than the other lora setup.
>>106625590seedream solves this btw
>>106625564
>why the fuck does 2.2 high kill motion?
it probably doesn't
you're doing stuff with loras (maybe even wan 2.1 loras)
>>106625564
I'm also testing swapping to t2i lightning loras now. Having t2i in both high and low seems to produce amazing motion for nsfw.
Doing the normal 2.2 lightx2v loras as well as the 2.1 t2i lora at 0.5, so a total of 4 lightning loras. Just having the 2.2 versions kills motion a lot.
>>106625592blogposts and seethes with 99% of posts. was told every time to stop and keeps defending being a retarded avatarfag that can do no wrong.
>>106625616
2.2 low seems absolutely fine, it's just 2.2 high that makes it a slideshow.
>>106625520>>106625550I dont give a flying fuck about muh "quality" when 2.1 does what I want exactly. I have to do 8 gens to vaguely get what I want from 2.2 where 2.1 does it in 1 gen.
>>106625590hand/foot detailers on sdxl, are kicking my ass right now
>>106625634
Yeah, I've noticed that keeping the low noise model clear of any other loras, even ones meant for it, just improves overall quality dramatically sometimes.
Bumping the 2.1 t2i lora up to 3 doesn't seem to ruin the result as much as 2.2 does.
I'm going to have fun experimenting with this in the coming days.
>>106625647git 'em boi git 'er done
>>106625647
just use qwen edit or kontext. you can take any girl and create any pose or setting with them. it's basically magic.
or the base non edit models work good too.
What do you actually want to use for generating good quality smut? I fell for the flux.1-dev-SRPO scam and got nothing out of it without an existing workflow to pull from. I tried a ChromaHD workflow someone slid here which seemed fine, but only seemed to yield consistent (if grainy) results switching over to DPM + Karras which kind of suck for speed. Is there a free lunch I'm missing here that yields similar results with greater speed? Working with a 4090. Assume I don't know shit about shit, the last time I cared about spinning up my GPU to generate stupid pictures on my PC was 2022.
>>106625569Is she taking a shit?
the anime girl wearing glasses and a white tshirt and jeans, runs very fast down a street in tokyo.
>>106625718now continue it from the last frame and have her get hit by a bus
>>106625677i haven't dug into qwen yet, i really liked the realism i could get out of flux/krea but the gen times were killing me. How is Qwen with hands/feet out of the box? I don't mind adding detailers
>>106625729
Osaka would bounce off it.
This time I replaced "very" with "extremely" and she almost became the Flash:
Diffusing locally with seedream powered by ComfyUI API nodes
>>106625612>>106625791i seeded your mom buddy what do you think of that
>>106625711could be
retard here, how do I train chroma loras on onetrainer, when I try I just get this error when loading the model >"It looks like the config file at '{config_file}' is not a valid JSON file."
>>106625691
It really depends on what you want to make. Plus people keep jumping on the latest shiny object expecting amazing results, but rarely bother to experiment, test prompt combinations, mix and match, or even train their own loras for existing models. I went from training hypernetworks for sd1.5, animatediff motion models, and sdxl loras to recently training loras for chroma and wan video. Basically, if you can't find anything for it, you've got to somehow create it yourself.
ooh, this combo worked well (non-interpolated output btw):
2.1 i2v lora at 3 strength
2.2 lightning low at 1 strength
>>106625151
>>106625169
https://files.catbox.moe/uveqy4.mp4
status: CRASHING
i was able to gen with only a "1girl" prompt but when i start adding more, i mean just look at the video, only 3 additional prompt terms, and it crashes
A CRASHING ANISTUDIO IS NOT ON AN OP LEVEL AND NEVER WILL BE
I could make a repo too and drop the birthday pic for /ldg/. Quality and functionality > your emotional attachment to a local board FAGGOT
>>106625883
>2.1 i2v lora at 3 strength
Very nice! 2.1 proven to be useful once again
im gonna kms with this 8GB 3070. What the fuck should I do? I just want to create videos with a model that fucking lets me do that. Quality isn't all that important. Should be "good enough". Which one?
>>106625847Do you have the entire base model folder from HF?
>>106625922as said I'm retarded, so I have to put the VAE and encoder into the same folder as the base model?
>>106625916a kijai workflow had a 2.2 workaround with the 2.1 lora at 3 high, 1 low. since 2.2 lightning (low) is fine, I guess it should work also with 2.1 high (3) and 2.2 low (1). seems ok so far.
>>106625922nta, what do you do if there is no model folder in HF? how do you extract those files if all you have is the safetensor file?
>>106625917you can use the multigpu node and the virtual vram setting with quant wan models to use some RAM if you dont have enough vram, worth a try
>>106625913>>106624701
>>106625883
even the burger prompt worked better:
the anime girl wearing glasses eats a McDonalds cheeseburger.
>>106625928
https://huggingface.co/lodestones/Chroma1-Base/tree/main
Yes, all of this in one folder. Also, because OneTrainer is retarded and cannot pick folders, you need to select the .safetensors file first and then delete the filename from the path, i.e. E:\AI\Chroma1-Base\Chroma.safetensors -> E:\AI\Chroma1-Base\
>>106625917why dont you just buy a 50 series with 16gb at the very least?
>>106625957
https://www.youtube.com/watch?v=DXvjwv_9yHU
>>106625961
id need to buy an entire new machine and i refuse to spend money on hardware. i always steal it from work but no opportunity yet. soon i hope.
>>106625947
yeah i have to give that another try
>>106625959
>you need to select the .safetensor file first and then delete that from the path
you dumb, dumb dummy
>>106625942What model? Are you sure it's the base and not some trained checkpoint/finetune?
>>106625965mcdonalds JP has the best ads by far.
>>106625967You can't pick folders in OT nigga
>>106625621Who?
>>106625979indeed
the anime girl wearing glasses opens a white pizza box and eats a slice of pizza.
very smooth, and osaka eating it backwards makes sense for her I guess, being silly
>prompt chroma for animesque and nothing more to feel out the tag styles
>it's all teal dog from deltarune
Retarded tag bleed
>tags
retard
>>106625612Seedream isn't as consistently realistic as people say it is though tee bee aych, like at all, e.g. it has a serious case of "Every Chinese Diffusion Model Ever Eyes" in the sense of eyes in general for realistic gens looking more like CGI, and colors like green and blue for them almost always come out in a weird unrealistically bright neon shade
>>106625484
>>106625480
String-to-float node feeding cfg and lora strength, don't know how the fuck you don't know that.
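For anyone else lost on the numbers in that recipe, here's a plain-Python sketch of one plausible reading of the schedule (8 total steps, 4 on the high-noise model and 4 on the low-noise model, speed lora off for the first two high-noise steps). This is just an illustration of the values, not a ComfyUI node; the actual wiring is whatever string-to-float setup that anon uses.

[code]
# Plain-Python illustration (not a ComfyUI node) of one plausible reading of the
# recipe above: 8 total steps, first two high-noise steps run without the speed
# lora at real CFG, everything after runs with the lora at CFG 1.
HIGH_NOISE_SCHEDULE = [
    {"step": 0, "cfg": 3.5, "speed_lora_strength": 0.0},
    {"step": 1, "cfg": 3.0, "speed_lora_strength": 0.0},
    {"step": 2, "cfg": 1.0, "speed_lora_strength": 1.0},
    {"step": 3, "cfg": 1.0, "speed_lora_strength": 1.0},
]
# Low-noise pass: assumed to keep the lora on and CFG at 1 for the remaining steps.
LOW_NOISE_SCHEDULE = [
    {"step": s, "cfg": 1.0, "speed_lora_strength": 1.0} for s in range(4, 8)
]

for entry in HIGH_NOISE_SCHEDULE + LOW_NOISE_SCHEDULE:
    print(entry)
[/code]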
>>106625966
>id need to buy an entire new machine
u dont
>>106625965ToT
>>106626040Don't fall for it
the anime girl wearing glasses opens a donut box and eats a chocolate donut.
pretty good
>>106626041>How do you not know about le esoteric horrible UX catastrophe workaround #645
Anyone tried HunyuanImage with the refiner? Is it better than just the base model? Also holy fuck, its presence completely evaporated lol.
>>106626068I was being serious if you were saying I was baiting. The editing is really good and their 4K support is impressive, but the actual image quality of the model has a lot of the same sort of issues as Qwen and Hunyuan 2.1 in places.
>>106626097diff image, pretty good result
>>106626105I think people aren't using Hunyuan cause it officially literally only supports 2 megapixel gens, but not at a quality level that makes it worth it. Not sure why they didn't do multi res training like all other recent models.
>>106625959Thanks, another rarted question, can you train with a quantized model or do you need the base model?
>>106626050
>u dont
but i do. i stole some mini desktop. cant even replace the GPU in it.
>>106626041
>durrrr how do you not know about something you've never even heard of before
Thanks, asshole.
>>106625711
>>106626154give it back, delashatukquarious
>>106626147base
>>106626177
>give it back
to who? the company does not exist anymore.
>>106626131his response, not very aggressive...yet
>>106626185kys kike
>>106626097Oh man her ass is gonna blast chunks after all that food
>>106626181thx
>>106626187
it was an evil company that committed fraud. my conscience is clear. cope. you are just mad i have been enjoying free hardware since 2011 or so.
>>106626020ah, presidential style
>>106626186success, there we go.
>>106625950FIX IT! IT'S YOUR FAULT ANI! ANY UI TILES THE VAE AUTOMATICALLY EXCEPT YOURS. DON'T YOU WANT TO ADD NODES ALSO? IMBECILE!
massive investment signal for api diffusion. open weight models are falling behind
>schizo melting
>>106626234based, just merge already /lsdg/
Look at how they cope and seethe my child
>>106625604>>106625770sauce us pls. catbox this shizzz
*yawn*
we back we back we back
>>106626279
If anyone wants to collaborate to help report broken features in NeoForge, please tag me in /adt/. I'll forward the bugs to Haoming02 through his GitHub.
You can also post it here if you want, but I'm usually browsing the anime threads so I might miss it.
>>106626345Nah it's chill I use programs made for adults
kek
this is decent.
>>106626333
>>106626345
Yes pls, all the civit stuff is broken atm. That Civit Browser Plus was a godsend. Really need an alternative. Also a gallery of sorts to read prompts from local gens.
>>106626432
>Also a gallery of sorts to read prompts from local gens.
I use Infinite Image Browser!
>>106626345No, thank you, I do not support pedo devs.
>>106626449
>Infinite Image Browser
Oh wow, I didn't know about this. Thanks!
The man on the right takes the coffee cup from the man on the left, and drinks it.
wan is magic, i2v is so neat.
>>106626520catbox
>>106626171what horrible manly feet
>https://arxiv.org/pdf/2509.14055
Apparently this is the new "VACE" that's coming soon.
>>106626530
its just Ritsuko from Evangelion, man.
>>106625970
>What model?
Sorry for the late reply, I couldn't find it for some reason. Turns out it got deleted from Civit LITERALLY today. Here is a link to the model I'm trying to train with:
https://civitaiarchive.com/models/644327/fmmr
The thing is, you can train with (for example) waiNSFW from an HF repo, but the one I'm linking doesn't have one. My question is how do you get those files from it? Can I rip them from the file and make my own repo for it that people can point OneTrainer at?
>>106626520what model/lora for the base ritsuko? that's a nice 80s/90s anime style.
>>106626537>those picsNigga. Anyway isn't the base for that sdxl?
>>106626550Illustrious, lora is called retroanime itself
>>106626535I'm not talking about the character, look at how the reflections on her clothing move as she walks... my wan doesn't do that
>>106626520I bet you like Mitsuru.
>Massive speedups for free
This is why cloud hardware won, Comfy was right to develop ComfyCloud. Consumer GPUs can’t compete with million dollar SaaS superclusters. Not even the RTX 6000 Pro comes close to the superior 180gb B200.
How does inpainting work if I want to remove a character from the background? Do I just paint over him and put "remove character" in the positive prompt?
>>106626590SaaS bros!!!!! WE ARE FEASITING!!!
>>106626575
There I can't help you. When I put in the image, I have "retain artstyle" and stuff in the prompt, don't know how much that helps, but I've always had it in.
>>106626579
Umm not really, retro is my thing
>>106626559
>those pics
I know, but I don't use it for furry. I use it for how good it is at horror.
>Anyway isn't the base for that sdxl?
Yeah, but I like training loras against the model I'm going to use them with since I heard it's better to do that. The issue is that I don't know how to get the folder layout you need for OneTrainer without taking it from someone else's repo.
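If the goal is just to turn a single civitai .safetensors checkpoint into the HF-style folder layout (unet/, vae/, text_encoder/, tokenizer/, scheduler/), one route nobody in the thread mentioned is diffusers' single-file loader. A minimal sketch, assuming the checkpoint really is SDXL/Illustrious-based as discussed above and that diffusers + transformers are installed; whether OneTrainer treats the resulting folder exactly like an HF snapshot is an assumption, and the paths are placeholders.

[code]
# Hedged sketch: convert a single SDXL-based .safetensors checkpoint into an
# HF/diffusers-style folder. Paths are placeholders, not the thread's actual files.
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file("fmmr.safetensors")
pipe.save_pretrained("fmmr-diffusers")  # writes unet/, vae/, text_encoder/, etc.
[/code]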
>106626590
in a few minutes you will seethe about comfyui
you are demented man
>>106626534imagine the porn
>>106626595
>>106626208
Background removal extensions are hit or miss. I use Swarm for that (it has a built-in button, probably using a segmenter model, with reverse mask inpainting). Way better results for background compositing.
>>106626590They just use multiple GPUs. It's completely unsustainable, without high usage they are burning GPU hours, hence the heavy discounts. Unprofitable even with VC subsidization. Basically Fal, WaveSpeed and all those other companies are burning money hoping to outlast each other, in the end none of them will be left, the only winner is Nvidia.
>watch gamers nexus nvidia investigation
>people can make 48gb 4090s
why does nvidia have to restrict vram so much
>>106626682moni
>>106626682Money, also there's nothing stopping you from buying one and having it shipped to you.
COMFY I HATE YOU
>>106626682Because the billion dollar SaaS gigacomplex relies on VRAM as a moat to keep local from catching up. If anyone could diffuse 4k in 10 seconds or train a base model in 2 weeks there would be nothing left to bait investors with
>>106626682can't wait for the chinkoids to release a vram monster
>>106626616You're retarded if you think all the complaints come from one guy. That's like saying all the Comfy praise in /ldg/ comes from one shill. Use your brain anon, multiple people can have the same opinion.
I need my SaaS fix...
>>106626726Use them all for free in lmarena
the camera pans left to show an anime style Miku Hatsune, standing near the table with the two men.
>>106626747model? workflow?
>>106626767it's just wan 2.2, 2.1 lora at 3 strength high, 2.2 low lora 1 strength low
>>106626776oh
>>106626520>>106626747now this.. this is the best one
>>106626747
Wan is impressive because it's able to make a new character that matches the style of the original image, but Qwen Image Edit can't do that. If only they were as focused on QIE as they are on Wan, we would've gotten Nano Banana at home already.
>>106626776
>2.1 lora at 3 strength high, 2.2 low lora 1 strength low
nta, what loras
>>106626721
Nta but that's not a "complaint", it's an inflammatory bait comment, we've had this shit so many times it should be obvious by now.
>4 hours ago
It's over
>>106626800nta but probably the lightning loras
>>106626800
lightx2v: 2.1 (only has one), and 2.2 low for the low noise model (there are two, high/low)
>>106626821
>Wan Animate
what's that?
>>106626832
>https://arxiv.org/pdf/2509.14055
tl;dr: VACE that can take reference video directly rather than converting it to pose/depth/etc
>>106626830
>lightx2v, 2.1 (only has one)
i have two
>lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16
>lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16
which one are you using?
I have a feeling they will eventually make a "Wan-Edit", but my gut feeling also tells me it will not be open since Alibaba stopped open sourcing everything
>>106626890
either of the last two is fine, I think rank64 is enough or should be
the other is 2.2 lightning (low, for the low noise part)
>>106626171She needs to keep her pants on
kek
the two men run down the street, chased by anime character Miku Hatsune.
>>106624859triplets
>>106626333stop the YAPPING
>>106626925thanks, it seems to add more fluidity to the movement.
chodestone is such an idiot. why does he fuck up every time?
>>106625151
>tfw not a single one of my low effort slops got in
shitposts should be awarded
how do i get this nigger bitch to activate cuda stream and everything else? i just ran a gen with pin memory running, 54 seconds to push a gen to 1664x2432 + facedetailer, presumably without any of that running.
reforge of course
how do i get a character to do the middle finger in Wan? the peace sign seems to work fine.
>>106626908I can't help but think all those 2.x Wan are a testground to see which way to go for their shiny 3.0 saas Wan so that they can go against Veo 3 and Kling
>>106626472
>lolis
based
>gachashit
not based, unsubscribed
>>106627183https://civitai.com/models/1827932/middlefingerwan21i2vt2v
>>106625219
I'm flattered. I don't see a lot of genners ITT to pick from, but if we include last thread then I like these: >>106621049 >>106620080 >>106620586
>>106625849
>Plus people keep jumping on the latest shiny object then expecting amazing results but rarely bother to experiment, test, prompt combinations, mix match or even train their own loras for existing models.
This is true. If you spend long enough with a model you start to find it much easier to get what you want, learn how to gently coax it etc., and then your experimenting can also get a lot bolder as you get a feel for what's possible.
>>106626908>a "Wan-Edit"like an edit model but for videos? meh, it's funnier to do it on images instead
>>106627212ah, so it cant do it natively? thats so weird. thanks for the lora
>>106627240
Yeah it's a bit weird, maybe because the gesture was considered insulting and thus nsfw or something. Kind of like violence.
lol seedream can do it fine. why are open weight models so censored
there we go:
>We are working with our partners to get V7 online, Open Weight Release soon (tm)
real and true? what is v7 even going to be like? if v6 is anything to go by... it's going to be pretty cucked.
>>106627240It's not something trained into vlms so middle finger is commonly misattributed to pointing or doing the peace sign. Maybe "rude gesture".
anime character Miku Hatsune shakes hands with the man on the right.
wan magic
>>106626908Open source is a revolving door and an easy marketing technique. I also think companies should use their brains and realize as long as their model requires a $2000 GPU to run effectively, there's basically no risk releasing it as the poorfags are still going to use their SaaS product especially with a non-commercial license on the weights.
>>106627433
>Maybe "rude gesture"
This is what I get with that.
>>106627436stop spamming chink
>>106627473yeah that's what I figured, the vlm has a hard time with index finger and middle finger, commonly messed
the small pixel characters walk around the wood planks as a skeleton tips his hat.
>>106627227hot
>>106626821
>>106626840
>its another wobbly avatar
EWWWW wan chads, whattaya doin?
Comfy status?
API nodes status?
SaaS status?
Still waiting for HunyuanimageEdit
>>106627798
ComfyUI hyperaccelerating rapid development powered by API nodes, fully tapped into a multidimensional local SaaS ecosystem driven by Seedream 4’s native 4k diffusion
the small pixel characters walk around the wood planks as a skeleton tips his hat. the camera rotates clockwise 180 degrees.
wan continues to blow my mind
where is nunchaku for wan reeeeeeeeeeeeeeeeeeeeee
>>106627840New version of wan is coming soon, so forget it lol.
Is Chroma worth downloading? Or is there a better realism model that's better suited to shitty old cards?
I've been using CyberRealisticPony but probably should upgrade. I want to avoid flux chin but haven't seen good comparisons for realism models yet.
Furries have entire art style grids, but not so much for the realism models that I've seen, so it feels harder to choose what to go with.
>>106627930It's the only fully uncensored model we currently have that isn't some sdxl shitmix. So if you need that, get it.
>>106627930we lack information about your setup and i dont care enough to ask. just download it and try it out, retard. if its too slow/not good enough for you, dont use it until you upgrade
>>106626798I miss this 90s anime style, so tired of the muted colors crap we get today
>>106627840
Not happening. They are too busy focusing on the new models. They're still messing around with qwen, but you probably won't even see anything for wan 2.1 until next year.
>>106627930
>Is Chroma worth downloading?
No. It's a cool experiment but it was released in a subpar state with lots of outstanding issues.
>Or is there a better model for realism thats better for shitty old cards?
Virtually any SDXL model that isn't Pony or Illustrious.
>>106627930Download chroma so you can join us in properly understanding why it’s such a failbake
>>106627930
>Or is there a better model for realism thats better for shitty old cards?
It's the best model for unslopped realism at the moment, but the base model is slower than Flux due to not being distilled and needing CFG with a negative prompt, so it's probably not a good fit for 'shitty old cards'. Instead use Chroma Flash: https://huggingface.co/lodestones/Chroma1-Flash/tree/main
which is a much faster version that doesn't use negative prompts.
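For context on why the Flash variant is faster: running real CFG with a negative prompt means two model evaluations per step, while a CFG=1 flash/distilled setup only needs one. A minimal sketch of that difference below; "model", "cond" and "uncond" are placeholder names, not real ComfyUI/diffusers APIs.

[code]
# Why CFG with a negative prompt costs roughly 2x per step vs a CFG=1 "flash" setup.
# model/cond/uncond are placeholder callables, not library APIs.
def cfg_step(model, x, t, cond, uncond, scale):
    eps_cond = model(x, t, cond)      # pass for the positive prompt
    eps_uncond = model(x, t, uncond)  # extra pass for the negative prompt
    return eps_uncond + scale * (eps_cond - eps_uncond)

def flash_step(model, x, t, cond):
    return model(x, t, cond)          # single pass, no negative prompt
[/code]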
>>106627954Have they said anything on Nunchaku Qwen supporting loras ?
what sampler/scheduler/steps do i use with flash?
Speaking on cfg and negs, anyone tried using NAG with Chroma? What setting have you used that didn't end up in an overly bright image?
>>106627978
>https://huggingface.co/lodestones/Chroma1-Flash/tree/main
how do i add heun to comfy?
>>106628006heun/bong also any other slow samplers xx_2s, 3s and the m variants
>>106627930Depends on what you mean by shitty old card. It's uncensored and can prompt for various styles so it's worth trying out. It's definitely not fast, but you'll get more variety than an old sdxl based model. It needs natural language prompts as well.
>>106627942
Mostly asked because, potato PC aside, Chroma being 18GB would likely take an hour to download for me, which is the biggest issue when testing shit. If the files were smaller I'd collect these damn things for fun.
>>106627959
Interesting, since those two seem to be the main ones for non-realism; will look around on that.
>>106627978
Also interesting, thanks for the info. I'll try a smaller GGUF for now to see some comparisons between that and what I already have.
Thanks for the input everyone
>you can now get your vtuber to do anything
>>106628054Thread meme is constant chroma shitposting. Feel free to ask for help here or the discord.
>>106627930
the improved models aren't simultaneously better for shitty old cards
it is pretty good tho
>>106628054Q8 chroma is basically no quality loss over full size
>>106622976
What do u mean? I have no clue about APIs personally
I guess I made a free API and used it in colab, but it's got a rate limit or something
>>106627978i keep getting anime styles with flash, i hate it.
>>106628082>you can now get your highschool crush to do anything
>>106628082
it'll mostly be war crimes, won't it
>>106628168
i also don't like it, base is better.
>>106628168Define it's a photo?
>>106628168Even with photo, film still in your prompt ?
>>106628082>you can make bananas dance!
>>106627294How / where do you even run Seedream in a way that doesn't have safety filtering applied by some third party, though
Chroma with silveroxide's bastardized hyper lora is actually decent if you really need to run it as fast as possible
>>106628241You think you know better than bytedance about what you should and shouldn't be generating?
>>106628241Bytedance official api is uncensored. The only censorship comes from western kike vermin like fal and replicate
>>106628257
https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/tree/main/hyper-low-step
This one is usable even on higher cfg because it isn't really flash-y
https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/tree/main/flash-heun
this one is decent
>>106628275really? show me vaginal penetration from it then
>>106628231>>106628234yes, i had photo, film in the front as the first 2 words.
the anime girl shakes hands with anime character Miku Hatsune.
>>106628275So you just have to write your own Python script, basically?
anybody with a 5090 who uses chroma, how long does it take for a single image with upscaling?
>>106628275post futa oneesans pegging a shota
>>106628300that question makes no sense, provide a wf and I can try it for you
well that worked better than expected
wan 2.2 magic again.
>>106628329pipkin sniffa
>>106628329>>106628349Why are the first frames so laggy?
>>106628360
i'm not sure how my question does not make sense. i was just asking how long it takes to gen a single image in chroma with an upscaler.
i dont have a workflow, sorry. i'll just buy the 5090 and try it, fuck it.
>>106628360
these were with no interpolation, film vfi works really well in case it starts slow, usually is fine though.
also forgot to say black blindfold, enjoy a red eyes 2B
>>106628371There's lots of different chromas, upscalers, upscaling methods, etc. You basically asked "how fast does a car go?"
>>106628371
>how many steps
>what resolution
>what chroma model
>what sampler
>what scheduler
>what random optimisations
anon...
https://xcancel.com/decartai/status/1968769793567207528
>We are building “Open Source Nano Banana for Video” - here is open source demo v0.1
bruh that's cringe
>>106628399
0% chance of having nsfw baked in, no interest
I do not care for the tiny fruit
>>106628371My setup is a 4070S, Q8 Chroma non-flash. CFG 4, 30 steps. 1152x1152 and it takes about 70-80s without cache. so extrapolate this info.
>>106628408not everyone has a small banana bro
>>106628399
>the comfyui nodes are api only
https://github.com/decartAI/lucy-edit-comfyui
AHAHAHAHAHAHAHAHAHAHAHAHAHA
>>106628408kek
>>106628399
>Open Source
>>106628432
>api only
lol?
>>106628432
ohhhhh nooooooo i thought the people shitting on comfyui for being api saas slop were just schizo trolls???? maybe they were right all along and there IS a massive coordinated push to enshittify local
>>106628408bold of google to make a sex joke name on one of their most relevant model kek, sometimes they can be based
What's the last uncensored local model released? I mean, by big corpos.
Wasn't Hunyuan Image 2.1 uncensored? And by that I mean, having porn in its dataset.
If you exclude Hunyuan, I don't remember anyone at all releasing a model trained on porn since January.
>>106628454
>Wasn't Hunyuan Image 2.1 uncensored?
HI2 is so similar to qwen it might have been industrial espionage kek. I think an anon generated some basic sex but prolly nothing too freaky.
>>106628399
>Wan2.2 5B
yawn
>>106628432
>>106628388
>>106628393
i know that, hence why i said "that uses chroma". an example of your own workflow would've sufficed for a general idea of what to expect.
>>106628411
thanks, doesnt seem too bad so a 5090 should be ridiculously faster, even with upscaling.
>>106628399
>We are building “Open Source Nano Banana for Video”
>API based custom nodes
>>106628466>see chud, we get to call our twitter announcment "open source" even if we'll open source it later (and by later I mean MochiHD later)
More and more evidence mounting for total SaaS victory. Local inference cannot compete with the speed and power of cloud compute. Look no further than the amount of people coping with nunchaku, quants, and speed loras.
>>106628479
https://huggingface.co/decart-ai/Lucy-Edit-Dev/tree/main/transformer
>>106628468Upscaling depends on how much I'm scaling and how many steps so it can be anywhere from 1min to 3min.
>>106628496ok fair enough
>106628483grr grr you made me very angry I'm sure you feel so good for the successful bait
what does SaaS even stand for, i keep reading Saar in my mind and it makes me laugh
>>106628496
>5B
No way this will be good.
>>106628508it'll be shit unless it's some magical thing
how do i force foreshortening on chroma?
anistudio will unironically save us
SDXL was the peak of open weights. Everything after is plastic slop
>>106628450
speaking of google, they just released a paper on how to improve image models further, seems like they're bored of being on top and want some competitors to catch up to them kek
https://www.arxiv.org/pdf/2509.10704
the anime girl wearing a black blindfold transforms into a blue slime girl.
surprisingly kinda worked, gj wan
>>106628515I've been doompilled about it but it's unironically the only UI going in the direction I want
kek
A large amount of vanilla pudding falls on the white hair anime girl wearing a black blindfold.
...technically true
>>106628519If you posted this in 2023 anon would threaten to rape your family
>>106628522workflow pls
>>106628507Software As A Service but you can just call it gay and retarded same difference
new
>>106628570
>>106628570
>>106628570
>>106628560thanks, that does sound gay and retarded. i'll stick to local.
>>106628559
https://github.com/kijai/ComfyUI-WanVideoWrapper/tree/main/example_workflows
it's just the kijai i2v one, 2.1 lightx2v lora at 3 strength for high pass, 2.2 lora at 1 for low noise pass.
I think it's this one:
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo2_2_I2V_A14B_example_WIP.json
>>106628454Probably 1.5 if we're also talking about artists
>>106627930
>I've been using CyberRealisticPony
bruh
>>106626105it evaporated because official support in comfyui was slopped in as usual https://github.com/KimbingNg/ComfyUI-HunyuanImage2.1
>>106628578
i'm using that but they don't stop yapping regardless of the prompt
which exact loras?
>>106628606Do you have mouth movement, talking and similar in NAG?
>>106628606
2.1 lightx2v at 3 strength
2.2 lightning (low) for the low model, 1 strength
seems fine to me
>>106628614
talking, speaking, speech, in negatives. will add mouth movement.
no nag positives, default strength settings, using the nag node hooked up last before the samplers.
>>106628626
had lightx2v in both, will grab lightning rn
>>106628519Neither Chroma nor Flux are plastic slop when used correctly. Skill issue.