Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107311297

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe
https://github.com/ostris/ai-toolkit

>WanX
https://rentry.org/wan22ldgguide
https://comfyanonymous.github.io/ComfyUI_examples/wan22/

>NetaYume
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Anyone tried this?
https://civitai.com/models/2141474/wan-22-flf2v-i2v-continuation-for-infinite-loopage-long-compositions-not-sucking-and-whatnot
how do i unload wan text encoder after it does its job in comfy? it seems it causes the rest of models to offload
Blessed thread of frenship
>>107321221fuck off tranny
https://files.catbox.moe/pw7ydz.png
>>107321296
https://files.catbox.moe/cu81jc.png
>>107321349
>>107321360
Why are your wan gens not moving?
https://files.catbox.moe/b9cfxt.png
>>107321362
Because I'm generating a one-frame "video", to use WAN as an img gen. It's on purpose
https://files.catbox.moe/c135ow.png
cw: edgy
https://files.catbox.moe/gwhkxh.png
https://files.catbox.moe/3x1gz4.png
https://files.catbox.moe/4czpyb.png
https://files.catbox.moe/c8c5ub.png
A reasoning mode? So it'll be an autoregressive model? Now that's interesting...
https://xcancel.com/bdsqlsz/status/1993286379418402818#m
>>107321409wan t2i is very underrated
>>107321424lodestone won't like that!
>>107321500furries btfo
>>107321500DOA
Any new cool banned WAN lora?
Is anyone here using the Qwen model locally?Can you do porn with it? What kind of hardware do you need?
>>107321424>A reasoning mode? so it'll be an autoregressive model? Now that's interesting...Cool, guess we need to update the name of the general again as generative AI enters an autoregressive phase
>>107321696
I'm using it on a 3060 12GB; it takes 3-4 minutes per gen without speedups, but it's doable.
The model itself is a mixed bag, honestly. It has very good prompt adherence and is possibly the best local model when it comes to drawing hands. But it's also very rigid, doesn't know a lot of styles, and has a very obvious baseline "style" that never goes away. Didn't try making NSFW with it, but I think it works better as an inpaint/edit model; it's genuinely good for fixing mistakes from better but less consistent models like Chroma, especially if you want more "artsy" outputs.
>>107321566This just produces nightmare fuel with my settings.
>>107318121
>q8 works at the same speed as fp8 on 30xx
Can't believe I bit, but I bit.
Top is q8, bottom is fp8.
I'm staying with fp8.
How do I get ComfyUI to use more of my RAM to prevent OOMing?
I have 48GB system RAM and 16GB VRAM, but when I'm using Wan and trying to load 500 frames, it only goes up to 28GB system RAM and then OOMs. Shouldn't it at least eat up the remaining RAM? I already use ComfyUI-MultiGPU.
>>107322056kek, nice
>>107321500He can always just finetune it.
now that the dust (async offload and that other shit) has settled into comfyui, it seems like old chroma workflows from a month ago are 10% faster now
>>107322057
>retard doesn't know what a seed is
how new are you?
>>107322092
wan needs 64GB minimum during inference; it probably tries to allocate a big batch and fails, so it doesn't bother
>>107322286
>wan needs 64GB minimum during inference
btw I mean minimum RAM, assuming you've already filled 24GB of VRAM, but even then it needs 96GB of RAM to not swap to disk at all in between gens.
128GB of RAM is a good place to be if you want to gen a lot of videos, but 64 is enough to clear the biggest performance milestone.
>>107322057
>https://medium.com/@furkangozukara/comprehensive-analysis-of-gguf-variants-fp8-and-fp16-gguf-q8-vs-fp8-vs-fp16-c212fc077fb1
I'm all for making your own experiments and drawing your own conclusions, but fp8 on a 30XX isn't optimal. If you're happy though, do you.
>train aberration lora
>accidentally make white woman lora
>>107322286
>>107322315
Was hoping there would be some kind of node to remedy this. I tried the block swap node; it just hangs. Apparently Comfy has some kind of automatic block swap, but this also hangs. I can load 350 frames no problem, but 400 really struggles (even with the speed-boost slop cocktail and at 512 x 512).
Funny enough, I had planned on buying 128GB of RAM but will have to wait until the retard prices come down.
>train istripper lora
>miss one watermark
>>107322428
adding frames uses more RAM exponentially; 400 frames is a fuckton of usage
>>107322428
>will have to wait until the retard prices come down
it's more likely that we get a 3x better model that also fits on your setup before RAM prices start going down; in for a rough year for ramlets
>>107322056lol
it's crazy how bad the optimization is for hunyuan 1.5. for an 8B model, it's unacceptable. the tencent team has learned nothing in a year
is there a way in comfyui to combine two loras into a new file? i like to run one as a negative to the other, but it would be cool to save that as a different lora.
e.g. a professional photograph lora as a negative to an amateur lora, saved as a new file?
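For what it's worth, I don't know of a stock ComfyUI node that saves a signed-weight merge to disk; kohya's sd-scripts ships merge scripts (e.g. `networks/svd_merge_lora.py`) that do roughly this on real .safetensors files. The arithmetic itself is just a weighted sum of the full weight deltas. A minimal sketch with hypothetical toy matrices in plain Python (a real merge loads the state dicts and re-decomposes the summed delta via SVD to keep it low-rank):

```python
# Sketch of the arithmetic behind merging two LoRAs with signed weights.
# The toy up/down matrices below are hypothetical stand-ins for real LoRA
# weights; each LoRA contributes a full-rank delta  scale * up @ down.

def matmul(a, b):
    """Multiply two matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_delta(up, down, scale=1.0):
    """Full weight delta contributed by one LoRA: scale * (up @ down)."""
    return [[scale * x for x in row] for row in matmul(up, down)]

def merge(delta_a, delta_b):
    """Elementwise sum of two deltas (the signs live in their scales)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(delta_a, delta_b)]

# amateur LoRA at +1.0, professional-photo LoRA at -1.0
up_a, down_a = [[1.0], [0.0]], [[2.0, 0.0]]
up_b, down_b = [[0.0], [1.0]], [[0.0, 3.0]]
merged = merge(lora_delta(up_a, down_a, +1.0),
               lora_delta(up_b, down_b, -1.0))
print(merged)  # [[2.0, 0.0], [0.0, -3.0]]
```

Note you can't just sum the up/down matrices themselves (the delta is bilinear in them); proper tools either concatenate the ranks or SVD the summed delta back down before saving.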
it's out
https://x.com/bfl_ml/status/1993345470945804563
>>107322891
>32B
Ramlets need not apply. Sorry! HAHAHAHAHAHA
>>107322907I wonder if comfy will refuse to implement it like he did with hunyuan image.
>>107322891i legit just nutted
>>107322286
>retard doesn't know what a seed is
>how new are you?
I dare you to demonstrate your incredible experience by finding a magic seed that increases gen time almost *three* times compared to all others.
>>107322358
Real quality Furk research right there, as always. But no offense to the esteemed professor, there's literally nothing new in it, as always.
I would love to speed GGUFs up to fp8 speeds and go back to them, but I don't see how. Maybe if city implements all the llama.cpp dev hacks.
P.S. Beaten by BFL by a thread.
>>107322891
https://bfl.ai/blog/flux-2
>Run FLUX.2 [dev] on a single RTX 4090 for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI
cumfart won
i cant wait to gen close ups of eyes !
>>107322891
Flux will always be overly censored, neutered garbage. I don't understand why anyone would care to use it.
>>107322932youre a retard
>>107322891basedfacing and stimming irl. can't wait to install this on comfyui and be let down!
>>107322891WHATTA TIME TO BE ALIIIIIVE!
>>107322918
https://github.com/comfyanonymous/ComfyUI/pull/10879
lol
>>107322358>unironically using turkish roach """research""" kekd
i have never nutted so hard in my life thanks to ai
>>107322891
>>107322907
My 4090 is obsolete now?
>>107322967>mistral tokenizerLMAO
Chroma 2 when?
>>107322988soon
>>107322891
https://bfl.ai/blog/flux-2
>FLUX.2 [dev]: 32B open-weight model, derived from the FLUX.2 base model.
>Run FLUX.2 [dev] on a single RTX 4090 for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev],
uhh excuse me, how is that possible? Qwen Image (20B) at fp8 already uses 24GB of VRAM
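The weight-only footprint is just parameter count times bytes per parameter; a quick back-of-envelope (ignoring activations, the text encoder, and the VAE, so real usage is higher):

```python
# Weight-only memory for a 32B-parameter transformer at common precisions.
GIB = 1024 ** 3

def weight_gib(params_billion, bits_per_param):
    """GiB needed to hold the weights alone at the given precision."""
    return params_billion * 1e9 * bits_per_param / 8 / GIB

for name, bits in [("bf16", 16), ("fp8", 8), ("4-bit", 4)]:
    print(f"32B @ {name}: {weight_gib(32, bits):.1f} GiB")
```

So 32B at fp8 is ~30 GiB of weights before anything else; it only "fits" a 4090 by streaming part of the model from system RAM, and even 4-bit is ~15 GiB.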
>>107322990>kleinLOTM bros... WE WON!!!
>>107322992
maybe it's an fp8 pruned version. comfyuiorg still hasn't uploaded their weights; also no templates available yet
>>107322983the baguettes and the bretzels allied together again I see
>>107322958Don't get too excited, that's probably their (((Pro))) model
>>107322990chroma is so fuckin dead, what an incredible _waste_ of resources that turned out to be
>>107322891is it just an image model or does it let you edit as well?
>>107322891using same prompt as >>107311951
>>107322891benchmemes
ok but hear me out, flux2 nunchaku when?????????
>>107323018how did you test it?
>>107323017
>>107323022BENCHMAXXXX BROS WE WON!!!!!!!!!
>>107323022sneeddreamsisters...
>>107323041thats the pro version thougheverbeit
>>107323041
>>107323055
>FLUX.2 [dev]: 32B open-weight model, derived from the FLUX.2 base model. The most powerful open-weight image generation and editing model available today
the dev model can do edit as well, 32b though... :(
>>107322891
>The model couples the Mistral-3 24B parameter vision-language model with a rectified flow transformer. The VLM brings real world knowledge and contextual understanding, while the transformer captures spatial relationships, material properties, and compositional logic that earlier architectures could not render.
How many B's again? At least they've ditched T5.
>>107322891
>still using vae
Guess everyone will train on the new Banana to publish something now, before we actually start looking into Chroma Radiance tech to make image editing stop losing quality through the VAE in the next iteration...
>>107323062should I unironically buy a 5090
>>107322891
>>107323012
Trying the pro version on their website; yeah yeah, I'm an apicuck, I'm aware. I don't really see the point in using Flux if you go the API route. If you go the apicuck route, it makes more sense to use either Sora or Nano Banana Pro.
"professional quality modern analog photo, film grain, vibrant colors, beauty shot of an F-35 Lightning II with detailed irezumi tattoo-like illustrations on its bodywork in-flight. the plane is doing a pirouette and deploying colorful smoke from its wingtips. the photo is in a professional magazine, and text in Japanese about the f-35 is readable"
>>107323063So it's one model or it's a 32b model + 24b vision model?
>>107323072Same prompt on Nano Banana Pro.
>>107323072>even the page foldlmao
>>107323072can you try the dev instead? that's the one we'll be using
>>107323076
32 + 24 saar
>>107322952Where's the seed?
>>107323087
>32 + 24 saar
bruh... they're just showing they can't improve shit; they just go for more parameters to improve their images. STACK MOAR LAYERS
>>107323016I hope he doesn't try to finetune flux 2 Klein.
>>107323063>>107323087>not an autoregressive modelwhat a mistake, nano banana 2 showed how powerful this shit can be
@comfyui
bfl are linking to your dead link lol
https://bfl.ai/blog/flux-2
https://blog.comfy.org/flux2-state-of-the-art-visual-intelligence
>>107322891Nice! Looks like I got my Comfy back up and running just in time (needed ChatGPT to fix all the errors this morning).
>>107323084gotta login and buy credits on those API sites. fuck that noise. Know what I mean? If I'm gonna go the apicuck route, I'll go with the best ones at least
I have to upgrade to 5090?
>>107323113well it wasn't trained overnight
>>107323113We have hunyuan.
>>107323138>we have a 80b autoregressive plastic slop modelno thanks
>>107323084Ok, FAL lets you do some freebies. This one is using Flux 2 dev
>>107323156can you go for english text I don't speak ching chong I can't verify if it's accurate
>>107322891>>107323122vramlet shit?
>>107323084
>>107323156
Had to do 50 steps with a CFG of 4 to get something halfway-decent
>>107323071nah, you should be doing that kind of thing ironically
>>107323087doa lol
>>107323171
professional quality 1970s analog photo, film grain, vibrant colors, beauty shot of a Tupolev Tu-22M Backfire with detailed irezumi tattoo-like illustrations on its bodywork. we can see the hangar doors open, flooding the scene in a stark blinding white light, bathing the entire scene in it. the photo is in a professional magazine, and an infographic in English about the tupolev is clearly readable
>>107323155>80bIt's moe, it's fine to be 80b if it's moe.>plastic slop modelBfl is literally the trope codifier.
license status?
>want to try flux 2>62gb file and no sign of qBut why release it?
>>107323173
Mine (the girl) was just Flux.1 Dev. I don't use API at all.
>Those with 24-32GB of VRAM can use the model with 4-bit quantization
Oof! Time to get a 6000 Pro for Christmas...
>>107322891did they improve anything on the architecture since the last year or did they just (((stacked more layers))) and called it a day?
>>107323206retard
>>107322958trained only on the finest stock images i see
>>107323188
>doa lol
https://blogs.nvidia.com/blog/rtx-ai-garage-flux-2-comfyui/
>And to make this model accessible on GeForce RTX GPUs, NVIDIA has partnered with ComfyUI — a popular application to run visual generative AI models on PC — to improve the app’s RAM offload feature, known as weight streaming.
>Using the upgraded feature, users can offload parts of the model to system memory, extending the available memory on their GPUs — albeit with some performance loss, as system memory is slower than GPU memory.
I'm old enough to remember when we just called that RAM offloading lool
>>107322958Looks like it can do even less styles than the first one. We're stuck with lumina forever.
chromabros? our response??
>>107322958>>107323256I'm more interested about its editing capabilities, let's hope it's good
>>107323254Lodestone claimed it can be as fast.
>The diffusers team is introducing a remote text-encoder for this release. The text-embeddings are calculated in bf16 in the cloud and you only load the transformer into VRAM
Who would ever use this?
>>107322891omg bruh can they stop stacking more and more layers? why can't they try to optimize their training or something, there's still plenty of fat to remove
>>107323259They target userbases that virtually don't intersect.
>>107323276some exampleshttps://www.reddit.com/r/StableDiffusion/comments/1p6h2sz/flux_image_editing_is_crazy/
>>107323283running a 32b model with offloading will be painful, remember how slow flux 1 was? and it was only a 12b model
by the time you guys have gemini 3 pro image capabilities locally, we'd be at gemini 5/6
From what I tested, it's slopped. Sorry goyim
>>107323294>https://www.reddit.com/r/StableDiffusion/comments/1p6h2sz/flux_image_editing_is_crazy/ok if this is not made with the pro version this is actually really good
>>107323297enjoy your high quality cat memes I guess
>>107323314gem3pro is actually quite uncensored and does almost everything except genitals/nipples, including politicians/tons of anime styles/whatever
>>107323307OMG IT CAN DO MIGU! LTFG>>107323314>enjoy your high quality cat memes I guessthis shit can do comic/manga pages in one try though
>>107323324>this shit can do comic/manga pages in one try thoughYes, although I really don't understand people who say "WOW LOOK OMG AN IMAGE MODEL CAN WRITE PYTHON CODE", niggas don't understand that it's literally just the full beastly Gemini 3 Pro with an image output thingie bolted on top, it still does reasoning, and does it mostly with text internally, so it can obviously solve everything normal gem 3 can (which is a good LLM)
>>107323324I don't like Flux 2 dev very much. Maybe using my workflow in ComfyUI I can make it better, but as it is, every image screams AI SLOP to me
>>107323324that's great buddy! remember to wipe your drool off the keyboard when your computer time is up!
>>107323336you think the normie care about that? they just want a good result and they have it, it's not that deep
So no local models of flux that fits in a 5090?
>>107323350no they said that flux 2 dev fits into rtx 4090 with some optimizations
>>107323356>no they said that flux 2 dev fits into rtx 4090 with some optimizationsit's just ram offloading lol >>107323254
>>107323375so? enjoy your goyslop
>>107323377
>32b goyslop
I was about to say DOA, but they seem to be cooking hard on the edit part >>107323311
>>107323375Oh, so there's going to be a new feature that allows offloading beyond the vram limit? That's insider-info for the raised RAM prices.
>>107323385why are you linking me shit from kontext max from half a year ago? is this news to this general lol?
Are you feeling safe right now?
>>107323296I'm talking about ramtorch, I don't think it was ever implemented outside of onetrainer. Maybe it is now.
/ldg/ is the most delusional general and asian footfag's meltie here >>107309685 is proof of that
>>107323391>a new featureI bet it's lodestone's method
>>107323393it's better than kontext max though, look the mememarks! >>107323022
>>107323396
Black Forest Labs is committed to the responsible development and deployment of our models. Prior to releasing the FLUX.2 family of models, we evaluated and mitigated a number of risks in our model checkpoints and hosted services, including the generation of unlawful content such as child sexual abuse material (CSAM) and nonconsensual intimate imagery (NCII). We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks:
1. Pre-training mitigation. We filtered pre-training data for multiple categories of “not safe for work” (NSFW) and known child sexual abuse material (CSAM) to help prevent a user generating unlawful content in response to text prompts or uploaded images. We have partnered with the Internet Watch Foundation, an independent nonprofit organization dedicated to preventing online abuse, to filter known CSAM from the training data.
2. Post-training mitigation. Subsequently, we undertook multiple rounds of targeted fine-tuning to provide additional mitigation against potential abuse, including both text-to-image (T2I) and image-to-image (I2I) attacks. By inhibiting certain behaviors and suppressing certain concepts in the trained model, these techniques can help to prevent a user generating synthetic CSAM or NCII from a text prompt, or transforming an uploaded image into synthetic CSAM or NCII.
3. Ongoing evaluation. Throughout this process, we conducted multiple internal and external third-party evaluations of model checkpoints to identify further opportunities for mitigation. External third-party evaluations focused on eliciting CSAM and NCII through adversarial testing with (i) text-only prompts, (ii) a single uploaded reference image with text prompts, and (iii) multiple uploaded reference images with text prompts. Based on this feedback, we conducted further safety fine-tuning to produce our open-weight model (FLUX.2 [dev]).
so can we put cunny in it or
>>107323035https://playground.bfl.ai/Went there and used their Pro model to gen the image you saw
Just came back here because of the news. What's the best local model right now if not Flux 2?
8. Monitoring. We are monitoring for patterns of violative use after release. We continue to issue and escalate takedown requests to websites, services, or businesses that misuse our models. Additionally, we may ban users or developers who we detect intentionally and repeatedly violate our policies via the FLUX API. Additionally, we provide a dedicated email address (safety@blackforestlabs.ai) to solicit feedback from the community. We maintain a reporting relationship with organizations such as the Internet Watch Foundation and the National Center for Missing and Exploited Children, and welcome ongoing engagement with authorities, developers, and researchers to share intelligence about emerging risks and develop effective mitigations.
I'm gonna email all NSFW Flux 2 finetunes to them
>>107323418wow I'm feeling so safe now, I bet it can't do nipples anymore!
>>107323016I still use it quite often. yeah, it sucks ass with hands/feet but other than that it's fucking golden. no other model comes close to its uncensored capabilities, and by uncensored i obviously mean gen'ing cunny natively.
>>107323297
there's only so many improvements that need to happen before a model is good enough for a specific task, meaning at some point further improvements have severe diminishing returns and won't even matter in most cases.
for example, it doesn't matter how smart AGI will be in 100 years if I need it to classify images into 30 predetermined categories, summarize a page, explain 90% of topics in this world to a basic level, give me quick scripts, regex, ffmpeg commands, terminal commands, move files around, do basic web search and data recovery, basic information extraction from any file or page, fix basic bugs in any project, OCR text from basic images and screenshots, etc., aka the actual majority of the simpler things people need on a daily basis. these are all already permanently solved problems locally, with toy-size models that can run on higher-end phones.
>>107323427WAN for realism or Spark.Chroma for better prompt adherence, from my experience
>>107323022Do you think Alibaba will report their release of the next iteration of QIE? they seem to not be the best local editing model right now
https://github.com/Comfy-Org/workflow_templates/pull/323flux 2 templates are in!!!
>>107323427Qwen
FurkGod already on the case, thanks to his grifter money he's able to run Flux.2 without issues
>>107323016Spark.Chroma unslopped it quite a bithttps://files.catbox.moe/8z9vdv.png
>>107322891
https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/diffusion_models
>35 gb
https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/text_encoders
>18gb fp8
bruh...
>>107323467Wasn't it unslopped to start with? Sloppiness wasn't it's main problem.
flux 2 pro on replicate lets you generate naked women with actually good looking nipples wtf
>>107323478catbox a gen lil snikka, paw paw wanna see some tiddies
>>107323467>Spark.Chroma unslopped it quite a bitit did, this guy is litterally saving chroma, can't wait for his next iteration>>107323476it got more slopped with more epochs
>>107323476I meant anatomy-wise. Spark.Chroma makes it a viable model, especially for porn
Every paid API I've ever tried felt so tightly constrained that you couldn't even get two different faces out of one prompt. Prompting becomes this ordeal where you need to figure out every single magic word it needs to hear to get what you want, similar to a booru model. Are these new ones different?
>>107323478>flux 2 pro on replicate lets you generate naked women with actually good looking nipples wtf
>>107323464The quality seems on par with qwen light.
>>107323488
>>107323511
honestly it's slop
https://files.catbox.moe/d2xzfh.jpg
https://files.catbox.moe/hzp96e.jpg
>>107323504gemini 3 pro image preview (so called nano banana pro) can do quite literally any face and any style, but it costs $0.13/image for 1/2k or $0.24/image for 4k output. make of that what you will. and of course you can't do nsfw, but swimsuits/bikinis/suggestive stuff is usually completely fine, children are fine, and so on
>>107323517
>no more flux chin
>the skin is more plastic than flux 1
they fucked it up; maybe it won't be so bad if you edit an image and go for a real-life woman
>>107323418How would CSAM be in the training set in the first place? Hmmmmm
>>107323519nigga i steal api access to replicate, gemini and other models
>>107323478Can you try generating feet? Flux.1 couldn't generate feet at ALL
We are never getting a new good anime model, are weAlmost one year since the last good one
>>107323532By expanding the definition of CSAM to things where the S and the A aren't even involved in the first place.
>>107323471So they expect us to run a 32b model at 4k? how? only an A100 would have enough vram for this shit
>>107323528you have seen literally two images, how are you drawing these conclusions
>>107323528
>>107323555
https://www.reddit.com/r/StableDiffusion/comments/1p6hul3/flux2_dev_t2i_that_looks_like_new_sota/
here are more images of Flux 2 dev; this is an AI image of the father of Furkan kek
>>107323548>Almost one year since the last good oneIncorrect
>>107323569where are the instagirls grifter photos, the jeets are waiting
>>107323569this is actually pretty realistic wtf
Bros, how do I avoid the 3d look with wan 2.2 i2v?I even run a 2d anime lora.https://files.catbox.moe/xvw5he.mp4 NSFW
>>107323596not the right time to ask that during the release of flux 2 lol
>>107323522>children are fine,Wait, was there ever a model that would refuse to make normal pictures of children?
>>107323596you dont, and wan is the best you have for it anyway
>>107323604yes lol, google's models in API didn't let you gen children, but it also depended on the region (EU being more cucked). nowadays gemini 3 pro image preview doesn't let you generate known politicians/public figures if you're in EU
>>107323582here's the instagirl grifting photo saar!
>FLUX.2 [klein] (coming soon): Open-source, Apache 2.0 model, size-distilled from the FLUX.2 base model
This is what I'm most excited for. Depending on the exact size (hopefully not too big), it could be a very good medium-sized base model for community finetunes.
>>107323602lets be realistic, less than 10% of this general can run flux 2 at acceptable speeds
>>107323618if the lightning fags make a 8 steps out of this model then maybe it'll be acceptable to use, we'll see
>>107323615yeah bro we've seen so many beautiful finetunes for schnell everyone LOVED that model bro!
>>107323618a good way to filter shit opinions, you shouldnt even be able to post with less than 16gb of vram and 64gb ram, they should implement a PoW with those requirements
>>107323596Try with Anisora 3.2https://github.com/bilibili/Index-anisora
>>107323609Like if you typed in "a little boy in a big winter coat holding a kitten" it would refuse and say the prompt or content was inappropriate?
>>107323618>lets be realistic, less than 10% of this general can run flux 2 at acceptable speedsNow I wait for a nunchaku quants, if they manage to keep the quality at 4bit then it'll be a good way to use that new flux model
So it'll be a smaller model + step distilled? lmao this shit will suck so hard
>>107323640
yes, they literally disallowed generating children on the API specifically: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/image/configure-responsible-ai-safety-settings
thankfully, with Gemini 3 Pro image gen they've toned back their censorship a ton; currently it's the least censored publicly (as in, user-facing mass-available) model, and the most capable as well
>>107323606Even after all this time? Nothing similar to a 2d-3d slider lora?>>107323635That could be something, thanks.
>>107323625it's size distilled, not step distilled, so it won't be the same thing as schnell (hence that new name)
Do they do this on purpose or does it actually load on a 5090?
>apikek: "mines the least censored cuck model!!!" lol
>>107323691
nope, you'll have to offload to run it even on a 5090, and fp8 is not a good quant imo, so I'm really disappointed by all of this :(
>>107323697or this
>>107323701>it kept the "01" on her left shoulderfirst time I'm seeing this from a local model, this is the dev model right?
>>107323704Doesn't the model need to fit into the vram at first in order for it to be offloaded?
why is blud advertising cloud models in the local thread?
>>107323712no, that's the point of offloading, you put only a part of the model in the vram, and the remaining shit goes to the ram
>>107323711I think that's chroma browski
>>107323706Is there a site that collects jav covers in ok quality? Maybe i will train a lora for it
>>107323716We're all s.oyfacing over Flux 2 that was released today, which is basically cloud-only for now; none of us had the time to install it on our machines yet
>>107323701looks like cuckdream 4
>>107323733>We're allSpeak for yourself kek
>>107323712no... I'm using wan 2.2 fp16 (28 gb) with 24 gigs vram
so flux 2 q8 when
https://www.reddit.com/r/StableDiffusion/comments/1p6i8t3/dman_flux_2_character_consistency_is_really_insane/
>indian posting
>"Might be my most beloved DeepFake model in the history of mankind."
not beating the allegations...
>>107323726javdatabase seems like the best one of the bunch
>>107323711>first time I'm seeing this from a local modelYou must not use local models much
>>107323747why the fuck is there a gemini icon in bottom right? did flux 2 train on nana banana outputs without even removing the visible watermark or what?
>>107323761I think he used a gemini image as an image input
>>107323764yeah i figured, stupid lol
https://comfyanonymous.github.io/ComfyUI_examples/flux2/
>>107323722>>107323742Guess I'm retarded, that's what buying the biggest consumer gpu gets you.
https://files.catbox.moe/7r1hig.png
>>107323711
>>107323739
chroma, prompted for the "01"
>>107323769SDCPP BROS!?!?!?!? HOW IS IT POSSIBLE!!!!!! WHERES OUR DAY1 SUPPORT FOR THE NEW SOTA!?!?!pls do the needfull and contrbiute to sdcpp so I can grift more with my shitty IMGUI frontend thanks!!!!!!!!!!!!!
>>107323777looks like a man
>>10732376918 gig ENCODER? wtf
>>107323783at fp8... this is a 24b text encoder anon...
>>107323784luckily I already had GOOFS of it.NEMO BROSSS
>>107323769>35GBeven a fucking 5090 cant run this
>>107323789based, going for Q8 is gonna be better for the text encoder
>>107323793oh wait ive just read its a bit customized, so we need a new set of goofs. prolly gonna run it q4 too, Mistral 24b quantizes good
>>107323790>even a fucking 5090 cant run thisjust buy a RTX6000 goyim
>>107323747
>indian poster
>immediately he gens himself with white women
Why are they like this? lmao
>>107323804white women are the superior women saar, praise the izzat
Wait, didn't the leak guy say the new model to drop today would have fewer params?
anyway, this week we're eating good: QIE and FLUX2
>I pulledWHERE THE FUCK ARE MY GRIDS COMFY??
>>107323814zoom in puller bro
>>107323804>>107323809kek, he's also here, I'm suprised that many "anons" also post their shitty gens on reddit too
>>107323814I hate what they did to the top bar recently. Put the tabs under the command bar fuck.
>>107323769
>>107323254
I thought Comfy worked with NVIDIA on that new offloading feature; where is it?
Sirs with a 5060 ti and 16gb of regular ram on the system will I be able to use wan to generate 480p videos?
>>107323813It's another model.
>>107323813>anyway, this week we eating good: QIE and FLUX2there will be also that small 6b model + LTX-2
the flux-chin still lives
>>107323848localchads we cant stop WINNINGI saw some ltx-2 gens and they're kinda meh, not sure how it compares to wan before I try it for myself tho!!
>>107323769that looks like shit
>>1073238362.1 yeah probably
>>107323848Doesnt ltx2 coom out at the end of this month?
>>107321182post more pig slut
https://www.reddit.com/r/StableDiffusion/comments/1p6g58v/comment/nqqeyiw/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
>On a 5090 locally, 128gb ram, with the FP8 FLUX2, here's what I'm getting:
>loaded partially; 20434.65 MB usable, 20421.02 MB loaded, 13392.00 MB offloaded, lowvram patches: 0
>100%| 20/20 [03:02<00:00, 9.12s/it]
>a man is waving to the camera
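For what it's worth, the pasted log is internally consistent: 20 steps at 9.12 s/it works out to the reported 03:02 wall time.

```python
# Sanity-check the pasted tqdm log: 20 steps at 9.12 s/it.
steps, sec_per_it = 20, 9.12
total = steps * sec_per_it          # ~182.4 seconds
clock = f"{int(total // 60)}:{int(total % 60):02d}"
print(clock)  # 3:02
```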
Is comfy here? I have a question.
>>107322951i don't care about porn
>>107323790bwahaha
>>107323890lmao the ledditors aren't really hyped by that model
just give me my qwen edit update, it's already great but I want to see what they added.
>>107323904you still have to unload the model to run the text encoder, it's gonna be sooo slow
>>107323890I'm struggling to see what's the point of this model, since it can't be run on consumer hardware and people or companies with high-end enterprise hardware cannot use it for commercial purposes, so its essentially a model for rich hobbyists?
kinda depressing that we get a new SOTA and the coomers can't handle that they can't gen tits. especially when you are two clicks away from an endless supply of them
just wait for klein lol
>>107323925yeah but it's BFL, what the hell did you expect
>>107323897Supreme details
>>107323922>so its essentially a model for rich hobbyists?if it works well on nunchaku it could be run by everyone, 4bit -> 18gb for the transformers model
nunchaku will take care of flux2, should be enough for 24GB?Someone do the math for me, I'm stoned out of my mind.
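For the stoned anon: the napkin math is just parameter count times bytes per weight. A quick sketch in Python, assuming the 32B figure mentioned earlier in the thread and ignoring quant-scale overhead, activations, and the text encoder:

```python
def weight_gib(params: float, bits_per_weight: float) -> float:
    """Raw weight memory in GiB: params * (bits / 8) bytes each.
    Ignores quantization scales, activations and the text encoder."""
    return params * bits_per_weight / 8 / 2**30

flux2_params = 32e9  # Flux.2 dev transformer, ~32B params per the thread

print(round(weight_gib(flux2_params, 16), 1))  # bf16:  59.6 GiB
print(round(weight_gib(flux2_params, 8), 1))   # fp8:   29.8 GiB
print(round(weight_gib(flux2_params, 4), 1))   # 4-bit: 14.9 GiB
```

So 4-bit weights alone land around 15 GiB, which is why the ~18gb estimate above (weights plus scales plus activations) squeezes onto a 24GB card while fp8 does not.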
>>107323922>I'm struggling to see what's the point of this modelme too, it's too big, it looks slopped, it's censored as hell (and they bragged hard about it)
>>107323890>?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_buttonkys
>>107323978it's to share this specific comment you subhuman retard
>>107323986its not
out of the box on a 3090, first image is 330s, second is 190s, running the full model @ 20steps
>>107323996can it do woman lying on grass
>>107323996>running the full modelbf16? with a 3090? how?
https://files.catbox.moe/3o0bxk.mp4why does WAN like to add so many moles
>>107322891>32bwhy do I feel Alibaba is the only non retarded company of them all, they knew the absolute limit was 20b, not everyone has a fucking RTX6000 in their home the fuck
>>107323861A low detail anime picture is a shit choice for revealing model capabilities. I think he just really likes anime.
>>107324025they really expect us to download 50gb of censored slop lmao
>>107324010block swapping the shit out of my fucking setup>>107324008let's find out, after loaded it's pretty consistent at 190s/gen. i will try fp8 out soon. i know these need more steps for sure
>>107324046give it a challenge man, fuse two prompts and do a splitscreen image
>>107324063No, the challenge should be a double exposure. To this day only one model can do it accurately.
>>107324008
>>107324063it's slopped as fuck
>>107324063even the pro model is slopped lol
>>107322891So in more than 1 year between Flux 1 and Flux 2 the only improvement they could come up with is "make it 3x bigger"? loool
>>107323904is this 3x12gb? cope cards
Does anyone earnestly expect any new foundational model not to be slopped? Anon has to inject the soul into it every time this isn't new.
>>107324010>what is offloadingfucking retards
>>107323986remove all of the shit after the question mark and it still gets to his exact comment.
>>107324071>>107324087just because you don't understand how any of this works doesn't mean a model is "slopped"
>>107324107why should I do that? scared of more letters? this is how it looks after using the "share the comment" feature on reddit
>>107324123shut the fuck up
>>107324063prompt issue
>update comfy after many weeks of not touching it out of fear of it fucking everything up
>seems to go ok
>errors about front-end sperging shit
>figure out wtf its talking about and fix it
now .pngs don't load workflows anymore
it's the best we got right? many folks are saying it
>>107324135>an enterprise company didn't give me a free 8k cunny model!!!REEEEEEEEEEEEEEEEEEEEEEEEEE
>>107324144>moving the goalpostsConcession accepted.
quick, someone generate feet with flux.2
>>107323254https://huggingface.co/black-forest-labs/FLUX.2-dev
Unfortunately, like 70% of the FLUX.2 [dev] HF page is literally just them bragging about how heavily lobotomized and censored the model is, both from pre-training physically removing concepts from the training data and multi-stage post-training for even more safety (save us, Qwen!)
where slurpzilla
>>107324165
>>107324183So flux 2 knows that character? interesting
This mf is fasthttps://www.youtube.com/watch?v=qWDpPos6vrI
>>107324183i wanna chew those toenails off
>>107324142>prototype idea to use nodes for diffusion>everyone dogpiles and contributes garbage>get millions of dollars>enshittify the fuck out of it and annoy the entire community with breaking changes, refuse to add/update features everyone wants, and constant shilling
every fucking time. also this new model looks like ass
>>107324198
>>107324208better
>>107324183>tranime crapPlease generate real feet
>>107324203>this new model looks like assI'm sure the edit feature is great, but I'm not gonna download a 36gb model and wait for minutes just for a single output, they're delusional
>>107323922This Chroma?
>>107324219>crapbrother, your AI woman is literally bathing in actual crap
>it's too big to run REEEEE
RTX 6000 Pro is 3x faster and has 4x the VRAM of a 3090. Running Flux 2 on a 6000 Pro is like running Flux 1 on a 3090. The solution is to get a decent paying job and just buy a 6000 Pro instead of complaining. Fuck VRAMlets. This is what improving the SOTA looks like.
>>107324178>https://huggingface.co/black-forest-labs/FLUX.2-devthey're so out of touch it's hilarious, bnb 4bit is a terrible quant, they should've gone for a nunchaku quant on day one
>>107324219
>>107324242maybe it's just me but I'm not seeing the improvement the extra size provides
>bakernotfound.jpg
during the flux 2 release he was nowhere to be found...
>>107324242>This is what improving the SOTA looks like.just stacking moar layers isn't improving anything, it's like saying that you improve your gpu by using more gpus than one
anyone got a json workflow instead of the png workflow for flux2? can't get any pngs to load workflows in comfretardy anymore
>>107324165catbox?
"On consumer grade GPUs like GeForce RTX GPUs you can use an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI." - BFL on their page
>>107324195Ostris is a god amongst men
>>107324242>let's go the HunyuanImage 3.0 route, what could be wrong???
>>107324284https://github.com/Comfy-Org/workflow_templates/blob/main/templates/image_flux2_fp8.json
>>107324288with RAM offloading
>>107324288>>107324299but how do you do the ram offloading though? it's doing it automatically on comfyui? you need a special node for it?
>>107324308if it's not doing it automatically use distorch2 loader from multi-gpu nodes
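For anyone wondering what these loaders actually do under the hood: the core idea is block swapping, i.e. the weights sit in system RAM and each transformer block is moved to the GPU only for the duration of its forward pass. A toy PyTorch sketch of the concept (this is NOT the actual DisTorch2/ComfyUI-MultiGPU code; the class and layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

class StreamedStack(nn.Module):
    """Toy block-swapping wrapper: parameters live in system RAM and
    each block is streamed to the compute device only while it runs."""
    def __init__(self, blocks, compute_device):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)   # stays on the CPU
        self.compute_device = compute_device

    @torch.no_grad()
    def forward(self, x):
        x = x.to(self.compute_device)
        for block in self.blocks:
            block.to(self.compute_device)     # stream weights in
            x = block(x)
            block.to("cpu")                   # evict to free VRAM
        return x

device = "cuda" if torch.cuda.is_available() else "cpu"
stack = StreamedStack([nn.Linear(64, 64) for _ in range(8)], device)
out = stack(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

Peak VRAM then becomes roughly one block plus activations instead of the whole model, at the cost of a PCIe transfer per block, which is why offloaded gens take minutes instead of seconds.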
>>107324322can you try to do some edit stuff, I'm interested in that
>>107324332>>>/r/
>>107324144>REEEEE>>107324242>REEEEE
this bfl employee is sure annoyed about our reactions lmao
>>107324298>https://github.com/Comfy-Org/workflow_templates/blob/main/templates/image_flux2_fp8.jsonty
>>107324336/r/etard
bake?
>>107324242>3x fasternot at the first non toy quant (q8)
>>107324242>RTX 6000 Pro is 3x faster and 4x the VRAM of a 3090.and 10x the price of a 3090 kek
patty cake patty cake
>>107324381poors should just use api models
local is dead. just make a datacenter at home bro
>>107324385>poorsyou are poor since you can't run that hunyuan 80b model, how about that?
>>107322891Wow, that looks based. China is still the only one that can do video though, China numba wan.
>>107324392that's exactly what i meant
>>107324393US is dead, long live China
>he doesn't have a 10 stack of A100'sYou're poor. HAHA, BROKIE.
>>107324393the text is pretty good for a non autoregressive model desu
"FP8 Quantization: NVIDIA and Black Forest Labs quantized the models to FP8, reducing VRAM requirements by 40% while maintaining comparable image quality. Enhanced Weight Streaming: NVIDIA partnered with ComfyUI to upgrade its "weight streaming" feature, which allows massive models to run on GeForce RTX GPUs by offloading data to system RAM when GPU memory is tight."
Is this noticeable in other models?
>>107324410>textwhen will you learn this doesn't fucking matter in an IMAGE MODEL. the first fucking benchmark is generate GOOD LOOKING IMAGES. YOU ARE A FUCKING STUPID SHITTER. FUCK OFF
>use example workflow on 5090>great success
https://blogs.nvidia.com/blog/rtx-ai-garage-flux-2-comfyui/
>The new FLUX.2 models are impressive, but also quite demanding. They run a staggering 32-billion-parameter model requiring 90GB VRAM to load completely. Even using lowVRAM mode — a popular setting that allows artists to only load the active model at a time — the VRAM requirement is still 64GB, which puts the model virtually out of reach for any consumer card to use effectively.
>out of reach
whose fault is that, Nvidia?? geez I wonder...
>>107324438it actually does matter because integrating text so that it *properly* fits into the image is a really hard fucking task in photoshop/whatever
>>107324438>when will you learn this doesn't fucking matter in an IMAGE MODEL.it does subhuman, you can make great memes if the model understands how to make comic pages >>107323324
>>107324439try less dicks noob
>>107324452they will cope forever because their local model is worse than the corposhit
>>107324439lemao, that nvdia blog page talks about how they worked with comfy to implement some offloading shit yet when I look at the code I'm seeing none of thathttps://github.com/comfyanonymous/ComfyUI/pull/10879/files
https://huggingface.co/orabazes/FLUX.2-dev-GGUF/tree/mainIs Q6_K good?
>>107324456nou
>For local deployment on a consumer type graphics card, like an RTX 4090 or an RTX 5090, please see the diffusers docs on our GitHub page.
>As an example, here's a way to load a 4-bit quantized model with a remote text-encoder on an RTX 4090:
>Can only run Q4
Interesting, so I guess we'll have to wait until nunchaku guys give us weights to have a proper Q4 speedup, and that will be the definitive way to run it.
>>107324475I'll tell you what happened
>hey comfy, will that offloading you have work with this?
>uuh, maybe?
and that's that
>>107324475keek
>>107324446>skill issue in baby tasklmao>>107324452>it does subhuman, you can make great memes if the model understands how to make comic pagesmaybe just fill in the panels and write the dialog yourself non destructively instead?
>>10732447514 is the new 12
>>107324496I really hoped comfy would implement that lodestone offloading shit, it would greatly help
>>107324479>Q4
Q4 is terrible, so yeah only nunchaku can save that model >>107323938
>>107324491>yourselfwhy? it's the goal of an AI to do this shit for me, if you want to do shit by yourself take a pencil and draw instead of using an AI nigger
>>107324491holy cope
>>107324508yet nobody reads you trashy slopstyle comic. why even make it in the first place?
>>107324520>why even make it in the first place?because it's fun, do you know that concept?
>>107324496>lodestone offloading shit
it's for training only dipshit
>>107324479>we'll have to wait until nunchaku guys give us weights to have a proper Q4 speedupNunchaku will deliver within a week. Flux has always been a priority for them.
>>107324527if that is fun for you, you are a boring person kek
it can't count for fucking anything apparently.. 12 dogs.. TWELVE DOGS
>>107324533>dipshitdebo's favorite word
>>107324547>debotranfag's fave boyfriend
>>107324544the skin texture is as smooth as Qwen Image, there's no reason to switch lol
>>107324558yeah the skin is absolute trash with flux2 .. great job failfags
>>107324544prompt issue
flux2_dev_Q6_K.gguf
26.7 GB
lol, who is this model for? RTX 6000?
>>107324574>>107324575no prompt is gonna fix that slopped skin though
>>107324575in case someone was wondering if this is just an issue with local inference stack, here's>A man showing both of his palmsfrom flux 2 dev on replicate
>>107324575>yellow tintdamn they trained on 4o imagegen or what? lmao
>>107324598with go_fast turned off btw
>>107324575>>107324598the texture is weird desu, and the fact they still went for a VAE even though it's an edit model is a retarded move
>>107324611>>107324598>>107324575can someone explain how this redditor managed to get way better results than yours? kek >>107323569
>>107323016>>107323401>chroma is so fuckin dead, what an incredible _waste_ of resources that turned out to beLol, lmao even. Haven't tested Flux.2 yet, but you are a funny guy.
>>107324633photoshop
>At a private birthday party, a sad, chubby woman in a penguin costume rides a unicycle across a wooden plank between two skyscrapers that are part of a miniature toy city. In her left hand she holds a glass of wine, in her right a cigarette holder. Someone holds up a banner that reads “Happy 41st Birthday.” The photo was taken by an amateur photographer with an SLR camera in fisheye mode.
>>107324611>photorealisticyou goddamn retard. this is why you can't trust any examples without prompt.
>>107324653>A Koala wearing a cowboy hat rides a giant donut that has sprinkles on it. In the background a mega explosion but it also raining cubic shaped pieces of hale and there is tornado weather clouds in the back. The Koala is getting away from a metallic reflective SUV with the writing "ZOO POLICE" on the side. The scene is action packed with various people running around screaming.
>>107324662This is Qwen Image tier in terms of slop.
can't believe I turned out to be a vramlet with a 4090 :( fuck this gay earth
>>107324658i put that in there as a test because the skin is so garbage looking.. it didn't help
>>107324662Just add 'amateur photographer with an SLR camera' faggotThese models are trained with image metadata.
>>107324662>A man standing in the rain on a city street near a lamppost, holding a black umbrella. His reflection is visible in the puddle at his feet. warm street lights, realistic street photography style.
>>107324695>Just addshut your mouth parasite, if you want to make your own images then run this shit in your pc, what's the problem? you're poor or something?
>>107324695>prompt for specific camera >puts a camera in the gen but doesnt change the style baka
>https://comfyanonymous.github.io/ComfyUI_examples/flux2/>Fp8 diffusion model file: flux2_dev_fp8mixed.safetensors (goes in ComfyUI/models/diffusion_models/). If you want the full sized diffusion model you can find the flux2-dev.safetensors on the official repo here
This doesn't fit on a 3090 right? Where's nf4?
>>107324725correct, 36 is bigger than 24
you'd need to borrow 12gb from somewhere else
>>107324725are you seriously asking if 35.5 > 32?
>>107324725>>This doesn't fit on a 3090 right?it doesn't, you have to use an offloading nodehttps://github.com/pollockjj/ComfyUI-MultiGPU
>he bought a 5090Should've gotten a modded 4090 instead. VRAM is king.
>>107324725>3090>>107324740>32anon... it's 24gb for the 3090
>>107324676Same, I got one slopped image out of fp8, the next one OOM'd then killed the Comfy instance and a relaunch and try #3 crashed my PC to no displays. I don't think it's handling memory correctly right now. Nothing was cleared on my first gen and it just tried to pile everything on top for the next.
>>107323287>(this setting can get as low as ~18G of VRAM)>Even an H100 can't hold the text-encoder, transformer and VAE at the same time.>pipe.enable_model_cpu_offload() slows you down a bit. You can move as fast as possible on the H100 with the remote text-encoder
Use case is fairly clear, no?
This was hlky's idea btw, he introduced remote VAE months ago, there were plans to add CLIP and T5, but they fired him
>>107324725I have a 3090 + 3060 so technically I can run it, but it looks so slopped I don't see the point of wasting my time with this shit
>>107323287>>107324757so it's just an API node lol
>>107324762snap, plus it's 50gb of models to download
https://huggingface.co/orabazes/FLUX.2-dev-GGUF
>>107324708snowflake brr
>>107324756>I don't think it's handling memory correctly right now.use this >>107324741
>>107324771Yes but for pipeline components rather than the entire generation, you're still generating locally
>>107324308>but how do you do the ram offloading though? it's doing it automatically on comfyui?seems like it
seems like this model is the best on edit mode
>>107324756some jeet redditor claims its working on a 3090
>>107324805it doesn't look like him and it made everyone sweaty, those models can't help but make the skin shiny for some reason
>>107324762>but it looks so slopped I don't see the point of wasting my time with this shit
As always, Chroma is the model to use if you want to be free from slop. No idea if a Chroma style tune on Flux.2 is even possible.
>https://xcancel.com/bdsqlsz/status/1993295498137288709#m
Huh? I thought that model he was teasing was Flux.2. I guess not.
>>107324807comfyui does this for every natively supported model. if you have a lot of ram it'll offload there
>>107324805nano banana pro sisters... thanks for the free training data.
Isn't it ironic that this upcoming 6b model looks more realistic than Flux 2?
https://xcancel.com/bdsqlsz/with_replies
>>107324796Oh wow, so I just run the Comfy workflow on my 3090 and it works?
>>107324807I did get that one image out of it, so it does work somewhat. Took just 91sec on my 4090. My memory is at 2.4Ghz but that's always been stable with everything else (including Wan and Hyvid1.5 which stress my card harder than image gen).
>>107324854This is not surprising given that Flux.2 is a distillation.
>>107324835>comfyui does this for every natively supported model.but nvidia said they were working with comfy to implement its offloading feature
https://blogs.nvidia.com/blog/rtx-ai-garage-flux-2-comfyui/
>And to make this model accessible on GeForce RTX GPUs, NVIDIA has partnered with ComfyUI — a popular application to run visual generative AI models on PC — to improve the app’s RAM offload feature, known as weight streaming.
>>107324856It didn't for me. I had to use the ComfyUI multiGPU node and offload 16GB to the CPU.
>>107324835thank god I'm not a RAMlet, there is hope bros
>>107324878you have a 20gb video card?
>>107324887No, a 24GB 3090 and all my applications are running on the integrated GPU.
>>107324878Leave it to Comfy to not actually give us the VRAMlet friendly workflow, at least below the official one. Got a workflow?
they called me a madman for rammaxing and gen 4 ssdmaxxing two years ago, now 128gb of ram is more expensive than average used 3090 global price
>>107324856>https://xcancel.com/bdsqlsz/with_replies
didn't work for my 5090.. had to load the fp8 clip to make it work.. the default was to load the bf16 clip which caused it to OOM
>>107324905>Got a workflow?just replace the loading node from the workflow with this one and you're good to go >>107324741
https://files.catbox.moe/023bu2.png
https://xcancel.com/multimodalart/status/1993351690851103028#mthat's interesting, you can see how the quants affect flux 2
https://bfl.ai/research/representation-comparisonthey made a "paper" for flux 2
>>107324805Finally some good fucking food for local.
>>107324947SD has sovl!
>be 16gb vramlet
>cant into flux2
sigh... think i'll give chroma radiance a try
>>107324982you can if you offloadmaxx >>107324741
desu the only copes remaining are nunchaku (quality images even at 4 bit) and their klein model (it's supposed to be smaller)
20 steps vs 50 steps on same prompt
copechakusisters... its our time to shine.
>>107325007it makes the skin better indeed, but 50 steps bruh...
>amateur photo, low-lit, Low-resolution photo, shot on an old mobile phone, a woman in a fast-food restaurant
somewhat painful to see the people posting here all day have such low reading comprehension that they can't get a model with a built-in workflow working.
>>107324898it works fine on a 24GB 3090 both fp8 and full model (full model is actually faster). ask me how i know.
>>107325033
>>107325014>noooo the SOTA needs to work on my pc from 2017!!!
>>107325033any flags used? mine's not having it, just shuts off
>>107325051even with a 2025 gpu card you'll have to wait for minutes before getting one image, what a corporate cocksucker you are anon
>>107325029try a similar realistic photo of a group of friends at comic con, and then list some character cosplays to see what it knows
https://docs.bfl.ai/guides/prompting_guide_flux2Kek json
>>107325070Worse than that, even 25k enterprise GPUs can't fit everything.
From their GitHub page:
>Even an H100 can't hold the text-encoder, transformer and VAE at the same time. However, here it is a matter of activating pipe.enable_model_cpu_offload(). And for H200, B200 or larger cards, everything fits.
>>107325087>https://docs.bfl.ai/guides/prompting_guide_flux2wait so you can ask for something vague and it'll think of the text by itself? that's pretty based
>>107325070you're upset because they didn't make the model work on non-corporate gpus? are you diffusing with a card you made at home?
>>107325118>are you diffusing with a cardsaar?
https://docs.bfl.ai/guides/prompting_guide_flux2#style-reference-guide
>>107325113>coffee consumption worldwidethis image has nothing to do with the prompt, I think they made a mistake lol
>>107325131why are you reposting the link?
>>107325056ah, was this it?
what a time I picked
>no artist stylesGaaaaaaaaaaaah
>>107325131
>>107325113>wait so you can ask for something vague and it'll think of the text by itself?Seems like it.>Women’s Health magazine cover, April 2025 issue, ‘Spring forward’ headline, woman in green outfit sitting on orange blocks, white sneakers, ‘Covid: five years on’ feature text, ‘15 skincare habits’ callout, professional editorial photography, magazine layout with multiple text elements
>>107325087rofl.. yeah.. putting json in for the prompt crashes comfy
>>107325146you can use an image input to get your style >>107323311
>>107325131this is probably all pro but still
20 50 80 steps
>>107325132>doesn't use title>doesn't have 3 sections with statistics>no icons for countries>wrong color scheme
new>>107325191>>107325191>>107325191>>107325191
An amateur-quality iPhone photo taken in a dim, slightly cluttered living room. The image has the usual smartphone flaws: mild grain, uneven white balance, slight motion softness, and faint reflections on the TV screen. The camera angle is eye-level and a bit off-center, pointed directly at a medium-sized flatscreen TV that dominates the frame and is clearly the focal point.
Around the TV, the room shows typical domestic details: a low TV stand with random household items, maybe a few cables visible, a muted lamp in the corner producing warm ambient spill, and a soft shadow cast across the carpet. Nothing is staged or aesthetic — it has the casual, imperfect feel of a real snapshot.
On the TV screen is a generic late-night election news broadcast in standard American cable-news style. The design includes a lower-third banner with red-white-blue graphics, moving tickers, and bold sans-serif overlays. The screencap shows two fictional candidates: one is labelled API KEK and the other VRAMLET. Both candidates appear in the split-screen format typical of election coverage: two head-and-shoulder shots side by side, each under their names, with vote percentages or placeholder numbers beneath them.
The broadcast uses typical colors — bright blues, saturated reds, glowing white highlights — with a slightly overexposed edge glow caused by the iPhone camera auto-adjusting to the TV brightness.
The living room lighting remains dim compared to the TV, creating a strong contrast where the TV glow illuminates part of the room unevenly. The whole image should look casual, unstaged, and shot quickly by someone standing in front of their TV.
how do you run this shit if the model is 35GB
>>107325393offloading. you did buy ram before the price went to the moon, right?right?
>>107323072Understood from the context, but unreadable
>>107325393>he had 3 years to buy 128gb
>>107323922If you’ve got a catbox for this, bless us anon. See how I do. kek.
>>107321375"Because I'm generating a one-frame "video", to use WAN as an img gen. It's on purpose"
what's the purpose of doing this?
>>107321859the fuck?? for like a single image? or for a clip?
>>107326173...to generate images