Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>102187084

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/c/kdg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/trash/sdg
ballees brgewa of cfefeiesnship DN AMAGGyah
>schizo thread
>>102191214
ty baker
>mfw
>>102191279
>That does seem to gel with what I've seen.
Let's not forget that the QK quants were optimised for LLMs (Large Language Models) and that there's no guarantee they're as efficient for image models. Maybe there should be an alternative to QK that is optimised only for image models.
Blessed thread of frenship
>>102191330
>QK quants
Quantization is a mathematical method that works regardless of the neural network's task... it works equally well for LLMs as for diffusion models. If you wanna know more, read:
https://www.maartengrootendorst.com/blog/quantization/
>>102191388
>it works equally well for LLMs as for diffusion models
the reality says otherwise though >>102191134
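For reference, the scale-and-round step the posts above are arguing about is the core of weight quantization; a toy pure-Python sketch of symmetric quantization (not any specific GGUF/QK block format, which adds per-block scales and offsets):

```python
# Toy symmetric quantization: map floats onto a small signed-integer grid.
# This is format-agnostic math; real Q4_1/Q5_1 quants work per-block.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax  # one scale for the whole tensor
    return [round(w / scale) for w in weights], scale

def dequantize(qweights, scale):
    return [q * scale for q in qweights]

weights = [0.12, -0.5, 0.33, 0.07]
q, s = quantize(weights, bits=4)     # 4-bit: only 15 representable levels
restored = dequantize(q, s)
# rounding error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 for a, b in zip(weights, restored))
```

Whether that bounded error is *perceptually* tolerable is exactly what differs between next-token prediction and denoising, which is what the disagreement above is about.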
Q4_1 top, Q5_1 bottom
>>102191408
A quantized LLM also is not as "smart" as a non-quantized one, especially if you go as hard as fp32 -> q4... it's just not as visible as it is in images. As they say, an image is worth a thousand words.
>>102191446
>A quantized LLM also is not as "smart" as a non-quantized one, especially if you go as hard as fp32 -> q4
We got the bf16, not the fp32 model (if we suppose such a model even exists), though?
>>102191430
Bottom looks crispy
>>102191463
>not the fp32 model (if we suppose such model even exists) though?
Probably internally at BFL... but my guess is they chose the parameter count and weight precision they released very carefully so it just runs on 24GB VRAM. Also I am not sure whether the flux1 default is bf16 or just fp16.
why does flux start generating grid-like artefacts at high res, and how do i prevent it (if even possible)? a full upscale would be much better than shitty tiled seams
>>102191494
>also I am not sure the flux1 default is bf16 or just fp16
it's bf16
https://huggingface.co/black-forest-labs/FLUX.1-dev
>>102191493
I've been doing everything with a CFG of 6 so I need to try adjusting that next
>flesh high-heels
New fetish unlocked
>>102191514
>it's bf16
let's be clear there, are we running the bf16 or the fp16 when we do that then?
>>102191528
>I've been doing everything with a CFG of 6 so I need to try adjusting that next
what's your anti-burner? for me it's AutomaticCFG
>>102191568
>AutomaticCFG
Wasn't aware of this, will try it out
>>102191430
>>102191528
these don't appear as close to the source material as I'd expect flux to be able to do desu
>>102191588
I think the dude who made it said he was going to retrain and optimize because he wasn't fully satisfied either. I'm happy tho, and I'm probably not prompting as carefully as I could.
>>102191587
you can see some comparisons there, AutomaticCFG seems to be the one that gives the best prompt adherence of them all
https://reddit.com/r/StableDiffusion/comments/1eza71h/four_methods_to_run_flux_at_cfg_1/
>>102191514
where does it say that there?
Q4_1 top, Q5_1 bottom again
>>102191656
>where does it say that there?
it's written: "torch_dtype = torch.bfloat16" anon
The new Chinese "MiniMax" text to video model is surprisingly coherent with little warping. Almost feels like it's an actual video of a 3D environment. Hope flux video has these capabilities.
>>102191675
I see... I was searching for "bf16"... thx
>>102191544
All releases of flux are bf16 originally. Converting it to fp16 doesn't matter except for compatibility. "Default" here means you're not converting from whatever dtype your local file has.
>>102191656
it's in the config file for the relevant archs
>>102191544
BF16, unless you have a shit card that doesn't support it.
>>102191715>>102191693
>>102191715
that foxgirl identifies as the Golden Gate bridge, desu
>>102191704
>converting it to fp16 doesn't matter except for compatibility
going from bf16 to fp16 is lossy though, that's why it's important to know if it's been done somewhere
https://new.reddit.com/r/StableDiffusion/comments/1f6obs0/comment/ll20fbt/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
>All looked "worse". The original weights are actually BF16, not FP16.
>i.e. on my non-BF16-supporting card, gens are subtly inferior. Doing merges from quantized/casted weights... also not as good. Precision is lost. Model makers don't seem aware of this yet.
>>102191746
Based Comfy driving anons to suicide
>>102191568
>>102191587
>>102191600
Tried it but don't see any differences with a CFG at 6 and Q4_1 or Q5_1, but I'll keep it on in case it helps for other prompts
Are there any good online generators / generation services that don't have limitations? Ideogram fulfilled this purpose before and now it's become nogger; unsure if Flux is a meme (I've been using Flux Pro too). Any recommendations?
>>102191947
it's really negligible if the bf16 values aren't outside the fp16 range
by negligible I mean a MAD below 1e-5, for example
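The range-vs-precision tradeoff behind that claim can be shown with plain bit twiddling. A sketch: bf16 is simulated here by truncating the fp32 bit pattern, while real hardware casts round to nearest.

```python
import math
import struct

def to_bf16(x: float) -> float:
    # bf16 keeps fp32's 8 exponent bits but only 7 mantissa bits
    # (simulated by truncating the low 16 bits of the fp32 pattern)
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    # fp16 has 10 mantissa bits but only 5 exponent bits:
    # max finite value is 65504, anything bigger overflows
    try:
        return struct.unpack('e', struct.pack('e', x))[0]
    except OverflowError:
        return math.copysign(math.inf, x)

x = 0.3141
# in range, fp16 is actually MORE precise than bf16 (10 vs 7 mantissa bits)
assert abs(to_fp16(x) - x) < abs(to_bf16(x) - x)

# out of fp16 range, bf16 copes while fp16 overflows
assert to_bf16(100000.0) == 99840.0
assert to_fp16(100000.0) == math.inf
```

This is why an fp16 cast of bf16 weights clips only outlier values that bf16 represented fine, while in-range values lose almost nothing, matching the "negligible if in range" claim above.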
>>102191947>local
>>102192138You will never be a human.
>>102192147
>>102191214
does t5 overwhelm clip_l?
going through epochs, trying to pick the ones i will keep. these are i, j, and k. k is most interesting but most off-prompt.
>>102192037
I haven't gone local in ages; I've got lazy in my old age. What're the advantages of going local VS Le cloud based?
>I have a 4090GTX
>>102192242
online generators don't give you as much freedom in configuration, cost money, and are often censored
>>102192242
oh and also my mother is already dead
>>102192242
don't post "your mother will die in her sleep" stuff please
>>102192284
What would give better results? What beats the meme of the week, Flux or DALLE 3, in terms of local then? I'm out of the loop and SDXL was the last I used, with Automatic1111
>>102192242
not sending your data to some company
>>102192412
Flux (or whatever the next fad will be) local gives you free choice of generating parameters, extra nodes (like SDUltimateUpscale for nearly unlimited resolutions!) that change the quality of the output, and free choice of loras.
>>102192412
>I used with Automatic1111
atm it's rarely being updated... there is a fork called Forge that is slowly replacing it; also ComfyUI has seriously matured and has a huge trickbox of extra nodes that can greatly alter and enhance the quality of gens.
But running diffusion non-locally is fine... it just won't give you the freedom (and the unlimited gens).
>>102192502
Interesting, thanks for the insight. I saw LoRAs on civai or whatever it's called, the one with the begging. I've heard of Forge and ComfyUI. "Trickbox" I've heard once, what exactly is it? See, I'd be interested in making some "Spider-Man, Batman, and Iron Man together" unique and HQ images for my best friend's kids, and wonder how much extra I could get from doing it locally. Definitely going to be trying it out tomorrow when I get back from the hospital.
>>102191528
>I've been doing everything with a CFG of 6 so I need to try adjusting that next
Does that help for this LoRA? I've been using the default 1.0 and getting inconsistent results.
>>102192412
>What beats the meme of the week Flux or DALLE 3 in terms of Local then?
>Dalle 3
>Local
Anyway, I don't know how someone could use a paid service and enjoy it. 99% of the time I'm just dicking around or tinkering; I don't get why I'd have to pay someone for that privilege. But yeah, flux is the meme of the week, especially if you like to have fun with the prompts.
>>102192584
>"Trickbox" I've heard once, what exactly is it?
you can install custom nodes in ComfyUI... they range from simple loaders to upscalers and whole animation-making setups. Since it's modular in design, some anons came up with insanely intricate workflows to make very high quality outputs.
>See I'd be interested in making some "Spider-Man, Batman, and Iron Man together" unique and HQ images
will work in FLUX... but I hope you got the hardware to run it effectively, else you will be waiting long and tired for hires outputs that will please you
>>102192651
SD ultimate upscaler works for flux for tiled upscales. It's how I do all of mine
>You MUST have long verbose captions for your dataset
>NO I don't care if you use joy caption, which was trained in the basement of a literal who, hallucinates all over the place and makes assumptions about the image.
>You just HAVE to. OKAY?
>I don't care that the model has seen billions of examples of everything you're putting in your LoRA and you're more likely to mistag something than actually describe it accurately, you MUST use the meme captioner, how else would the model KNOW that it's looking at a red dress? GOD!
>NO it's not enough just to tag names and unique concepts in a dataset, you MUST tag that flower as well or the model will be confused. There's NO WAY a model knows what a flower is.
>>102192682
that's how I do mine too, pic related
>>102192242
I can gen tits and Nazi propaganda locally.
>>102192691
the meds will stop the voices anon, at least for a while. Take them
>>102192691
OK, please train 2 LoRAs, one with verbose captions and one with no captions, with otherwise the same dataset, and compare the results. I don't think anyone has done this before. You can think whatever you like and I'm not saying you're wrong, because I simply don't know, but I'd like to see some evidence, since verbose captioning has produced excellent quality LoRAs for me.
Pytorch 2.4.0 is fucked up lol https://imgsli.com/MjkzMjI3/0/2
>>102192730
https://civitai.com/articles/6792/flux-captioning-differences-training-diary
Learned how to use Lambda Labs cause I am scared of starting house fires when leaving the LoRa baker on while at work. The downside is I can't train anything questionable unless I want to be potentially vanned by the shifting laws regarding AI.
>>102192691
How does not tagging the flower affect the probability of it being associated with the other elements that were tagged?
>>102192731
>Schizos complain that torch 2.5 results were different, and faster therefore worse
>Turns out it was actually 2.4 that was shit.
The ironing.
>>102192781
both 2.5 and 2.4 look worse than 2.3.1 though
>>102192778
You don't tag anything but the common element between the concepts you're trying to train, i.e. the character's name or the idea of an upskirt shot. Everything else the model already knows better than you do.
>>102192765
Thanks. I'll try this on my next LoRA. Is this specific to styles, or will it work the same on character LoRAs too?
>>102192651
I have an i13 and a 4090GTX so hopefully should be good
Does anyone have a settings json for kohya that will work on 16GB VRAM? I've been struggling
>>102192807
*RTX I hope... or you got arcane hardware.
But a piece of advice: if you are used to A1111, you might try Forge first for flux local before going ComfyUI, since the Forge UI is mostly the same as A1111... ComfyUI is more powerful tho
https://www.reddit.com/r/StableDiffusion/comments/1f523bd/comment/lkpzeh9/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Some more evidence in the "your captions are probably doing more harm than good" camp. This is from the guy that showcased his block-specific training yesterday. He's a fairly prolific LoRA trainer, so his ideas carry at least some weight.
>>102192804
NTA, I prefer WD14 tagging since I already have a ton of datasets in WD14. And it does work with characters.
>>102192839
>I've always preached against captions, captions should only be used for non-existent concepts
how are we supposed to know what Flux knows or doesn't know as a concept?
>>102192839
I don't care who people are, I care about the results.
>>102192864
12B parameters; unless it's your own asshole you're training, it probably knows it.
>>102192837
Shit, yes, RTX. Faux paux on my part lmao
I swear I've used ComfyUI now the more I think about it. The stringy lines connecting and the likes. By Flux local, is that Flux Schnel, Dev, or Pro equivalent? And should I even be focusing on Flux to begin with?
>>102192884
>12B parameters
what does this have to do with anything? We have no idea how many pictures this mf ate during the pretraining; maybe BFL filtered it hard and there's not much Flux has seen in the end
>>102192895
Schnell is a weaker 4-step model, Dev is the default for local, and Pro is API only.
>>102192895
Faux pas*
>>102192901
Given just how easy it is to train, I'd say that's highly unlikely; it already knows them, it just doesn't know what they're called. All we need to do is tell it.
>>102192895
>And should I even be focusing on Flux to begin with?
yes... SDXL is mostly stuck in porn limbo
>>102192895
>I swear I've used ComfyUI now the more I think about it. The stringy lines connecting and the likes. By Flux local, is that Flux Schnel, Dev, or Pro equivalent?
yes, that's the noodle UI... local FLUX is either FLUX.dev or FLUX.schnell. schnell is a bit retarded tho, use dev
>>102192924
if that's the case then we should go for embeddings instead of Loras? That's what embeddings were made for
>>102192935
Because everyone is used to loras and won't change to embeddings, because people are resistant to change. It's as simple as that.
>>102192910
What about the free Pro options online? How do they compare? And is Flux the right way to go? I find Ideogram to be the comfiest I've seen for neat results, personally.
>>102192950
that's too bad, because you can make embeddings with a potato PC kek
>>102192959
Stronger than Dev but censored, token-limited and worthless.
>>102192731
Now I start to get why Comfy doesn't want to switch to Pytorch 2.4.0 lol
>>102192963
Well, I have a 4090 but I haven't made an embedding since early early SD, before we had loras. I don't even know how to begin training an embedding
>>102192585
>Does that help for this LoRA? I've been using the default 1.0 and getting inconsistent results.
It helped with my first few gens but I never really went back to see how widely it applies
>>102192871
Neither do I, but he trains good LoRAs and recommends against anything but the most essential captioning; he's a valuable data point among many opinions.
Barn
>>102192997
Dunno if you're the melty anon or not, but assuming you are, just provide the evidence next time instead of being schizo about it. More people will listen to you.
>>102192977
weird... my ComfyUI portable downloaded 2.4.0 tho
I just made a rather peculiar discovery. Using base Pony, keeping the same seed, and changing the prompt one term at a time, I found out that the token "!" (yes, just an exclamation mark, alone, between commas, nothing more) has a dramatic effect on generation. Seems so far to be for the better. What gives? Is this effect intended? What does the model interpret "!" (alone) as?
>>102193040
it installs 2.4.0 if you open "update_comfyui_and_python_dependencies.bat", or else you have 2.3.1 by default; that's how it went for me
>>102192997
Alright, the next lora I'm training is of my cat, so I'll just use his name and no other captions and see what happens. I have no idea how to into layer training though.
>>102192803
that's not how it works. by doing that you end up with hallucinations in your images, because it thinks that's what you want when you try to use your concept token. it comes to associate everything with the token. you've clearly never trained a proper lora before and are just trying to cause discourse and spread misinfo
>>102193048
Pony was trained in an incredibly retarded way; that score bullshit is fucked up. So it's not surprising
>>102193060
Scroll up, there's evidence to back this up. Check the links
>>102193069
okay... I guess I'll downgrade once this queue finishes
>>102193069
there is not.
>It doesn't exist if I close my eyes
>>102193071
to fix it you have to do this:
1) go to the ComfyUI_windows_portable\update folder
2) run this cmd command:
..\python_embeded\python.exe -s -m pip install --upgrade torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
>>102193086
thanks anon! that saved me 10 minutes searching for the right cmds... love you
>>102193116
you're welcome o/
are there any good fp8 based finetunes for making ludes yet? i'd rather avoid experimenting and trying the 100s of nsfw loras to find the best one myself
>>102192997
That's for character LoRAs though; style LoRAs are different
>>102193181
there are no finetunes of flux yet, this shit asks for too much VRAM
>>102191214
>all avatars in OP
>/sdg/ 2.0
>>102193086
okay, the difference is extreme... wtf are the torch devs doing that the output is so drastically different
pic related is >>102193071 but with 2.3.1 instead of 2.4.0
>>102193071
>>102193240
>okay the difference is extreme... wtf are the torch devs doing that the output is so drastically different
Ikr, they seem to not give a fuck about quality and shit. Now I wonder if going for even older torch versions (like 2.2.0) can give even better quality than 2.3.1, for example.
Just made a side-by-side comparison for everyone interested: https://imgsli.com/MjkzMjM2
>>102193239
Who do you think made this thread, Anon?
>hmm, i should train yet another emma watson lora and upload it to civitai
why is he like this?
>>102193239
what's sdg?
>add large breasts or busty to the prompt
>the person becomes a fatass
what did the flux devs mean by this
>>102193288
>https://imgsli.com/MjkzMjM2
Interesting. Probably should do a test with some full body gens. I wonder if anatomy is better with 2.3.1
>>102193288
...is it possible that, because kohya sd-scripts is updated to 2.4.0, the loras trained with it are also shittier? or wouldn't it work that way...?
>>102193390
it's possible, yeah. to be sure you'd have to run the same lora training on both 2.3.1 and 2.4.0 and see if there's a difference, and if there is, which one is worse
>>102193412
I'll do that overnight... I'll downgrade to 2.3.1 on ai-toolkit, retrain my disgaea lora and compare tmrw
>>102193430
Godspeed anon
>>102193430
Good luck anon
Switching from forge to Comfy. What am I in for anons?
>>102193480
unbridled comfort
>>102193480
you're gonna be overwhelmed and you're gonna feel like going back to forge, but you need to spend a few hours finding yourself a workflow that does the same thing you usually do in forge, and you will eventually pick up how everything works
>>102193480
you'll be able to put the text encoder on your cpu (or on your second gpu if you have one), you'll be able to use AutomaticCFG (the best anti-burner) if you're into CFGmaxxing, and you'll be able to use Negative Guidance as well
>>102193459
>>102193467
ty, it's running...
>>102193480
you'll spend every second wondering why the fuck you swapped, but the sunk cost fallacy and that single WF that's customized just slightly behind what forge offers will keep you stuck. you'll hate every second of it, you'll learn not to pull updates ever, and you'll eventually realize what a talentless, egotistical and annoying faggot the creator is, as it slaps you in the face over time even if you generally avoid petty drama faggotry. enjoy
>>102193525
and the worst part is that you don't feel like leaving ComfyUI because you've spent weeks making the most perfect workflow ever and you don't want to ruin that work kek
how do i add the automatic cfg hack to my workflow? what nodes do i need?
>>102193555
You just need that node and that's pretty much it
https://github.com/Extraltodeus/ComfyUI-AutomaticCFG
>>102193589
before or after loading a lora? does it matter?
Buera Ruierd
>>102193640
I don't think the order matters, but personally I put it after loading the lora
>>102193650
>Buera Ruierd
Don't know what that means but pretty pic
>>102193650
merhily beren branel,
and the mome raths outgrabe.
https://github.com/chrisgoringe/cg-mixed-casting
this node is quite insane: you can make your own quants by choosing which precision goes to each layer, and it's saved as a safetensors file, so no more slow GGUF shit; it loads like a regular fp8 model
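The idea behind mixed casting can be sketched as a per-layer bit budget. The layer names and the recipe below are hypothetical illustrations, not the node's actual config format:

```python
# Hypothetical per-layer precision recipe: layer-name substring -> bits per weight.
# The real cg-mixed-casting node has its own recipe files; this only shows the idea
# of spending precision where it matters and squeezing the rest.
recipe = {"single_blocks": 4, "double_blocks": 8}

def layer_bits(name, recipe, default_bits=16):
    # first matching substring wins; unmatched layers stay at the default precision
    return next((bits for key, bits in recipe.items() if key in name), default_bits)

def footprint_bytes(param_counts, recipe, default_bits=16):
    total_bits = sum(n * layer_bits(name, recipe, default_bits)
                     for name, n in param_counts.items())
    return total_bits // 8

params = {
    "double_blocks.0.img_attn": 1_000_000,   # kept at 8-bit
    "single_blocks.0.linear1": 2_000_000,    # squeezed to 4-bit
    "final_layer.linear": 100_000,           # untouched, stays 16-bit
}
# uniform 16-bit would be 6_200_000 bytes; the mixed recipe is much smaller
print(footprint_bytes(params, recipe))
```

The appeal over GGUF is exactly what the post says: the result is still a flat safetensors file, so it loads through the normal checkpoint path instead of a special dequantizing loader.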
>>102193658
oh god
>>102193678
wtf bruh, show me a screen of your workflow, you messed something up, that's for sure kek
>>102193678
looks tight tho desu
>>102193589
>Extraltodeus
if you take the time to read that dev's repos, you will find out that he's autistic as hell
>>102193694
maybe, idk. I tested a lot of anti-burners and his node is the best so far
https://reddit.com/r/StableDiffusion/comments/1eza71h/four_methods_to_run_flux_at_cfg_1/
>>102193688
i'm using the simple version, maybe i have to run it on my gguf workflow?
>>102193430
>>102193459
>>102193467
>>102193507
can't do it... ai-toolkit goes OOM with torch 2.3.1... so something in 2.4.0 handles VRAM better. guess we will never know; I can not train the same 1024x1024 dim 32 lora on 2.3.1 with my 4090
>>102193710
>Guidance 1.5
maybe that's the culprit, put it on a default value (3.5)
>>102193731
>guess we will never know
https://youtu.be/GzlKja1ySzo?t=10
You could try 2.5.0 though, it's different from 2.4.0 as well
My heart weeps for all the beauty that won't be posted, unless some gentle anon should read this, and take pity upon a lusty old gentleman, and post sweet maidens for his pleasure...
>>102193732
yeah, that did it. thank you for the help
>>102193760
Did someone call for a sweet maiden?
>>102193335
Unfortunately, and I really mean it, to the point that I'm crying right now, big-breasted slim-bodied women are not the majority in the west (or maybe even in the east, can't say for sure) due to a wide variety of reasons like lack of exercise, too much trash food, etc. So, if left unattended, the model will probably associate big boobs with ham planets.
>>102193760
>>102193768
you're welcome. and yeah, low guidance doesn't like high CFG somehow
https://reddit.com/r/StableDiffusion/comments/1emow5p/finding_the_sweet_spot_between_guidance_and_cfg/
>>102193710
3 CFG is insanely high for flux. default is 1.0.
>>102193668
based Carroll reference, I love that poem
>>102193335
>>102193783
Works for me
>>102193796
that's the point, he wants to use AutomaticCFG to prevent the burning of high CFG
>>102193753
uuh...
>ERROR: No matching distribution found for torch==2.5.0
not sure if I wanna go for nightly builds
>>102193784
>The boomer shooter LoRA again
https://civitai.com/images/27312232
Finally, Migu won't be the only vocaloid that can be used on Flux kek
>>102193899
https://www.youtube.com/watch?v=rSoo9WS8t7k
>>102193899
I don't even know who you are
>>102193899
Migu will always be better because she is in the default model, no LoRA needed.
>>102193931
>Migu will always be better because she is in the default model, no LoRA needed.
>>102191214
>avatar OP
Noice
https://civitai.com/models/690948/paper-mario-style-f1d?modelVersionId=773299
>>102193973
who is the worse attention whore, koff or debo
>>102194034
which one do you spend more time thinking about, or is it equal?
>>102194046
Who's koff?
>>102194046
nick fe, aka the creepy loli alien horseface poster (also banned for posting real cp on 4chan before)
>>102194065
Oh right. Probably debo then. But it's hard to tell if I'm looking at debo or just a horrible person sometimes; at least koff has the decency to be identifiable
>>102194046
>>102194091
this guy
>>102194034
It's you, because you keep talking about them as if they matter, unprompted.
I miss schizo anon 2
>>102192182
Why does that face show up so much? Is it a character, or just a base model in Flux?
>>102194187
I am still here. Also, who is that?
>>102193784
She is a cool delicious nectar to my parchèd eyes. You gratify me greatly yielding freely such a prize. I offer thanks as freely, for her generous bosom's size, and every woman thing in her which bids my manhood rise—arising like a haughty tower thrust into the skies, whose stony tip parts tufted clouds, the heavens' spotless thighs, and reaches into hidden regions no man e'er espies—I will not tell of all the places where my fancy flies; of balmy daydreams that I dream I need not you apprise, but just my thanks for the delight your lovely gen supplies.
>>102192731
That's false: if you have the exact same settings, you get the exact same result. This is true regardless of gpu, or even using an extremely slow cpu. If it's all the same, you get the same result, exactly.
>>102194187
The Nekopara porn poster? That dude was legitimately autistic
>>102194178
kinda sad.
>>102194188
it is a mystery even to me.
>>102194091
That's the same face again. The "Flux face". Or one of them. Who is it?
>>102194206
There is no way this isn't debo trying to stir shit.
>>102194220
This is a bad faith argument and you know it.
>>102194202
You're welcome
>>102194250
idk, it's just weird; all you have to do is load the example workflows and you get 100% identical gens, with the same everything: seed, cfg, guidance.
>>102194256
my face after 12 hours of gooning
>>102194188
Baked in, as much as the butt chin.
>>102194250
post a catbox of one of your outputs, I will test this
>https://comfyanonymous.github.io/ComfyUI_examples/flux/#regular-full-version
followed this and got an (unknown error) popup
put everything in the correct folder
4090 with 32gb ram
any ideas?
>>102194309
>(unknown error)
That's all it said and nothing else?
>>102194330yep
>>102194097
I was just posting a gen. that debo that you see around every corner, behind every post, and every night in your dreams is something wrong with your brain
>>102194187
this is getting out of hand. now there are two of them?
>>102194256
is this a controlnet?
>>102194357
>(unknown error)
I like how it's whispering it to you. nothing else in the console window? also, doesn't the weight need to be flux? f8 or something
>>102194357
Share your full workflow to catbox
>>102194357
What does the command prompt say? Even though it's giving you unknown error there, there's likely more information in the command window.
>>102194399
>>102194357
Actually don't, I just realised it's the default one. Does it work with a different model file?
>>102193731
rip, thanks for trying anon. maybe we're better off not knowing...
>>102194384
>also doesn't the weight need to be flux? f8 or something
Default should run it at fp16, which he can do on a 4090
>>102194384
>is this a controlnet?
no, only prompt
>>102194309
>>102194357
>>102194399
>>102194420
>>102194422
so i got it to work literally once, then the same unknown error. i changed weight_dtype from default to fp8_e5m2
So is it better to train lora at a high rank, then resize it to a lower rank? I remember anon saying it's faster to train at a high rank.
>>102194460
Are you OOMing on default?
>>102194465
It's better to train specific layers; you can get a 128 network dim LoRA at 4.5MB. Don't ask me how to do it because I don't know.
>>102194449
ooo, ok. that makes sense
>>102194458
cool. I'm surprised you can get such dark outputs
>>102194475
>layers
blocks.
>>102194480
layers, blocks, same thing only different
does ai-toolkit lora training work on a 4080? or is 4090 minimum?
>>102194469
i get this on the top right
>>102194488
I find my gpu usage hovers around 20gb regardless of what I do in ai-toolkit, but I also find the LoRAs that come out of it to be kind of better... but that's probably because the settings are very straightforward.
>>102194497
That's because comfy stopped. You need to restart it. Wait, did you use run_nvidia_gpu or run_cpu bat?
>>102194504
nvidia
>>102194479
>Buttchin
>>102194510
toshiba
>>102194511
I don't care at all about chins
>>102194531
I liek birba
I'm gonna post some pics of my new LoRa now
>>102194589
>>102194589
neat door, cute girl
>>102194599
neat robit, cool scene and instagram filter
>>102194604
>>102194589
>*Begins panting heavily*
>Is that... IS THAT 1 GIRL?
>>102194617
you don't care about quality or fingers either
>>102194604
>>102194460
you're getting OOM; use the fp8 model for now. You need more RAM, sorry; even with a 4090, 32GB of RAM isn't enough.
>>102194612
>>102194617my quality is always high and my fingers are always perfect
'grocery nuisance' paragraph of seethe still amuses me.
>>102194630
tfw 31.2 usable memory
thanks anyway
>>102194645
or you can use 32gb with the full dev model, but stop playing vidya, watching youtube, twitch, anything that eats RAM. You can also try a GGUF model for less memory
/sdg/ 2.0
>>102194639
Fellow automaton enjoyer
>>102194661
gm
>>102194661
It didn't take long for the avatar fags to realize their captive audience had moved here. What's next?
>>102193332
sexual degeneracy general
>>102194698
Sustainable development goals.
>>102194685
Schizo anon 3
>>102194685
Shitposting
>>102194712
wow it's literally me
>>102194718
stay in your containment thread
sigh, now all we need is PW to start posting here... it was good while it lasted
I had an epiphany while training specific blocks when doing LoRAs. What if we just trained all the blocks?
>>102194752
>now all we need is PW
If that troon starts posting here I will leave.
inb4
>That's a good thing!
I cannot stand that troon.
Here's your controller, bro
>>102194767
thanks for not touching the d-pad or face buttons, I guess
>>102194761
>If that troon starts posting here I will leave.
that's what we call a win-win
>>102194779 licked them clean no worries brah
>>102194790
>meanwhile chad at the bar with your crush
>>102194819
HELLO SAAR
>>102192242
fuck off fren
>>102194819
>>102194868
>>102194790
Anyone notice flux gives men surprisingly juicy tits despite being covered in hair?
>>102194888
>>102194897
>>102194904
Good gens, but fuck, that looks miserable to live there. Really soviet bloc chic
>>102194917
Was going for more of an outdoor liminal vibe, a creepy too-clean corporate campus
Hello? Where is everyone?
>>102194917
not enough mcdonalds for you?
>>102194934
>creepy too-clean corporate
That's definitely what it is
Accidental Attack on Titan moment
>>102194953
I think a bunch of people are serving time rn
Sometimes we unconsciously fail to consider that these models do not "think" like the human brain at all.
There's a prompt I borrowed that paints some goooorgeous orange-haired girls, right. So you think, "Well, even if it doesn't have experience painting girls with other hair colors, it'll just extrapolate, right. Swapping every orange hair pixel to black/silver/blonde/whatever ain't rocket science." But no. EVERYTHING changes. The drop in quality is astounding.
It makes all sorts of diffuse associations by correlation between everything. Track suits are only worn by athletic people. America only has fatsos. Black dudes are always carrying a bowl of KFC and a watermelon.
You have the perfect prompt, you think you're going to get some sweet permutations out of it, but the moment you change ONE TOKEN, everything goes to hell.
Bread
>>102195069
>>102195104
Retard, let the thread autosage first.
>>102195137
Thought it was at 300 here, my bad
>>102192242Nyo
>>102194206
go for it debo: use the exact same settings and change pytorch versions, and you'll see you get something different. it's not surprising at all; they change a lot of math shit for optimisation, so you get different rounding results
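A minimal illustration of why reordered math gives different rounding: floating-point addition is not associative, and optimised kernels reorder reductions all the time.

```python
# Floating-point addition is not associative: reordering a sum changes
# the rounding, which is all an optimised kernel needs to do to produce
# (slightly) different outputs from the same weights and seed.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # cancellation happens first, then 1.0 survives
right = a + (b + c)   # 1.0 is absorbed into -1e16 before the cancellation

print(left, right)    # 1.0 0.0
assert left != right
```

Over billions of fused-multiply-adds in a sampling loop, these tiny per-op differences compound into visibly different gens, without either version being "wrong".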
>>102192242
replying to save my mother
amen to that
>>102197627
Nice
>>102197644
you're welcome... anon
>>102197672
Nice colors
>>102194685
>What's next?
Local Diffusion Non-Eukaryote General