Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>102181685

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/c/kdg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/trash/sdg
Let me see a beautiful girl, a sweet summer rose...
>>102187084shit taste
>>102187117one from a project
>>102187084ty baker
https://drive.google.com/drive/folders/1eGrbstWLGOlinNL_d7WzaYcOzpSp5a9x
That's a cool way to find which styles actually work on flux dev
Blessed thread of frenship
>>102187144yeah
https://huggingface.co/leejet/FLUX.1-dev-gguf/tree/main
>Q8_0 -> 12.8gb
https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
>Q8_0 -> 12.7gb
I know that city's quants are a bit deprecated because he forgot to keep some weights in F32, so maybe the 12.8gb one is the "real official" size expected from a proper Q8_0 quant
Emad doing some revisionism on his takedown request on SD1.5 in 2022 kek
Corporate memphis
Trying to run a FLUX controlnet for the first time on ComfyUI. I don't understand which folder I should put my controlnet safetensors (flux-depth-controlnet-v3) in.
>>102187370
The censorship really started on day 1, and ever since they've cared more about censorship than making a functional model, let alone giving me the ability to finetune it. SAI never once released real training tools; so far the only group that has is Pixart.
>>102187445
Emad got lucky that Runway released the uncensored version of SD1.5, plus NAI's leak; without those his cucked shit would've never taken off the way it did
bigma status?
any chance it'll mog flux or no?
>>102187479nice lanky felines
>>102187476
>We will try to let our model out in Sep.
>let our model out
It'll be a monster.
>>102187095yeah, using a few different ones I trained
>>102187476>>102187504imo they'll delay that release, everything worse than flux dev will be discarded anyway, I'm glad they don't have much choice but to release something actually good now
>>102187476
For raw power nothing is going to mog Flux, so smaller models can only compete on trainability on consumer hardware. The next Pixart model will probably be on par with SD3 but without the anatomical monsters.
pixartsexuals will rise again
>>102187514you will release them on civitai at some point?
>>102187372>>102187408Could be more exaggerated, that's where its charm comes from.
>>102187517
>>102187519
The fact that flux is a guidance distilled model is really annoying, and the more I use flux, training loras on it etc, the more I'm convinced that maybe it's not fixable. When training a complex concept lora on thousands of images, you basically have to use CFG to get good comprehension of the new things it learned. But it still hasn't lost its guidance distilled nature, so you have to use hacks like dynamic thresholding, automatic CFG and such to not fry the image.
Basically what I'm saying is a model slightly worse than Flux but without this distillation bullshit might take off due to that fact alone. And you could probably fine tune such a model for much longer without ending up in this weird middle ground halfway between a normal model and a guidance distilled model like you do when finetuning flux.
>>102187635
My theory is that you can get rid of that guidance bullshit if you make a giant finetune that makes Flux not burn at CFG > 1. I'm sure that's possible, and yeah I agree this shit is annoying as fuck; CFG is king and they made a mistake by adding this distilled guidance bullshit
>>102187611
My gens are just prompts. Need a lora for more exaggerated limbs I guess.
>>102187635
If Pixart is 3B and supports negative prompts, it wins. I also don't think Flux fully uses its 12B parameters, and things like the buttchin smell of overtraining.
>>102187380>>102187443>>102187479bongbat is bong
>>102187727
>If Pixart is 3B and supports negative prompts, it wins.
you only have one way to support negative prompts, which is to go for CFG > 1, so it will always be twice as slow
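The "twice as slow" point comes straight from how classifier-free guidance works: every sampling step needs two denoiser calls, one for the positive prompt and one for the negative/empty prompt. A minimal sketch, where `denoise` is a hypothetical stand-in for a real diffusion model forward pass (not any actual API):

```python
import numpy as np

# Hypothetical stand-in for one denoiser forward pass; a real model would
# take the latent, a timestep, and the text conditioning.
def denoise(latent, text_embedding):
    return latent * 0.9 + text_embedding * 0.1

def cfg_step(latent, cond, uncond, scale):
    # Classifier-free guidance: TWO forward passes per sampling step,
    # which is why supporting a negative prompt roughly doubles gen time.
    eps_cond = denoise(latent, cond)      # positive prompt
    eps_uncond = denoise(latent, uncond)  # negative / empty prompt
    return eps_uncond + scale * (eps_cond - eps_uncond)

latent = np.ones(4, dtype=np.float32)
out = cfg_step(latent, cond=np.full(4, 2.0), uncond=np.zeros(4), scale=3.0)
```

At scale 1.0 the formula collapses to the conditional prediction alone, which is what a guidance-distilled model like Flux effectively bakes in to skip the second pass.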
>>102187757negative prompts on Flux looks to be a janky hack
>>102187757but at 3B with CFG it will still run faster than Flux
>>102187770
I know I know, it's one thing to make CFG > 1 work on Flux, but negative prompts seem to work maybe 20% of the time (and I'm being nice there)
>>102187783
>but at 3B with CFG it will still run faster than Flux
we already have a 3b model, it's SDXL (3.5b), and desu I don't mind waiting longer to get the Flux quality. I'm not going back to smaller models (unless they somehow manage to get that quality at such a small size)
>>102187809
SDXL isn't 3.5B because SAI are a bunch of liars. We're talking strictly about the weights of the core model, not pretending the T5 and VAE should be counted.
>>102187824I think SDXL works with clip only, and that shit is small (300 mb), so SDXL is really a 3.5b model
>XL sized>16ch VAE>Better dataset
I will not post in the thread
>>102187863
There won't be a better dataset, only Musk has the balls to make a completely uncensored model that has all the celebrities, characters and NSFW in there
>>102187839They include the VAE parameters.
it's pixart pride month
>>102187874Isn't grok 2 based on flux?
>>102187878You're right, the unet model is 2.6b
>>102187903It was also poorly trained because Pixart Sigma with 600m is on par if not better than SDXL.
>>102187889
I think it's a finetune of flux pro or something like that, yeah. I said that Musk has balls, not that he's talented enough to make an actually good model
>>102187903wow a thot
>>102187926
as much as I despise SAI, SDXL is a unet model whereas Pixart is a DiT; the architecture difference helped Pixart perform as well as it does at such a small size imo
>>102187915>finetune of flux proNot a finetune.
if you guys could wish for the perfect pixart model what parameter size would you want it to be?
>>102187928
2B or 3B. Whatever is largest that can be feasibly full fine tuned on 24 GB of VRAM.
>>102187928
I wanted to say 12b is fine (because you can run Q8 on it and the quality is almost on par with fp16) but then I remembered we can't really finetune a 12b model with our current gpus so...
>>102187809
Flux could easily be 8B
>>102187903
>multiple text encoders is a GOOD THING
fuck Lykon, dropkick a Lykon
>>102187926SAI actively made sure finetuning was difficult as a censorship/safety strategy. "Loras are all you need".
>>102187961Huh, this sounds so familiar...
>>102187937
that's just sad, Nvidia is nerfing everyone with their low VRAM gpu releases. if we had 48gb we wouldn't even question anything, we'd go for giant models and local would be really good
>>102187975
I'll happily take an 8B Pixart model that fully supports Fairscale, which lets you fairly efficiently swap train on smaller GPUs.
Anyone using Forge? I was able to gen a few images the other day but now trying anything just causes it to eat all RAM and trigger oom killer. Even with the exact same prompt as before.
>>102187975A 48 GB VRAM model would make the poorfags seethe. They're already frothing with 24 GB models.
I have some good news, I found some alternative Q8 quants that are closer to fp8 than the regular Q8_0, and they work fine on the GGUF node
https://imgsli.com/MjkzMTU0
https://huggingface.co/mo137/FLUX.1-dev_Q8-fp16-fp32-mix_8-to-32-bpw_gguf/tree/main
>>102187903SAIkeks flexing with 1girls will never not be funny
>>102188050
and then anon would complain about NVIDIA not making 80GB VRAM consumer cards
there is no satisfying the retards who think bigger models are always better
>>102188050People would be investing in 40/80 GB cards if we didn't know they'd be deprecated in 2 years. Think about how much people were spending on PCs in the 80s.
>>102188042Q8 is superior to fp8 though, did you mean fp16?
anyone know if there's an /e/ bake going on for flux? juice isnt worth the squeeze fussing with base flux and style loras. im willing to cough up a donation for server time.
>>102188050
debo, why are you ok with Nvidia sticking with 24gb for more than 6 years at this point? their RTX Titan (24gb) came out in 2018. we have to advance like we always have in the computer ecosystem; if we listened to you we would've stayed on 1gb cards from the 2000s because "hurdur that's enough for you goys!"
i will not post in the thread
>>102188083oh yeah I meant fp16 my b
>>102187609I wasn't planning to, I only just started training loras
>>102188111what lora did you use for THOSE
>>102188024well fuck them because I'm not paying 10000 dollars to get a 48gb card
>>102188070
Very few people were buying those.
What do you mean by "deprecated", are A100s "deprecated"?
>>102188099
I'm not debo.
>>102188140
>I'm not debo.
yes you are, responding to multiple people in the same post + having retarded takes is debo's best signature
>>102188140>I'm not debo.just means "retard" desu slang
@102188140d*bo
where's bigma
I keep seeing the same fucking name plastered over the threadsspamming about thread personalities is just as cancerous as them
>>102188140
I don't care, people spent way more on computers 30 years ago than you will spend today. Guess what anon, Earth to anon, computers were $1000 in 1990. Do you need an inflation calculator? Yes anon, H100s by far excel over A100s. Let me know if that confuses you.
>>102188139okay, but how does that harm me?
>>102188209
that harms you because we can't finetune Flux at the moment, our 24gb cards aren't enough, and we can't do multi gpu training, innit?
>>102188226yeah I'm sure you're really a big contributor
>>102188192> Yes anon, H100s by far excel over A100sThat doesn't deprecate the A100s, anon, does it?
"Gamers" are okay with 24GB cards
>these are not gaming cards
>jewvidia is an AI company
You're right, which is why the 32GB+ cards are marketed to those companies.
>>102187974thinking of flux?I'm working on it
>>102188235
So you're ok with zero flux finetunes? Because I'm not, debo
>>102188236I'm not getting in an autistic semantics debate with a poorfag
>>102188138
https://civitai.com/models/656458/big-boobs-flux?modelVersionId=734465
https://civitai.com/models/661287/jk-perfect-breasts-for-flux-perky-torpedo-tits?modelVersionId=740030
>>102188250I accept your concession.
If Anon hadn't figured out how to make Flux run on cards like the 1080, it would be even more of a fuck up with far, far less adoption.
>>102188244I'm sure someone that isn't you is going to finetune it. Don't worry anon, hope and pray that's what you like to do.
>>102188260A100s are still, why haven't you bought one again?
>>102188291Are still what?You didn't want to go into an autistic semantics debate but you do want to go into a non sequitur debate?
im a varamlet and im proud
>>102188018
change this to automatic (fp16) so the loras stop getting rebuilt all the time
can A1111 use Flux yet
>>102188369just use forge. it's literally an updated a1111
When will this fucker die
>>102188361
I already set it to that, plus I wasn't using any loras.
Seems to be this issue https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/343 except mine happens immediately.
>>102188396i'd much rather see 1girl than his bug face and unkempt hair
>>102188396
>>102188396Does it even look better? It's different, I'll give him that.
i won
2young nick
>>102187915i would say the opposite actually, he clearly has the people since grok2 is actually a competitive llm with the top dogs now, but making an uncensored image model is probably too much of a liability even for him
>>102188111
>>102188580cool
finally.. finished my final output test between all the different bakes/epochs. once I'm done sorting these I can decide on one and move the fuck on to new loras. I want to make some character ones but I'll probably wait til kohya releases layer training so I can try that.. how many layers does flux have? maybe I can spend the remainder of my runpod funds testing what some of them do before I move to vast
>>102188445Very nice
endless things to x/y compare
>>102188615thx
>>102188602Ty, Flux1.dev_Q8_0, with a clipvision load image.
>>102188826Model?
comfortable
>>102188396I wouldn't even be bothered if it wasn't for him just plastering the same fucking face every time
>>102188042
https://imgsli.com/MjkzMTYz
wtf, why is the 11bpw one the worst? it's the biggest one of them all
>>102189227wonder if it knows of lykoi cats
>>102189255>lykoi catsya looks like one you are right.. did not know em .. I just wanted a "rough charcoal sketch of a cat"
>>102187146Hmmm... could be prettier.
do you think training only specific layers will help reduce finger rapeage for loras?
>>102189344
Theoretically yes because the layer in charge of rendering the fingers should be untouched.
>>102189344the less weights raped, the better
less weight, the better, agreed
what's a layer?
>>102189464Ukraine will lose, unless the West donates cannon fodder.
I can't get it to say "I drew myself". Is it because flux reads text in tokens and doesn't know how to spell some of them?
>>102184029Fucking based
amazing how far we've come
>>102184589
>>102184735
Once per user. If it were once per image you would be able to game the buzz system with a 2nd account
>>102188939Flux dev with this lora https://civitai.com/models/651715?modelVersionId=750685
1.09B
>>102188340
I see that debo hiding in the back there
>>102188341
>varamlet
>>102188396
I still don't know who this is
>>102189503
jackets, sweaters, maybe scarfs
>>102189522
what are the ramifications on local image generation?
>>102188490
They just have nanny ai that runs fast after gens to check against tiddies, in addition to the usual policing of language. There's absolutely no reason to go back, because all of the capacities and elements will be here soon. The basic roadmap is:
>every prompt, a lora
then
>every inpaint, a lora
rn there aren't that many loras, but eventually it will be seamless.
>>102189401>>102189435really want to test on style loras, it'll be more of a challenge to get good fingers still in the correct style I'm guessing
>>102189593fake. Spacebar's on the wrong side.
>>102189593
A cornographic image? On a BLUE board?
>>102189564
Speaking of, upload some gens to the LoRA so I can recoup some of the buzz.
https://civitai.com/models/709157
>>102189325
>>102189625>Speaking of, upload some gens to the LoRA so I can recoup some of the buzz.no
>>102189590>what are the ramifications on local image generation?More refugee mathematicians, I guess.
>>102189576Thank you!
>>102189401>the layer in charge of rendering the fingersno such thing
>>102189722Nice
>>102189738>beam me up!1.8 guidance is nice.
>>102189752What
>>102189644
>>102189778
>>102189822Who asked
>>102189829Try it and post the result to see if that person is cool or not
>>102189840No thanks
I don't know if this is of any use to anyone who wants to test blocks, but I checked the state_dict of flux and got this:
https://files.catbox.moe/i1cetj.txt
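If anyone wants to poke at a dump like that programmatically, a quick way is to group keys by their block prefix so you can see how many tensors each block owns. The key strings below just mimic Flux's double/single block naming style (an illustrative subset, not the contents of the catbox file):

```python
from collections import Counter

# A few keys in the style of Flux's state_dict (illustrative subset only).
keys = [
    "double_blocks.0.img_attn.qkv.weight",
    "double_blocks.0.txt_attn.qkv.weight",
    "double_blocks.1.img_attn.qkv.weight",
    "single_blocks.0.linear1.weight",
    "single_blocks.1.linear1.weight",
]

# Group tensors by their "double_blocks.N" / "single_blocks.N" prefix.
per_block = Counter(".".join(k.split(".")[:2]) for k in keys)
```

Running the same one-liner over the full key list from the txt would give you the real per-block tensor counts.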
>>102189887What is happening here
I think ideogram 2.0 is the best image generator right now, though we shouldn't forget it's closed
>>102189887Sometimes when everyone has a nice day the answer is to smash a bee hive. Except it's a sub-conscious compulsion.
>>102188396
>Downvote
Why does this one turkjeet make /ldg/ seethe so much?
>>102189910>ideogram 2.0No thx
>>102189910
what makes you believe that? do you have some image examples that show how it's better compared to the rest?
>>102189929
Picrel might be the best example: in one prompt it created 4 different characters with every detail I asked for, and the style
>>102189958Wow you prompted characters all from the same franchise?
Bros we got nearly everything sd had for flux in 2 weeks
>>102189987Except for soul.
>>102189625>>102189794Does this LoRA require special settings/requirements for Flux? It seems to hang on my computer
>>102189987*4 weeks .. it was released a month ago
>>102190003
>>102190000checked and true
>>102189986Vector art illustrations of four major Norse gods in a single row, utilizing a flat design style. From left to right: Odin (an older man with a long white beard and one eye, wearing a blue tunic with brown leather accents and a winged helmet, holding a spear with a serious expression); Thor (a muscular figure with flowing red hair and beard, wearing a metal breastplate and red cape, wielding his hammer Mjolnir with a determined look); Loki (a slender figure with a green and gold tunic and a mischievous, cunning expression, holding a staff with distinctive green accents on his clothing); and Hel (a pale-skinned woman with long black hair and a stern expression, wearing a black and white dress, with half of her face appearing skeletal, holding a staff topped with a skull). Each god should be set against a simple white background. Maintain a minimalist style with clean lines, resembling character designs for a mobile game.
heck I love the crazy cool typefaces FLUX comes up with
>>102190003>>102190027nvm restarted and it seems to be working now, think I might have accidentally used run_cpu.bat, kek
How many characters does flux know? I only know about migu and trump
>>102189926>>102189910I unironically canceled once I found out SDXL existed (and I had my GPU on the way).Coincidentally, Flux came out right as I built my machine.Flux is so good, it's gonna be like interior decorating levels of skill needed. Like "um, I want xyz right there"
>>102190134lots of comic book characters with varying likeness strengthsand Meghan Markle
This is so weird...
https://imgsli.com/MjkzMTc1
https://huggingface.co/mo137/FLUX.1-dev_Q8-fp16-fp32-mix_8-to-32-bpw_gguf/tree/main
>>102190134>https://drive.google.com/drive/folders/1eGrbstWLGOlinNL_d7WzaYcOzpSp5a9xtroll gonna troll I guess
>>102190170he asked for characters and you give him styles, are you retarded or something?
>>102190159It's functionally identical
>>102190197yeah, white skin is identical to black skin, you just solved racism
>>102190134
check it, an anon did some research last thread
>https://mega.nz/folder/a2Ri0b4Z#SNKrUAChFFeovXJZT3V5SA
Knows many superheroes, vidya and comic characters.. but has some glaring holes in its knowledge as you will see
>>102190134
I'm running character tests, here's where I'm at so far
https://mega.nz/folder/a2Ri0b4Z#SNKrUAChFFeovXJZT3V5SA
>>102190240
it knows useless shit like western superheroes, I want it to know my japanese waifus instead :(
>>102190260
train it on em.. it's so easy and takes like an hour for a character lora
>>102190240>>102190245>>102190170Thanks bros
Favorite flux checkpoints???
>>102190321flux1-dev-bnb-nf4-v2
>>102190321Flux.1 Dev
>>102190159So basically? The 9.341 bpw one is the closest to fp16 and not the bigger ones? That's weird indeed.
>>102190321AbyssOrangeFlux2_hard
>>102190321FLUX.1 [pro]
>>102190333
why go for nf4 when you can go for Q4_0? it has the same size but better quality
Heard you people are talking shit about me on here
looks like inspire_pack has the ability to isolate layers in already trained loras https://www.reddit.com/r/comfyui/comments/1f6bymd/lora_block_weight_for_flux_inspire_pack_in/
might help in deciding which block weights to use when testing lora training
maybe this is old news but I hadn't known about it yet
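The idea behind those block-weight nodes can be sketched as scaling the LoRA delta per block before it gets merged, with 0.0 effectively isolating a block out. The dict keys here are hypothetical placeholders, not inspire_pack's actual format:

```python
# Minimal sketch of per-block LoRA weighting: multiply each LoRA delta
# by a weight chosen for its block (0.0 = drop that block's contribution).
def apply_block_weights(lora_deltas, block_weights):
    out = {}
    for key, delta in lora_deltas.items():
        block = ".".join(key.split(".")[:2])  # e.g. "double_blocks.3"
        out[key] = delta * block_weights.get(block, 1.0)
    return out

lora = {"double_blocks.0.proj": 0.5, "double_blocks.1.proj": 0.5}
weighted = apply_block_weights(lora, {"double_blocks.1": 0.0})
```

Sweeping one block to zero at a time and regenning is basically what the x/y block tests in the reddit post are doing.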
>>102190355I can't say anything about you without a patreon link
>>102190098
>>102190339
>So basically? The 9.341 bpw one is the closest to fp16 and not the bigger ones? That's weird indeed.
yeah, if you have a 24gb vram card, might as well go for that one, it's the new closest quant to fp16
https://imgsli.com/MjkzMTgw
>>102190372>Shake it. Bitch.
>>102190351I completely forgot it exists, have to dl the .gguf one. Ty man
>>102190372>>102190403Glad it's working properly took me some fuckery to find the triggers. I'm not entirely happy with it so I might try again with different settings.
>>102190195
Nice shot
>>102190321
flux1-dev-q8_0.gguf
>>102190351doesn't Q4_K_S have better quality than Q4_0 despite being the same size?
>>102190432
you're right
https://github.com/ggerganov/llama.cpp/pull/1684#issuecomment-1579252501
>>102190240
someone (not me) should publish this as a website
>>102190355
this guy must have the coolest tinder profile at this point
>>102190321Q8_0-fp32-09.341bpw >>102190398
>>102190432>>102190450Where does Q4_1 fit in quality-wise? It's what I'm using now on my 12GB card
https://aclanthology.org/2024.lt4hala-1.15.pdfgpt-4 works with Latin, to some extent. What others work with Latin?
>>102190477
you can see it in the image, it has worse quality than q4_k_s (16.3% error for q4_1 vs 13.2% error for q4_k_s)
>>102190481wrong thread anon
>>102190477
use Q5_1 or Q6_K if you have 12GB; Q5_1 should be slightly faster with loras since it isn't a K quant
>>102190492>>102190533Nice, I'll check all of these out
>>102190533
I use Q8 with 12gb and it works fine, but probably a bit slower than those two (around 4 something s/it)
>>102190207Oh shit you're right. I genuinely didn't even notice. I need to spend some time in /pol/
>>102190502oops lmao
>>102190576kek
>So Nicolette, tell me again that story of how you lost your virginity.
>>102190562I thought quants don't actually speed up gen time, just prevent oom?
>>102190606What's he holding?looks like a packet of mayonnaise
>>102190638
it speeds up gen time if the quant is small enough to fit on the gpu; if it's too big, some of it ends up on the cpu and that makes shit slow
>>102190645They're about to shoot one of those Japanese image videos where they don't actually have sex but just simulate it using white condiments
>>102190663That's a thing?
>>102190638
they are slower than their equivalent in fp, but they are faster than a higher fp.. sooo, in terms of speed (not quality), fp16 is the slowest:
>fp16 -> q8 -> fp8 -> q6 ... etc.
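Rough napkin math for why the smaller files end up faster in practice: a quant only helps if the whole thing fits in VRAM, otherwise layers spill to system RAM. The parameter count and effective bits-per-weight figures below are approximations (gguf quants store per-block scales, so Q8_0 is closer to ~8.5 bpw than 8):

```python
# Approximate model size in GiB given parameter count (billions) and
# effective bits per weight; activations, T5, and VAE are not counted.
def model_gib(params_b, bpw):
    return params_b * 1e9 * bpw / 8 / 2**30

flux_fp16 = model_gib(12, 16.0)  # ~22 GiB: barely squeezes onto a 24GB card
flux_q8 = model_gib(12, 8.5)     # Q8_0 at ~8.5 effective bits/weight
flux_q4 = model_gib(12, 4.5)     # Q4_0 at ~4.5 effective bits/weight
```

Compare those numbers against your free VRAM after the OS/desktop takes its cut and you can predict which quants will spill to the cpu.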
>>102190675Yes. Most of it is basically gravure with nipples and buttholes, the sex scenes are usually the final scene of the set.
>>102190647ohh derp, of course. thanks anon
>>102190715ooph what a pun.. nice gen tho
>>102190741Every time I do female robots it always gives them nips
is this the correct stuff?
>>102190785you should go for Q4_K_S instead anon >>102190450
So i hear Cog has an img2vid model, but they have no date for release, and if it is EVER released it will not be open source.
I hereby name 1st Sept as "ghey devs" day.
What's a good way to "fix" noise on an image so it upscales better?
I like to tweak my gens to perfection and that usually means a LOT of manual drawing, inpainting, merging, photoshop edits and more. Unfortunately the processing sometimes leaves some parts looking too "smooth".
Even manually adding some uniform/gaussian noise in Photoshop and running the whole image through with low denoise doesn't fix it; it leaves the added noise in and you can clearly see which parts were edited (picrel).
What's a good way to fix this? I just want to add some actual (uniform and consistent) texture to the fabrics so they upscale nicely.
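For what it's worth, one low-tech version of the "add grain before a low-denoise pass" approach is to do the noise add in float and clip back to uint8 so the texture stays uniform across edited and unedited regions. A numpy sketch; the sigma value is an assumption you'd tune per image:

```python
import numpy as np

# Add mild gaussian "film grain" so over-smoothed areas give the sampler
# some texture to latch onto during a low-denoise img2img pass.
def add_texture_noise(img_u8, sigma=4.0, seed=0):
    rng = np.random.default_rng(seed)
    noisy = img_u8.astype(np.float32) + rng.normal(0.0, sigma, img_u8.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

flat_patch = np.full((8, 8, 3), 128, dtype=np.uint8)  # stand-in "smooth" area
textured = add_texture_noise(flat_patch)
```

Applying it to the whole image (not just the edited patches) keeps the grain statistics consistent, which is what makes the edits stop standing out after the upscale.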
>>102190797meh, fuck them, BFL will deliver with their video model like they did with flux
>>102190782these are merely buttons, she is a coffee machine, the left button is for sugar, the right for milk
>>102190785
same setup as me
i have 'automatic (fp16 lora)', swap method: queue, swap location: cpu, and gpu weights set at like 3400. no idea if those are good but it works for me currently
>>102190801
get a different upscaler.. some radically remove noise, some smooth it out
>>102190808
All our eggs in one basket again, and not by choice anon, that's the worst of it.
No idea when BFL will release either, it's been 1 month to the day since BFL launched.
I'm getting angsty for a good local img2vid model with coherency to come out.
I hope they give us something with high coherence; Flux has been fairly solid even though it has limitations as is.
>>102190782prominent nipples are a defining characteristic of women (not troons)
>>102190887Robots aren't women
>>102190880
I don't see a reason why we should doubt them. this team left SAI because they were sick of their cucked policies and managed to make and release a fairly uncensored (AND GOOD) model just like that. now they have ties with Musk they probably have more money, so it's more likely it'll be a good video model. I hope we'll be able to run it, but if we can't, the GGUFs are gonna save us
>>102190900that robot identified as a woman you bigot!
>>102190794How is this compared to the nf4 model
>>102190924I hope so anon, the pain of SD3 still hurts.
>>102190936 no it identifies as coffee machine
>>102190946Looks like this: Q4_K_S > Q4_0 > nf4
>>102190841yea i guess this works. 3060 lacks horsepower
Finally got flux working somewhat reasonably. On a 3080ti with flux1-dev-Q4_K_S.gguf I'm getting ~2s/it, or about 60 seconds per image. Is that within the ballpark of the performance I should expect or is something weird happening? Using essentially the canonical flux workflow with the gguf extension. I'm also running on a 7yo intel i7-8700 fwiw; I still don't have a good understanding of what gguf actually ends up offloading from the gpu.
Q4_1 on top, Q4_K_S on bottom. Q4_1 actually looks a bit better to me.
>>102191049Is that a remote control for a vibrator?
>>102191049KS deleted the women with green pants. What a racist.
>>102190987Thank you
>>102191049
that's not how quant comparisons work; the only way to know which one is better is to see which one is closest to fp16
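You can also measure "closest to fp16" numerically instead of eyeballing imgsli pairs: round-trip a weight tensor through the quant scheme and compute RMS error against the original. This is a simplified Q8_0-style scheme (per-block absmax scale, int8 values), not the exact gguf code:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4096).astype(np.float32)  # stand-in fp16 tensor

def q8_0_roundtrip(x, block=32):
    # Simplified Q8_0: each block of 32 weights shares one absmax scale,
    # values are stored as int8 and dequantized back.
    xb = x.reshape(-1, block)
    scale = np.abs(xb).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(xb / scale), -127, 127)
    return (q * scale).reshape(-1)

rms_err = float(np.sqrt(np.mean((weights - q8_0_roundtrip(weights)) ** 2)))
```

Running the same round-trip for each quant format over the real model's tensors is essentially where per-format error percentages like the ones quoted in the llama.cpp thread come from.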
>>102191071I think one of those lovely latina ladies dropped a hairclip at yoga, if I return it she might marry me
>>102190987Do you know where to get the other stuff for the Q models besides the main model?
>>102191021
>I'm getting ~2s/it, or about 60 seconds per image
at 1024x1024? ya that is about right.. a 3090 is ~1.5s/it, a 4090 ~0.75s/it
>>102191099what do you mean exactly?
>>102191110>>102190785This stuff
>>102191085
I've proompted 8 pictures and in nearly all of them the Q4_K_S has issues like clothing straps that just disappear or, in some cases, outright deformities.
>>102191118Awww
>>102191134K quants are wonky right now, they seem to perform worse with loras
>>102191134Here's Q5_1 (top) vs Q6_K (bottom) as well
>>102191120
I'm not sure I understand your needs but I'm gonna try anyway:
The VAE:
https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors
The text encoders:
https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
>>102191093Oh ok maybe I watched too much JAV in the past, but I thought that was a scene where he controls all the women that have a vibrator inside them with his remote control
>>102191134if it looks better, go with it, dont waste a bunch of time on it
>>102191170Thank you so so much
>>102191120
https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf
>>102191100
ok, thanks. A minute per image is bearable for the quality; if there's a different quantization working reasonably at like 15s/image for some quality loss, I think I'd prefer it though.
>>102191175
It's already been tested anyway. The smaller the model gets, the further it gets from fp16. It's very linear.
>>102191166could you try this Q5_1 and Q6_K comparison without loras?
>>102191134
I have a theory that the K quants are too "irregular" in terms of weight precision differences, and image models don't seem to like that somehow; that could explain why an 11bpw quant performs worse than an 8.5bpw one >>102190159
>>102191193
A tip: you can gen at lower resolutions and still get excellent quality with flux. I get 4it/s on a 4080 using fp8 at 512x512. If you're just testing concepts drop the resolution, then crank it up when you've mostly dialled it in
>>102190785What is the link for this model
>>102191142ty
Come and get your own loaf of...>>102191214>>102191214>>102191214
>>102191251the worst collage so far, gj
>>102191162
>>102191203
That does seem to gel with what I've seen.
>>102191202
In a bit, after I finish gooning to Deus Ex gym babes.
>>102191269original collage baker was best frfr
>>102191398sad but true
>>102191398>>102191423How are you guys generating these collages?
>>102191539https://www.befunky.com/
>>102191294>/ldg/ - Local Diffusion Generaltake your imagefx cloudshit and fuck off
>>102191549no lady ;)
1 more? or 29??
>>102191566https://www.youtube.com/watch?v=ebnYbhU9ukA
to slavery for the landed white men
>>102191860Sweet
>>102192716The color scheme here is so tasty it made me forget for a second that Monster already has a black cherry flavor