Discussion of free and open source text-to-image models

Previously baked bread: >>103165357

The Demon of Laplace Edition

>Beginner UI
Fooocus: https://github.com/lllyasviel/fooocus
EasyDiffusion: https://easydiffusion.github.io
Metastable: https://metastable.studio

>Advanced UI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://aitracker.art
https://huggingface.co
https://civitai.com
https://tensor.art/models
https://liblib.art
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3

>SD3.5L/M
https://huggingface.co/stabilityai/stable-diffusion-3.5-large
https://replicate.com/stability-ai/stable-diffusion-3.5-large
https://huggingface.co/stabilityai/stable-diffusion-3.5-medium
https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-medium

>Sana
https://github.com/NVlabs/Sana
https://sana-gen.mit.edu

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux
DeDistilled Quants: https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd
https://rentry.org/sdvae

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Don't forget to maintain thread quality :)

>Related boards
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai
>all 1girl collage
grim
noobai.
>>103171908
i need to JACK OFF ASAP
>all 1girl collage win
>>103171908
cringe
>>103171926
based
>>103171922
>>103171908
one of them is a titan. so YOUR WRONG
>noobai vpred 0.6 hf link now leads to a 404
https://huggingface.co/Laxhar/noobai-XL-Vpred-0.6
what the FUCK???
>>103171938
>>103171961
don't ask me why snoopy is in hell.
>the 1girl lobby is now too powerful
i kneel
>>103171982
Very polished looking. My noobai look like this.
Is this a 1girl?
>>103171993
you need img2img upscaling anon, not that hard to do
Any decent workflows for generating animations yet?
I tried the animated diffusion extension but it doesn't seem to do well with XL models.
Blessed thread of frenship
>>103172002
that's a 1fur
/ldg/ would have been a thriving paradise if anons would just jerk off before posting
found a way to make noobai work with sdxl efficient loader and sampler, just gotta unpack, route base model thru vpred and rescaleCFG then repack
if it helps any comfychads itt tryna de-clutter their workflows
>>103172302
Instead of all that crap I just load the model in forge then type words to get sexy pictures
>>103171862
horniest thread in eons
shameful
MochiHD waiting room
Mochi i2v waiting room
>>103172312
happy for you if you get what you want from forge anon, i'm just too used to comfy to go back now
armpit hair gens in collage waiting room
>makes you salivate
>>103172571
1grill supremacy
>studio Nighli
>>103172538
now do that gen but she's really big and sitting on a tiny city
Bigma sisters?
>>103172590
died out waiting for sana
>>103172538
>>103172541
nice
>>103172595
:(
prompting is not artisanal
>>103172614
all of my slop is highly artisinal
Isn't forge supposed to completely unload models from ram when you switch to another one? It seems to throw models permanently in system ram until it overflows both the page file (30 fucking gigabytes) and available ram at the time (2 fucking gigabytes?)
this must be a setting i'm missing but i'm pretty sure i have everything right, reforge was doing this too and comfy never had this problem.
>>103172692
this is my reboot from having forge crash from OOM
>notice the fucking page file still gargantuan and how much ram its taking up just for an sdxl model
>in VRAM its only 6 gigabytes then unloads down to less than a gigabyte after the gen is done
>>103172692
>>103172704
it's a pretty easy fix : use comfy
https://github.com/chengzeyi/ParaAttentionI like those speedup ratios
>>103172692
>>103172704
it's a pretty easy fix : use linux
>>103172802
Noo 1girl my penis can heal you
>"hey anon, how do you like your meat?"
>>103172847
add giantess, cityscape, landscape, destruction to your prompt
>>103172873
maybe later anon, right now she's just grilling
slurrp
>>103173228
cool
>>103173228
it wasnt even that long ago where will smith eating spaghetti seemed like a pipe dream for local
>>103173281
to be fair it's still far from the real deal kek
>>103173317
MochiHD SOON
>>103173326
I really hope they'll deliver, I won't doom now because they released their V2V vae recently (that shit is useless but at least it shows they are committed to delivering the goods little by little)
>>103173338
What is even the use case for video2video?
Just release img2video already
>>103173406
yeah but we're not far from that original video now are we? kek
and LOOK AT THAT >>103173326
>>103173406I'd say local has easily surpassed the original Will Smith spaghetti video (if it's the same one I'm thinking of) but we're still light years off MiniMax quality
>>103173349
>What is even the use case
What is the use case for txt2img?
>>103173462
>we're still light years off MiniMax quality
I truly believe we'll be close to MiniMax with MochiHD, once we'll get that and the i2v vae it'll be a fucking renaissance in the video gen ecosystem, it's insane what happened those last 4 months, we had nothing, we were making fun of SD3M and right after we got Flux (a local model that is completely competitive with the best API models) and Mochi (same story here), life is unpredictable at times and when it's on the good unpredictable side I fucking take it
>>103173462
I thank the pessimism for boosting us into an insane renaissance for visual genning
but its sad we had to lose LLM's in the process and lol audio
>>103173489
i haven't seen much of MochiHD yet, been more focused on how ponyfags are getting BTFO'd by Illustrious coming out of nowhere with the explosion in quality.
>>103171960
If you don't want that to happen anymore, create a new HF account, and use this:
https://huggingface.co/spaces/huggingface-projects/repo_duplicator
To create a duplicate of models you care about so they stay online.
The great thing is that when a model goes missing and nobody made such a copy, it means nobody cared, so nothing of value was lost.
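[editor's note] The same mirroring can be scripted locally with `huggingface_hub` instead of the duplicator space. This is a sketch, not the space's actual implementation: `mirror_name`/`mirror_model` are hypothetical helper names, the `-mirror` suffix is an arbitrary choice, and the upload step requires a logged-in account (`huggingface-cli login`).

```python
def mirror_name(src_repo: str, my_user: str) -> str:
    """Build a destination repo id under your own account, e.g.
    'Laxhar/noobai-XL-Vpred-0.5' -> 'me/noobai-XL-Vpred-0.5-mirror'."""
    model_name = src_repo.split("/")[-1]
    return f"{my_user}/{model_name}-mirror"

def mirror_model(src_repo: str, my_user: str) -> str:
    """Download every file of src_repo, then re-upload under your account.

    Network-touching calls are kept inside the function so the name
    helpers above stay importable without huggingface_hub installed."""
    from huggingface_hub import snapshot_download, create_repo, upload_folder
    dst_repo = mirror_name(src_repo, my_user)
    local_dir = snapshot_download(repo_id=src_repo)  # pulls all files to the HF cache
    create_repo(dst_repo, exist_ok=True)             # needs `huggingface-cli login` first
    upload_folder(repo_id=dst_repo, folder_path=local_dir)
    return dst_repo
```

Usage would just be `mirror_model("Laxhar/noobai-XL-Vpred-0.5", "your-hf-username")`; the copy survives even if the source page gets nuked.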
>>103173507
the model was never available for download, but the page existed and the model was listed, which means that it exists, the page was just removed during the night so idk what happened but it's not like i wouldn't have downloaded it if it was available
>>103173595
>but its sad we had to lose LLM's in the process and lol audio
llm? A few days ago, Qwen2.5-32b-coder got released and this shit is almost Sonnet 3.5 tier, the llm fags got a renaissance as well, the Chinks are starting to seriously dominate the AI space, and that was expected, that's what happens when you don't give a fuck about ""safety"" and ""copyright"", you get good models because you train your shit with the best data you can find, based chinks
Does anybody have the lomal Lora that was here?
https://pixai.art/model/1757417777691869282/1757417777704452195?lang=zh
Apparently it was taken down from civitai and lost to time, and it was good.
>>103173595
qwen2.5 is one of the most cucked models ever released, erp is bland as fuck, if it wants to comply at all
llms are dead until further notice
>>103173595
Did they get a renaissance? Anytime i check out lmg it's somehow degenerated into an even whinier, spammier, gayer ldg, hard to catch up with the news when no one wants to try out models and the few that do get spammed out with cope and seethe. That's awesome to hear at least.
>given that's the same thread that always blasted chink models and essentially parroted MSM talking points to get people to stop talking about them
>>103173614
yeah, I don't really like the /lmg/ community as well, they can't stop whining, the worst community will always remain /sdg/ but /lmg/ isn't far behind lol
>>103173568
Oh, I confused it with noobai-XL-Vpred-0.5 which is still online.
So, what was it? Did he promise a 0.6 version and this is his way to "unpromise" it?
>>103173607
>llms are dead until further notice
Did the ones we had disappear or did they stop getting better like we really don't have a worthy SD1.5 successor yet?
>>103173630
there was no communication at all about it, some anon posted the link a few threads back and that's all we have. Maybe this was a mistake and they're not ready to release this one yet, or maybe they want to wait for a further milestone like 0.75 and they're vaulting this one, never to be seen again
>>103173630
0.6 was released a day ago but somehow they removed it from huggingface quickly after
>>103173647
From what I've gathered we're going to get a Vpred0.6 but it's going to be rebranded.
>>103173652
That anon said the release had nothing to download.
>>103173652
>>103173671
what i said is that the HF page for the model was public for a day or so, we could see the model being listed and the files, but the files were not available for download
which confirms that this model exists, but eventually the page was taken private or deleted, no idea what happened
>>103173686
alright im placing my bets that there'll be a big update today and 0.6 was a mistake/dry cum
>>103173640
for a moment you can cope when a shitty model is a few less % shittier than its predecessor, but the copium glasses quickly wear down and you realize it's still a shitty model
but now the stream of "just a little less shitty" models is dry, the last improvement in erp was nemo, we've entered the era of ultracucked benchmarkmaxxing chinese models, and it's just getting started
>>103172692
god damn it looks like Reforge has the same issue
>>103173692
hopefully, god i hope they don't fuck this up, it's the one model that will nuke pony forever if trained correctly
>>103173761
glad a normal person finally chimed in, thought i was alone.
>>103173791
honestly man its already nuked pony out of orbit for me. There's just character loras i want and that's it, at which point, i can make them myself, illustrious seems even easier to train than pony ever was.
Im testing how well it knows characters now and i'm surprised how many obscure characters are at least halfway trained.
i've already deleted around 30~ish of my pony character loras because illustrious just knew them perfectly without even needing to prompt their clothing or anything.
>>103173823
>right as i make this post i'm BTFO'd into orbit by illustrious genning mew from JSR as GUM from JSR dancing with the pokemon mew
et tu, Illustrious.
>>103173823
been messing with noobai vpred 0.5 for 2 days now and yeah, pony is already beaten, now who knows how V7 will come out but to be honest i don't expect much if anything at all from it
>>103173854
at this point noobai is more advanced than illustrious, i know they have v1.0 and v2.0 in stock but right now noobai just is the better of the two
>>103173823
>glad a normal person finally chimed in, thought i was alone.
I don't think it has done this before. I'm so used to shuffling through checkpoints because of doing/testing merges. Now I have to restart reForge after 3 checkpoint switches
>>103173891
so if noob is just trained off of illustrious, but its training is now more advanced than base, what do we even call it? Noob or Illustrious? I guess that's why im starting to see people call it noob now, because noob's better than its base.
this shits confusing, doesn't help it all just came into existence within the past few months and became this good within a month.
>>103173907
>3 checkpoint switches
its a single switch to another checkpoint and i get heavily degraded performance that i need to restart.
>>103173932
noobai is its own thing, their computing power is even higher than what ponyv7 is currently being trained on, also a bigger dataset
maybe it'll change when illustrious 1.0 releases but before that : noobai is the more advanced model by far
>>103173701
So we never got to the level of AIDungeon's Summer Dragon for local, huh?
>>103171862
>Don't forget to maintain thread quality :)
>no rentry
Is that supposed to provoke schizo anon?
>>103174112
baker change is a subvert forceful takeover from /sdg/. It explains the collage quality drop. Suddenly we will start hearing about how much dick Comfy is getting and we will have another thread split.
iykyk kinda thing
what is this?
https://civitai.com/models/944844/2k-ultimate-xl-13gb-fp32?modelVersionId=1057826
is it any good?
>>103174217
kek the nudity is worse than 1.5 how the fuck did he do that?
>>103172302
How can I get an xy plot for loras?
I'm trying to get a row at which the lora is not applied at all
>>103174188
Just saying. It might attract schizo anon. Carry on, fren.
>>103174252
start by using <lora:whatever:0.0> in your prompt, then seek 0.0 and replace with 0.2, 0.4, etc...
uh oh
>>103174217
>SDXL
lil bro hasn't opened his computer in 5 months
any tips for negatives/positives to avoid animals in my pony gens. "Zebra print" is creating a zebra. I tried animorph, animal in the negatives. very nsfw image example. https://litter.catbox.moe/m26x16.png
>>103174268
I find that using <lora:lora_name:0.01> is better. Formatters have screwed me on changing 0.0 to 0 and there are sometimes other 0s in the prompt for me.
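[editor's note] The seek-and-replace LoRA-weight sweep the two anons describe can be scripted outside the UI. A minimal sketch, assuming the a1111-style `<lora:name:weight>` prompt syntax; `sweep_lora_weights` is a hypothetical helper, and it starts at 0.01 rather than 0.0 per the second anon's formatter caveat:

```python
def sweep_lora_weights(prompt: str, token: str = "<lora:whatever:0.01>",
                       weights=(0.01, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Return one prompt per weight, substituting only the LoRA tag.

    The first weight (0.01) is effectively a no-lora row for the XY plot,
    while avoiding the bare-0.0 values that prompt formatters mangle."""
    lora_name = token.split(":")[1]  # "<lora:whatever:0.01>" -> "whatever"
    return [prompt.replace(token, f"<lora:{lora_name}:{w}>") for w in weights]

prompts = sweep_lora_weights("1girl, sword, <lora:whatever:0.01>")
```

Each entry can then be fed to the batch/XY-plot script of whichever UI you use; row one is your "lora off" baseline.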
>>103174401
only humans, not furry
>>103174401
source_furry, furry, anthro
In negative
>>103173640
>a worthy SD1.5 successor
Noob gets VERY CLOSE stylistically imo and i think that's the only realm where 1.5 shines (but how much of that is nostalgia?). Using that old model is really fun until you remember the specificity by which you can prompt Flux.
>>103174424
>>103174488
Thanks anons!
>not furry
didn't work, maybe because I have BREAKs
>only humans, furry, anthro
very helpful. I had to add some weights to avoid the basic blank background, normal pony boring
>source_furry
this destroyed all 'creativity' of the gen. Details dropped and it just looked bland. I shouldn't be surprised eliminating a large part of the model wrecks stuff, but here I am.
ok anons, the true power of NoobAI, img2img upscaling a gen up to 3584x5888
picrel is a compressed version, in the catbox is the actual image
https://files.catbox.moe/c6ynd5.png
try and zoom in, each minuscule detail, each minuscule pen stroke, pony would shit the bed hard with this test, but not NoobAI
>>103174551
thats fucking insane, are you using a 3090 or a 4090?
>>103171862
Any tips on how to build a dataset ? Trying to learn how to lora but the amount of cleanup necessary seems gargantuan.
I want to automate as much of it as possible
>>103174584
3090
18s for base gen
34s for 1st img2img upscaling
2m47s for 2nd img2img upscaling
i'm doing lots of testing to get the process down to a science, then i think i'll release my comfy workflow to take full advantage of NoobAI
>>103174026
Local models should be better than AI Dungeon ever was.
>>103174539
Did you try no animals in positive? It used to work in some old SD1.5 models.
>>103174488
And I didn't mean SD1.5 specifically, but its merges and finetunes.
I wasn't impressed by noob, then again, I don't care about characters or anime.
>>103174594
Start with a dataset of 50 images, try different training settings and captioning. Quality over quantity every time.
>>103174606
what were your resolutions per step?
asking because i wanna get a 3090 soon and that already sounds pretty impressive
>>103174594
Instead of working inside of folders you should be using something like MongoDB. Download your images, put the captions, titles, metatags, etc and imagepaths into a table in a database. When you need a dataset, have a script copy, resize and format the images for the trainer you're using.
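[editor's note] A sketch of the database-backed export step this anon describes. To stay self-contained it uses plain dicts in place of MongoDB documents (the equivalent pymongo query is shown only as a comment), and the one-caption-`.txt`-per-image layout is an assumption borrowed from kohya-style trainers, not something the post specifies:

```python
from pathlib import Path

def select_records(records, tag):
    """Pick records whose tag list contains `tag`.
    Stand-in for a MongoDB query like: db.images.find({"tags": tag})"""
    return [r for r in records if tag in r["tags"]]

def export_dataset(records, out_dir, size=(1024, 1024)):
    """Copy, resize and caption the selected images for a trainer run."""
    from PIL import Image  # pillow assumed; imported lazily
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, rec in enumerate(records):
        img = Image.open(rec["path"]).convert("RGB")
        img.thumbnail(size)                               # resize, keep aspect ratio
        img.save(out / f"{i:05d}.png")
        (out / f"{i:05d}.txt").write_text(rec["caption"])  # caption sidecar file

# example documents as they might sit in the database
records = [
    {"path": "imgs/a.png", "caption": "1girl, sword", "tags": ["1girl", "sword"]},
    {"path": "imgs/b.png", "caption": "scenery, forest", "tags": ["scenery"]},
]
```

The point of the design is that the images are downloaded once and never touched again; every training run is just a fresh `select_records` + `export_dataset` into a throwaway folder.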
>>103174675
started with 896x1472, did 2.00 upscale every time, the secret sauce is to offset the starting step the bigger your image is, this way it wont generate things like extra belly buttons or nipples everywhere
>>103174649
What utilities/scripts do you use for bucketing and cleanup ?
>>103171862
Bonus round of: >>103165357
>>103174691
that's nutty i knew the 3090/4090 should be able to gen base reses really high like that but fuck that blows my 1080 out of the fucking water, very cool
trying so hard to figure out the perfect resolution for performance cost and it's tough with how much resolution affects what the model will generate
>>103174695
I clean images with Topaz ai and gimp, manual crop. Not viable with large datasets.
>>103174624
>no animals
I never had to do that in 1.5. My tastes could have been different though. muh look at this scenery phase. Anyways, it didn't help much unfortunately.
>>103174551
When you get ultra res high it really loses value using a checkpoint during the upscale, pony or NoobAI, doesn't matter. It comes down to the upscaler choice.
real_hat_gan
https://litter.catbox.moe/rwyfts.jpg
fatal_anime
https://litter.catbox.moe/vx6b8f.jpg
anime_6b
https://litter.catbox.moe/yp4ecz.jpg
performed using a 4070. I am not retesting with 3090. I chose the weaker one on accident.
>>103174773
listen, i bought my 3090 for the main purpose of doing AI, i think i got it just after SDXL released. Been training a lot of loras and model finetunes with it.
What you want is raw vram, the performance increase on the 4090 is not worth the steep price increase, and you will not be able to run more things with a 4090 because the limit is the amount of vram that you have
>>103174803
not if you offset the starting step, if you're working with an anime and artwork based model, you don't really need to go further than 4000x4000 anyway, could've added a tiled upscaling on top of that gen and boost it up to 7168x11776 but it doesn't really make any sense
still, img2img upscaling is mandatory if you want to correct the gen, it needs to be a low amount of denoise and the starting step offset by 6 to 12 (depends)
>>103174863
holy wisdom, so its true. by the way from your comparisons it seems like fatal_anime is way better than the others because it actually adds that brushstroke detail, is it slower than something like 4x ultrasharp? thinking i want to switch to something that adds stroke details like that for my style.
>actually gonna catbox what im working on so you can see what i mean and the struggle with resolutions making poses weird
https://files.catbox.moe/y7699l.png
>>103174921
>https://files.catbox.moe/y7699l.png
I like the work you put into it, but god I hate the artstyle.
>>103174863
>pony would shit the bed hard with this test
>it needs to be a low amount of denoise
alright fanboy. I went up there to show that the upscaler is what matters, especially if you are dropping your denoise. First pass upscale needs a good checkpoint, but you shouldn't be relying on it for upscale after that. Everything is tiling anyways and you are probably at least VAE tiling for the last step anyways.
>imagine genning above 512 pixels
>>103174945
honestly it got too metallic-looking and digital, its meant to lean more painterly
>and god adetailer is hard to tardwrangle
it likes to make bimbo or boney faces
>>103174863
>starting step offset
Is that done using KSampler Advanced start_at_step?
>>103174921
all my gens use the old 4x ultramixbalanced as the way to upscale the image between img2img upscaling.
Basically what I do : do the base gen, funnel it thru the upscaling model (2.00x upscale), then i convert it to latent, i inject some noise, then into the sampler it goes for another round, i do this 2 more times and voila
>>103174952
to further prove my point, i have used just the upscaling model with the same amount than the 2 img2img upscalings combined, here's a little comparison, now humor me by telling me which side is the best
if anything, you proved that you have no idea what you're talking about
>>103174973
this is the backbone of my workflow, been autistically testing that shit for a while now trying to wrangle pony into compliance, but with NoobAI i feel like picrel is quite close to perfection
>>103175013
>how many layers of upscaling are you on
>>103174976
>>103175013
i wish the majority of regular posters were like this guy, learned more in this past couple messages than i have in the past couple months of ldg/b/lmg/sdg kek its nuts. thanks for the insight man.
>insert gen of vegeta kneeling here
>>103175017
kek
>>103174637
I think most anons view 1.5 rose-tinted
>>103174976
>upscaling model with the same amount than the 2 img2img upscalings combined
wtf are you talking about.
>if anything, you proved that you have no idea what you're talking about
no I can't discuss things with people who don't know the difference between an upscaler model and a txt2img model.
>>103175013
noise injection plus start at step. claims the model is better. I am fucking done.
>>103175055
>literally unable to understand that a txt2img model uses a latent to build the image upon, the resulting image can then be upscaled and reconverted into a latent to further improve it, maybe he'll learn about img2img one day, if we're lucky
>even with the workflow, literally fucking misses the point of upscaling the image THEN doing img2img
>ignores proof that my method is better than just using an upscaler model to do the entire thing, still doesn't understand that at base gen resolution, small details are blurry or wrong and that img2img is the only way to fix that
>then has the audacity to claim i don't know the difference between an upscaler and a txt2img model
anon, this is getting really embarrassing for you
am not expecting a reply from you, dont care
>>103174863
If you can afford it 4090s are literally 50% faster than 3090s which is significant for training. But at this point I'd just wait for a 5090 which will again be 50% faster than a 4090 and have 32 GB of VRAM.
>>103175143
>then has the audacity to claim
kek
>>103175013
I see, thanks.
Though I still don't know how's KSampler Advanced different from the regular one. I thought when using start_at_step without controlling stop_at_step (which is useful for multi-stage sampling?) the only difference from denoise level is that you can have precise control (i.e. a specific step instead of a percentage of the total steps that might fall onto one or the other uncertain step). But essentially it's still the same, so I've been lowering denoise to find a spot where a second navel doesn't appear. Tell me more.
>>103175027
when i am sure it's as good as it can get, i'll share my full workflow
might do a rentry if there's demand, i'm nowhere near done testing and maybe the upcoming version of NoobAI vpred works a little differently, we'll see
>>103175180
well it's just that the advanced ksampler does everything in one node, it's much less mess on my workflow
you don't need a "stop at step" because it's just the difference between your total steps minus the start at step. Let's say that you gen for 15 steps and your start-at-step is at 5, 15-5 = 10 effective steps, you're basically only doing steps 5 to 15. You can verify this in the cmd window which shows only 10 steps being used to gen.
The basic rule for all this is : the bigger the image, the lesser should be the noise injection and the higher should be the starting step. also, since you remove less and less noise and you start further and further, the number of effective steps effectively diminishes the bigger the image is,
see how in my workflow the 1st img2img has 8 effective steps while the 2nd img2img only has 5.
>>103175232
Right let me reword that. Is there some practical or esoteric advantage of using specifically start_at_step, or can I achieve the same result by simply lowering the denoise in picrel like I have been doing till now?
>>103175321
start-at-step is a backbone of this method. If you use any type of preview that lets you see how the image is generating step by step, then you see how at first it's a vague blob then details appear. If you don't do start_at_step, you're essentially telling the ai that it should go through that "blob" part again, which it doesn't need to do, it needs to fill in finer and finer details as the process goes on.
The problem with just denoise is basically that. You need to tell the model to add details, not try and regenerate the image from the start. This will lead to unwanted element duplication or subdivision as well as other exotic problems the bigger your image is.
tl;dr : yes, start_at_step is essential, use it.
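[editor's note] The step arithmetic in the posts above (15 total steps, start-at-step 5 giving 10 effective steps; 8 effective on the 1st img2img pass and 5 on the 2nd) can be written down concretely. The per-stage offsets in the schedule below are illustrative, reverse-derived from the anon's reported effective-step counts, not his actual workflow values:

```python
def effective_steps(total_steps: int, start_at_step: int) -> int:
    """KSampler Advanced only runs steps [start_at_step, total_steps),
    so 15 total steps with start_at_step=5 does 10 effective steps."""
    return total_steps - start_at_step

# thread heuristic: the larger the image, the later you start
# (stage, resolution, start_at_step) -- offsets are illustrative
schedule = [
    ("base gen",    (896, 1472),   0),
    ("1st img2img", (1792, 2944),  7),   # 15 - 7  = 8 effective steps
    ("2nd img2img", (3584, 5888), 10),   # 15 - 10 = 5 effective steps
]
for name, (w, h), start in schedule:
    print(f"{name}: {w}x{h}, effective steps = {effective_steps(15, start)}")
```

Each stage does a 2.00x upscale, and the shrinking effective-step count is the "add detail, don't regenerate" behavior the anon describes.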
Can someone turn this image of cells into a video of it morphing into a being? I ran out of credits for runway ai and don't have time to find a different website because Im at work
>>103175390
cool idea, wish I had the vram for it
>>103175390
>>103175560
still need to make me a mochi workflow but i'd rather wait for mochiHD
>>103175368
>If you use any type of preview that lets you see how the image is generating step by step, then you see how at first it's a vague blob then details appear.
But it isn't? That's my point, or at least the point I'm trying to clarify. Empirically it's exactly the same, as I lower the denoise more of the original is preserved. A vague blob only happens if denoise is too high, which would never be needed when simply upscaling.
From the community guide
>Unlike the KSampler node, this node does not have a denoise setting but this process is instead controlled by the start_at_step and end_at_step settings. This makes it possible to e.g. hand over a partially denoised latent to a separate KSampler Advanced node to finish the process.
So from my understanding the use of start_at_step and end_at_step is needed only when you chain the samplers WITHOUT finishing the image. i.e. not for upscaling "first image>second image" but for varying the parameters mid-sampling "first image>still the same image"
>>103175666
these are not the same process, start_at_step works differently
>This makes it possible to e.g. hand over a partially denoised latent to a separate KSampler Advanced node to finish the process.
exactly, you inject some noise and then you can use start_at_step so that the noise is treated differently
can pretty much guarantee you these don't work the same. Even extremely low denoise levels like 0.05 would result in very different images when the start_at_step was set to 0 as the image got bigger and bigger. You don't want the model to repeat the first few steps during img2img, it's as simple as that.
>>103175738
you too
>>103175753
>the model to repeat the first few steps
There's no "first steps" from my understanding. It's just steps. If I set low denoise the sampler essentially starts from some uncertain step, and that uncertainty of the exact steps*denoise starting point vs a specific number when using start_at_step is the only difference afaik.
>You can right-click on the Ksampler Advance node and there is an option "Set Denoise" it will ask you how many steps you want, how much denoise then it will set the steps, start_at_step and end_step for you.
Which I didn't know but now I do. And it obviously sets the same value I would have set myself judging by the intuitive understanding of how steps + denoise work in the simple KSampler.
Initially I thought you really know what you're doing but now it seems you just convinced yourself of an illusory correlation.
>>103175908
give him a moment. He will tell you that you are embarrassing yourself.
>>103171908
We needed this.
>>103174594
Depends on what you're trying to train.
>>103175908
anon who is embarrassed here, the one with all the audacity.
>There's no "first steps" from my understanding
kinda? Leaning on the no. First step does something different than 5th step. The injection of noise depends on the scheduler and where you are in the process. Step 1 noise injection is different than step 5 noise injection. This makes more sense because you don't want to inject a bunch of noise 1 step before you are done. I am using the term noise loosely, please don't dig into it too much.
Let me show with a few screenshots. First one is just proving my workflow works. I can set my ksamplers to do different steps and starts and stops and get the same image. The goal is to generate the same image with starts and stops of different lengths and denoise of 1. I am using a constant input for the latent and not the default latent.
>>103176290
this screen: a pure yellow latent (well, image with decode) was fed into the ksamplers. So same input, mostly the same settings, and the 3rd one was told not to clean up the noise at the end. Just to say this out loud, the image doesn't work because this is effectively 8 steps. The start/stop at 0-8 is massively more noisy than the 16-24 start/stop. The next topic would be handling the difference between determinate and non-determinate schedulers.
>>103176362
I don't understand how these images are different when you think about it under the idea of noise injection. I get that it does different things when the steps are different. The thing I am foggy about is if the noise injection is every step and then working towards a minimum or if the scheduler/sampler are doing other things and the noise is still constant. I am leaning towards steps changing the noise injection. If I change the start/end steps to 1 then I get different images. I can't see the difference, but there is.
>>103176567
here is the difference. I'll stop spamming this thread now.
>>103175546
>anon, am i still cute enough for the collage?
Cooking is interesting, even at quite high guidance it's not always guaranteed. When I set my guidance to 3.0 (with Flux) most gens become very cooked,* but 1 out of every 15 or so will be not that cooked, one in 30 maybe not overcooked at all. If I go down to 1.4 most gens won't overcook, but a sizable minority of them still will. So the trade-off I have to consider is: do I want more uncooked gens per minute, or fewer but with stronger guidance from my prompt? And then there's the next question: are the uncooked gens actually getting the additional guidance, or are they by some fluke not being guided as strongly,** which is why they don't overcook? If the latter then high guidance would have no or little benefit.

* I am defining cooked as meaning having that immediately apparent AI-generated look; this can manifest as sliminess, excessive contrast, buttchin, or in many other ways, but the important thing is that it's immediately discernible at a glance, it's an overall effect that the veteran prompter detects naturally without any need for close inspection or thought.

** e.g., when there is poor correspondence between the state of the latents mid-denoising and the prompt this can lead to low impact of the prompt guidance on that denoising step, because the model doesn't know a plausible way to take the conditioning into account given where the image is at, so the prompt behaves almost like it's meaningless noise.
cozy bread
What checkpoint would you recommend I use in stable diffusion if I want something like this? And what setup in general
>>103177344
>something like this?
Bottom is Flux, not sure about top, could be a Pony Realism model
>And what setup in general
Comfyui or Forge. Links in the OP
>>103177481
Thanks!
what cfg scale are yall usin
>>103173532
cool
>>103177553
3 for Flux
4-7 for SDXL
!! note this image is from Flickr, NOT generated, posted because relevant to the text of the post
>>103177344
bottom gen is mine. just flux dev nf4 with t5xxl_fp16, no loras no special checkpoints etc.
I img2img something, usually a blurred social media pic but in this case picrel from flickr, at 0.90 to 0.94 denoising. My guidance is set in that picture to 1.37, sometimes as high as 1.6 if the prompt is really simple and open-ended. 44 steps sgm_uniform dpmpp_2m.
The prompt will be written to be like a social media post often about a friend or family member, sometimes a celebrity or a stranger, sprinkled with just a little bit of "and also she's hot". I seldom just go straight for "sexy lady with big breasts", I always have to establish a context for the photo so it doesn't just look like porn
In that image she was supposed to be "my sister Melanie" (I always use a name) who is "kinda thicc" wearing a jean paul gaultier bodysuit on her birthday at a chain restaurant. You can use brand names, of clothing or of places, to exert a strong influence on the subject. High fashion expensive stuff tends to make subjects look much older because young people can't afford it, so be aware of that.
"Grainy facebook photo of [such and such]" also helped set the style but that was near the end of the prompt.
I'll gen 1000-2000 images overnight, keep 50-200 of them, and post sometimes one or two, sometimes more, often none.
>>103177553
1.4 for flux, usually. (Guidance, not CFG. CFG stays at 1.)
>>103177639
This is excellent advice. What's your power bill like?

>>103177639
>!! note this image is from Flickr, NOT generated, posted because relevant to the text of the post
damn thats a really nice img tho desu

>>103177656
I prefer not to think about it, but power is pretty cheap where I live so not too bad.

>>103177214
why?
Getting a new build. I gather GPU is the prime concern for performance in image gen but is the cpu a concern at all? Would I be gimping performance if I go with an x3d with less cores?
>genning 1girls holding weapons with pony model
>frequently fucked up, swords all over the place
>get annoyed
>switch to genning naked apron images
>almost all come out perfectly
I forget it's basically a porn model

>>103177904
>is the cpu a concern at all
no, not really

>>103177917
Make her ride a lawnmower and hold white monster energy

>>103177917
A job better suited to her talents.
Actually, might as well ask over here as well.
For some reason regional prompter only works for me on Forge, not reForge. Not the worst thing in the world at the end of the day, and I would normally just move over, but reForge has these CFG++ samplers that I kind of liked using. Is there a way to get those on Forge?

>noobai
good, at last we're moving on from sloppy flux
>>103178081
She even has a Monster branded mower

>>103178603
God damn that's great!

Actually, just more directly: has anyone got regional prompter working on reForge?

>>103178603
hahaha what the fuck
>>103176362
>The start/stop at 0-8 is massively more noisy than the 16-24 start/stop.
Because in the regular KSampler "steps" {value} is the total amount of steps it does, but in Advanced "steps" runs from 0 to {value}.
Advanced 0-8 is equivalent to 8 steps at 1.0 denoise of regular; Advanced 16-24 is regular's 8 steps at 0.33 denoise. So when in Advanced you do 0-8 you start from pure noise, and when you do 16-24 you start from 0.67 of the image. Which is why you see so much noise.
I was talking about what you showed in your pic 1: that regular KSampler with denoise, and Advanced KSampler with start_at_step and corresponding values set via Set Denoise, produce identical images.
In my picrel, inside each of the two groups the images are the same because I set the values correctly.
>>103176567
Stochastic samplers remove some predicted noise but also add some random noise at each sampling step. So when you do more steps, due to that randomness the final image is different from what it was with fewer steps; it's not just the linear denoising of deterministic samplers.
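The equivalence described above reduces to one ratio; a small sketch (my arithmetic, matching the numbers in the post):

```python
def advanced_to_regular(total_steps, start_at_step):
    """An Advanced KSampler running start_at_step..total_steps behaves like
    a regular KSampler doing (total - start) steps at
    denoise = (total - start) / total."""
    steps = total_steps - start_at_step
    return steps, steps / total_steps

# 0-8: 8 steps at denoise 1.0, i.e. starting from pure noise
print(advanced_to_regular(8, 0))    # (8, 1.0)
# 16-24: 8 steps at denoise 0.33, i.e. starting from 0.67 of the image
print(advanced_to_regular(24, 16))  # (8, 0.3333...)
```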
denoise
start_at_step
Deadline's rapidly approaching, so start genning some bakerslop if you want to make it into the next collage.
>>103179085
please see >>103175143
>am not expecting a reply from you, dont care
I am done with the stupid games anon. If you want to have a conversation be a human. I am tired of explaining things to you and you losing your shit.

Babe wake up, Pyramid Flow released their 768p model
https://huggingface.co/rain1011/pyramid-flow-miniflux/tree/main/diffusion_transformer_768p

>>103177917
>>103178081
>>103178603
holy kino

>>103179344
What is this?

>>103179365
a video model

>>103179374
Sounds too hard for my pc.

>>103179384
as usual, kijai will make a Q8_0 version of it and it'll probably be usable for 24gb vram cards

>>103179384
have you checked the folders? I am not seeing anything too crazy for requirements. I could be missing it.
found this on /lmg/, why are they funni and not us? :(
>>103179411
it'll probably ask for a lot of vram, 768p is a really high resolution

>>103179418
/lmg/ had soul at one point, that gen is like a year old brother

>>103179418
that's not funny

>>103179450
that's weird, I was a frequent lurker in /lmg/ a year ago and I never saw that picture

>>103179334
How's that relevant to what I have been talking about?

>>103179463
my timeframe is probably fucked, it could be from jan-march this year, but i'm 99% sure the miku/teto posting started Q4 last year.
that looks like a WAY better version someone just inpainted/regenned recently for sure though.

>>103179344
Since they were too lazy to provide any examples, I searched a bit and found this:
https://huggingface.co/rain1011/pyramid-flow-miniflux/discussions/7#6734f1757b5bba7729ad4846

>>103179426
VERY high for consumers.
i have a nudity lora i often use for flux to add nipples. would there be any point to merging this lora with FLUX, or would it be the exact same as if i just loaded the lora separately?
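Mathematically it's the exact same thing: a LoRA is a low-rank delta B·A added to a weight matrix, so (W + s·B·A)x equals Wx + s·B(Ax), and merging only changes where the addition happens. A toy numpy check (shapes are illustrative, not Flux's):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))   # base weight matrix
A = rng.standard_normal((4, 16))    # LoRA down-projection (rank 4)
B = rng.standard_normal((16, 4))    # LoRA up-projection
x = rng.standard_normal(16)         # some activation
scale = 0.8                         # LoRA strength

runtime_out = W @ x + scale * (B @ (A @ x))   # lora loaded separately
merged_out = (W + scale * (B @ A)) @ x        # lora merged into checkpoint

print(np.allclose(runtime_out, merged_out))   # True
```

Roughly speaking, the only real-world differences come from precision: with a quantized base like nf4 a merge isn't lossless, because the merged weights get re-quantized, while a separately loaded LoRA is applied on top at higher precision.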
>>103179459
SEX IS FUNNY
(canned laughter)
*funny face*
(canned laughter)

>>103179549
its funny because mikuposters started the degenerative brainrot of that general so sex funny haha (canned laughter) (wilhelm scream)

>>103179554
I literally don't know what Miku is. I just know it's one of the things I don't use that comes with Flux.
>you need more
>>103179554
>mikuposters started the degenerative brainrot of that general
I'm only an occasional lurker in /lmg/ but it always felt forced, like this. The general became centered around an imposed meme identity, and even accepted it.
What's the best current model for anime porn?
>>103179599
pony

>>103179595
I don't mind Miku but they are obnoxious when they start to split the thread in two, so that other retards want to force their Kurisu meme too

>>103179599
>>103179606
not noobAI?

>>103179622
idk havent tried it

>>103179096
my attempt at collage slop. Honestly, I hate it.
>>103179622
depends on how disgusting you want your porn.

>>103179668
this one's got a real shot at making it in, I can feel it

>>103179544
I'd prefer a lora.

>>103179595
yeah, i remember being one of the people calling it out and causing massive sperg rages that tore the threads up and split the threads too
>>103179609
well, we see what happens when enough people don't mind it kek, now the thread is useless.
blows my mind that /lmg/ is the general where nobody even knows how to use the models to begin with, so all the "testings" are skewed by retards with bad configs who genuinely don't know what they're doing.
can't believe i hopped from /aicg/ to that general from june to july last year, oh how the turns have tabled huh.
hm, would anyone even say SD is more difficult to use than LLMs? i'd say LLMs are more difficult but not that bad.

>>103179752
>how the turns have tabled
Nah, /aicg/ is always more retarded. Bad configs and genuinely not knowing what they're doing all around; they want to click one button to import a preset and do nothing more. Don't want to learn how to write system and other prompts, don't want to read the docs and test to learn how to adjust the comparatively limited variety of API samplers. Localfags are cool by definition as long as they don't use ollama lol, fucking hate how to normalfags it's The way to use local.
IMO SD is incomparably easier because it's momentary. You just gen a single discrete pic or animation; for llms that would be like starting a new chat after the very first response.
gonna select "random dictionary entry" until I get three nouns, either objects or concepts, which I can combine to get an idea for bakeslopping
>>103179882
nah, i mean the turns have tabled in terms of whether /lmg/ would stand the test of time in general. honestly im autistic, i should've worded it better.
finding out that /aicg/ users were drinking their own piss for leech methods in q4 2023 was mind boggling, i couldn't believe it but at the same time i could.
AM I IN TIME? THIS IS NOT A 1GIRL
>>103179599
I wish I could say "nice gen" :(
When you do batches of more than one image in Comfy, how is the noise seeded for the images after the first one?
It isn't just incrementing the seed, because if you do that manually after generating you'll see all the images are new.
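A guess at the mechanism (I haven't verified this against ComfyUI's source, so treat it as an illustration): the whole batch is drawn from a single RNG stream seeded once, so image 2 just continues the stream instead of re-seeding with seed+1. Numpy shows the same behaviour:

```python
import numpy as np

def batch_noise(seed, batch, shape):
    """Draw noise for a whole batch from one stream seeded once."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((batch, *shape))

two_up = batch_noise(seed=0, batch=2, shape=(4,))
single_s0 = batch_noise(seed=0, batch=1, shape=(4,))[0]
single_s1 = batch_noise(seed=1, batch=1, shape=(4,))[0]

# first image of the batch matches a solo gen at the same seed...
print(np.allclose(two_up[0], single_s0))  # True
# ...but the second does NOT match seed+1, it's later in seed 0's stream
print(np.allclose(two_up[1], single_s1))  # False
```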
>>103180043artist tags?
>>103180143
Just starshadowmagician
I'm slopping hard for that collage spot rn, I suggest you anons do the same
Will they make 3dpd models based on Illustrious/Noob just like they did with pony? Yes they will.
alright, fine, I'm a horny guy, maybe I'll try this flavor-of-the-month noobAI shit. But if you fools are wrong again...
>>103178081
noobai doesn't really work for me. I have all kinds of problems with it. Am I the only one? I'll be sticking with Flux.

>>103180353
can illustrious do sex positions and stuff like pony? I was under the impression that it could only do 1girl-type gens

>>103180377
Original v0.1 can but it's bad. NAI can and it's good.

>>103180377
>like pony
better than pony, by miles, and can do 2girls without regional prompting to a good extent
will need more updates though.
so far this seems like the best one i've tested
https://civitai.com/models/900166/illunext-noobai-illustrious?modelVersionId=1055353
Does nobody else need tiled VAE? All the workflows seem to use non-tiled.
>>103180460
most AIs will detect and kick to tiled if needed.

>>103180476
wth. I meant UIs not AIs.

>>103180495
No idea why it's doing that.
noobai.
does sdxl work an amd gpu harder than flux?
>>103180503
anon above me, add armpit hair into your prompt
anon above me, sniff your own armpits and report back
anon above me, it smells cheesy
>>103180586
It's not a character, so the face changed.
>>103180625
collage worthy

>>103179344
Has anyone tried it yet? And if yes, is it good?

>>103180658
Neat, what are you using?

>>103180664
Pixart

>>103180495
I am getting some bizarre results with noobai too. Seems prompting is closer to sd1.5 style than danbooru although I am still trying to figure it out.

>>103180681
Is it local yet?

>>103180709
use danbooru and e621 tags only, artist tags are also required, noob doesn't have a default style. you can mix multiple artist tags at different weights to create your own style

>>103180737
https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS/tree/main

>>103180777
Looks cool!
>>103180777
book made out of sand and rock with growing oasis scenery on it, the water of the oasis pond and sand dunes flowing from the book onto the table, fantastic scenery, artistic concept, beautiful and hyperdetailed, starry night, magical
back when we were known as the sovl general
>>103180709
get the danbooru tags autocompletion plugin for forge if you're using that.
if you're in comfy i don't know, tough titties i guess. but that's the way you'll learn how to prompt it properly, it's really not complicated.
>masterpiece (character) (character traits) (location/details)

>>103180830
shit, coffee still hasn't kicked in. put the action after character prompts
still, noob/illust is very flexible and won't just break really badly if you don't follow that order
i'm not following that order right now kek

>>103180814
this

>>103180747
>artist tags
it hates my artist wildcard, going to try a new file. There seems to be a pretty large gap on the characters I tried. It got some that pony didn't, so that was nice.
>>103180839
>if you don't follow that order
where is this documented? I am using breaks, which I also had to look up.

>>103180881
>where is this documented?
i have no idea where anything is documented kek, i've just learned by secondhand anon lessons
but believe me, if you're having issues it's either a skill issue or a checkpoint issue, more than likely checkpoint, because noob/illust is very generous about promptlet issues compared to pony.
sometimes i buy into the "auto/forge makes better outputs" meme
back in my hole I go
>>103180961
gn, stay racist
>>103180881
>it hates my artist wildcard
try manually experimenting with a list of artists first. when making your own style mix i recommend picking 1 main artist whose style you like at a normal weight and then adding a few supplementary artists at lower weight. so something like:
>artist_1, (artist_2:0.45), (artist_3:0.5), 1 girl, (armpit hair:10000)
some mixes might tend to fuck up hands and anatomy so be careful. add these quality tags to the end of your prompt as well
>(masterpiece, best quality, very aesthetic, absurdres:0.4), giantess, cityscape, landscape, destruction
if you haven't, add some furry tags from e621 into your negatives, tends to help with color
>anthro, mammal, furry, sepia, worst quality, low quality,
>try to do a gen with a 1girl laying on her stomach on her bed
>forget to turn off the ass attack lora
>now she's doing some kind of a yoga pose, stretching forward with her huge ass the focal point
>it's still full body
man noob rocks
noobai.
trick to keep from crashing my computer. Flux is more gentle.
>>103180983
>if you haven't add some furry tags from e621 into your negatives, tends to help with color
NTA but THANK YOU, i thought something was fucked up
>>103180995
np. also, if your images end up looking a bit fried try turning on RescaleCFG. i use cfg 5 with rescale on 0.7. noob is very sensitive to cfg
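For the curious, RescaleCFG (as I understand it, following the "Common Diffusion Noise Schedules and Sample Steps Are Flawed" paper) shrinks the guided prediction's standard deviation back toward the conditional one, which is why it tames frying at high cfg. Rough numpy sketch, variable names are mine:

```python
import numpy as np

def rescale_cfg(cond, uncond, cfg_scale, rescale=0.7):
    """Plain CFG, then blend with a copy rescaled so its std matches
    the conditional prediction's (rescale=0 is ordinary CFG)."""
    guided = uncond + cfg_scale * (cond - uncond)
    rescaled = guided * (cond.std() / guided.std())
    return rescale * rescaled + (1.0 - rescale) * guided

rng = np.random.default_rng(0)
cond = rng.standard_normal(1000)
uncond = cond + 0.1 * rng.standard_normal(1000)

plain = rescale_cfg(cond, uncond, cfg_scale=5.0, rescale=0.0)
full = rescale_cfg(cond, uncond, cfg_scale=5.0, rescale=1.0)
# full rescale restores the conditional std; plain CFG inflates it
print(float(full.std() / cond.std()))   # ~1.0
print(bool(plain.std() > cond.std()))   # True
```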
anime sloppin'
>>103180989