/g/ - Technology

Discussion of free and open source text-to-image models

Breviously baked bread : >>103165357

The Demon of Laplace Edition

>Beginner UI
Fooocus: https://github.com/lllyasviel/fooocus
EasyDiffusion: https://easydiffusion.github.io
Metastable: https://metastable.studio

>Advanced UI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://aitracker.art
https://huggingface.co
https://civitai.com
https://tensor.art/models
https://liblib.art
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3

>SD3.5L/M
https://huggingface.co/stabilityai/stable-diffusion-3.5-large
https://replicate.com/stability-ai/stable-diffusion-3.5-large
https://huggingface.co/stabilityai/stable-diffusion-3.5-medium
https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-medium

>Sana
https://github.com/NVlabs/Sana
https://sana-gen.mit.edu

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux
DeDistilled Quants: https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd
https://rentry.org/sdvae

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Don't forget to maintain thread quality :)

>Related boards
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai
>>
>all 1girl collage
grim
>>
File: ComfyUI_00117_.png (1.22 MB, 960x1456)
noobai.
>>
File: 00637-3670339743.jpg (200 KB, 1024x1536)
>>103171908
i need to JACK OFF ASAP
>>
>all 1girl collage
win
>>
>>103171908
cringe
>>103171926
based
>>
File: ComfyUI_00119_.png (1.42 MB, 960x1456)
>>103171922
>>
>>103171908
one of them is a titan. so YOUR WRONG
>>
>noobai vpred 0.6 hf link now leads to a 404
https://huggingface.co/Laxhar/noobai-XL-Vpred-0.6
what the FUCK???
>>
File: ComfyUI_00123_.png (1.64 MB, 960x1456)
>>103171938
>>
File: ComfyUI_00124_.png (1.58 MB, 960x1456)
>>103171961
don't ask me why snoopy is in hell.
>>
File: ComfyUI_17807_.png (2.69 MB, 1080x1920)
>>
File: 1000016245.jpg (67 KB, 640x763)
>the 1girl lobby is now too powerful
i kneel
>>
File: ComfyUI_00115_.png (890 KB, 960x1456)
>>103171982
Very polished looking. My noobai look like this.
>>
File: 2024-11-13_00012_.png (1.01 MB, 720x1280)
Is this a 1girl?
>>
>>103171993
you need img2img upscaling anon, not that hard to do
>>
Any decent workflows for generating animations yet?
I tried the animated diffusion extension but it doesn't seem to do well with XL models.
>>
Blessed thread of frenship
>>
File: 00662-1683560962.jpg (200 KB, 1536x1024)
>>
>>103172002
that's a 1fur
>>
/ldg/ would have been a thriving paradise if anons would just jerk off before posting
>>
File: comfyunpack.png (171 KB, 1489x1107)
found a way to make noobai work with sdxl efficient loader and sampler, just gotta unpack, route base model thru vpred and rescaleCFG then repack
if it helps any comfychads itt tryna de-clutter their workflows
>>
>>103172302
Instead of all that crap I just load the model in forge then type words to get sexy pictures
>>
>>103171862
horniest thread in eons shameful
>>
File: Mochi_00036.webm (615 KB, 848x480)
MochiHD waiting room
Mochi i2v waiting room
>>
>>103172312
happy for you if you get what you want from forge anon, i'm just too used to comfy to go back now
>>
File: ComfyUI_17876_.png (2.03 MB, 1080x1920)
>>
armpit hair gens in collage waiting room
>>
File: gard7.jpg (958 KB, 2368x2000)
>>
File: 00290-3379768667.jpg (377 KB, 1728x1344)
>>
File: 00675-1555710303.jpg (126 KB, 1024x1024)
>makes you salivate
>>
>>103172571
1grill supremacy
>>
File: 00678-1467363500.jpg (95 KB, 1024x1024)
>>
>studio Nighli
>>
>>103172538
now do that gen but she's really big and sitting on a tiny city
>>
Bigma sisters?
>>
>>103172590
died out waiting for sana
>>
>>103172538
>>103172541
nice
>>
>>103172595
:(
>>
prompting is not artisanal
>>
File: 00683-3785738397.jpg (108 KB, 1536x1024)
>>
File: 00684-3574615794.jpg (100 KB, 1536x1024)
>>103172614
all of my slop is highly artisinal
>>
File: 00319-3379768665.jpg (413 KB, 1344x1728)
>>
File: wdduduhd.png (34 KB, 1775x157)
Isn't forge supposed to completely unload models from ram when you switch to another one? It seems to keep models permanently in system ram until it overflows both the page file (30 fucking gigabytes) and the available ram at the time (2 fucking gigabytes?)
this must be a setting i'm missing but i'm pretty sure i have everything right. reforge was doing this too and comfy never had this problem.
>>
>>103172692
>this is my reboot after forge crashed from OOM
>notice the fucking page file still gargantuan and how much ram it's taking up just for an sdxl model
>in VRAM it's only 6 gigabytes, then it unloads down to less than a gigabyte after the gen is done
>>
>>103172692
>>103172704
it's a pretty easy fix : use comfy
>>
File: 1711555377416635.png (190 KB, 2314x1226)
https://github.com/chengzeyi/ParaAttention
I like those speedup ratios
>>
>>103172692
>>103172704
it's a pretty easy fix : use linux
>>
File: 00353-2724758687.jpg (506 KB, 1344x1728)
>>
>>103172802
Noo 1girl my penis can heal you
>>
File: gard19.png (2.78 MB, 1432x1880)
>"hey anon, how do you like your meat?"
>>
>>103172847
add giantess, cityscape, landscape, destruction to your prompt
>>
File: gard21.png (3.2 MB, 1536x2016)
>>103172873
maybe later anon, right now she's just grilling
>>
File: Mochi_00040.webm (1.21 MB, 848x480)
slurrp
>>
File: 00295-3379768667.jpg (374 KB, 1728x1344)
>>103173228
cool
>>
>>103173228
it wasn't even that long ago when will smith eating spaghetti seemed like a pipe dream for local
>>
File: 00701-3585026473.jpg (156 KB, 1024x1536)
>>
>>103173281
to be fair it's still far from the real deal kek
>>
File: Mochi_00038.webm (1.36 MB, 848x480)
>>103173317
MochiHD SOON
>>
>>103173326
I really hope they'll deliver, I won't doom now because they released their V2V vae recently (that shit is useless but at least it shows they are committed to delivering the goods little by little)
>>
>>103173338
What is even the use case for video2video?
Just release img2video already
>>
>>103173317
yeah but we're not far from that original video now are we? kek
and LOOK AT THAT >>103173326
>>
File: Mochi_00042.webm (1.13 MB, 848x480)
>>103173406
I'd say local has easily surpassed the original Will Smith spaghetti video (if it's the same one I'm thinking of) but we're still light years off MiniMax quality
>>
File: ComfyUI_17885_.png (2.44 MB, 1080x1920)
>>
>>103173349
>What is even the use case
What is the use case for txt2img?
>>
>>103173462
>we're still light years off MiniMax quality
I truly believe we'll be close to MiniMax with MochiHD, once we get that and the i2v vae it'll be a fucking renaissance in the video gen ecosystem. it's insane what happened these last 4 months, we had nothing, we were making fun of SD3M, and right after we got Flux (a local model that is completely competitive with the best API models) and Mochi (same story here). life is unpredictable at times and when it's unpredictable in the good way I fucking take it
>>
>>103173462
I thank the pessimism for boosting us into an insane renaissance for visual genning
but it's sad we had to lose LLMs in the process, and lol audio

>>103173489
i haven't seen much of MochiHD yet, been more focused on how ponyfags are getting BTFO'd by Illustrious coming out of nowhere with the explosion in quality.
>>
>>103171960
If you don't want that to happen anymore, create a new HF account, and use this:
https://huggingface.co/spaces/huggingface-projects/repo_duplicator
to create duplicates of the models you care about so they stay online.
The great thing is that when a model goes missing and nobody made such a copy, it means nobody cared, so nothing of value was lost.
>>
File: 00463-2724758686.jpg (420 KB, 1344x1728)
>>
>>103173507
the model was never available for download, but the page existed and the model was listed, which means that it exists. the page was just removed during the night so idk what happened, but it's not like i wouldn't have downloaded it if it had been available
>>
>>103173498
>but its sad we had to lose LLM's in the process and lol audio
llm? A few days ago, Qwen2.5-32b-coder got released and that shit is almost Sonnet 3.5 tier, the llm fags got a renaissance as well. the Chinks are starting to seriously dominate the AI space, and that was expected, that's what happens when you don't give a fuck about ""safety"" and ""copyright"": you get good models because you train your shit on the best data you can find, based chinks
>>
Does anybody have the lomal Lora that was here?
https://pixai.art/model/1757417777691869282/1757417777704452195?lang=zh
Apparently it was taken down from civitai and lost to time, and it was good.
>>
>>103173595
qwen2.5 is one of the most cucked models ever released, erp is bland as fuck, if it wants to comply at all
llms are dead until further notice
>>
>>103173595
Did they get a renaissance? Anytime i check out lmg it's somehow degenerated into an even whinier, spammier, gayer ldg, hard to catch up with the news when no one wants to try out models and the few that do get spammed out with cope and seethe.
That's awesome to hear at least.
>given that's the same thread that always blasted chink models and essentially parroted MSM talking points to get people to stop talking about them
>>
>>103173614
yeah, I don't really like the /lmg/ community either, they can't stop whining. the worst community will always remain /sdg/ but /lmg/ isn't far behind lol
>>
>>103173568
Oh, I confused it with noobai-XL-Vpred-0.5 which is still online.
So, what was it? Did he promise a 0.6 version and this is his way to "unpromise" it?
>>
>>103173607
>llms are dead until further notice
Did the ones we had disappear or did they stop getting better like we really don't have a worthy SD1.5 successor yet?
>>
>>103173630
there was no communication at all about it, some anon posted the link a few threads back and that's all we have. Maybe this was a mistake and they're not ready to release this one yet, or maybe they want to wait for a further milestone like 0.75 and they're vaulting this one, never to be seen again
>>
>>103173630
0.6 was released a day ago but somehow they removed it from huggingface quickly after
>>
File: ComfyUI_02473_.jpg (1.06 MB, 1792x2304)
>>
>>103173647
From what I've gathered we're going to get a Vpred0.6 but it's going to be rebranded.
>>
>>103173652
That anon said the release had nothing to download.
>>
File: 00460-1973257297.jpg (218 KB, 1024x1536)
>>
>>103173652
>>103173671
what i said is that the HF page for the model was public for a day or so, we could see the model being listed and the files, but the files were not available for download
which confirms that this model exists, but eventually the page was taken private or deleted, no idea what happened
>>
>>103173686
alright im placing my bets that there'll be a big update today and 0.6 was a mistake/dry cum
>>
>>103173640
for a moment you can cope when a shitty model is a few less % shittier than its predecessor, but the copium glasses quickly wear down and you realize it's still a shitty model
but now the stream of "just a little less shitty" models has run dry, the last improvement in erp was nemo, we've entered the era of ultracucked benchmarkmaxxing chinese models, and it's just getting started
>>
>>103172692
god damn it looks like Reforge has the same issue
>>
>>103173692
hopefully, god i hope they don't fuck this up, it's the one model that will nuke pony forever if trained correctly
>>
>>103173761
glad a normal person finally chimed in, thought i was alone.

>>103173791
honestly man it's already nuked pony out of orbit for me. There's just character loras i want and that's it, at which point i can make them myself, illustrious seems even easier to train than pony ever was.
I'm testing how well it knows characters now and i'm surprised how many obscure characters are at least halfway trained.
i've already deleted around 30~ish of my pony character loras because illustrious just knew them perfectly without even needing to prompt their clothing or anything.
>>
File: 00123-380710328.png (989 KB, 1024x1024)
>>103173823
>right as i make this post i'm BTFO'd into orbit by illustrious genning mew from JSR as GUM from JSR dancing with the pokemon mew
et tu, Illustrious.
>>
>>103173823
been messing with noobai vpred 0.5 for 2 days now and yeah, pony is already beaten, now who knows how V7 will come out but to be honest i don't expect much if anything at all from it
>>103173854
at this point noobai is more advanced than illustrious, i know they have v1.0 and v2.0 in stock but right now noobai just is the better of the two
>>
>>103173823
>glad a normal person finally chimed in, thought i was alone.
I don't think it has done this before. I'm so used to shuffling through checkpoints because of doing/testing merges. Now I have to restart reForge after 3 checkpoint switches
>>
>>103173891
so if noob is just trained off of illustrious, but its training is now more advanced than base, what do we even call it? Noob or Illustrious? I guess that's why im starting to see people call it noob now, because noob's better than its base.
this shit's confusing, doesn't help that it all just came into existence within the past few months and became this good within a month.
>>103173907
>3 checkpoint switches
it's a single switch to another checkpoint and i get heavily degraded performance, then i need to restart.
>>
>>103173932
noobai is its own thing, their computing power is even higher than what ponyv7 is currently being trained on, also a bigger dataset
maybe it'll change when illustrious 1.0 releases but before that : noobai is the more advanced model by far
>>
File: 00561-2169555733.jpg (575 KB, 1344x1728)
>>
>>103173701
So we never got to the level of AIDungeon's Summer Dragon for local, huh?
>>
>>103171862
>Don't forget to maintain thread quality :)
>no rentry
Is that supposed to provoke schizo anon?
>>
>>103174112
the baker change is a subversive forceful takeover from /sdg/. It explains the collage quality drop. Suddenly we will start hearing about how much dick Comfy is getting and we will have another thread split.
>>
iykyk kinda thing
>>
what is this?
https://civitai.com/models/944844/2k-ultimate-xl-13gb-fp32?modelVersionId=1057826
is it any good?
>>
>>103174217
kek the nudity is worse than 1.5 how the fuck did he do that?
>>
>>103172302
How can I get an xy plot for loras?
I'm trying to get a row at which the lora is not applied at all
>>
>>103174188
Just saying. It might attract schizo anon. Carry on, fren.
>>
>>103174252

start by using <lora:whatever:0.0> in your prompt, then search for 0.0 and replace it with 0.2, 0.4, etc...
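that search-and-replace sweep is easy to script btw. a minimal sketch (the helper name and lora tag here are made up, assuming A1111-style <lora:name:weight> syntax):

```python
# Hypothetical helper: build one prompt per strength for an X/Y lora sweep.
# Assumes A1111-style <lora:name:weight> tags; the names below are made up.
def lora_sweep(prompt, name, strengths):
    """Return prompt variants, swapping the weight in the matching lora tag."""
    placeholder = f"<lora:{name}:0.0>"
    return [prompt.replace(placeholder, f"<lora:{name}:{s}>") for s in strengths]

variants = lora_sweep("1girl, <lora:mystyle:0.0>", "mystyle", [0.0, 0.2, 0.4])
# variants[0] keeps strength 0.0, i.e. the lora effectively disabled
```

feed each variant to one row of the plot and row 0 is your no-lora baseline.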
>>
uh oh
>>
File: 00004-1942702141.jpg (1.18 MB, 1680x2160)
>>
>>103174217
>SDXL
lil bro hasn't opened his computer in 5 months
>>
any tips for negatives/positives to avoid animals in my pony gens. "Zebra print" is creating a zebra. I tried animorph, animal in the negatives.

very nsfw image example. https://litter.catbox.moe/m26x16.png

>>103174268
I find that using <lora:lora_name:0.01> is better. Formatters have screwed me by changing 0.0 to 0, and there are sometimes other 0s in the prompt for me.
>>
>>103174401
only humans, not furry
>>
>>103174401
source_furry, furry, anthro
In negative
>>
>>103173640
>a worthy SD1.5 successor
Noob gets VERY CLOSE stylistically imo and i think that's the only realm where 1.5 shines (but how much of that is nostalgia?). Using that old model is really fun until you remember the specificity by which you can prompt Flux.
>>
>>103174424
>>103174488
Thanks anons!

>not furry
didn't work, maybe because I have BREAKs

>only humans, furry, anthro
very helpful. I had to add some weights to avoid the basic blank background, normal pony boring

>source_furry
this destroyed all 'creativity' of the gen. Details dropped and it just looked bland. I shouldn't be surprised eliminating a large part of the model wrecks stuff, but here I am.
>>
File: garde.jpg (2.41 MB, 3584x5888)
ok anons, the true power of NoobAI, img2img upscaling a gen up to 3584x5888
picrel is a compressed version, in the catbox is the actual image
https://files.catbox.moe/c6ynd5.png
try and zoom in, each minuscule detail, each minuscule pen stroke, pony would shit the bed hard with this test, but not NoobAI
>>
>>103174551
thats fucking insane, are you using a 3090 or a 4090?
>>
>>103171862
Any tips on how to build a dataset ? Trying to learn how to lora but the amount of cleanup necessary seems gargantuan.
I want to automate as much of it as possible
>>
>>103174584
3090
18s for base gen
34s for 1st img2img upscaling
2m47s for 2nd img2img upscaling

i'm doing lots of testing to get the process down to a science, then i think i'll release my comfy workflow to take full advantage of NoobAI
>>
>>103174026
Local models should be better than AI Dungeon ever was.
>>
>>103174539
Did you try no animals in positive? It used to work in some old SD1.5 models.
>>
>>103174488
And I didn't mean SD1.5 specifically, but its merges and finetunes.
I wasn't impressed by noob, then again, I don't care about characters or anime.
>>
>>103174594
Start with dataset of 50 images, try different training settings and captioning. Quality over quantity every time.
>>
>>103174606
what were your resolutions per step?
-asking because i wanna get a 3090 soon and that already sounds pretty impressive
>>
>>103174594
Instead of working inside of folders you should be using something like MongoDB. Download your images, put the captions, titles, metatags, etc and imagepaths into a table in a database. When you need a dataset, have a script copy, resize and format the images for the trainer you're using.
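a stdlib-only sketch of that last export step, with the database swapped for a plain list (the record fields, the width filter, and the kohya-style sidecar captions are all assumptions for illustration, not a real schema):

```python
import tempfile
from pathlib import Path

# Stand-in for a database query result: one document per image (shape is made up).
records = [
    {"path": "raw/0001.png", "caption": "1girl, solo, garden", "width": 1024},
    {"path": "raw/0002.png", "caption": "blurry scan", "width": 512},
]

def export_dataset(records, out_dir, min_width=768):
    """Write one sidecar caption file per record that passes a quality filter."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    kept = [r for r in records if r["width"] >= min_width]
    for r in kept:
        (out_dir / (Path(r["path"]).stem + ".txt")).write_text(r["caption"])
        # the actual image copy/resize for the trainer would go here
    return len(kept)

out = Path(tempfile.mkdtemp())
exported = export_dataset(records, out)  # exported == 1: the 512px image is filtered out
```

the point is you curate once in the database and regenerate trainer-specific folders on demand instead of hand-managing copies.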
>>
>>103174675
started with 896x1472, did a 2.00x upscale every time. the secret sauce is to offset the starting step the bigger your image is, this way it won't generate things like extra belly buttons or nipples everywhere
>>
File: 003962.png (1.8 MB, 1920x800)
>>
>>103174649
What utilities/scripts do you use for bucketing and cleanup ?
>>
>>103171862
Bonus round of: >>103165357
>>
>>103174691
that's nutty i knew the 3090/4090 should be able to gen base reses really high like that but fuck that blows my 1080 out of the fucking water, very cool
trying so hard to figure out the perfect resolution for performance cost and it's tough with how much resolution affects what the model will generate
>>
>>103174695
I clean images with Topaz ai and gimp, manual crop. Not viable with large datasets.
>>
>>103174624
>no animals
I never had to do that in 1.5. My tastes could have been different though. muh look at this scenery phase.

Anyways, it didn't help much unfortunately.

>>103174551
When you get ultra res high it really loses value using a checkpoint during the value, pony or NoobAI, doesn't matter. It comes down the the upscaler choice.

real_hat_gan
https://litter.catbox.moe/rwyfts.jpg

fatal_anime
https://litter.catbox.moe/vx6b8f.jpg

anime_6b
https://litter.catbox.moe/yp4ecz.jpg

performed using a 4070. I am not retesting with the 3090. I chose the weaker one by accident.
>>
File: gardak47.jpg (1.62 MB, 2888x4760)
>>103174773
listen, i bought my 3090 for the main purpose of doing AI, i think i got it just after SDXL released. Been training a lot of loras and model finetunes with it.
What you want is raw vram. the performance increase on the 4090 is not worth the steep price increase, and you will not be able to run more things with a 4090 because the limit is the amount of vram that you have
>>103174803
not if you offset the starting step. if you're working with an anime and artwork based model, you don't really need to go further than 4000x4000 anyway. i could've added a tiled upscaling on top of that gen and boosted it up to 7168x11776 but it doesn't really make any sense
still, img2img upscaling is mandatory if you want to correct the gen. it needs to be a low amount of denoise and the starting step offset by 6 to 12 (depends)
>>
>>103174863
holy wisdom, so it's true.
by the way from your comparisons it seems like fatal_anime is way better than the others because it actually adds that brushstroke detail, is it slower than something like 4x ultrasharp? thinking i want to switch to something that adds stroke details like that for my style.

>actually gonna catbox what im working on so you can see what i mean and the struggle with resolutions making poses weird
https://files.catbox.moe/y7699l.png
>>
>>103174921
>https://files.catbox.moe/y7699l.png
I like the work you put into it, but god I hate the artstyle.
>>
>>103174863
>pony would shit the bed hard with this test
>it needs to be a low amount of denoise
alright fanboy.

I went up there to show that the upscaler is what matters, especially if you are dropping your denoise. First pass upscale needs a good checkpoint, but you shouldn't be relying on it for upscales after that. Everything is tiling anyway and you are probably at least VAE tiling for the last step.
>>
>imagine genning above 512 pixels
>>
>>103174945
honestly it got too metallic-looking and digital, it's meant to lean more painterly
>and god adetailer is hard to tardwrangle it likes to make bimbo or boney faces
>>
>>103174863
>starting step offset
Is that done using KSampler Advanced start_at_step?
>>
File: detail.jpg (136 KB, 1015x1011)
>>103174921
all my gens use the old 4x ultramixbalanced as the way to upscale the image between img2img upscaling.
Basically what I do : do the base gen, funnel it thru the upscaling model (2.00x upscale), then i convert it to latent, i inject some noise, then into the sampler it goes for another round, i do this 2 more times and voila
>>103174952
to further prove my point, i have used just the upscaling model with the same total upscale amount as the 2 img2img upscalings combined. here's a little comparison, now humor me by telling me which side is the best
if anything, you proved that you have no idea what you're talking about
>>
File: wf.jpg (348 KB, 2229x669)
>>103174973
this is the backbone of my workflow, been autistically testing that shit for a while now trying to wrangle pony into compliance, but with NoobAI i feel like picrel is quite close to perfection
>>
>>103175013
>how many layers of upscaling are you on
>>
>>103174976
>>103175013
i wish the majority of regular posters were like this guy, learned more in this past couple messages than i have in the past couple months of ldg/b/lmg/sdg kek its nuts. thanks for the insight man.
>insert gen of vegeta kneeling here
>>103175017
kek
>>
>>103174637
I think most anons view 1.5 through rose-tinted glasses
>>
>>103174976
>upscaling model with the same amount than the 2 img2img upscalings combined
wtf are you talking about.

>if anything, you proved that you have no idea what you're talking about
no, I can't discuss things with people who don't know the difference between an upscaler model and a txt2img model.

>>103175013
noise injection plus start at step. claims the model is better. I am fucking done.
>>
>>103175055
>literally unable to understand that a txt2img model uses a latent to build the image upon, that the resulting image can then be upscaled and reconverted into a latent to further improve it. maybe he'll learn about img2img one day, if we're lucky
>even with the workflow, literally fucking misses the point of upscaling the image THEN doing img2img
>ignores proof that my method is better than just using an upscaler model to do the entire thing, still doesn't understand that at base gen resolution, small details are blurry or wrong and that img2img is the only way to fix that
>then has the audacity to claim i don't know the difference between an upscaler and a txt2img model
anon, this is getting really embarrassing for you
i'm not expecting a reply from you, don't care
>>
>>103174863
If you can afford it, 4090s are literally 50% faster than 3090s, which is significant for training. But at this point I'd just wait for the 5090, which will again be 50% faster than a 4090 and have 32 GB of VRAM.
>>
>>103175143
>then has the audacity to claim
kek
>>
>>103175013
I see, thanks.
Though I still don't know how KSampler Advanced differs from the regular one. I thought that when using start_at_step without controlling end_at_step (which is useful for multi-stage sampling?), the only difference from a denoise level is that you get precise control (i.e. a specific step instead of a percentage of the total steps that might fall onto one or another uncertain step). But essentially it's still the same, so I've been lowering denoise to find a spot where a second navel doesn't appear. Tell me more.
>>
>>103175027
when i am sure it's as good as it can get, i'll share my full workflow
might do a rentry if there's demand, i'm nowhere near done testing and maybe the upcoming version of NoobAI vpred works a little differently, we'll see
>>
>>103175180
well it's just that the advanced ksampler does everything in one node, it's much less mess on my workflow
you don't need a "stop at step" because it's just the difference between your total steps minus the start at step. Let's say that you gen for 15 steps and your start-at-step is at 5, 15-5 = 10 effective steps, you're basically only doing steps 5 to 15. You can verify this in the cmd window which shows only 10 steps being used to gen.
The basic rule for all this is : the bigger the image, the lesser should be the noise injection and the higher should be the starting step. also, since you remove less and less noise and you start further and further, the number of effective steps effectively diminishes the bigger the image is,see how in my workflow the 1st img2img has 8 effective steps while the 2nd img2img only has 5.
>>
File: 1633606321421.png (171 KB, 1398x1293)
>>103175232
Right let me reword that. Is there some practical or esoteric advantage of using specifically start_at_step, or can I achieve the same result by simply lowering the denoise in picrel like I have been doing till now?
>>
>>103175321
start-at-step is a backbone of this method. If you use any type of preview that lets you see how the image is generating step by step, then you see how at first it's a vague blob then details appear. If you don't do start_at_step, you're essentially telling the ai that it should go through that "blob" part again, which it doesn't need to do, it needs to fill in finer and finer details as the process goes on.
The problem with just denoise is basically that. You need to tell the model to add details, not try and regenerate the image from the start. This will lead to unwanted element duplication or subdivision as well as other exotic problems the bigger your image is.
tl;dr : yes, start_at_step is essential, use it.
>>
Can someone turn this image of cells into a video of it morphing into a being?
I ran out of credits for runway ai and don't have time to find a different website because I'm at work
>>
File: tmpmlq_4l8i.png (1.37 MB, 896x1152)
>>
>>103175390
cool idea, wish I had the vram for it
>>
>>103175390
>>103175560
still need to make me a mochi workflow but i'd rather wait for mochiHD
>>
File: 1665916025069.png (523 KB, 1245x502)
>>103175368
>If you use any type of preview that lets you see how the image is generating step by step, then you see how at first it's a vague blob then details appear.
But it isn't? That's my point, or at least the point I'm trying to clarify. Empirically it's exactly the same, as I lower the denoise more of the original is preserved. A vague blob only happens if denoise is too high, which would never be needed when simply upscaling.

From the community guide
>Unlike the KSampler node, this node does not have a denoise setting but this process is instead controlled by the start_at_step and end_at_step settings. This makes it possible to e.g. hand over a partially denoised latent to a separate KSampler Advanced node to finish the process.
So from my understanding the use of start_at_step and end_at_step is needed only when you chain the samplers WITHOUT finishing the image. i.e. not for upscaling "first image>second image" but for varying the parameters mid-sampling "first image>still the same image"
>>
>>103175666
these at not the same process, start_at_step works differently
>This makes it possible to e.g. hand over a partially denoised latent to a separate KSampler Advanced node to finish the process.
exactly, you inject some noise and then you can use start_at_step so that the noise is treadted diffeently
can pretty much guarantee you these don't work the same. Even extremely low denoise levels like 0.05 would result in very different images when the start_at_step was set to 0 as the image got bigger and bigger. You don't want the model to repeat the first few steps during img2img, it's as simple as that.
>>
>>103175738
you too
>>
File: 1704295804052.png (37 KB, 839x236)
>>103175753
>the model to repeat the first few steps
There's no "first steps" from my understanding. It's just steps. If I set a low denoise the sampler essentially starts from some uncertain step, and that uncertainty of the exact steps*denoise starting point vs a specific number when using start_at_step is the only difference afaik.
>You can right-click on the Ksampler Advance node and there is an option "Set Denoise" it will ask you how many steps you want, how much denoise then it will set the steps, start_at_step and end_step for you.
Which I didn't know but now I do. And it obviously sets the same value I would have set myself judging by the intuitive understanding of how steps + denoise work in the simple KSampler.
Initially I thought you really knew what you're doing but now it seems you just convinced yourself of an illusory correlation.
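for what it's worth, that right-click "Set Denoise" helper presumably just solves the mapping both of you are arguing about. a sketch of it (the exact rounding is my assumption, not verified against ComfyUI's source):

```python
def set_denoise(total_steps, denoise):
    """Map a denoise fraction onto a start_at_step the way the right-click
    'Set Denoise' helper is described: skip the first (1 - denoise) of the schedule.
    The rounding here is an assumption."""
    return round(total_steps * (1.0 - denoise))

# e.g. 20 steps at denoise 0.25 -> start at step 15, i.e. 5 effective steps
s = set_denoise(20, 0.25)  # s == 15
```

so if the two really coincide, the only practical difference is precision: a step index instead of a fraction.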
>>
>>103175908
give him a moment. He will tell you that you are embarrassing yourself.
>>
File: 1girl (male).jpg (645 KB, 1301x1797)
645 KB
645 KB JPG
>>
>>103171908
We needed this.
>>
>>103174594
Depends on what you're trying to train.
>>
File: noise_test_01.png (312 KB, 1270x1162)
312 KB
312 KB PNG
>>103175908
anon who is embarrassed here, the one with all the audacity.

>There's no "first steps" from my understanding
kinda? Leaning towards no. The first step does something different than the 5th step. The injection of noise depends on the scheduler and where you are in the process: step 1 noise injection is different from step 5 noise injection. This makes sense because you don't want to inject a bunch of noise 1 step before you're done. I am using the term noise loosely, please don't dig into it too much.

Let me show with a few screenshots.

First one is just proving my workflow works. I can set my ksamplers to do different steps and starts and stops and get the same image. The goal is to generate the same image with starts and stops of different lengths and denoise of 1. I am using a constant input for the latent and not the default latent.
>>
File: noise_test_02.png (617 KB, 2256x904)
617 KB
617 KB PNG
>>103176290
For this screen a pure yellow latent (well, an image plus an encode) was fed into the ksamplers.

So: same input, mostly the same settings, and the 3rd one was told not to clean up the noise at the end. Just to say this out loud: the image doesn't work because this is effectively 8 steps.

The start/stop at 0-8 is massively more noisy than the 16-24 start/stop.

The next topic would be handling the difference between determinate and non-determinate schedulers.
>>
File: noise_test_04_a.png (1004 KB, 1303x1162)
1004 KB
1004 KB PNG
>>103176362
I don't understand how these images end up different when you think about it under the idea of noise injection. I get that it does different things when the steps are different. The thing I am foggy about is whether the noise injection happens at every step while working towards a minimum, or whether the scheduler/sampler are doing other things and the noise is constant. I am leaning towards the steps changing the noise injection. If I change the start/end steps by 1 then I get different images. I can't see the difference, but there is one.
>>
File: noise_test_04_b.png (554 KB, 1500x1173)
554 KB
554 KB PNG
>>103176567
here is the difference. I'll stop spamming this thread now.
>>
File: tmphdb_apjx.png (1.36 MB, 896x1152)
1.36 MB
1.36 MB PNG
>>103175546
>anon, am i still cute enough for the collage?
>>
Cooking is interesting, even at quite high guidance it's not always guaranteed. When I set my guidance to 3.0 (with Flux) most gens become very cooked,* but 1 out of every 15 or so will be not that cooked, one in 30 maybe not overcooked at all. If I go down to 1.4 most gens won't overcook, but a sizable minority of them still will. So the trade-off I have to consider is: do I want more uncooked gens per minute, or fewer but with stronger guidance from my prompt? And then there's the next question: are the uncooked gens actually getting the additional guidance, or are they by some fluke not being guided as strongly,** which is why they don't overcook? If the latter then high guidance would have no or little benefit.

* I am defining cooked as meaning having that immediately apparent AI-generated look; this can manifest as sliminess, excessive contrast, buttchin, or in many other ways, but the important thing is that it's immediately discernible at a glance, it's an overall effect that the veteran prompter detects naturally without any need for close inspection or thought.

**e.g., when there is poor correspondence between the state of the latents mid-denoising and the prompt this can lead to low impact of the prompt guidance on that denoising step, because the model doesn't know a plausible way to take the conditioning into account given where the image is at, so the prompt behaves almost like it's meaningless noise.
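The guidance trade-off above can be sketched with the classic classifier-free guidance update. Note this is intuition only: Flux dev uses a distilled guidance input rather than true two-pass CFG, and the function name here is illustrative.

```python
import numpy as np

# Toy classifier-free guidance update: the output extrapolates past the
# conditional prediction, and the overshoot grows linearly with the scale,
# which is one intuition for why high guidance "cooks" gens.
def cfg(uncond, cond, scale):
    return uncond + scale * (cond - uncond)

uncond = np.array([0.0, 0.0])
cond = np.array([1.0, 1.0])

print(cfg(uncond, cond, 1.0))  # scale 1: exactly the conditional prediction
print(cfg(uncond, cond, 3.0))  # scale 3: overshoots to [3., 3.]
```

At scale 1 you get the model's plain conditional prediction; anything above that pushes the sample further from the unconditional one, and the "uncooked" outliers may simply be seeds where that push had less to bite on.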
>>
cozy bread
>>
File: 00013-2516982040.jpg (518 KB, 1344x1728)
518 KB
518 KB JPG
>>
What checkpoint would you recommend I use in stable diffusion if I want something like this? And what setup in general
>>
>>103177344
>something like this?
Bottom is Flux, not sure about top could be a Pony Realism model
>And what setup in general
Comfyui or Forge. Links in the OP
>>
>>103177481
Thanks!
>>
File: 00016-1599726001.jpg (611 KB, 1344x1728)
611 KB
611 KB JPG
what cfg scale are yall usin
>>
>>103173532
cool
>>
>>103177553
3 for Flux
4-7 for SDXL
>>
File: 28110411044_afefbf8090_h.jpg (822 KB, 1600x1082)
822 KB
822 KB JPG
!! note this image is from Flickr, NOT generated, posted because relevant to the text of the post

>>103177344
bottom gen is mine. just flux dev nf4 with t5xxl_fp16, no loras no special checkpoints etc.

I img2img something, usually a blurred social media pic but in this case picrel from flickr, at 0.90 to 0.94 denoising. My guidance is set in that picture to 1.37, sometimes as high as 1.6 if the prompt is really simple and open-ended. 44 steps sgm_uniform dpmpp_2m.
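A rough sketch of why 0.90-0.94 denoising still preserves the source photo's layout: in the diffusers-style convention (which this anon's nf4 setup may or may not follow exactly), the strength decides how many of the scheduled steps actually run. `img2img_schedule` is a hypothetical helper, not real ComfyUI/Forge code.

```python
def img2img_schedule(num_steps: int, strength: float):
    """Hypothetical helper (diffusers-style convention): at denoise
    strength s, img2img noises the input only part-way and runs just
    the last int(n * s) of n scheduler steps, so the source photo
    survives as low-frequency layout."""
    steps_to_run = min(int(num_steps * strength), num_steps)
    start_step = num_steps - steps_to_run
    return start_step, steps_to_run

print(img2img_schedule(44, 0.92))  # (4, 40): skip the first 4 of 44 steps
print(img2img_schedule(44, 1.00))  # (0, 44): plain generate-from-noise
```

Even those few skipped steps are enough to anchor composition and lighting to the source picture while everything else gets re-dreamed.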

The prompt will be written to be like a social media post often about a friend or family member, sometimes a celebrity or a stranger, sprinkled with just a little bit of "and also she's hot". I seldom just go straight for "sexy lady with big breasts", I always have to establish a context for the photo so it doesn't just look like porn

In that image she was supposed to be "my sister Melanie" (I always use a name) who is "kinda thicc" wearing a jean paul gaultier bodysuit on her birthday at a chain restaurant. You can use brand names, of clothing or of places, to exert a strong influence on the subject. High fashion expensive stuff tends to make subjects look much older because young people can't afford it, so be aware of that.

"Grainy facebook photo of [such and such]" also helped set the style but that was near the end of the prompt.

I'll gen 1000-2000 images overnight, keep 50-200 of them, and post sometimes one or two, sometimes more, often none.

>>103177553
1.4 for flux, usually. (Guidance, not CFG. CFG stays at 1.)
>>
>>103177639
This is excellent advice. What's your power bill like?
>>
>>103177639
>!! note this image is from Flickr, NOT generated, posted because relevant to the text of the post
damn thats a really nice img thodesu
>>
>>103177656
I prefer not to think about it, but power is pretty cheap where I live so not too bad.
>>
>>103177214
why?
>>
Getting a new build. I gather the GPU is the prime concern for image-gen performance, but is the CPU a concern at all? Would I be gimping performance if I go with an X3D with fewer cores?
>>
File: 00053-1169301067.png (1.92 MB, 960x1440)
1.92 MB
1.92 MB PNG
>>
> genning 1girls holding weapons with pony model
> frequently fucked up, swords all over the place
> get annoyed
> switch to genning naked apron images
> almost all come out perfectly
I forget it's basically a porn model
>>
>>103177904
>is the cpu a concern at all
no, not really
>>
>>103177917
Make her ride a lawnmover and hold white monster energy
>>
>>103177917
A job more suiting her talents.
>>
Actually might as well ask over here as well
For some reason regional prompter only works for me on forge, not reforged. Not the worst thing in the world at the end of the day, and I would normally just move over, but reforged has these CFG++ samplers that I kind of liked using. Is there a way to get those on forge?
>>
>noobai
good, at last we're moving from sloppy flux
>>
File: 00007-1815077297.png (2.2 MB, 1440x960)
2.2 MB
2.2 MB PNG
>>103178081
She even has a Monster branded mower
>>
>>103178603
God damn that's great!
>>
Actually just more directly, has anyone got regional prompter working on reforged?
>>
>>103178603
hahaha what the fuck
>>
File: 1642298819349.png (2.61 MB, 4092x3005)
2.61 MB
2.61 MB PNG
>>103176362
>The start/stop at 0-8 is massively more noisy than the 16-24 start/stop.
Because in the regular KSampler "steps" {value} is the total number of steps it runs, but in Advanced "steps" defines the schedule from 0 to {value}.
Advanced 0-8 is equivalent to the regular sampler's 8 steps at 1.0 denoise; Advanced 16-24 is the regular's 8 steps at 0.33 denoise. So when in Advanced you do 0-8 you start from pure noise, and when you do 16-24 you start from 0.67 of the image. Which is why you see so much noise.

I was talking about what you showed in your pic 1: that the regular KSampler with denoise and the Advanced KSampler with start_at_step, with corresponding values set via Set Denoise, produce identical images.
In my picrel, within each of the two groups the images are the same because I set the values correctly.
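For reference, a hedged reconstruction of the arithmetic "Set Denoise" appears to do, using the numbers from the screenshots in this exchange (ComfyUI's actual implementation may round differently; `set_denoise` is an illustrative name):

```python
def set_denoise(steps: int, denoise: float):
    """Hypothetical reconstruction of KSampler Advanced's "Set Denoise":
    regular "steps=n, denoise=d" maps to running the last n steps of a
    total n/d-step schedule (end_at_step == total_steps)."""
    total_steps = round(steps / denoise)
    start_at_step = total_steps - steps
    return total_steps, start_at_step

print(set_denoise(8, 1.0))    # (8, 0): run 0-8, i.e. from pure noise
print(set_denoise(8, 1 / 3))  # (24, 16): the 16-24 run discussed above
```

This matches the observation that Advanced 0-8 equals 8 regular steps at denoise 1.0, while 16-24 equals 8 regular steps at denoise 0.33.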

>>103176567
Stochastic samplers remove some predicted noise but also add some random noise at each sampling step. So when you do more steps, due to that randomness the final image differs from what it was with fewer steps; it's not just the linear denoising of deterministic samplers.
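A toy contrast between the two sampler families. The sigma_up/sigma_down split follows k-diffusion's Euler / Euler-ancestral updates, but the function names and the bare-numpy framing are illustrative, not that library's API.

```python
import numpy as np

def euler_step(x, denoised, sigma, sigma_next):
    d = (x - denoised) / sigma            # derivative estimate
    return x + d * (sigma_next - sigma)   # fully deterministic

def euler_ancestral_step(x, denoised, sigma, sigma_next, rng):
    sigma_up = min(sigma_next,
                   (sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2) ** 0.5)
    sigma_down = (sigma_next**2 - sigma_up**2) ** 0.5
    d = (x - denoised) / sigma
    x = x + d * (sigma_down - sigma)
    return x + rng.standard_normal(x.shape) * sigma_up  # fresh noise each step

x, denoised = np.ones((2, 2)), np.zeros((2, 2))
print(euler_step(x, denoised, 1.0, 0.5))  # same result on every call
print(euler_ancestral_step(x, denoised, 1.0, 0.5, np.random.default_rng(0)))
```

The deterministic step is a pure function of its inputs; the ancestral step consumes RNG state every iteration, which is why adding steps shifts the final image instead of just refining it.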
>>
Deadline's rapidly approaching, so start genning some bakerslop if you want to make it into the next collage.
>>
>>103179085
please see
>>103175143
>am not expecting a reply from you, dont care

I am done with the stupid games anon. If you want to have a conversation be a human. I am tired of explaining things to you and you losing your shit.
>>
Babe wake up, Pyramid Flow released their 768p model
https://huggingface.co/rain1011/pyramid-flow-miniflux/tree/main/diffusion_transformer_768p
>>
>>103177917
>>103178081
>>103178603
holy kino
>>
>>103179344
What is this?
>>
>>103179365
a video model
>>
>>103179374
Sounds too hard for my pc.
>>
>>103179384
as usual, kijai will make a Q8_0 version of it and it'll probably be usable for 24gb vram cards
>>
>>103179384
have you checked the folders? I am not seeing anything too crazy for requirements. I could be missing it.
>>
File: 1722527989551399.png (1.91 MB, 1280x1024)
1.91 MB
1.91 MB PNG
found this on /lmg/, why are they funni and not us? :(
>>
>>103179411
it'll probably ask for a lot of vram, 768p is a really high resolution
>>
>>103179418
/lmg/ had soul at one point, that gen is like a year old brother
>>
>>103179418
that's not funny
>>
>>103179450
that's weird I was a frequent lurker in /lmg/ a year ago and I never seen that picture
>>
>>103179334
How's that relevant to what I have been talking about?
>>
>>103179463
my timeframe is probably fucked it could be from jan-march this year but i'm 99% sure the miku/teto posting started Q4 last year.
that looks like a WAY better version someone just inpainted/regenned recently for sure though.
>>
File: 1711614914934111.webm (250 KB, 1088x768)
250 KB
250 KB WEBM
>>103179344
Since they were too lazy to provide any examples, I searched a bit and found this:
https://huggingface.co/rain1011/pyramid-flow-miniflux/discussions/7#6734f1757b5bba7729ad4846
>>
>>103179426
VERY high for consumers.
>>
i have a nudity lora i often use for flux to add nipples. would there be any point to merging this lora with FLUX, or would it be the exact same as if i just loaded the LORA seperately?
>>
>>103179459
SEX IS FUNNY
(canned laughter)
*funny face*
(canned laughter)
>>
>>103179549
its funny because mikuposters started the degenerative brainrot of that general so sex funny haha (canned laughter) (wilhelm scream)
>>
>>103179554
I literally don't know what Miku is. I just know it's one of the things I don't use that comes with Flux.
>>
File: 2024-11-13_00030_.png (1.1 MB, 720x1280)
1.1 MB
1.1 MB PNG
>you need more
>>
>>103179554
>mikuposters started the degenerative brainrot of that general
I'm only an occasional lurker in /lmg/ but this always felt forced, like the general became centered around an imposed meme identity and even accepted it.
>>
File: conan4k (1).png (1.29 MB, 1525x2160)
1.29 MB
1.29 MB PNG
What's the best current model for anime porn?
>>
>>103179599
pony
>>
>>103179595
I don't mind Miku but they are obnoxious when they start to split the thread in two so that other retards want to force their Kurisu meme too
>>
>>103179599
>>103179606
not noobAI?
>>
>>103179622
idk havent tried it
>>
File: img_00012_.jpg (834 KB, 1664x2432)
834 KB
834 KB JPG
>>103179096
my attempt at collage slop. Honestly, I hate it.

>>103179622
depend on how disgusting you want your porn.
>>
>>103179668
this one's got a real shot at making it in, I can feel it
>>
>>103179544
I'd prefer a lora.
>>
>>103179595
yeah i remember being one of the people calling it out and causing massive sperg rages that tore the threads up and split the threads too >>103179609
well we see what happens when enough people dont mind it kek now the thread is useless.
blows my mind that /lmg/ is the general where nobody even knows how to use the models to begin with, so all the "testing" is skewed by retards that have bad configs/genuinely don't know what they're doing.

can't believe i hopped from /aicg/ to that general from june to july last year, oh how the turns have tabled huh.

hm, would anyone even say SD is more difficult to use than LLMs? i'd say LLMs are more difficult but not that bad.
>>
>>103179752
>how the turns have tabled
Nah /aicg/ is always more retarded. Bad configs/genuinely not knowing what they're doing all around; they want to click one button to import a preset and do nothing more. They don't want to learn how to write system and other prompts, don't want to read the docs and test things to learn how to adjust the (extremely limited in comparison) variety of API samplers. Localfags are cool by definition as long as they don't use ollama lol; fucking hate how, to normalfags, it's The way to use local.
IMO SD is incomparably easier because it's momentary. You just gen a single discrete pic or animation, for llms that would be like starting a new chat after the very first response.
>>
File: catbox_96xzdv.jpg (1007 KB, 1536x1536)
1007 KB
1007 KB JPG
>>
gonna select "random dictionary entry" until I get three nouns, either objects or concepts, which I can combine to get an idea for bakeslopping
>>
>>103179882
nah i mean the turns have tabled for if /lmg/ would stand the test of time in general, honestly im autistic i should've worded it better.
finding out that /aicg/ users were drinking their own piss for leech methods q4 2023 was mind boggling i couldn't believe it but at the same time i could.
>>
File: file.png (1.31 MB, 1360x896)
1.31 MB
1.31 MB PNG
>>
File: 2024-11-13_00035_.png (852 KB, 720x1280)
852 KB
852 KB PNG
AM I IN TIME? THIS IS NOT A 1GIRL
>>
>>103179599
I wish I could say "nice gen" :(
>>
File: RA_NB1_00042_.jpg (1.2 MB, 1920x2808)
1.2 MB
1.2 MB JPG
>>
When you do batches of more than one image in Comfy, how is the noise seeded for the images after the first one?
It isn't just incrementing the seed, because if you do that manually after generating you'll see all the images are new.
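From reading similar sampler code (a hedged sketch, not a verbatim quote of ComfyUI's `prepare_noise`), the whole batch appears to be drawn as one tensor from a single generator seeded once, so image 2's noise is "seed 42, continued", not "seed 43". A numpy sketch of that behavior:

```python
import numpy as np

# Batch noise from ONE seeded stream: later images continue the stream
# rather than re-seeding with seed+1.
def batch_noise(seed, batch, shape):
    rng = np.random.default_rng(seed)
    return rng.standard_normal((batch, *shape))

two = batch_noise(42, 2, (4,))
print(np.allclose(two[0], batch_noise(42, 1, (4,))[0]))  # True: first image matches
print(np.allclose(two[1], batch_noise(43, 1, (4,))[0]))  # False: not just seed+1
```

Which is consistent with what you observed: manually incrementing the seed after a batch run produces entirely new images.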
>>
File: 2024-11-13_00036_.png (1.57 MB, 720x1280)
1.57 MB
1.57 MB PNG
>>
>>103180043
artist tags?
>>
File: RA_NB1_00043_.jpg (1.2 MB, 1920x2808)
1.2 MB
1.2 MB JPG
>>103180143
Just starshadowmagician
>>
File: fool_the_baker_00002_.png (727 KB, 768x1408)
727 KB
727 KB PNG
>>
File: fool_the_baker_00004_.png (883 KB, 960x1152)
883 KB
883 KB PNG
>>
File: fool_the_baker_00003_.png (859 KB, 960x1024)
859 KB
859 KB PNG
>>
File: fool_the_baker_00009_.png (786 KB, 768x1408)
786 KB
786 KB PNG
I'm slopping hard for that collage spot rn, I suggest you anons do the same
>>
File: fool_the_baker_00015_.png (1.48 MB, 960x1088)
1.48 MB
1.48 MB PNG
>>
File: RA_NB1_00048_.jpg (981 KB, 1920x2808)
981 KB
981 KB JPG
>>
File: 1726851400803.jpg (1.37 MB, 2304x1792)
1.37 MB
1.37 MB JPG
Will they make 3dpd models based on Illustrious/Noob just like they did with pony? Yes they will.
>>
alright, fine, I'm a horny guy, maybe I'll try this flavor-of-the-month noobAI shit. But if you fools are wrong again...
>>
>>103178081
noobai doesn't really work for me. I have all kinds of problems with it. Am I the only one? I'll be sticking with Flux.
>>
File: 1724293440095072.png (2.06 MB, 1528x968)
2.06 MB
2.06 MB PNG
>>103180353
can illustrious do sex positions and stuff like pony? I was under the impression that it could only do 1girl-type gens
>>
>>
>>103180377
Original v0.1 can but it's bad. NAI can and it's good.
>>
>>103180377
>like pony
better than pony, by miles, and can do 2girls without regional prompting to a good extent
will need more updates though.
so far this seems like the best one i've tested
https://civitai.com/models/900166/illunext-noobai-illustrious?modelVersionId=1055353
>>
Does nobody else need tiled vae? all the wf seem to use non-tiled.
>>
File: RA_NB1_00050_.jpg (481 KB, 1264x1848)
481 KB
481 KB JPG
>>
>>103180460
most AIs will detect and kick to tiled if needed.
>>
>>103180476
wth. I meant UIs not AIs.
>>
File: ComfyUI_00126_.png (1.44 MB, 1152x2016)
1.44 MB
1.44 MB PNG
>>
File: ComfyUI_00127_.png (850 KB, 1152x2016)
850 KB
850 KB PNG
>>103180495
No idea why it's doing that.

noobai.

does sdxl work amd gpu harder than flux?
>>
File: ComfyUI_00128_.png (1.34 MB, 1152x2016)
1.34 MB
1.34 MB PNG
>>103180503
>>
anon above me, add armpit hair into your prompt
>>
anon above me, sniff your own armpits and report back
>>
anon above me, it smells cheesy
>>
File: 004017.jpg (1.85 MB, 1680x2160)
1.85 MB
1.85 MB JPG
>>
File: ComfyUI_00135_.png (1.61 MB, 1152x2016)
1.61 MB
1.61 MB PNG
>>103180586
>>
It's not a character, so the face changed.
>>
>>103180625
collage worthy
>>
File: PA_0001.jpg (1.03 MB, 3328x1152)
1.03 MB
1.03 MB JPG
>>
>>103179344
Has anyone tried it yet? And if yes, is it good?
>>
>>103180658
Neat, what are you using?
>>
>>103180664
Pixart
>>
File: img_00001.jpg (1014 KB, 1664x2432)
1014 KB
1014 KB JPG
>>103180495
I am getting some bizarre results with noobai too. Seems prompting is closer to sd1.5 style than danbooru although I am still trying to figure it out.
>>
File: PA_0002.jpg (864 KB, 3328x1152)
864 KB
864 KB JPG
>>
>>103180681
Is it local yet?
>>
>>103180709
use danbooru and e621 tags only, artist tags are also required, noob doesn't have a default style. you can mix multiple artist tags at different weights to create your own style
>>
File: PA_0004.jpg (795 KB, 3328x1152)
795 KB
795 KB JPG
>>103180737
https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS/tree/main
>>
File: PA_0006.jpg (638 KB, 3328x1152)
638 KB
638 KB JPG
>>
>>103180748
Looks cool!
>>
File: PA_0007.jpg (1013 KB, 3328x1152)
1013 KB
1013 KB JPG
>>
File: PA_0008.jpg (1.04 MB, 3328x1152)
1.04 MB
1.04 MB JPG
>>103180777
book made out of sand and rock with growing oasis scenery on it, the water of the oasis pond and sand dunes flowing from the book onto the table, fantastic scenery, artistic concept, beautiful and hyperdetailed, starry night, magical
>>
back when we were known as the sovl general
>>
>>103180709
get the danbooru tags auto completion plugin for forge if youre using that
if youre in comfy i dont know tough titties i guess but that's the way you'll learn how to prompt it properly, it's really not complicated.
>masterpiece (character) (character traits) (location/details)
>>
File: ignore shiny hair.png (16 KB, 1369x96)
16 KB
16 KB PNG
>>103180830
shit coffee still hasnt kicked in
put the action after character prompts
still noob/illust is very flexible and won't just break really badly if you don't follow that order
i'm not following that order right now kek
>>
>>103180814
this
>>
File: PA_0013.jpg (873 KB, 1664x2432)
873 KB
873 KB JPG
>>
File: 004029_g.png (2.42 MB, 1120x1440)
2.42 MB
2.42 MB PNG
>>
File: PA_0014.jpg (1.08 MB, 1664x2432)
1.08 MB
1.08 MB JPG
>>
File: img_00121_.jpg (886 KB, 1664x2432)
886 KB
886 KB JPG
>>103180747
>artist tags
it hates my artist wildcard, going to try a new file. There seems to be a pretty large gap in the characters I tried. It got some that pony didn't, so that was nice.

>>103180839
>if you don't follow that order
where is this documented? I am using BREAKs, which I also had to look up.
>>
File: PA_0015.jpg (1004 KB, 3328x1152)
1004 KB
1004 KB JPG
>>
>>103180881
>where is this documented?
i have no idea where anything is documented kek, i've just learned from secondhand anon lessons
but believe me, if you're having issues it's either a skill issue or a checkpoint issue, more than likely checkpoint, because noob/illust is very generous about promptlet issues compared to pony.
>>
sometimes i buy into the "auto/forge makes better outputs" meme
>>
File: PA_0017.jpg (876 KB, 3328x1152)
876 KB
876 KB JPG
back in my hole I go
>>
>>103180961
gn, stay racist
>>
>>103180881
>it hates my artist wildcard
try manually experimenting with a list of artists first. when making your own style mix i recommend picking 1 main artist whose style you like at a normal weight and then adding a few supplementary artists at lower weight. so something like:
>artist_1, (artist_2:0.45), (artist_3:0.5), 1girl, (armpit hair:10000)
some mixes might tend to fuck up hands and anatomy so be careful. add these quality tags to the end of your prompt as well
>(masterpiece, best quality, very aesthetic, absurdres:0.4), giantess, cityscape, landscape, destruction
if you haven't, add some furry tags from e621 into your negatives, tends to help with color
>anthro, mammal, furry, sepia, worst quality, low quality,
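For anyone scripting around this, a minimal parser for the `(tag:weight)` emphasis form used above. Simplified on purpose: no nesting and no escaped parens, unlike the real A1111/Forge prompt parser.

```python
import re

# Split a prompt on commas and pull out (tag:weight) spans;
# untagged chunks default to weight 1.0.
def parse_weights(prompt: str):
    out = []
    for chunk in prompt.split(","):
        chunk = chunk.strip()
        m = re.fullmatch(r"\((.+):([\d.]+)\)", chunk)
        if m:
            out.append((m.group(1), float(m.group(2))))
        elif chunk:
            out.append((chunk, 1.0))
    return out

print(parse_weights("artist_1, (artist_2:0.45), 1girl"))
# [('artist_1', 1.0), ('artist_2', 0.45), ('1girl', 1.0)]
```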
>>
>try to do a gen with a 1girl laying on her stomach on her bed
>forget to turn off the ass attack lora
>now she's doing some kind of a yoga pose stretching forward with her huge ass the focal point
>its still full body
man noob rocks
>>
File: ComfyUI_00145_.png (2.26 MB, 1152x2016)
2.26 MB
2.26 MB PNG
noobai.

trick to keep from crashing my computer. Flux is more gentle.
>>
>>103180983
>if you haven't add some furry tags from e621 into your negatives, tends to help with color
NTA but THANK YOU i thought something was fucked up
>>
>>103180995
np, also, if your images end up looking a bit fried try turning on rescalecfg. i use cfg 5 with rescale on 0.7. noob is very sensitive to cfg
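A hedged sketch of what RescaleCFG does, after the phi-rescaling from "Common Diffusion Noise Schedules and Sample Steps are Flawed" (Lin et al.). The real node rescales per sample/channel; this toy uses a global std and illustrative names.

```python
import numpy as np

# Rescale the CFG output so its spread matches the conditional
# prediction's, then blend back with weight phi (0.7 in the post above).
def rescale_cfg(cond, uncond, scale, phi):
    out = uncond + scale * (cond - uncond)     # plain CFG, may be "fried"
    rescaled = out * (cond.std() / out.std())  # match cond's std
    return phi * rescaled + (1 - phi) * out    # phi blends the two

cond = np.array([1.0, 2.0, 3.0])
uncond = np.zeros(3)
print(rescale_cfg(cond, uncond, 5.0, 0.0))  # phi 0: raw CFG, spread blown up 5x
print(rescale_cfg(cond, uncond, 5.0, 1.0))  # phi 1: spread pulled back to cond's
```

This is why a fairly high cfg like 5 stays usable on noob with rescale on: the blow-up in contrast that CFG introduces gets normalized back towards the model's own prediction.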
>>
anime sloppin'
>>
File: ComfyUI_00149_.png (1.07 MB, 1152x2016)
1.07 MB
1.07 MB PNG
>>103180989



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.