/h/ - Hentai

File: inpaint.jpg (1.08 MB, 2304x2304)
Sibilant Sacrifice Edition

Previous Bake >>8730481

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
FAQ: https://rentry.org/hggFAQ
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag | https://tagexplorer.github.io/#/
ControlNet: https://rentry.org/dummycontrolnet (OLD) | https://civitai.com/models/136070
Inpaint: https://files.catbox.moe/fbzsxb.jpg
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Legacy: https://rentry.org/hgglegacy

>TRAINING
Guide (WIP): https://rentry.org/yahbgtr
Anon's scripts: https://mega.nz/folder/VxYFhAYb#FQZn8iz_SxWV3x1BBaJGbw
Trainers: https://github.com/67372a/LoRA_Easy_Training_Scripts | https://github.com/kohya-ss/sd-scripts

>MODELS
https://civitai.com/models/833294/noobai-xl-nai-xl

OP Template/Logo: https://rentry.org/hggop/edit | https://files.catbox.moe/om5a99.png
>>
File: 00021-2068498532.jpg (637 KB, 2304x2304)
>>
why is he brown
>>
>>8736897
Because I inpainted.
>>
>>8736903
then inpaint it again.
>>
>>8736904
She likes her meat well done doe??
>>
blackanon is now baking threads? What a bold newcomer, chat.
>>
I've been here since #001
>>
Same
>>
>>8735775
fun fact: you can use a normal "small" dataset of 16-30 images, increase the step count to 2500, have it save every 176 steps (keeping only the saves from the last ~500 steps), then test the final checkpoint against step 2288 and step 2112 for comparison, and it mostly works?
or at least it worked for the one dataset I tried it on, and it worked a hell of a lot better than any 1dim lora has any right to.
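A rough sketch of where those comparison points come from (plain Python; the argument names in the comments are kohya-style assumptions, not taken from the post):

total_steps = 2500        # max_train_steps
save_every = 176          # save_every_n_steps
saves = list(range(save_every, total_steps + 1, save_every))
print(saves[-3:])         # [2112, 2288, 2464] -> only the saves from the last ~500 steps survive
# compare the final state (2500) against the last couple of interval saves,
# e.g. step 2288 and step 2112, and keep whichever is least fried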
>>
>>8736892
Mods are gonna nuke this to shit. Nice gen btw
>>
File: 00592.jpg (894 KB, 2688x1536)
>>
my dick hurts
>>
>>8736959
not entirely sure what the context is here but you're actually retarded if you don't save a bunch of versions then s/y them to see which is best
>>
>>8737036
Because you're a contrarian who only likes things nobody else likes.
>>
>>8736892
>Sibilant
learned a new word today
>>
>>8737066
Forgot three other words to make room for it.
>>
>>8736963
Gorgon is not allowed? /h/ is such shit man.
>>8737066
Was working on a textgen story with her and encountered it. Good stuff.
>>
>>8737104
technically every monster girl is /d/, even cats
but the line in practice depends entirely on the people itt and whether they decide to report you, I've even posted spider girls before with no issue
>>
>>8737104
Technically all monstergirls are /d/ but sometimes it slides. In my opinion, monstergirls should only be /d/ if they are grotesque looking (obviously not Gorgon), but I don't make the rules
>>
>>8737106
>>8737108
>the rules don't matter
Anon wins again. We need a better board than /d/ for vanilla adjacent. Leave /h/ to die for the boring vanilla purists.
>>
File: Succubus Jeri Katou.jpg (458 KB, 1400x2000)
>>8737108
>monstergirls should only be /d/ if they are grotesque looking

At what degree does a monster girl become grotesque?
Are long ears, long tongues, hooves and colored skin ok?
>>
>2 /hdg/
>2 /hgg/
we are healing
>>
File: file.jpg (15 KB, 225x225)
>>8737117
>At what degree does a monster girl become grotesque?
My sense of taste as Lord of 4Chan, Ruler of Chinese Cartoon Porn

Jokes aside, if it's normal enough that it could be an entry in the Monster Girl Encyclopedia, it's fine. If not, then no.
>>
>>8737117
/e/
>>
>>8736892
can i get box for this?
>>
>>8737119
>>Jokes aside, if it's normal enough that it could be an entry in the Monster Girl Encyclopedia, it's fine. If not, then no.
Sorry, but there are definitely some girls in that that are irrefutable /d/ territory
>>
>>8737123
Atlach's are /d/ but I will accept them in the name of an objective standard. My heart is so big
>>
>>8737125
Sandworm girl and anything with equine bottom half as well
>>
File: 1612246183263-1646735108.png (1.04 MB, 832x1216)
>>8737127
Oh true, I actually forgot how out there some are
>>
>>8737122
Only if you do something with it chat.
https://files.catbox.moe/918kes.png
>>
>>8737131
thanks
>>
How long until this safe space gets nuked?
>>
>>8737136
stop spamming, retard
>>
File: catbox_vykfdk.png (1.29 MB, 1024x1024)
>>8737131
Thanks
>>
>>8737140
nice
>>
>>8737141
very painterly
>>
File: 1612246183380-2174194292.png (1.3 MB, 1024x1024)
>>8737140
Eww, I forgot to fix the eyes - here's a much better one
>>
>>8737143
sadly too based for this world. see you in three days, icarus
>>
>>8737143
long shota
>>
>>8737143
Yeah snakes too. Now it's perfect.
>>
>>8737110
I was hoping the non-futa /d/ general could become this, but the amputees and vore turned people away.
>>
>>8737117
I think you're good
>>>/d/11368387
Hooves are borderline furry, same with fur on limbs like >>>/e/3006890, but again just don't trigger the wrong people and you'll be fine.
>>
>>8737110
Told you chat, we need a vanilla /d/ thread, it should be a little more hardcore and permissive than this thread but without going to any extreme
>>
>>8737193
we should establish a jewish state in the middle east
>>
>>8737190
>just don't trigger the wrong people and you'll be fine.
Ok, anything in particular I should avoid doing?
I just started posting here last week, so I have no idea what sort of thread schizos this place has.
>>
>>8737117
Box please?
>>
File: Succubus Rika Nonaka.jpg (393 KB, 1400x2000)
>>8737243
Here you go.
https://files.catbox.moe/f8wuwv.png
>>
What are your soft inpainting settings, bros? Do you even use it at all?
>>
>>8737259
Yes I use it and the default values are fine
>>
>>8737254
Thanks.
>>
>>8737265
Is it a "turn on and forget about" type of deal or is there a specific usecase?
>>
>>8737336
you turn it on and never look at it again, if you ever feel it is not doing anything you need to increase your mask blur value
>>
>>8737338
Ok thank you.
>>
>>
>>8737362
That doesn't look photorealistic at all.
>>
>>8737365
it's not supposed to, it just adds a bit of shading from 60% onwards
>>
File: 00101-1952107931.png (3.86 MB, 1536x2560)
>>8737362
Chadmix vpred when
>>
>>8737367
you cant use her
she belongs to lion optimizer anon
>>
File: 00095-3619753827.jpg (609 KB, 1536x2560)
>>8737369
I did some hanabi gens back then, she's mine as well
>>
>>8737367
https://civitai.com/models/1068866/chadmix-finetune-extract-for-noobai-vpred-10
never bothered with releasing the checkpoint i only use it as a lora
i could refine and rebake but aint got time for that anymore
>>
>he's here
>>
>>8737371
Very interesting. If this is an extract, why did you also include some trigger tags (even though the extract seems to work just fine without any of them)? Just to use the meme?
>>
>>8737377
yes, it's always been a meme first and minor opportunistic aesthetic tune second
>>
Inpaint bros... I was wrong about you. It's really cool when your pic comes together after a bunch of inpainting. I fucking kneel.
>>
File: 1612246182178-432011910.png (1.06 MB, 896x1152)
>>8737462
Yeah inpainting is actually pretty fun when you have a vision in mind (slop pic definitely not related)
>>
File: ComfyUI_30205_.png (1.1 MB, 1024x1024)
>>8737462
There's fixing issues by hand, and then there's adding concepts you couldn't reasonably prompt for. The latter is way cooler imo.
>>
File: 00598.jpg (669 KB, 2688x1536)
>>8737462
Welcome to the good side
>>
>>8737118
I want to ask, what's the difference between /hgg/ and /hdg/? Maybe I'm a normie dumbass, but aren't generation and diffusion technically the same thing?
>>
>>8737493
We generate the image rather than letting the slopmachine diffuse all the latents
>>
>>8737493
originally one whiny fag made /hgg/ as a split thread to avoid spammers and shitposters
it's just momentum at this point, the topic is the same in both but people refuse to merge back in
>>
>they fell for it AGAIN
keeeeek
>>
/hgg/ has proven to be an effective repellent against a number of schizos
>>
>>8737469
>>8737474
>>8737476
Yeah it can be pretty fun but very time consuming. I imagine I could have done this all from txt2img alone but I wanted to do something simple as proof of concept and write my findings down in my notes. It's pretty cool to go from A to B too.
https://files.catbox.moe/jpjcfr.png
https://files.catbox.moe/yjypfs.png
Still pretty rough around the edges but I'm proud of it.
>>
LORAMAKIES MAKE THE LORA
https://danbooru.donmai.us/posts?tags=tsuntsuke
>>
>>8737543
>traditional media via digital camera photo
>they didn't even lightbox them for the photo so the lighting is inconsistent
I can tell you without even doing anything that it's going to be absolute shit.
but I'm already making absolutely godawful stupid decisions testing shit so I guess a few more won't hurt
>>
File: catbox_qvr7da.png (2.6 MB, 1152x1632)
>>8737543
>>8737554
here's a shitty lowstep lora made with a two image dataset
https://files.catbox.moe/896p4n.safetensors
I'll do a "full" one with a bit larger of a dataset later after doing other shit. It's trained on v1 vpred so if the style transfer isn't working as well as demonstrated by image related consider not using a garbage shitmix.
use jz235 as an activation token, I'd also recommend using secondary aesthetic tags like faux traditional media and painting \(medium\)

if anyone wants a working example of the kind of dataset low step bullshit takes, here's what was used https://files.catbox.moe/t6r77f.zip
>>
>>8737496
>>8737499
I see. Thanks for the info. Can't we just be at peace and post anime titties and waifus getting fucked?
>>
>>8737595
unfortunately 2025 is not a year of peace
let's all work hard to make the world a better place
>>
File: 00481.png (474 KB, 1664x2432)
>>8737595
I do, I always do, chat is the problem
>>
>>8737602
i appreciate your posts
>>
>artist draws good for exactly 6 pics
>reverts back to his generic slop for the next 200
Is it worth training on the six?
>>
>>8737609
six is too few
>>
>>8737610
Man those pics are fucking good I'm gonna do it.
>>
>>8737609
Make a shit lora with those 6 images, use it to img2img a bigger dataset, retrain
>>
>>8737614
this was lies madness
>>
way*
>>
>>
>>8737614
Unironically how people who say "We can train LLMs on synthetic data." sound
>>
>>8737652
We can. We have the technology.
>>
>>8737654
you can but you shouldn't, it looks like shit
>>
>>8737659
There has to be a way. If I could get to like 20 that would be it.
>>
>>8737661
there is.
commission it.
>>
>>8737664
Truth nuke
>>
>>8737652
Better than nothing
>>
>>8736894
>niggerdick
No thanks
>>
>>8737678
Why are you gay?
>>
>>8737678
>Sexy monster woman
>Fixates on penis
>Hyperfixates on the race of the man
This is incredibly homosexual, and I'm tired of pretending that it's not.
>>
keeeeeeek
>>
File: catbox_roczp8.png (2.6 MB, 1152x1632)
>>8737543
https://www.mediafire.com/folder/808yxs8afpilo/tsuntsuke
>b-but it's a shitty 1dim lora
you will take what you will get and you will be happy.
also let's be real this works 10 times better than it has any right to and is significantly better than trash you'd find elsewhere. also it's still trained on vpred 1.0. Also the same prompting suggestions as before apply
>>8737568
image related is 1:1 seeds/prompt to show the difference between low step bullshit and a more "full" lora bake.

random other examples
https://files.catbox.moe/ok0drr.png
https://files.catbox.moe/flszld.png
https://files.catbox.moe/uggxnl.png
https://files.catbox.moe/nmpvt4.png
i2i upscales from dramatically off-style input:
https://files.catbox.moe/gdqd2y.png
https://files.catbox.moe/vr1pg5.png
https://files.catbox.moe/vp5kt0.png
https://files.catbox.moe/cgzd6c.png

zip of the dataset+config for anyone actually interested
https://files.catbox.moe/pzg0tk.zip
>>
>>8737686
shockingly good actually
>>
Hmm yet another case of vpred not getting the style as well as illu. Where is that other anon now?
>>
>>8737686
Can you bake this if you get the chance? It's only 8 pics.
https://files.catbox.moe/1pjrb6.rar
>>
>>8737690
My cashapp is $wrdamon
>>
>>8737687
I agree. Even the low step-count, hyper tiny dataset worked fairly well, all things considered.
pretty much why I bothered going into presentation mode to use it as a teaching example and provided full dataset+config.
>>8737690
I can already tell you it's going to be real rough. half that dataset is really... not ideal due to secondary visual elements(depth of field/blur shit). Also sketchy digital styles usually get shitfucked for a lot of weird reasons.
I'll try and do a 3 image dataset low step count run and see where it can go from there.
>>
>>8737683
intentionally changing the skin of the male from default is extremely gay
>>
>>8737741
if it was default, you wouldn't need to put dark-skinned male in negs
>>
>>8737744
i don't, not my fault you're using refugee shitmixes
>>
File: 1730779731673794.jpg (291 KB, 1200x1600)
Not sure if anyone here has tried anything like this, but does anyone know a good model/lora that can make good images of realistic-looking real dolls (pic related)? I'm not looking for hyper realistic humans, I prefer the look the dolls give since they look more anime, while all the realistic checkpoints are just the "create a real human" type

Pic related is the look I want to get
>>
File: 00007-178339212.png (1.8 MB, 1920x1152)
>>8737756
That sounds very specific. I'm afraid you are gonna need to make a lora for this task of yours alone.
>>
kohya or ez scripts?
>>
>>8737756
You might have better luck asking in the right place >>>/aco/8986916
>>
>>8737775
Yeah I wasn't sure which board to ask on.

>>8737773
Unfortunately I might
>>
File: 1000021141.png (1.33 MB, 1216x832)
>>
>>8737686
>dim 1
>conv dim 12
>block dims
intredasting

i don't like ZTSNR and imo l2 loss is better than huber both from a theoretical and empirical standpoint in my tests
>>
>>8737664
>i'm going to need you to not draw like your usual garbage
>>
File: 00049-77114486.png (2.95 MB, 1344x1728)
take this you fucking bitch
>>
>>8737839
Is that khyle?
>>
>>8737877
doesn't look that way
he doesn't even do lips, nor thick outlines
>>
File: catbox_y35zfq.png (1.72 MB, 1152x1632)
>>8737690
well, after dicking around with it a little bit, about all I can really say about it is that it uh, sort of works. Sort of.
https://www.mediafire.com/folder/k58cm0mq3nspr/aoi_tiduru
same deal with the others with jz235 as an activation token.
it's got a lot of issues and it's mostly due to the dataset.
Issue #1 is that it isn't even very stylistically consistent. Even in the nimi image set it swaps a bit between a more sketchy style and a solid linework style. Also that image set has a different shading profile than the two images with a white background(which are super important for reasons).
Issue #2 is that that art style doesn't really like to be downscaled and you lose a bit of the linework in trying to do so, so it's better to work with crops and do minimal downscaling if possible.
issue #3 is the obvious of the dataset being really small.
issue #4 is honestly that it's fairly generic-ish digital art and that makes it really hard to get a style to express since it wants to just get lost in the model.

anyway, as it is it has issues of wanting to do a lot of closeup shots due to the uh, method I chose to append/fluff out the dataset(which you can find here https://files.catbox.moe/ufvs5f.zip ). It also has an issue of hardstuck backgrounds, also for the same reason. Though you can prompt white background and it'll at least listen and give you an empty space, even if it's sometimes off-white instead of white.
You can also try and run t2i at .6~.8 weight to try and get it over the close-up syndrome and then use full weight on upscaling. Or just do t2i in a similar but different style and i2i upscale with it enabled, idk.

>>8737818
it's using l2 loss. the huber values are there as a holdover from some random testing(the config this is based on is like a year and 8 months old) but shouldn't actually be doing anything since it isn't set to huber or smooth l1.
>>
File: ComfyUI_00003_.mp4 (609 KB, 416x608)
Any anons experimenting with video generation (using Wan 2.2)?
>>
File: ComfyUI_00004_.mp4 (1.72 MB, 832x1216)
>>8737945
This one came out better, but still shit
>>
>>8737948
>shit
bro have you seen what real artists upload to patreon as special video rewards?
>>
>>8737949
I'd like to think that we hold ourselves to a higher standard here... well at least most of us
>>
>>8737938
So, these very low dataset bakes work with characters as well? Have you tried it?
>>
>>8737938
oh good
>>
>>8737945
i tried but i need more than 16gb ram (not vram) so it's gonna have to wait until ANOTHER pc upgrade
>>
>>8737959
I have 12GB, so I just use quantized GGUF models. Have only been genning for like an hour, so hopefully it gets better as I figure out the workflow.
>>
>>8737961
can you share the link for those? i was going off the sticky on /g/ but i don't think i saw those
>>
>>8737963
https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main
>>
>>8737965
which ones should i get and do i need to change my workflow any or just replace the models in the one i already have
>>
>>8737953
in theory it should but it'll probably fall apart if you try and style too much on top of it. And while I guess that's true of all character training, it's probably especially more so in this case since I've noticed a very strong relationship between the "permanence"/influence a lora has on the output and its step count.
easiest way to observe it is to i2i something off-style and compare a 512 step version vs a "full" 2xxx~ step count version. The 512 will graft to a degree but not anywhere near as strongly as the one that was trained longer.
>have you tried it?
Very indirectly.
https://files.catbox.moe/pmpyqp.png
was trained off of just this image of seto shouko https://files.catbox.moe/gll3os.jpg
so you can tag out all the specific character traits of a girl and have them show up in differing compositions at least.
iirc I did try a run tagging out the specific character and couldn't get that to stick, so all the character traits are pretty necessary.
>>
>>8737969
I'm actually not familiar with the /g/ workflows, but my advice should hold unless the /g/ ones are weird.

Basically you can just replace your normal model loader with a GGUF model loader - that's it. (You can download it with Comfy Manager)

For quantized models, the higher the number, the higher the load/size. A Q2 is easy to run, and a Q8 is really hard. However, the Q8 will be higher quality than the Q2. A Q4_K_M is harder to run than a Q4_K_S (M for medium, S for small).

Just pick what you can either 1. Load on your VRAM or 2. Doesn't load all the way but has a quality you like. This depends on your VRAM and preferences.
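If it helps to see that tradeoff concretely, here's a rough picker in Python (the sizes are ballpark illustrations for a ~14B model, not measured from those exact files):

QUANTS = [            # (name, approx GiB); bigger = better quality, harder to fit
    ("Q8_0", 15.0),
    ("Q6_K", 12.0),
    ("Q5_K_M", 10.0),
    ("Q4_K_M", 8.5),
    ("Q4_K_S", 8.0),
    ("Q3_K_M", 7.0),
    ("Q2_K", 5.5),
]

def pick_quant(free_vram_gib, headroom_gib=1.5):
    # largest quant that still leaves some headroom; anything bigger
    # can still run via partial offload, it's just slower
    for name, size in QUANTS:
        if size + headroom_gib <= free_vram_gib:
            return name
    return QUANTS[-1][0]

print(pick_quant(12.0))   # a 12GB card -> "Q5_K_M" with these made-up numbers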
>>
>>8737972
LoRA-sensei, please make a LoRA making guide.
>>
>>8737974
ah okay thank you (my problem is RAM not VRAM but same difference)
>>
>>8737979
Do you mean that you are willing to offload to CPU and wait like an hour but don't have the RAM to do it? This won't help with that.
>>
>>8737972
>>8737977
i'm gonna agree with this guy, so far the V2 iteration of the vpred preset works MUCH better than the old one, but what settings do you recommend for an image pool of more than 30? and other than 0.5 vpred, any other ones you tested/liked?
>>
>>8737981
No, the 2 models need to offload to RAM after you load them so then you can actually use them to do shit with your VRAM. Each are 12gb and i have 16gb total.
>>
>>8737992
Same. I'm looking to upgrade from my current 32gb, but 64 and above ddr4 kits are still unreasonably expensive
>>
>>8737977
>>8737982
I refuse.
and ignoring that dataset is probably one of the most important aspects, the majority of my "knowledge" has come from doing countless runs to tune settings, which you ultimately get the results of from my configs.
I also make a lot of concessions with the understanding that I know how to get around them in the actual image generation/post process and value certain things above others. Or to put it another way, my autism isn't likely to mesh with the autism of others.

> but what setting do you recommend on image pool more than 30?
Generally larger datasets would want more steps but it's weird, it's variable, sometimes that statement isn't even true, etc.
Ultimately 99% of the shit I do is all down to style training, and honestly most of the time you're going to struggle to find much more than 30 actually stylistically consistent images even from prolific artists with multiple hundreds of artworks. And in the event that you do, you can just heavily curate it and choose stuff the model is going to have the easiest time making sense of.
>and other than 0.5 vpred, any other ones you tested/liked?
eps 0.5 is the most "relevant" eps model. Anything after that fries and will tend to have white bleeding around edges, etc.
Illustrious 0.1 and, weirdly, noob eps 0.1 were both fine for the most part but are only relevant if you're actually using those models.
I largely ignored vpred since it was obvious it was largely a failbake from the get-go but if you're using it v1 is the way to go. iirc .6 might have been okay? but I didn't do much of anything with it.
>>
>>8737953
>>8737972
slight further testing
https://files.catbox.moe/6wu0e9.png
styling requires a fair amount of cheating and ends up losing some minor details (like the direction of the lines in the hair being pulled back and some random bangs getting added), but those are kind of fixable in post. The cheating comes in with the base t2i being generated with only the low stepcount lora (acting as a "character" lora), and the manual i2i upscaling got a second 1:1 res pass to force the style to a significantly higher capacity. original upscale output for reference https://files.catbox.moe/1o7ijf.png
I'll probably try it again and change some things with a 3 image dataset later.
>>
>>8737992
Oh yeah you're right. I've never really thought about the offloading part because I have a lot of RAM and VRAM has always been the limiting factor.
>>
>>8738004
yeah I did notice too that eps has better anatomy compared to vpred, but now I also notice eps is a lot less saturated than vpred. either way I agree the old eps has better potential, but I have to manually edit things to have more color and see if it lands somewhere between the two.
>>
>>8738008
oh the hair line issue is actually a tagging skill issue and can be fixed by appending with asymmetrical bangs <https://files.catbox.moe/2npgs1.png>
>I'll probably try it again and change some things with a 3 image dataset later.
later is now.
https://files.catbox.moe/2q5qv1.png
same seed/input image
v2
https://files.catbox.moe/8ggf43.png
v1
https://files.catbox.moe/ahekh8.png

v2 seems to be more adaptable to style(probably because it has 3 images of different styles on top of a different trigger tag) and it also escapes from the copy/pasted head syndrome that the v1 had.
if anyone wants to play around with it https://www.mediafire.com/folder/zma0zoy1bvk0a/seto_shouko

works a fair bit better than expected but this is also a really basic bitch tier character so I'm not really surprised.
>>
>>8738097
Finally /hgg/ is back to form, thank you lora baking anon. Making me want to start baking again as well.
>>
when should i reuse edm loss weights? can i just change the config all willy-nilly as long as the dataset stays constant?
>>
>>8738097
based dataset fixer

for me i gave up on finding dataset images for my concept and am now balls deep into editing them in PS, my test version of it worked rather well already so i'm adding more for reliability
>>
File: file.png (1.76 MB, 1536x1536)
>it aint gonna eat itself
>>
>>8738097
what lora baker do you use? i've been using kohya gui with the preset you make, this one has it in a weird calculation of having 171 epoch and 512 steps.
>>
>>8738121
good morning saar do not redeem the 3 image dataset saar
>>
>>8738097
>this can actually even do the school uniform, too by tagging collared shirt, blue skirt, pleated skirt, red ribbon
https://files.catbox.moe/4ejw5b.png
neat.
>>8738121
it's just bmaltais(that was last pulled back in.... may).
and it's just a 3 image dataset with 1 repeat being told to stop specifically at 512 steps, so that maths out. either it's including a partial epoch or it starts at 0 and is actually 171 epochs flat(171x3 is 513).
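Spelled out (assuming batch size 1, as implied by the numbers):

import math
dataset_size, repeats, batch_size = 3, 1, 1
steps_per_epoch = math.ceil(dataset_size * repeats / batch_size)   # 3
epochs = math.ceil(512 / steps_per_epoch)                          # 171
print(epochs, epochs * steps_per_epoch)   # 171 513, i.e. the trainer reports 171 epochs for a 512-step stop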
>>
>>8738141
why did you choose that number of steps
>>
>>8738141
thanks, never tried a very low image count. also the meta checker is very weird sometimes, despite the same settings the format comes out different.
>>
File: 20251006155413-2287907833.png (2.34 MB, 1536x1920)
2.34 MB
2.34 MB PNG
>>
>>8738148
because that's the number it took to get things to actually consistently stick in a meaningful way.
I started at 32. then 64. then 128. then 256. then settled on 512 after finding that 768 was too much.
you can drop to any of the lower step counts and be "fine" but they won't transfer as completely and will get "lost" very easily when other loras or tags from outside the dataset are used.
you can read the comment chain from here >>8735775

tl;dr is that this is all extremely stupid shit started specifically to dunk on some dumb cumrag nerd but then taken to its absolute maximum conclusion after doing a ton of iterations on it(no point in doing stupid shit if you don't go all the way).
>>
>>8738141
Teach us how to bake anon
>>
>>8738158
that's the best kind of evidence, i am very curious to try this since this seems like it could be very applicable to simple concepts
>>
>>8738160
>i am very curious to try this
that's the fun part. It's super easy and fast to do 256/512 step runs. I can run off a 512 step in 15 minutes on a 12gb 3060 and don't even need to kill reforge to do it since it's only 4.7gb VRAM. There isn't even much of a time sink for the dataset since it's so small.
and I took the effort to tune it down to 1dim for 1. maximum stupidity and 2. to make sharing as easy as possible. and no one can complain about it being shit because it's less than 6MB.
it is by very definition engineered to be as low commitment/investment as possible.

>>8738159
just b urself
and waste a lot of time doing dumb shit because it's the only actual way of learning and getting a feel for what might or might not work, which is important because nothing is ever really going to be exactly the same because ML fucking sucks.
>>
Man, I wish there would be a decent butterchalk lora with the grainy style *wink*
>>
LoRA-King, please make a Mari Setagaya LoRA. I beg. Training Data: https://buzzheavier.com/ajgwdue4omlq
>>
File: 00041-1678133529.png (2.98 MB, 1344x1728)
>>
>>8738168
decided to check civit to see if they were up to anything with butterchalk
and jesus christ how do they fuck up that badly.
but yeah sure would be neat if such a thing existed.
https://files.catbox.moe/vliocy.png
>>
>>8738165
yeah i have a 4090 lol
>>
>>8737959
I have 8gb of vram and 16gb of ram and managed to get WAN2.2 working. Probably not the most optimal setup but ehh.
https://files.catbox.moe/8cj2im.mp4
If you have more than 8gb of vram you can use a better GGUF or get longer and higher res gens.
>>
>>8738217
man hands
>>
>>8738231
cool thanks
>>
What possessed me to try to make a kissing hands scene. Holy shit is this thing hard to inpaint correctly.
>>
chat, is handsanon going to be alright?
>>
>>8738342
No, I fucking won't. Pray for me.
>>
>>8738347
okay
>>
File: WanVideo2_2_I2V_00009.mp4 (1.07 MB, 480x832)
>>8738231
this is pretty fun even with the shitty models that kinda look like ass
>>
File: 00024-781882207.png (2.09 MB, 1184x1584)
>>
>>8738482
How do you know which one is which?
>>
>>8738489
You don't, anon. Those sloots are sharing you without your consent.
>>
File: comicgenned.jpg (620 KB, 1184x1584)
>>
>>
File: 2025-09-16T08.18.09_1.jpg (139 KB, 1216x832)
>>8738514
I will always support Y'shtola getting railed, but I don't think that pressed against glass tag did any favors for your render.
>>
>>8738489
The one on birth control is your girlfriend. Good luck
>>
>>8738527
That's diabolical.
>>
File: 00027-1967664165.jpg (703 KB, 1184x1584)
>>
>>8738539
Are you tagging 2koma or using a LoRA?
>>
File: WanVideo2_2_I2V_00043.mp4 (1.05 MB, 768x512)
not really what i wanted but rorumao
>>
>>8738231
>>8737959
>>8737945

This shit has me stalking eBay for used 5090s
>>
speaking of which what is the prompt to get the girl moving up and down or side to side on a dick or whatever in wan? i can't get them to fuckin move
>>
>vidgen
o i am laffin
>>
>>8738552
it's for fun, it doesn't look very good
>>
File: WanVideo2_2_I2V_00244.mp4 (2.75 MB, 864x1280)
>>8738552
>>8738554
If you have the hardware, it can look pretty okay
>>
Can NAI do vidgen?
>>
>>8738556
Would a 4070 TI Super and 32GB of RAM be sufficient for something that quality?
>>
>>8738542
Lora. Sometimes it almost gets there but never quite. I edit/add text in gimp
>>
>>>8669319
>>
>>8738578
Why did you link this

>>8738572
Yes
>>
>>8738584
Hm maybe I'll have to give it a shot then, I've seen some actual okay looking videos.
>>
>>8737938
Alright first of all thanks. I had to make sure to properly asses the thing by rebaking on my old dataset with your activation tag just to see if that was the issue. I'm gonna rebake without an activation tag and triple check but all in all it doesn't look as bad as I thought. Might rebake again with your dataset too.
https://files.catbox.moe/rlfmhk.png
It also made me reconsider base vpred on those really stubborn loras that I thought hadn't learned much. I think we can get away with much lower pics than I originally thought. I did 100 epochs for 880 steps just to be sure it had learned all it could.
>>
>>8738556
And this is img2vid right? You still need loras and shit for everything too? It's like SD1.5 all over again. Maybe I'll just wait a couple years for a good finetune.
>>
>>8738217
Right, it's a damn shame, maybe the good one(s) got nuked because of "loli"... It's unfair for my beloved flat girls
>>
>>8738608
This one only uses one LoRA - it's not like the SD1.5 days where every retard had his own snake oil embedding
>>
Yeah not bad at all on my old config with the improved dataset and no activation tag even though it's only 10 pics. Really good news.
https://files.catbox.moe/p4esav.png
>>
>always struggle to get nice looking fishnets
>download fishnet bodystocking lora to see if it helps
>fishnets still look like shit
fucking hell m8s
>>
>>8738627
I'm sorry anon, but I can only suggest for you to learn an image editor of your choice and add the fishnets using layers. Artists use pre-made repeatable textures.

t. retard who has been inpainting fingernails and frills all day.
>>
File: catbox_yfp96p.png (1010 KB, 832x1216)
>>8738627
sub to nais :D
>>
>>8738627
What model are you using, because at worst I get some bunged up spots that get easily fixed with inpainting. Also, adding see-through_legwear can help.
>>
https://files.catbox.moe/6ibfil.png
>>8738599
>I think we can get away with much lower pics than I originally thought
People really make a big deal about datasets, thinking they don't have enough for them, without actually bothering to try and do anything about it.
>>8738609
it's civit we're talking about. I have doubts there were any "good ones" to begin with.
either way, unfortunately, the easiest way to make me not want to share something trained on an artist is for said artist to follow me on twitter. And they follow me on twitter.
however, if a dataset were to just fall off from the back of a truck and someone were to run it through any of my configs found here https://www.mediafire.com/folder/dfjm9587kfleg/configs or even just one of their own?
well, that just couldn't be helped.
>>
a truck just flew over my house
https://files.catbox.moe/jj1cy4.zip
>>
>humble brag
kys
>>
I just hope butterchalk is a girl with thin long legs just like her art ^_^
>>
>>8738669
its hairy ugly bastard
>>
File: 145678746.png (286 KB, 562x822)
Can someone please try animating this with Wan 2.2 14b? I need to know if a new pc is worth it for that.
>>
>>8738688
no it's not
if you want a new pc for vidya go for it but please don't waste a dime on wan
>>
File: 1759757987438247.mp4 (3.87 MB, 1620x1080)
>>8738689
I want to believe that for my wallet's sake, but goddamn do some of those look good
>>
>>8738687
Nooooooo......
>>
>>8738653
Just wondering, how many images are required to properly train a lora for a style, and do you manually tag them? Also what's your Twitter? :)
>>
>>8738695
>how many images are required to properly train a lora for a style
depends. the lora linked out in >>8737686 was trained on 14. I normally recommend 16-32 with a heavy importance on keeping it all stylistically consistent.
>do you manually tag them?
I usually just run wd14 swinv2 with .65 threshold and min tag fraction at 0 with a set list of excluded tags then go through and manually add some where applicable and change some things if I feel it's needed.
>Also what's your Twitter?
@firedotalt
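For the tagging part, a small sketch of the post-processing described above (it assumes you already have per-tag confidences out of the tagger; the blacklist entries are made-up examples, use your own):

THRESHOLD = 0.65
EXCLUDED = {"signature", "watermark", "artist name"}   # hypothetical exclusion list

def filter_tags(confidences):
    # keep tags the tagger is at least 65% sure about, minus the exclusions,
    # most confident first; manual additions/edits happen after this step
    kept = [t for t, p in confidences.items() if p >= THRESHOLD and t not in EXCLUDED]
    return sorted(kept, key=lambda t: -confidences[t])

# filter_tags({"1girl": 0.99, "watermark": 0.80, "smile": 0.40}) -> ["1girl"]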
>>
>>8738627
use PS
>>
>>8738701
Thanks, also what soft and checkpoint do you use to train the lora, is it the "config" thing you sent earlier? And how can I tell what settings values to choose?
>>
>>8738705
bmaltais/kohya_ss
>checkpoint
use whatever is most prominent in whatever you're actually using. which means if you're using vpred it's probably going to be noob vpred 1.0, if it's EPS it could be fucking anything from illustrious 0.1/1/1.1/2, could be noob eps 0.5 or 1.
I'd personally recommend sticking with eps 0.5 or vpred 1.0 but I also stick entirely to using base models only.
>And how can I tell what settings values to choose?
google it/find some random shitty guide on civit or something to get a vague idea of what's what. I can't really handhold you through all of that shit
>>
damn you retards have had 25 threads but youre asking epschizo how to bake loras when he only gens 1girl white background
grim....
>>
>>8738556
prompt?
>>
>>8738711
Very high quality 4k drawn animation, nsfwsks, girl doggystyle sex from behind, he does all the work with his penis head that slides into her pussy effortlessly, she pushes her body against his cock to increase the sexual gratification, very seductive lewd erotic behavior, tightly as he is piston fucking causing her hips into a rocking motion while her breasts bounce from each thrust she turns her head away from the camera, floating hearts start appearing near the end, rat tail with red tip

(Yes Wan prompts are this gay)
nsfwsks is a lora trigger
>>
>>8738710
>only gens 1girl white background
Yeah retard, it's so he can easily demonstrate the style.
>>
>>8738713
what the fuck
>>
>>8738710
feel free to one-up, then
>>
>>8738715
no, actually,
thats all he gens
sometimes he adds splotch of cum tho!
>>
>>8738713
also, i was looking at some workflows and i don't think i have the lora that that triggers? where is that
>>
>>8738719
https://civitai.com/models/1307155?modelVersionId=2073605
>>
>>8738721
thanks
i tried some 1:1 comparisons to the lora in the /g/ sticky and this one adds some crazy distortion sometimes so i think i'll take the model author seriously when he says it isn't ready yet, lol
>>
How do I get prominent eyelids like this? Is this just some artist lora this dude is using?
>>
its wai
>>
>>8738769
My WAI gens don't even look close to this. I just slopped this out, maybe I'm just a shitty prompter
>>
Newfag here, is novelAI enough to generate some high quality images or do I have to go down the rabbit hole? I just want to emulate a specific artist's style and keep it consistent.
>>
>>8738778
WAI is just a generic insult in /h/
>>
>>8738784
I heard there's a free trial now? Should serve to answer your question.
>>
>>8738690
Issue is still that it takes 4-5 minutes for a gen, but it's still as gacha as image gen so you're looking at 40+ minutes to get a single decent gen.
>>
>>8738789
It's generating frame by frame though? Why is there still no workflow where you can stop it at any time, roll back 2 frames and continue with different seed?
>>
Newfag here, why do all my nai gens have this slop look?
>>
>>8738786
what would your noobai gens look like if you had only 20 attempts with the model
>>
>>8738821
I had 3K anlas from a guy giving out hacked accounts. My experiments were pretty awful at first, but I managed to steal someone else's metadata and restart from there. If he just wants to see what an artist looks like on NAI, a similar process should work imo.
>>
>>8738822
>If he just wants to see what an artist looks like on NAI, a similar process should work imo.
4.5 by default leans very heavily into flat jaggy sketch aesthetic so a lot of artists might look pretty fucked unless you mess around with negs/neg weights. honestly though plenty of 2.5d+ artists will look fucked no matter what with it.
>>
>>8738784
Tell which artist and I'll gen something.
>>
>>8738640
inpainting fingernails and frills is dead easy

>>8738651
im using r3mix at the moment

>>8738702
if it was only some part of it (like gloves, top or thighhighs) I wouldn't have any problem doing that but I'm trying to get an entire bodystocking fishnet
>>
>>8738827
>>8738822
I'm interested in emulating Kagami and Aoi Nagisa's art, and as the anon said above, it so far just looks sloppy and flat. Can you share how you made it work? I don't mind paying for a sub if it's worth it
>>
>>8738828
wydm, get a fishnet texture then transform it so it vaguely follows the curve of the skin, takes like 5 seconds per limb
>>
>>8738829
seems overkill, kagami hirotaka worked fine even back on pony
illu/noob looks more like his earlier stuff
>>
>>8738688
It's not worth spending money for wan but that doesn't mean you shouldn't save for a new PC. No one ever thought local video gen would even get this good this time last year and everyone was shitting on the one guy posting vids. Now it's not only coherent, some of it looks pretty decent. If you can afford it, get yourself a good upgrade for vidya or to make your gens much faster and wait and see how the tech improves. I got my new 4090 back when it was $1400 with this philosophy and I've never been so grateful to have it.
>>
>>8738627
just inpaint
>>
>>8738765
>Is this just some artist lora this dude is using?
yes
>>
>>8738833
Was this genned on base noob?
>>
>>8738829
NAI:
https://files.catbox.moe/bgoec3.png
https://files.catbox.moe/vxtqs7.png
https://files.catbox.moe/ut05d8.png
Looks like dogshit to me, but I'm not familiar with this content enough to judge.
I like this local output >>8738833 much better.
>>
>>8738846
It's Pony, unironically
no loras either
https://litter.catbox.moe/61kd6x8fwfcny3pq.png
>>
File: 00042-3495290342.png (3.85 MB, 1664x2432)
>>8738830
maybe I should try that later but for now I have to settle with this

>>8738837
I always do that but of course but I would prefer to have a good enough base gen first
>>
>>8738855
yeah on some styles it just ends up looking like a flat texture, doesn't follow her curves
well to be fair artists can't do it either
>>
File: 1759836160994306.png (1.89 MB, 4036x3077)
>>8738847
That's closer to the recent styles, can you share the metadata?

Also how do I create an image of the quality of pic related? The artist is Sian. This has been the most convincing to me so far, while in the rest I can detect the sloppiness
>>
>>8738858
That artist is sloppiness exemplified, with oily skin, latex fabric and bright highlights. Just use WAI with an unhealthy dose of (best quality, very awa)
>>
>>8738858
>This so far has been the most convincing to me, while the rest I can detect the sloppiness
>melting fingers and eyes
>quality
yeah alright
>>
>>8738796
It's not frame by frame, that would be insanely slow. It's generating all the frames at once in a "3D" image, where the third dimension is time.
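Roughly what that looks like in tensor terms (shapes are illustrative only; the real model also compresses the time axis in its VAE):

import torch
frames, height, width = 81, 608, 416   # e.g. a short clip at the resolution of the mp4 above
latent = torch.randn(1, 16, frames, height // 8, width // 8)
print(latent.shape)   # torch.Size([1, 16, 81, 76, 52]) -- the sampler denoises this whole block at once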
>>
>>8738858
prompt an artist lil bro
>>
hey guys, are those noob ai SPO merges worth using or is SPO a meme?
I'm still trying to find a noob ai flavor that has decent lora support. It's a pain in the ass how some loras are made for noob eps, and many concepts I want are still Illustrious-only. Yet somehow it kinda still works, but it varies between shitmix variants, and some shitmixes claim they can support everything.. ugh I'm so lost right now.
>>
no
>>
>>8738901
eps loras work fine on any vpred noob shitmix, newcutie
>>
>>8738903
no?
>>8738906
or so it seems but I get weird results sometimes. still thanks for confirming.
>>
>>8738908
>or so it seems but I get wierd results sometimes
which lora, what do you expect and what is happening?
>>
>>8738784
What the fuck, imagine paying some motherfucker jewtier company for this.
I'm a noob too, been using noob ai vpred locally for about a month and it can replicate aoi's art perfectly especially with vpred fix
You can even upgrade his style with more realism and details and it'll still keep his style consistent. I have no experience with novelai and never will because they don't deserve a single cent, but as far as I can see from various scum novelai prompters, either nobody bothered trying to replicate aoi's art exactly using novelai or it really cannot do it, and it doesn't matter anyway, half decent usage of noob ai vpred will do awesome work.

Just make sure to use the proper prompt for him:
>aoi nagisa \(metalder\)
it has to be exactly this or else it'll fuck up something, even if it's not obvious at a glance at first.
If I have time later I'll try sian
>>
Is this the go-to illustrious mix?
https://civitai.com/models/827184?modelVersionId=2167369
>>
>>8738911
no one is using illust anymore, and even if you still want to use it, no, the go-to illust shitmix was personalmix
>>
>>8738912
>no one is using illust anymore
what's the current meta then?
>>
>>8738912
>no one is using illust anymore
What are they using then? Pony and Illustrious still have all the Loras.
>>
>>8738915
This
All the big lora makers that I know of are still doing mostly Illustrious
>>
>>8738914
>>8738915
>>8731237
>>
>>8738912
Wai is part noob, he just never declared it. Probably because of the licensing issues earlier on. I tested v1.3 and it (barely) knew a bunch of e621 exclusive artists, ones that were hashed on Pony.

>>8738911
It's the easiest to use for noobs, or for basic stuff. But it carries a heavy style bias of the usual AI-slop type, towards 2.5D semi-realism with shiny skin and overly detailed backgrounds. It also lost some of noob's knowledge including Danbooru concepts and low-image characters+artists, and it's biased towards simpler, more common compositions.
>>
>>8738923
I use artist loras for every gen, so that shouldn't matter I think?
>>
>>8738910
Use this
https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
>>
File: 001024512.jpg (590 KB, 2016x1152)
Skydiving
>>
>>8738932
Nice, but it gets cold up there don't you think? They should have been doing clothed sex.
>>
>>8738923
iirc someone did a model similarity check with WAI and it was mostly noob eps 0.5
which would track with it barely knowing e621 tags, since at that point it was introduced to the dataset but had only been trained on the unet and not the TE.
>>
Using NAI, how do I keep details consistent such as clothing? Using Img2Img with lower strength doesn't do help.
>>
>>8738858
>can you share the metadata?
The metadata is there, user https://novelai.net/inspect or there's https://sprites.neocities.org/metadata/viewer which works for everything.
>>
>>8738927
>I use artist loras for every gen, so that shouldn't matter I think?
Then you shouldn't use WAI.
>>
>>8738959
just inpaint
>>
>>8738912
>>8738911
Yeah, to this day, my go-to model is the PersonalMerge.
>>
>>8738959
>using nai
don't
>>
File: loras.png (66 KB, 538x261)
Can I keep a list of loras with descriptions attached in comfy like I can do in webui?
>>
File: ComfyUI_temp_cpjpv_00013_.jpg (924 KB, 3072x4300)
>>8738927
It's like a layer applied on top of anything you do with the checkpoint, unavoidable. Though it might actually help some styles or prompts; best to judge case by case.

Picrel, doesn't match the artists' matte shading, the pencil sketch has impossible details and shiny skin somehow, it's biased towards regular "human" 1girl going against my chibi prompt, etc. But say in Kagami's case, it actually matches his newer style better because he became shinier over time.
>>
>>8739006
>user-friendly features
>comfy
choose one
>>
File: screen-34.jpg (56 KB, 364x932)
>>8739006
Never actually used webUI so not sure what you mean. There's this thing in the models sidebar. You have to place an image file next to the lora with the same filename (only .png). There's also a custom node in https://github.com/pythongosssss/ComfyUI-Custom-Scripts which shows the same image preview along with a sample prompt you can save, and optionally pulls down trigger tags from Civitai.
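If you have a lot of loras, a throwaway script along those lines works (paths and the placeholder image are assumptions, adjust to your install):

import shutil
from pathlib import Path

lora_dir = Path("ComfyUI/models/loras")        # wherever your loras live
placeholder = Path("placeholder_preview.png")  # any image to use as the default preview

for lora in lora_dir.rglob("*.safetensors"):
    preview = lora.with_suffix(".png")         # same filename, .png extension, as described above
    if not preview.exists():
        shutil.copy(placeholder, preview)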
>>
>>8739018
>actively being unhelpful
>stoking the tribal war
don't get to choose
>>
>>8739020
Mostly I just want to be able to easily see the activation keyword which a lot of lora makers insist on adding for some reason.
>>
>>8739006
https://github.com/willmiao/ComfyUI-Lora-Manager/ closest you'll get, probably
>>
>>8739034
because it works better that way, usually
>>
>>8739007
102d final is better than custom anyways, and if you want even more artist accuracy use chromayume
>>
>>8739034
Well for characters and concepts it's a no-brainer. For styles it's really weird. Some bake really well without and others just refuse to fit, yet adding an activation tag and changing nothing else can save me half the bake time and not having to fry it as much. Wild guess it might depend on how close the style is to the model's existing knowledge.

Ideally you'd bake everything twice to see if it's necessary. But most Civitai folk just have their fixed workflow: throw in the dataset, autotag, take the last epoch and upload.
>>
File: file.png (166 KB, 2559x538)
>>8739023
Reject the binary - embrace StabilityMatrix
>>
>>8739042
Well yeah, it's always a balance. I do like having a little bit of help with "quality", and switch to base noob if it gets in the way. WAI is fine too imo if you're aware of the downsides and it fits what you're trying to do.
>>
>>8739055
Cool, now post a gen made with all of that
>>
>>8739055
>Rejection the binary - embrace StabilityMatrix
sybau lykon
>>
What model should I use? I feel like noob is too unstable with too strong artist tags effect, while WAI looks too generic
>>
>>8739119
102d has worked decently for my purposes
>>
File: 1612246183517-849871745.png (940 KB, 832x1216)
>>8739059
>>8739070
It's just a tool that lets you host both Forge and ComfyUI. I pretty much just use Comfy for shitty vid gen.
>>
>>8739059
>>8739070

>>8737948 is the video made in Comfy
>>
Is wan high noise or low noise better?
>>
File: file.png (404 KB, 2191x823)
>>8739184
You should run both if you're using 14B. They work together. 5B runs as a single model.
>>
>>8739184
for images, low noise. for vid u need both or it's slop.

High noise has the motion, low noise gives detail
>>
>>8739190
>>8739192
ayaaa, no wonder those quants were so small...
>>
>>8738931
I have this for comfy, but annoyingly it doesn't show artist tags.
I know I can import better tags from elsewhere though.

>>8739006
lora manager is great
>>
>>8739199
>I have autocomplete
>it doesn't show artist tags
Comfy is so fucking atrocious it's unbelievable.
>>
>>8739007
Are you using negatives?
>>
>>8739206
i think he meant lora triggers
>>
File: 4.jpg (147 KB, 1408x896)
>>8739119
>noob is too unstable
Use some merge of noob, they're more stable usually.
>too strong artist tags effect
reduce the weight then
my favorite is still 291h, but I'll have to try chromayume soon. 102d is cool if you like lolis..
>>
>>8739206
I thought it was like this on purpose honestly.

>>8739208
haha..
>>
>GGUFs are easier to load for some reason so I can use the biggest model
neat
>>
>>8739222
*you can use a quant of the biggest model
>>
>>8739224
yeah but it's better than nothing
>>
>>8738690
workflow for this?
>>
File: ComfyUI_temp_fdeib_00007_.png (1.23 MB, 1536x1536)
>>
>>8738545
Pretty funny
>>
>>8738690
Wait is that a random girl or someone? Cause she look hot, but also remind me of someone
>>
>>8739235
its kawana_tsubasa from hundred line last defense
>>
>lora gives me shit hands without quality tags or nyalia boost
>crank lora strength up to 1.4
>suddenly perfect hands
wtf it was the base model that was shit all along. So many runs testing dropout and hand close ups wasted...
>>
>>8739237
now test the base model on its own
imo it's more likely some weird interaction between the two
>>
>>8739238
The base model is vpred1.0 which i already knew had shit hands. Just didn't think that they were shit enough to overpower a high rank lora
>>
>>8739237
base noob vpred is a really painful experience for everything, I had to use at least 2 loras at low weight to get something good and workable from it
>>
>>8739230
She’s melting faster than that popsicle!
>>
File: 1000021257.png (1.35 MB, 832x1216)
Finally been able to make something based on Tsubasa Kawana from the Hundred Lines! Best girl
>>
>>8739237
Or maybe it's the lora that's shit
most loras are fucking shit
>>
File: 1000021259.png (1.39 MB, 832x1216)
>>8739236
Fucking best girl! Nice choice
>>
File: catbox_zoxuzn.png (3.33 MB, 1920x1152)
>>8739240
I still kinda like it raw
some styles at least, ones that aren't supposed to be shiny nyalia
>>
>trying to get going on twitter
>account always gets shadowbanned
I don't get how other people do it just fine but I always get jewed

>>8739237
>nyalia boost
explain
>>
File: 00050-3679920409.jpg (863 KB, 1664x2432)
>>8739262
Pretty much my experience yeah but still I don't like it over my usual shitmixes
>>
>>8739265
>>nyalia boost
>explain
use nyalia or any other ai-incest trained slop lora at low weight on base vpred for better hands. Doesn't ruin style quite as much as a full blown slop checkpoint
>>
>>8739265
don't use hashtags.
also from what I can tell there's some weird minimum amount of activity twitter needs from new accounts before it will actively show your stuff to other people freely, presumably because it assumes you're a bot until you prove otherwise. Which basically means follow some artists, like some tweets, whatever.
Also if you're just posting porn be aware that that already immediately tanks your visibility and if all you post is porn your account will eventually be flagged to have all your posts be tagged sensitive/NSFW even if you were doing it properly manually from the get-go.
>>
File: 1750202136691.jpg (713 KB, 1920x2432)
Are these actual new people and not the same four oldtroons circlejerking and having the same exact arguments every thread? In /hgg/ no less?
>>
>>8739275
/d/
>>
>>8739275
That's right chat, we have unironic newcuties
>>
>>8739274
>don't use hashtags.
How do you get people to see shit then?
>>
>>8739275
a bit of both
>>
File: WanVideo2_2_I2V_00228.mp4 (1.3 MB, 512x640)
>>
>>8739279
Backlinking from other sites or just listing the character name without a hashtag.
People vanity search character names all the time looking for unlisted shit and you show up in search even if you don't hashtag it.

Trying to crash hashtags with porn is probably the #1 easiest way to get your shit banned.
>>
>>8739275
you really can't tell the difference between a real newbie and some liar? for shame
>>
>>8739285
bro got that queenbee framerate :skull:
>>
File: 00010-4089618496.png (1.6 MB, 1216x832)
>>8739286
I see. I'll have to keep it in mind for my new account. By the way, in your experience, when you say porn, do you mean actual intercourse or nudity or would something like pic related count?
>>
>>8739290
If you have to tag it as NSFW/sensitive it's porn.
Anything with nudity is porn.
>>
>>8739291
Yep. Even a hint of a nipple will have whatever it is reported as porn if you don't tag it as such on pixiv.
>>
>grifter talk
it doesnt get anymore grim!
>>
>>8739294
at least no pedos so far
>>
>>8739275
I'm starting to be less of a trash noob lately so I'm posting a bit more, yeah
My gens still suck though. and I still refuse to post the metadatas (if I even post my shit gens at all)
>>
>>8739295
Yeah I wouldn't go lower than 13, so I'm clean on this front!
>>
>>8739289
i feel like i read that it only works right at 16 or 32 fps so which do you want
>>
>>8739275
That's a weirdly shaped dick. It's like Alien.
>>
>>8739299
so true king
>>
>>8738701
I've been trying to run wd14 swinv2 for 2 hours now, with no success. I want to die.
>>
File: Merged.png (3.98 MB, 1344x1728)
>>
>>8739321
What's the tag for those hands? In this case against wall works but what about if you're on a bed?
>>
>>8739320
the fuck are you doing, just use taggui as any normal sane person and download the model from there
>>
>>8739323
>>8739320
Don't even use that shit, use this.
https://github.com/Jelosus2/DatasetEditor
>>
>>8739324
that's a new one to me, i'll have to try this
>>
>>8739218
Does 102d always refer to the custom version? Is it really better than other ones?
>>
>>8739353
Nta
They have slight variation, so best to test yourself which one works best for you. Final and custom are the most recommended I think.
>>
>>8739349
Use Eva as your tagger. Dataset Editor is far faster than any other tagger I've used.
>>
>>8739353
i like this one https://huggingface.co/minaiosu/arca3514236951/blob/main/naiXLVpred102d_final.safetensors
>>
>>8739359
the tagger is not new to me, it's clearly the best. i'm just using it manually edited into reforge, and boorudatasettagmanager for other stuff, which so far has done the same thing in fewer clicks with better UI/UX than anything else i've tried. it's also been getting constant updates for almost 3 years now i think, those russians make some good enthusiast programs
>>
>>8739364
>boorudatasettagmanager
I switched from that to this one and I prefer it, even the UI. I feel like I can see the pics more easily in it, but otherwise the things it does are the same. Tools like fast autotagging, putting a white background on your pics automatically, etc. are the reason I've stuck with it instead.
>>
After sex selfie with or without male in the same frame?
>>
>>8739289
Queen Bee could lay off 99% of their staff with this technology.
>>
>>8739372
>to cuck or to be cucked, that is the question
>>
>>8739388
So chat, you want to be the bull or the cuck?
>>
>>8739389
Alternating consumption of both perspectives so I don't desensitize
>>
>>8739391
Hmm yeah sure, I can do that with just a simple change on the text
>>
What does NAI use internally to be better than stable?
>>
>activity for once
>all shitposting schizo shit
Well played, chat. Thought something good had happened.
>>
Is there a name for that shiny skin effect that's usually on boobs? I don't like it
>>
File: 1000021287.jpg (67 KB, 555x1140)
67 KB
67 KB JPG
Can anyone educate me on the difference between all this different stuff?
>>8739423
Boobs/breast shine? I know what you mean but idk if there's a name for it
>>
>>8739424
https://stable-diffusion-art.com/samplers/#Evaluating_samplers
>>
>>8739424
this is quite literally the thread that NAI shit does not belong in at all, retard.
>>
File: 00062-2219926581.png (2.79 MB, 1536x1920)
2.79 MB
2.79 MB PNG
A bit of an unusual question, but how have you guys managed complex architecture and indoor scenes? I think my options are reduced to controlnet over an existing image and hoping for the best.
>>
>>8739519
define complex architecture and which kind of indoor scenes
>>
Is there any Japanese site that allows selling AI stuff? Or subscriptions? Fanbox doesn't
>>
grifters are not welcome
>>
>>8739519
why vaguepost
>>
>>8739523
Why would I share my pot of gold with you?
>>
>>8739555
>pretending there's not a ton of people in the business
You ain't got shit, brokie. Merely fishing for reactions. Have a (you), it's the best you'll ever get in exchange for your work.
>>
>>8739519
(detailed background:1.5)
>>
>>8739556
My trips are worth more than your whole house, faggot. Fuck back off to civit you dumb grifter.
>>
>>8739519
The lad has a muscle on his arm I’ve never seen before
>>
File: nanpa033.gif (45 KB, 640x480)
45 KB
45 KB GIF
>>8739522
Actual buildings that are not just jumbled nonsense. Interiors that actually make sense. Depth being an actual thing.

The only way I have found to mitigate this is to use images like this one and then modify them to suit my purposes.
>>
File: 00112-97255428.png (3.63 MB, 2560x1536)
3.63 MB
3.63 MB PNG
>>8739573
Oh I see yeah, I don't have anything like that
>>
File: 1759845834390049.jpg (845 KB, 1664x2432)
845 KB
845 KB JPG
>>8738855
>>
>>8739613
why is the editfag like this
>>
>>8739629
because he's based
>>
>>8739218
How do I start producing these kinds of images? Newfag that would like a bit of spoonfeeding please
>>
>>8739648
Regional Prompting.
Controlnet.
img2img.
Inpainting.
Praying.
>>
Let's say I like one artist's style but I don't like their body shapes, so I use another style to generate a pic and then use that pic in controlnet with the style I like. Would it be consistent enough?
>>
>>8739657
You could try prompt scheduling to get the body shape of artist 1 and the style of artist 2 on top.
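for reference, in A1111/reForge prompt-editing syntax that would be something like [artist one:artist two:0.4], where artist one steers the early steps (composition and body) and artist two takes over for the later detail/style steps. the 0.4 switch point is just a starting guess to tweak per artist pair.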
>>
>>8739657
define "consistent enough."
and why don't you just like, actually try it.
also you'd (probably) be best off using i2i with controlnet tile.

here's a functional example.
https://files.catbox.moe/fd4hze.png
the lowres used as input for the upscale: https://files.catbox.moe/9kz4be.png
>>
>>8738708
I gotta try doing a proper Mangamaster lora with this!
>>
>>8739662
So I could use a high denoise value to make sure the colors and shading aren't exactly the same, yeah?
>>
>>8739652
You don't need all that stuff for a basic bitch group sex pic.
>>
>>8739666
No, of course not. You can leave it all to luck, Satan. But if he is autistic, he is gonna want a very specific position and angle. And that's when he is gonna need all those.
>>
File: 1743618664130.jpg (521 KB, 1920x2432)
521 KB
521 KB JPG
>>
>>8739629
just inpaint
>>
>>8739665
pretty much, yeah. This sort of thing is pretty standard for my general lora testing, since if a lora can't heavily influence style from something off-style at 80% denoise, there's a problem somewhere.
of course, some styles will be completely ineffective if the input is way too off-base (think going from heavy textured painterly to a flat digital style), but that just means you learn to be aware of and selective about the style you gen in before transferring it over.
another "trick" to make the style transfer more effective is to do a transfer, then take the output image and run it through 1:1 i2i again on different seeds (with the same denoise/controlnet values). Do NOT use the same seed as the input image, that fucks the noise up really badly (well, you can try it just to see what happens, just be aware it's not reflective of how it would work on other seeds).
>>
>>8739613
Thanks bro, much more attractive now.
>>
New thread >>8739695
>>
I think we need a thread for every anon here.
>>
>>8739519
>A bit of an unusual question, but how have you guys managed complex architecture and indoor scenes?
I don't; it's either blurry background, halftone background or gradient background for me. Real artists do it too, for a reason, and it's not because they hate architecture: a complex background serves no purpose in a scene depicting characters fucking, and the bare minimum of props to suggest a certain scenario will suffice.
If you really want a complex and coherent background, copy what artists do in the rare cases they bother: photobash it, either with a pic you got from the Internet or with something you genned with a model that hasn't had its ability to gen scenery destroyed by training on millions of 1girl, simple background pics.
>>
>>8739713
Right, it still won't make sense but will get you more background detail at least.

You can also use canny controlnet and sketch out the walls of the room; it's like eight lines. Maybe add a few blocks for furniture. Then mask out the central area where the characters are supposed to end up.
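a crude PIL sketch of what that control image could look like (coordinates are made up; the point is just a few straight lines for the walls, a block for furniture, and a blanked-out center for the characters):

from PIL import Image, ImageDraw

W, H = 1216, 832

# white "edges" on black, roughly what a canny control image looks like
ctrl = Image.new("L", (W, H), 0)
d = ImageDraw.Draw(ctrl)
d.line([(0, 220), (W, 220)], fill=255, width=4)             # wall/ceiling line
d.line([(320, 220), (240, H)], fill=255, width=4)           # left wall receding
d.line([(W - 320, 220), (W - 240, H)], fill=255, width=4)   # right wall receding
d.rectangle([80, 400, 300, 720], outline=255, width=4)      # a block of furniture
d.rectangle([W // 4, H // 3, 3 * W // 4, H], fill=0)        # blank the center so the characters aren't constrained
ctrl.save("room_canny.png")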
>>
File: 00006-61576927.png (2.4 MB, 1112x1856)
2.4 MB
2.4 MB PNG
I can't believe that worked (almost).
>>
File: 00009-392437642.png (2.3 MB, 1112x1856)
2.3 MB
2.3 MB PNG
>>8739832
Very limited, but a few phrases and words are available.
>>
>>8739836
it can sort of write anything, it's just super unreliable. I remember prompting "Justice" when inpainting Hasumi's shirt, and it actually helped.
>>
File: 00024-1745816622.png (2.18 MB, 1112x1856)
2.18 MB
2.18 MB PNG
>>8739837
Well, I'll be buggered by a baboon. You are right.
>>
>>8739652
not to say that what you are suggesting is a bad idea, but that image is kind of a perspective fail...
>>
>>8739672
Should I leave the controlnet weight at 1?
>>
>>8739911
Wouldn't it be faster to try out three values than to ask? nta, I use 0.85 weight and an earlier end step of 0.9, otherwise it fucks up the colors, but it depends on which controlnet and which checkpoint
>>
If I have a dataset with a bunch of images with transparent backgrounds, should I use the transparent background tag or black background?
>>
>>8739946
transparent can cause issues; you should fill them with something. i don't think the exact color is a huge deal, personally i would use a bunch of random ones so it doesn't overfit to one background
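something like this works if you want to batch it (paths are placeholders, assumes RGBA pngs):

import random
from pathlib import Path
from PIL import Image

src_dir, out_dir = Path("dataset_raw"), Path("dataset_flat")
out_dir.mkdir(exist_ok=True)

for p in src_dir.glob("*.png"):
    img = Image.open(p).convert("RGBA")
    # random solid color per image so the lora doesn't learn one fixed background
    bg = Image.new("RGBA", img.size, tuple(random.randint(0, 255) for _ in range(3)) + (255,))
    bg.alpha_composite(img)
    bg.convert("RGB").save(out_dir / p.name)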
>>
>>8739889
It is just an example I made a long time ago to help anons who came with that kind of question...
>>
>>8739911
just try stuff.
I keep controlnet at as low a weight and as early an end step as it can withstand since that gives it more chance to transfer shit over. the example given was .35 weight and .45 end step.
>>
File: 00618.png (1.91 MB, 1664x2432)
1.91 MB
1.91 MB PNG
>>
>>8739913
nvm disregard, I thought he was talking about upscaling with tile CN

>>8740061
Honestly the worst idea for pose transfer. You want a controlnet that doesn't see colors, otherwise you'll get a ton of style leakage. Canny, depth or ideally anytest.
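getting the color-blind control image out of the donor gen is basically one line with opencv, e.g. canny (thresholds are just a starting point to tweak):

import cv2

img = cv2.imread("donor_gen.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)   # lower/upper hysteresis thresholds
cv2.imwrite("donor_canny.png", edges)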
>>
>>8740084
canny sounds awful for pose transfer
>>
>>8740084
>Honestly the worst idea for pose transfer
good thing no one mentioned anything about that; everything in the comment chain was about style transfer.
>>
>>8740110
The original ask was to gen anatomy with one artist, then transfer that body to a new gen. >>8739657
>>
>>8740116
anatomy != pose
also, note:
> so I use another style to generate a pic and use that pic in controlnet with the style I like
that's style transfer.
>>
>>8740083
Nice, though a true head out of frame would have been hotter imo.
>>
File: 00618_crop.jpg (639 KB, 1662x1985)
639 KB
639 KB JPG
>>8740128
Idk about this one chat
>>
>>8740136
hot. Now the tits are in the center of the screen and your mind can better imagine her lewd face. Anon's nitocris gen exploits the same principle.


