Sibilant Sacrifice Edition
Previous Bake >>8730481
>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI
>RESOURCES
FAQ: https://rentry.org/hggFAQ
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag | https://tagexplorer.github.io/#/
ControlNet: https://rentry.org/dummycontrolnet (OLD) | https://civitai.com/models/136070
Inpaint: https://files.catbox.moe/fbzsxb.jpg
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Legacy: https://rentry.org/hgglegacy
>TRAINING
Guide (WIP): https://rentry.org/yahbgtr
Anon's scripts: https://mega.nz/folder/VxYFhAYb#FQZn8iz_SxWV3x1BBaJGbw
Trainers: https://github.com/67372a/LoRA_Easy_Training_Scripts | https://github.com/kohya-ss/sd-scripts
>MODELS
https://civitai.com/models/833294/noobai-xl-nai-xl
OP Template/Logo: https://rentry.org/hggop/edit | https://files.catbox.moe/om5a99.png
why is he brown
>>8736897Because I inpainted.
>>8736903then inpaint it again.
>>8736904She likes her meat well done doe??
blackanon is now baking threads? What a bold newcomer, chat.
I've been here since #001
Same
>>8735775
fun fact, you can use a normal "small" dataset of 16-30 images, increase the step count to 2500, have it save every 176 steps and keep the last 500 steps, then test the final save, step 2288 and step 2112 for comparison, and it mostly works? or at least it worked for the one dataset I tried it on, and it worked a hell of a lot better than any 1dim lora has any right to.
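(For reference, a quick sketch of the checkpoint schedule that recipe implies. The save-every / keep-last behaviour is presumably kohya's save_every_n_steps / save_last_n_steps, flag names from memory; the numbers below are just the ones quoted above.)

```python
# Checkpoints produced by "save every 176 steps, cap at 2500, keep the last 500":
max_steps, save_every, keep_last = 2500, 176, 500

saves = sorted(set(list(range(save_every, max_steps + 1, save_every)) + [max_steps]))
kept = [s for s in saves if s > max_steps - keep_last]
print(kept)  # [2112, 2288, 2464, 2500] -> the final save plus the two steps compared above
```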
>>8736892Mods are gonna nuke this to shit. Nice gen btw
my dick hurts
>>8736959not entirely sure what the context is here but you're actually retarded if you don't save a bunch of versions then s/y them to see which is best
>>8737036Because you're a contrarian who only likes things nobody else likes.
>>8736892>Sibilantlearned a new word today
>>8737066Forgot three other words to make room for it.
>>8736963
Gorgon is not allowed? /h/ is such shit man.
>>8737066
Was working on a textgen story with her and encountered it. Good stuff.
>>8737104
technically every monster girl is /d/, even cats
but the line in practice depends entirely on the people itt and whether they decide to report you, I've even posted spider girls before with no issue
>>8737104Technically all monstergirls are /d/ but sometimes it slides. In my opinion, monstergirls should only be /d/ if they are grotesque looking (obviously not Gorgon), but I don't make the rules
>>8737106
>>8737108
>the rules don't matter
Anon wins again. We need a better board than /d/ for vanilla adjacent. Leave /h/ to die for the boring vanilla purists.
>>8737108
>monstergirls should only be /d/ if they are grotesque looking
At what degree does a monster girl become grotesque?
Are long ears, long tongues, hooves and colored skin ok?
>2 /hdg/
>2 /hgg/
we are healing
>>8737117
>At what degree does a monster girl become grotesque?
My sense of taste as Lord of 4Chan, Ruler of Chinese Cartoon Porn
Jokes aside, if it's normal enough that it could be an entry in the Monster Girl Encyclopedia, it's fine. If not, then no.
>>8737117/e/
>>8736892can i get box for this?
>>8737119
>>Jokes aside, if it's normal enough that it could be an entry in the Monster Girl Encyclopedia, it's fine. If not, then no.
Sorry, but there are definitely some girls in there that are irrefutably /d/ territory
>>8737123Atlach's are /d/ but I will accept them in the name of an objective standard. My heart is so big
>>8737125Sandworm girl and anything with equine bottom half as well
>>8737127Oh true, I actually forgot how out there some are
>>8737122
Only if you do something with it chat.
https://files.catbox.moe/918kes.png
>>8737131thanks
How long until this safe space gets nuked?
>>8737136stop spamming, retard
>>8737131Thanks
>>8737140nice
>>8737141very painterly
>>8737140Eww, I forgot to fix the eyes - here's a much better one
>>8737143sadly too based for this world. see you in three days, icarus
>>8737143long shota
>>8737143Yeah snakes too. Now it's perfect.
>>8737110I was hoping the non-futa /d/ general could become this, but the amputees and vore turned people away.
>>8737117
I think you're good >>>/d/11368387
Hooves are borderline furry, same with fur on limbs like >>>/e/3006890, but again just don't trigger the wrong people and you'll be fine.
>>8737110Told you chat, we need a vanilla /d/ thread, it should be a little more hardcore and permissive than this thread but without going to any extreme
>>8737193we should establish a jewish state in the middle east
>>8737190
>just don't trigger the wrong people and you'll be fine.
Ok, anything in particular I should avoid doing?
I just started posting here last week, so I have no idea what sort of thread schizos this place has.
>>8737117Box please?
>>8737243
Here you go.
https://files.catbox.moe/f8wuwv.png
What are your soft inpainting settings, bros? Do you even use it at all?
>>8737259
Yes, I use it and the default values are fine
>>8737254Thanks.
>>8737265Is it a "turn on and forget about" type of deal or is there a specific usecase?
>>8737336you turn it on and never look at it again, if you ever feel it is not doing anything you need to increase your mask blur value
>>8737338Ok thank you.
>>8737362That doesn't look photorealistic at all.
>>8737365it's not supposed to, it just adds a bit of shading from 60% onwards
>>8737362Chadmix vpred when
>>8737367you cant use her she belongs to lion optimizer anon
>>8737369I did some hanabi gens back then, she's mine as well
>>8737367
https://civitai.com/models/1068866/chadmix-finetune-extract-for-noobai-vpred-10
never bothered with releasing the checkpoint i only use it as a lora
i could refine and rebake but aint got time for that anymore
>he's here
>>8737371
Very interesting. If this is an extract, why did you also include some trigger tags (even if the extract seems to work just fine without any of them)? Just to use the meme?
>>8737377yes, it's always been a meme first and minor opportunistic aesthetic tune second
Inpaint bros... I was wrong about you. It's really cool when your pic comes together after a bunch of inpainting. I fucking kneel.
>>8737462Yeah inpainting is actually pretty fun when you have a vision in mind (slop pic definitely not related)
>>8737462There's fixing issues by hand, and then there's adding concepts you couldn't reasonably prompt for. The latter is way cooler imo.
>>8737462Welcome to the good side
>>8737118
I want to ask, what's the difference between /hgg/ and /hdg/? Maybe im a normie dumbass, but aren't generation and diffusion technically the same thing?
>>8737493We generate the image rather than letting the slopmachine diffuse all the latents
>>8737493originally one whiny fag made /hgg/ as a split thread to avoid spammers and shitpostersit's just momentum at this point, the topic is the same in both but people refuse to merge back in
>they fell for it AGAINkeeeeek
/hgg/ has proven to be an effective repellent against a number of schizos
>>8737469
>>8737474
>>8737476
Yeah it can be pretty fun but very time consuming. I imagine I could have done this all from txt2img alone but I wanted to do something simple as proof of concept and write my findings down in my notes. It's pretty cool to go from A to B too.
https://files.catbox.moe/jpjcfr.png
https://files.catbox.moe/yjypfs.png
Still pretty rough around the edges but I'm proud of it.
LORAMAKIES MAKE THE LORA
https://danbooru.donmai.us/posts?tags=tsuntsuke
>>8737543
>traditional media via digital camera photo
>they didn't even lightbox them for the photo so the lighting is inconsistent
I can tell you without even doing anything that it's going to be absolute shit.
but I'm already making absolutely godawful stupid decisions testing shit so I guess a few more won't hurt
>>8737543
>>8737554
here's a shitty lowstep lora made with a two image dataset
https://files.catbox.moe/896p4n.safetensors
I'll do a "full" one with a bit larger of a dataset later after doing other shit. It's trained on v1 vpred so if the style transfer isn't working as well as demonstrated by image related consider not using a garbage shitmix.
use jz235 as an activation token, I'd also recommend using secondary aesthetic tags like faux traditional media and painting \(medium\)
if anyone wants a working example of the kind of dataset low step bullshit takes, here's what was used https://files.catbox.moe/t6r77f.zip
>>8737496
>>8737499
I see. Thanks for the info. Can't we just be at peace and post anime titties and waifus getting fucked?
>>8737595
unfortunately 2025 is not a year of peace
let's all work hard to make the world a better place
>>8737595I do, I always do, chat is the problem
>>8737602i appreciate your posts
>artist draws good for exactly 6 pics
>reverts back to his generic slop for the next 200
Is it worth training on the six?
>>8737609six is too few
>>8737610Man those pics are fucking good I'm gonna do it.
>>8737609Make a shit lora with those 6 images, use it to img2img a bigger dataset, retrain
>>8737614this was lies madness
way*
>>8737614Unironically how people who say "We can train LLMs on synthetic data." sound
>>8737652We can. We have the technology.
>>8737654you can but you shouldn't, it looks like shit
>>8737659There has to be a way. If I could get to like 20 that would be it.
>>8737661there is.commission it.
>>8737664Truth nuke
>>8737652Better than nothing
>>8736894>niggerdickNo thanks
>>8737678Why are you gay?
>>8737678
>Sexy monster woman
>Fixates on penis
>Hyperfixates on the race of the man
This is incredibly homosexual, and I'm tired of pretending that it's not.
keeeeeeek
>>8737543
https://www.mediafire.com/folder/808yxs8afpilo/tsuntsuke
>b-but it's a shitty 1dim lora
you will take what you will get and you will be happy. also let's be real this works 10 times better than it has any right to and is significantly better than trash you'd find elsewhere. also it's still trained on vpred 1.0. Also same prompting suggestions as before applies
>>8737568
image related is 1:1 seeds/prompt to show the difference between low step bullshit and a more "full" lora bake.
random other examples
https://files.catbox.moe/ok0drr.png
https://files.catbox.moe/flszld.png
https://files.catbox.moe/uggxnl.png
https://files.catbox.moe/nmpvt4.png
i2i upscales from dramatically off-style input:
https://files.catbox.moe/gdqd2y.png
https://files.catbox.moe/vr1pg5.png
https://files.catbox.moe/vp5kt0.png
https://files.catbox.moe/cgzd6c.png
zip of the dataset+config for anyone actually interested
https://files.catbox.moe/pzg0tk.zip
>>8737686shockingly good actually
Hmm yet another case of vpred not getting the style as well as illu. Where is that other anon now?
>>8737686
Can you bake this if you get the chance? It's only 8 pics.
https://files.catbox.moe/1pjrb6.rar
>>8737690My cashapp is $wrdamon
>>8737687
I agree. Even the low step-count, hyper tiny dataset worked fairly well, all things considered.
pretty much why I bothered going into presentation mode to use it as a teaching example and provided full dataset+config.
>>8737690
I can already tell you it's going to be real rough. half that dataset is really... not ideal due to secondary visual elements (depth of field/blur shit). Also sketchy digital styles usually get shitfucked for a lot of weird reasons.
I'll try and do a 3 image dataset low step count run and see where it can go from there.
>>8737683intentionally changing the skin of the male from default is extremely gay
>>8737741if it was default, you wouldn't need to put dark-skinned male in negs
>>8737744i don't, not my fault you're using refugee shitmixes
Not sure if anyone here has tried anything like this, but does anyone know a good model/lora that can make good images of realistic looking real dolls (pic related)? I'm not looking for hyper realistic humans, I prefer the look the dolls give since they look more anime, while all the realistic checkpoints are just the "create a real human" type.
Pic related is the look I want to get
>>8737756That sounds very specific. I'm afraid you are gonna need to make a lora for this task of yours alone.
kohya or ez scripts?
>>8737756You might have better luck asking in the right place >>>/aco/8986916
>>8737775
Yeah I wasn't sure which board to ask on.
>>8737773
Unfortunately I might
>>8737686
>dim 1
>conv dim 12
>block dims
intredasting
i don't like ZTSNR and imo l2 loss is better than huber both from a theoretical and empirical standpoint in my tests
>>8737664>i'm going to need you to not draw like your usual garbage
take this you fucking bitch
>>8737839Is that khyle?
>>8737877doesn't look that way he doesn't even do lips, nor thick outlines
>>8737690
well, after dicking around with it a little bit, about all I can really say about it is that it uh, sort of works. Sort of.
https://www.mediafire.com/folder/k58cm0mq3nspr/aoi_tiduru
same deal with the others with jz235 as an activation token.
it's got a lot of issues and it's mostly due to the dataset.
Issue #1 is that it isn't even very stylistically consistent. Even in the nimi image set it swaps a bit between a more sketchy style and a solid linework style. Also that image set has a different shading profile than the two images with a white background (which are super important for reasons).
Issue #2 is that that art style doesn't really like to be downscaled and you lose a bit of the linework in trying to do so, so it's better to work with crops and do minimal downscaling if possible.
Issue #3 is the obvious one of the dataset being really small.
Issue #4 is honestly that it's fairly generic-ish digital art and that makes it really hard to get a style to express since it wants to just get lost in the model.
anyway, as it is it has issues of wanting to do a lot of closeup shots due to the uh, method I chose to append/fluff out the dataset (which you can find here https://files.catbox.moe/ufvs5f.zip ). It also has an issue of hardstuck backgrounds, also for the same reason. Though you can prompt white background and it'll at least listen and give you an empty space, even if it's sometimes off-white instead of white.
You can also try and run t2i at .6~.8 weight to try and get it over the close-up syndrome and then use full weight on upscaling. Or just do t2i in a similar but different style and i2i upscale with it enabled, idk.
>>8737818
it's using l2 loss. the huber values are there as a holdover from some random testing (the config this is based on is like a year and 8 months old) but shouldn't actually be doing anything since it isn't set to huber or smooth l1.
Any anons experimenting with video generation (using Wan 2.2)?
>>8737945This one came out better, but still shit
>>8737948
>shit
bro have you seen what real artists upload to patreon as special video rewards?
>>8737949I'd like to think that we hold ourselves to a higher standard here... well at least most of us
>>8737938
So, do these very low dataset bakes work with characters as well? have you tried it?
>>8737938oh good
>>8737945i tried but i need more than 16gb ram (not vram) so it's gonna have to wait until ANOTHER pc upgrade
>>8737959I have 12GB, so I just use quantized GGUF models. Have only been genning for like an hour, so hopefully it gets better as I figure out the workflow.
>>8737961can you share the link for those? i was going off the sticky on /g/ but i don't think i saw those
>>8737963https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main
>>8737965which ones should i get and do i need to change my workflow any or just replace the models in the one i already have
>>8737953
in theory it should but it'll probably fall apart if you try and style too much on top of it. And while I guess that's true of all character training, it's probably especially more-so in this case since I've noticed a very strong relationship between the "permanence"/influence a lora has on the output and its training step count.
easiest way to observe it is to i2i something off-style and compare a 512 step version vs a "full" 2xxx~ step count version. The 512 will graft to a degree but not anywhere as strong as the one that was trained longer.
>have you tried it?
Very indirectly.
https://files.catbox.moe/pmpyqp.png
was trained off of just this image of seto shouko https://files.catbox.moe/gll3os.jpg
so you can tag out all the specific character traits of a girl and have them show up in differing compositions at least.
iirc I did try a run tagging out the specific character and couldn't get that to stick, so all the character traits are pretty necessary.
>>8737969
I'm actually not familiar with the /g/ workflows, but my advice should hold unless the /g/ ones are weird.
Basically you can just replace your normal model loader with a GGUF model loader - that's it. (You can download it with Comfy Manager)
For quantized models, the higher the number, the higher the load/size. A Q2 is easy to run, and a Q8 is really hard. However, the Q8 will be higher quality than the Q2. A Q4_K_M is harder to run than a Q4_K_S (M for medium, S for small). Just pick what you can either 1. Load on your VRAM or 2. Doesn't load all the way but has a quality you like. This depends on your VRAM and preferences.
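(A rough size estimate to go with this: bytes roughly equal parameters times bits-per-weight divided by 8. The bits-per-weight figures below are approximations, K-quants mix block types and real GGUF files carry extra metadata, so treat the output as ballpark only.)

```python
# Ballpark file sizes for a 14B model at different GGUF quants.
PARAMS = 14e9
approx_bits = {"Q2_K": 2.6, "Q4_K_S": 4.3, "Q4_K_M": 4.8, "Q8_0": 8.5, "FP16": 16.0}

for name, bits in approx_bits.items():
    print(f"{name:7s} ~{PARAMS * bits / 8 / 2**30:5.1f} GiB")
```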
>>8737972LoRA-sensei, please make a LoRA making guide.
>>8737974ah okay thank you (my problem is RAM not VRAM but same difference)
>>8737979Do you mean that you are willing to offload to CPU and wait like an hour but don't have the RAM to do it? This won't help with that.
>>8737972
>>8737977
i'm gonna agree with this guy, so far the V2 iteration of the vpred preset works MUCH better than the old one, but what settings do you recommend for an image pool of more than 30? and other than 0.5 vpred, any other ones you tested/liked?
>>8737981No, the 2 models need to offload to RAM after you load them so then you can actually use them to do shit with your VRAM. Each are 12gb and i have 16gb total.
>>8737992Same. I'm looking to upgrade from my current 32gb, but 64 and above ddr4 kits are still unreasonably expensive
>>8737977
>>8737982
I refuse.
and ignoring that the dataset is probably one of the most important aspects, the majority of my "knowledge" has come from doing countless runs to tune settings, which you ultimately get the results of from my configs.
I also make a lot of concessions with the understanding that I know how to get around them in the actual image generation/post process and value certain things above others. Or to put it another way, my autism isn't likely to mesh with the autism of others.
>but what setting do you recommend on image pool more than 30?
Generally larger datasets would want more steps but it's weird, it's variable, sometimes that statement isn't even true, etc. Ultimately 99% of the shit I do is all down to style training and honestly, most of the time you're going to struggle to find much more than 30 actually stylistically consistent images from even prolific artists with multiple hundreds of artworks. And in the events that you do, you can just heavily curate it and choose stuff the model is going to have the easiest time making sense of.
>and other than 0.5 vpred, any other ones you tested/liked?
eps 0.5 is the most "relevant" eps model. Anything after that fries and will tend to have white bleeding around edges, etc.
Illustrious 0.1 and, weirdly, noob eps 0.1 were both fine for the most part but are only relevant if you're actually using those models.
I largely ignored vpred since it was obvious it was largely a failbake from the get-go but if you're using it v1 is the way to go. iirc .6 might have been okay? but I didn't do much of anything with it.
>>8737953
>>8737972
slight further testing
https://files.catbox.moe/6wu0e9.png
styling requires a fair amount of cheating and ends up losing some minor details (like the direction the lines of the hair being pulled back and some random bangs getting added), but those are kind of fixable in post. The cheating comes in with the base t2i being generated with only the low stepcount lora (acting as a "character" lora) and the manual i2i upscaling got a second 1:1 res pass to force the style to a significantly higher capacity. original upscale output for reference https://files.catbox.moe/1o7ijf.png
I'll probably try it again and change some things with a 3 image dataset later.
>>8737992Oh yeah you're right. I've never really thought about the offloading part because I have a lot of RAM and VRAM has always been the limiting factor.
>>8738004yeah I did notice too that eps has better anatomy compared to vpred, but now I also notice eps is a lot less saturated than vpred, either way I agree the old eps has better potential but I have to manually edit things to have more color and see if it's on the bridge between the two.
>>8738008
oh the hair line issue is actually a tagging skill issue and can be fixed by appending with asymmetrical bangs <https://files.catbox.moe/2npgs1.png>
>I'll probably try it again and change some things with a 3 image dataset later.
later is now.
https://files.catbox.moe/2q5qv1.png
same seed/input image
v2
https://files.catbox.moe/8ggf43.png
v1
https://files.catbox.moe/ahekh8.png
v2 seems to be more adaptable to style (probably because it has 3 images of different styles on top of a different trigger tag) and it also escapes from the copy/pasted head syndrome that the v1 had.
if anyone wants to play around with it https://www.mediafire.com/folder/zma0zoy1bvk0a/seto_shouko
works a fair bit better than expected but this is also a really basic bitch tier character so I'm not really surprised.
>>8738097Finally /hgg/ is back to form, thank you lora baking anon. Making me want to start baking again as well.
when should i reuse edm loss weights? can i just change the config all willy-nilly as long as the dataset stays constant?
>>8738097
based dataset fixer
for me i gave up on finding dataset images for my concept and am now balls deep into editing them in PS, my test version of it worked rather well already so i'm adding more for reliability
>it aint gonna eat itself
>>8738097
what lora baker do you use? i've been using kohya gui with the preset you made, this one has it in a weird calculation of having 171 epochs and 512 steps.
>>8738121good morning saar do not redeem the 3 image dataset saar
>>8738097
>this can actually even do the school uniform, too by tagging collared shirt, blue skirt, pleated skirt, red ribbon
https://files.catbox.moe/4ejw5b.png
neat.
>>8738121
it's just bmaltais (that was last pulled back in.... may).
and it's just a 3 image dataset with 1 repeat being told to stop specifically at 512 steps, so that maths out. either it's including a partial epoch or it starts at 0 and is actually 171 epochs flat (171x3 is 513).
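(The epoch number the GUI shows follows directly from the step cap. A minimal sketch, assuming batch size 1, which the post doesn't state:)

```python
# steps per epoch = images * repeats / batch_size; epochs = ceil(max_steps / that)
import math

images, repeats, batch_size, max_steps = 3, 1, 1, 512
steps_per_epoch = images * repeats // batch_size   # 3
epochs = math.ceil(max_steps / steps_per_epoch)    # 171, since 171 * 3 = 513 >= 512
print(steps_per_epoch, epochs)
```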
>>8738141why did you choose that number of steps
>>8738141
thanks, never tried very low image counts and the meta checker is very weird sometimes, despite the same settings the format comes out different.
>>8738148
because that's the number it took to get things to actually consistently stick in a meaningful way.
I started at 32. then 64. then 128. then 256. then settled on 512 after finding that 768 was too much.
you can lower the step count to any of the lower step counts and be "fine" but they won't transfer as completely and will get "lost" very easily when other loras or tags from outside of its dataset are used.
you can read the comment chain from here >>8735775
tl;dr is that this is all extremely stupid shit started specifically to dunk on some dumb cumrag nerd but then taken to its absolute maximum conclusion after doing a ton of iterations on it (no point in doing stupid shit if you don't go all the way).
>>8738141Teach us how to bake anon
>>8738158that's the best kind of evidence, i am very curious to try this since this seems like it could be very applicable to simple concepts
>>8738160
>i am very curious to try this
that's the fun part. It's super easy and fast to do 256/512 step runs. I can run off a 512 step in 15 minutes on a 12gb 3060 and don't even need to kill reforge to do it since it's only 4.7gb VRAM. There isn't even much of a time sink for the dataset since it's so small.
and I took the effort to tune it down to 1dim for 1. maximum stupidity and 2. to make sharing as easy as possible. and no one can complain about it being shit because it's less than 6MB.
it is by very definition engineered to be as low commitment/investment as possible.
>>8738159
just b urself
and waste a lot of time doing dumb shit because it's the only actual way of learning and getting a feel for what might or might not work, which is important because nothing is ever really going to be exactly the same because ML fucking sucks.
Man, I wish there would be a decent butterchalk lora with the grainy style *wink*
LoRA-King, please make a Mari Setagaya LoRA. I beg. Training Data: https://buzzheavier.com/ajgwdue4omlq
>>8738168
decided to check civit to see if they were up to anything with butterchalk
and jesus christ how do they fuck up that badly.
but yeah sure would be neat if such a thing existed.
https://files.catbox.moe/vliocy.png
>>8738165yeah i have a 4090 lol
>>8737959
I have 8gb of vram and 16gb of ram and managed to get WAN2.2 working. Probably not the most optimal setup but ehh.
https://files.catbox.moe/8cj2im.mp4
If you have more than 8gb of vram you can use a better GGUF or get longer and higher res gens.
>>8738217man hands
>>8738231cool thanks
What possessed me to try to make a kissing hands scene. Holy shit is this thing hard to inpaint correctly.
chat, is handsanon going to be alright?
>>8738342No, I fucking won't. Pray for me.
>>8738347okay
>>8738231this is pretty fun even with the shitty models that kinda look like ass
>>8738482How do you know which one is which?
>>8738489You don't, anon. Those sloots are sharing you without your consent.
>>8738514I will always support Y'shtola getting railed, but I don't think that pressed against glass tag did any favors for your render.
>>8738489The one on birth control is your girlfriend. Good luck
>>8738527That's diabolical.
>>8738539Are you tagging 2koma or using a LoRA?
not really what i wanted but rorumao
>>8738231>>8737959>>8737945This shit has me stalking eBay for used 5090s
speaking of which what is the prompt to get the girl moving up and down or side to side on a dick or whatever in wan? i can't get them to fuckin move
>vidgeno i am laffin
>>8738552it's for fun, it doesn't look very good
>>8738552>>8738554If you have the hardware, it can look pretty okay
Can NAI do vidgen?
>>8738556Would a 4070 TI Super and 32GB of RAM be sufficient for something that quality?
>>8738542Lora. Sometimes it almost gets there but never quite. I edit/add text in gimp
>>>8669319
>>8738578
Why did you link this
>>8738572
Yes
>>8738584Hm maybe I'll have to give it a shot then, I've seen some actual okay looking videos.
>>8737938
Alright first of all thanks. I had to make sure to properly assess the thing by rebaking on my old dataset with your activation tag just to see if that was the issue. I'm gonna rebake without an activation tag and triple check but all in all it doesn't look as bad as I thought. Might rebake again with your dataset too.
https://files.catbox.moe/rlfmhk.png
It also made me reconsider base vpred on those really stubborn loras that I thought hadn't learned much. I think we can get away with much lower pics than I originally thought. I did 100 epochs for 880 steps just to be sure it had learned all it could.
>>8738556And this is img2vid right? You still need loras and shit for everything too? It's like SD1.5 all over again. Maybe I'll just wait a couple years for a good finetune.
>>8738217Right, it's a damn shame, maybe the good one(s) got nuked because of "loli"... It's unfair for my beloved flat girls
>>8738608This one only uses one LoRA - it's not like the SD1.5 days where every retard had his own snake oil embedding
Yeah not bad at all on my old config with the improved dataset and no activation tag even though it's only 10 pics. Really good news.
https://files.catbox.moe/p4esav.png
>always struggle to get nice looking fishnets
>download fishnet bodystocking lora to see if it helps
>fishnets still look like shit
fucking hell m8s
>>8738627
I'm sorry anon, but I can only suggest for you to learn an image editor of your choice and add the fishnets using layers. Artists use pre-made repeatable textures.
t. retard who has been inpainting fingernails and frills all day
>>8738627sub to nais :D
>>8738627What model are you using, because at worst I get some bunged up spots that get easily fixed with inpainting. Also, adding see-through_legwear can help.
https://files.catbox.moe/6ibfil.png
>>8738599
>I think we can get away with much lower pics than I originally thought
People really make a big deal about datasets and thinking that they don't have enough for them without actually bothering to try and do anything about it.
>>8738609
it's civit we're talking about. I have doubts there were any "good ones" to begin with.
either way, unfortunately, the easiest way to make me not want to share something trained on an artist is for said artist to follow me on twitter. And they follow me on twitter.
however, if a dataset were to just fall off the back of a truck and someone were to run it through any of my configs found here https://www.mediafire.com/folder/dfjm9587kfleg/configs or even just one of their own?
well, that just couldn't be helped.
a truck just flew over my house
https://files.catbox.moe/jj1cy4.zip
>humble brag
kys
I just hope butterchalk is a girl with thin long legs just like her art ^_^
>>8738669its hairy ugly bastard
Can someone please try animating this with Wan 2.2 14b? I need to know if a new pc is worth it for that.
>>8738688
no it's not. if you want a new pc for vidya go for it, but please don't waste a dime on wan
>>8738689I want to believe that for my wallet's sake, but goddamn do some of those look good
>>8738687Nooooooo......
>>8738653Just wondering, how many images are required to properly train a lora for a style, and do you manually tag them? Also what's your Twitter? :)
>>8738695
>how many images are required to properly train a lora for a style
depends. the lora linked out in >>8737686 was trained on 14. I normally recommend 16-32 with a heavy importance on keeping it all stylistically consistent.
>do you manually tag them?
I usually just run wd14 swinv2 with .65 threshold and min tag fraction at 0 with a set list of excluded tags then go through and manually add some where applicable and change some things if I feel it's needed.
>Also what's your Twitter?
@firedotalt
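(For anyone reading along, the tagging step described there boils down to a score threshold plus an exclusion list, then hand edits. A minimal sketch; the scores dict and the excluded tags are made-up examples, not real tagger output:)

```python
# Keep tags scored above the threshold, drop the exclusion list, hand-edit after.
def filter_tags(scores: dict[str, float], threshold: float = 0.65,
                excluded: frozenset = frozenset({"watermark", "virtual youtuber"})):
    kept = [t for t, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s >= threshold and t not in excluded]
    return ", ".join(kept)

example = {"1girl": 0.99, "solo": 0.97, "watermark": 0.72, "smile": 0.64}
print(filter_tags(example))  # "1girl, solo": smile is under 0.65, watermark is excluded
```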
>>8738627use PS
>>8738701
Thanks, also what software and checkpoint do you use to train the lora, is it the "config" thing you sent earlier? And how can I tell what settings values to choose?
>>8738705
bmaltais/kohya_ss
>checkpoint
use whatever is most prominent in whatever you're actually using. which means if you're using vpred it's probably going to be noob vpred 1.0, if it's EPS it could be fucking anything from illustrious 0.1/1/1.1/2, could be noob eps 0.5 or 1.
I'd personally recommend sticking with eps 0.5 or vpred 1.0 but I also stick entirely to using base models only.
>And how can I tell what settings values to choose?
google it/find some random shitty guide on civit or something to get a vague idea of what's what. I can't really handhold you through all of that shit
damn you retards have had 25 threads but youre asking epschizo how to bake loras when he only gens 1girl white background
grim....
>>8738556prompt?
>>8738711
Very high quality 4k drawn animation, nsfwsks, girl doggystyle sex from behind, he does all the work with his penis head that slides into her pussy effortlessly, she pushes her body against his cock to increase the sexual gratification, very seductive lewd erotic behavior, tightly as he is piston fucking causing her hips into a rocking motion while her breasts bounce from each thrust she turns her head away from the camera, floating hearts start appearing near the end, rat tail with red tip
(Yes Wan prompts are this gay)
nsfwsks is a lora trigger
>>8738710>only gens 1girl white backgroundYeah retard, it's so he can easily demonstrate the style.
>>8738713what the fuck
>>8738710feel free to one-up, then
>>8738715
no, actually, that's all he gens
sometimes he adds a splotch of cum tho!
>>8738713also, i was looking at some workflows and i don't think i have the lora that that triggers? where is that
>>8738719https://civitai.com/models/1307155?modelVersionId=2073605
>>8738721
thanks
i tried some 1:1 comparisons to the lora in the /g/ sticky and this one adds some crazy distortion sometimes so i think i'll take the model author seriously when he says it isn't ready yet, lol
How do I get prominent eyelids like this? Is this just some artist lora this dude is using?
its wai
>>8738769My WAi gens dont even look close to this. I just slopped this out, maybe I'm just a shitty prompter
Newfag here, is novelAI enough to generate some high quality images or do I have to go down the rabbit hole? I just want to emulate a specific artist's style and keep it consistent.
>>8738778WAI is just a generic insult in /h/
>>8738784I heard there's a free trial now? Should serve to answer your question.
>>8738690Issue is still that it takes 4-5 minutes for a gen, but it's still as gacha as image gen so you're looking at 40+ minutes to get a single decent gen.
>>8738789It's generating frame by frame though? Why is there still no workflow where you can stop it at any time, roll back 2 frames and continue with different seed?
Newfag here, why do all my nai gens have this slop look?
>>8738786what would your noobai gens look like if you had only 20 attempts with the model
>>8738821I had 3K anlas from a guy giving out hacked accounts. My experiments were pretty awful at first, but I managed to steal someone else's metadata and restart from there. If he just wants to see what an artist looks like on NAI, a similar process should work imo.
>>8738822
>If he just wants to see what an artist looks like on NAI, a similar process should work imo.
4.5 by default leans very heavily into flat jaggy sketch aesthetic so a lot of artists might look pretty fucked unless you mess around with negs/neg weights. honestly though plenty of 2.5d+ artists will look fucked no matter what with it.
>>8738784Tell which artist and I'll gen something.
>>8738640
inpainting fingernails and frills is dead easy
>>8738651
im using r3mix at the moment
>>8738702
if it was only some part of it (like gloves, top or thighhighs) I wouldn't have any problem doing that but I'm trying to get an entire bodystocking fishnet
>>8738829
>>8738822
I'm interested in emulating Kagami and Aoi Nagisa's art, and as the anon said above, it so far just looks sloppy and flat. Can you share how you made it work? I don't mind paying for a sub if it's worth it
>>8738828wydm, get a fishnet texture then transform it so it vaguely follows the curve of the skin, takes like 5 seconds per limb
>>8738829
seems overkill, kagami hirotaka worked fine even back on pony
illu/noob looks more like his earlier stuff
>>8738688It's not worth spending money for wan but that doesn't mean you shouldn't save for a new PC. No one ever thought local video gen would even get this good this time last year and everyone was shitting on the one guy posting vids. Now it's not only coherent, some of it looks pretty decent. If you can afford it, get yourself a good upgrade for vidya or to make your gens much faster and wait and see how the tech improves. I got my new 4090 back when it was $1400 with this philosophy and I've never been so grateful to have it.
>>8738627just inpaint
>>8738765>Is this just some artist lora this dude is using?yes
>>8738833Was this genned on base noob?
>>8738829
NAI:
https://files.catbox.moe/bgoec3.png
https://files.catbox.moe/vxtqs7.png
https://files.catbox.moe/ut05d8.png
Looks like dogshit to me, but I'm not familiar with this content enough to judge.
I like this local output >>8738833 much better.
>>8738846
It's Pony, unironically
no loras either
https://litter.catbox.moe/61kd6x8fwfcny3pq.png
>>8738830
maybe I should try that later but for now I have to settle with this
>>8738837
I always do that of course, but I would prefer to have a good enough base gen first
>>8738855
yeah on some styles it just ends up looking like a flat texture, doesn't follow her curves
well to be fair artists can't do it either
>>8738858
That's closer to the recent styles, can you share the metadata?
Also how do I create images of the quality of pic related? Artist is Sian. This so far has been the most convincing to me, while with the rest I can detect the sloppiness
>>8738858
That artist is sloppiness exemplified, with oily skin, latex fabric and bright highlights. Just use WAI with an unhealthy dose of (best quality, very awa)
>>8738858
>This so far has been the most convincing to me, while the rest I can detect the sloppiness
>melting fingers and eyes
>quality
yeah alright
>>8738796It's not frame by frame, that would be insanely slow. It's generating all the frames at once in a "3D" image, where the third dimension is the time.
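(To illustrate the shape of that "3D" latent: the sampler denoises a single tensor with a time axis instead of looping over frames. The 8x spatial / 4x temporal compression and 16 latent channels below are typical of recent video VAEs and are assumptions here, not confirmed Wan 2.2 numbers.)

```python
# One latent tensor covers the whole clip: (batch, channels, time, height, width).
width, height, frames = 640, 480, 81

latent_shape = (1, 16, (frames - 1) // 4 + 1, height // 8, width // 8)
print(latent_shape)  # (1, 16, 21, 60, 80)
```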
>>8738858prompt an artist lil bro
hey guys, are those noob ai with spo merges worth using or is spo a meme?
I'm still trying to find a noob ai flavor that has decent lora support. It's a pain in the ass how some loras are made for noob eps, and many concepts I want are still iLL only. Yet somehow it kinda still works, but it varies between shitmix variants, and some shitmixes claim they can support everything.. ugh I'm so lost right now.
no
>>8738901eps loras work fine on any vpred noob shitmix, newcutie
>>8738903
no?
>>8738906
or so it seems but I get weird results sometimes. still thanks for confirming.
>>8738908>or so it seems but I get wierd results sometimeswhich lora, what do you expect and what is happening?
>>8738784
What the fuck, imagine paying some motherfucker jewtier company for this.
I'm a noob too, been using noob ai vpred locally for about a month and it can replicate aoi's art perfectly especially with vpred fix
You can even upgrade his style with more realism and details and it'll keep his style consistent still. I have no experience with novelai and never will because they don't deserve getting a single cent, but as far as I can see from various scum novelai prompters, either nobody bothered trying to replicate aoi's art exactly using novelai or it really cannot do it, and it doesn't matter anyway, half decent usage of noob ai vpred will do awesome work. Just make sure to use the proper prompt for him:
>aoi nagisa \(metalder\)
it has to be exactly this or else it'll fuck up something, even if it's not obvious at a glance at first.
If I have time later I'll try sian
Is this the go-to illustrious mix?
https://civitai.com/models/827184?modelVersionId=2167369
>>8738911
no one is using illust anymore, and even if you still want to use it, no, the go-to illust shitmix was personalmix
>>8738912>no one is using illust anymorewhat's the current meta then?
>>8738912>no one is using illust anymoreWhat are they using then? Pony and Illustrious still have all the Loras.
>>8738915ThisAll the big lora makers that I know of are still doing mostly Illustrious
>>8738914>>8738915>>8731237
>>8738912
Wai is part noob, he just never declared it. Probably because of the licensing issues earlier on. I tested v1.3 and it (barely) knew a bunch of e621 exclusive artists, ones that were hashed on Pony.
>>8738911
It's the easiest to use for noobs, or for basic stuff. But it carries a heavy style bias of the usual AI-slop type, towards 2.5D semi-realism with shiny skin and overly detailed backgrounds. It also lost some of noob's knowledge including Danbooru concepts and low-image characters+artists, and it's biased towards simpler, more common compositions.
>>8738923I use artist loras for every gen, so that shouldn't matter I think?
>>8738910Use this https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Skydiving
>>8738932Nice, but it gets cold up there don't you think? They should have been doing clothed sex.
>>8738923
iirc someone did a model similarity check with WAI and it was mostly noob eps 0.5
which would track with it barely knowing e621 tags, since at that point it was introduced to the dataset but had only been trained on the unet and not the TE.
Using NAI, how do I keep details consistent such as clothing? Using Img2Img with lower strength doesn't help.
>>8738858
>can you share the metadata?
The metadata is there, use https://novelai.net/inspect or there's https://sprites.neocities.org/metadata/viewer which works for everything.
>>8738927>I use artist loras for every gen, so that shouldn't matter I think?Then you shouldn't use WAI.
>>8738959just inpaint
>>8738912>>8738911Yeah, to this day, my go-to model is the PersonalMerge.
>>8738959>using naidon't
Can I keep a list of loras with descriptions attached in comfy like I can do in webui?
>>8738927
It's like a layer applied on top of anything you do with the checkpoint, unavoidable. Though it might actually help some styles or prompts, best to judge case by case. Picrel, doesn't match the artists' matte shading, the pencil sketch has impossible details and shiny skin somehow, it's biased towards regular "human" 1girl going against my chibi prompt, etc. But say in Kagami's case, it actually matches his newer style better because he became shinier over time.
>>8739006>user-friendly features>comfychoose one
>>8739006Never actually used webUI so not sure what you mean. There's this thing in the models sidebar. You have to place an image file next to the lora with the same filename (only .png). There's also a custom node in https://github.com/pythongosssss/ComfyUI-Custom-Scripts which shows the same image preview along with a sample prompt you can save, and optionally pulls down trigger tags from Civitai.
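(The preview convention is literally just a matching filename. A tiny sketch, with example paths, that drops a placeholder next to any lora that has no preview yet:)

```python
# Put a .png with the same basename next to each .safetensors so the sidebar shows it.
from pathlib import Path
import shutil

lora_dir = Path("ComfyUI/models/loras")          # example path, adjust to your install
for lora in lora_dir.glob("*.safetensors"):
    preview = lora.with_suffix(".png")
    if not preview.exists():
        shutil.copy("placeholder.png", preview)  # or save one of your sample gens here
```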
>>8739020
>actively being unhelpful
>stoking the tribal war
don't get to choose
>>8739020Mostly I just want to be able to easily see the activation keyword which a lot of lora makers insist on adding for some reason.
>>8739006https://github.com/willmiao/ComfyUI-Lora-Manager/ closest you'll get, probably
>>8739034because it works better that way, usually
>>8739007
102d final is better than custom anyways, and if you want even more artist accuracy use chromayume
>>8739034
Well for characters and concepts it's a no-brainer. For styles it's really weird. Some bake really well without and others just refuse to fit, yet adding an activation tag and changing nothing else can save me half the bake time and not having to fry it as much. Wild guess it might depend on how close the style is to the model's existing knowledge.
Ideally you'd bake everything twice to see if it's necessary. But most Civitai folk just have their fixed workflow: throw in the dataset, autotag, take the last epoch and upload.
>>8739023
Reject the binary - embrace StabilityMatrix
>>8739042Well yeah, it's always a balance. I do like having a little bit of help with "quality", and switch to base noob if it gets in the way. WAI is fine too imo if you're aware of the downsides and it fits what you're trying to do.
>>8739055Cool, now post a gen made with all of that
>>8739055>Rejection the binary - embrace StabilityMatrixsybau lykon
What model should I use? I feel like noob is too unstable with too strong artist tags effect, while WAI looks too generic
>>8739119
102d has worked decently for my purposes
>>8739059>>8739070It's just a tool that lets you host both Forge and ComfyUI. I pretty much just use Comfy for shitty vid gen.
>>8739059>>8739070>>8737948 is the video made in Comfy
Is wan high noise or low noise better?
>>8739184You should run both if you're using 14B. They work together. 5B runs as a single model.
>>8739184
for images low noise. for vid u need both or its slop.
High noise has the motion, low noise gives detail
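(Roughly what the two-expert setup looks like if you write the sampling loop out by hand: early, high-noise steps go to one model, late steps to the other. The 0.5 split point and the denoise() callable are illustrative assumptions, not ComfyUI's actual API; in practice you wire this up with two sampler nodes and matching start/end steps.)

```python
# Two-stage sampling sketch: high-noise expert lays down motion, low-noise expert adds detail.
def two_stage_sample(latents, sigmas, denoise, high_model, low_model, boundary=0.5):
    n_steps = len(sigmas) - 1
    split = int(n_steps * boundary)           # first half of the schedule (assumption)
    for i in range(n_steps):
        model = high_model if i < split else low_model
        latents = denoise(model, latents, sigmas[i], sigmas[i + 1])
    return latents
```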
>>8739190>>8739192ayaaa, no wonder those quants were so small...
>>8738931
I have this for comfy, but annoyingly doesn't show artist tags.
I know I can import better tags from elsewhere though.
>>8739006
lora manager is great
>>8739199
>I have autocomplete
>it doesn't show artist tags
Comfy is so fucking atrocious it's unbelievable.
>>8739007Are you using negatives?
>>8739206i think he meant lora triggers
>>8739119
>noob is too unstable
Use some merge of noob, they're more stable usually.
>too strong artist tags effect
reduce the weight then
my favorite is still 291h, but I'll have to try chromayume soon. 102d is cool if you like lolis..
>>8739206
I thought it was like this on purpose honestly.
>>8739208
haha..
>GGUFs are easier to load for some reason so I can use the biggest model
neat
>>8739222
*you can use a quant of the biggest model
>>8739224yeah but it's better than nothing
>>8738690workflow for this?
>>8738545Pretty funny
>>8738690Wait is that a random girl or someone? Cause she look hot, but also remind me of someone
>>8739235its kawana_tsubasa from hundred line last defense
>lora gives me shit hands without quality tags or nyalia boost
>crank lora strength up to 1.4
>suddenly perfect hands
wtf it was the base model that was shit all along. So many runs testing dropout and hand close ups wasted...
>>8739237now test the base model on its ownimo it's more likely some weird interaction between the two
>>8739238The base model is vpred1.0 which i already knew had shit hands. Just didn't think that they were shit enough to overpower a high rank lora
>>8739237base noob vpred is a really painful experience for everything, I had to use at least 2 loras at low weight to get something good and workable from it
>>8739230She’s melting faster than that popsicle!
Finally been able to make something based on Tsubasa Kawana from the Hundred Lines! Best girl
>>8739237
Or maybe it's the lora that's shit
most loras are fucking shit
>>8739236Fucking best girl! Nice choice
>>8739240
I still kinda like it raw
some styles at least, ones that aren't supposed to be shiny nyalia
>trying to get going on twitter
>account always gets shadowbanned
I don't get how other people do it just fine but I always get jewed
>>8739237
>nyalia boost
explain
>>8739262Pretty much my experience yeah but still I don't like it over my usual shitmixes
>>8739265
>>nyalia boost
>explain
use nyalia or any other ai-incest trained slop lora at low weight on base vpred for better hands. Doesn't ruin style quite as much as a full blown slop checkpoint
>>8739265
don't use hashtags.
also from what I can tell there's some weird minimum activity amount twitter needs from new accounts before it will actively show your stuff to other people freely. I'm assuming because it wants to assume you're a bot until you prove otherwise. Which basically means follow some artists, like some tweets, whatever.
Also if you're just posting porn be aware that that already immediately tanks your visibility and if all you post is porn your account will eventually be flagged to have all your posts be tagged sensitive/NSFW even if you were doing it properly manually from the get-go.
Are these actual new people and not the same four oldtroons circlejerking and having the same exact arguments every thread? In /hgg/ no less?
>>8739275/d/
>>8739275That's right chat, we have unironic newcuties
>>8739274>don't use hashtags.How do you get people to see shit then?
>>8739275a bit of both
>>8739279
Backlinking from other sites or just listing the character name without a hashtag.
People vanity search character names all the time looking for unlisted shit and you show up in search even if you don't hashtag it.
Trying to crash hashtags with porn is probably the #1 easiest way to get your shit banned.
>>8739275you really can't tell the difference between a real newbie and some liar? for shame
>>8739285bro got that queenbee framerate :skull:
>>8739286I see. I'll have to keep it in mind for my new account. By the way, in your experience, when you say porn, do you mean actual intercourse or nudity or would something like pic related count?
>>8739290
If you have to tag it as NSFW/sensitive it's porn.
Anything with nudity is porn.
>>8739291Yep. Even a hint of a nipple will have whatever it is reported as porn if you don't tag it as such on pixiv.
>grifter talk
it doesnt get anymore grim!
>>8739294at least no pedos so far
>>8739275
I'm starting to be less of a trash noob lately so I'm posting a bit more, yeah
My gens still suck though. and I still refuse to post the metadatas (if I even post my shit gens at all)
>>8739295Yeah I wouldn't go lower than 13, so I'm clean on this front!
>>8739289i feel like i read that it only works right at 16 or 32 fps so which do you want
>>8739275That's a weirdly shaped dick. It's like Alien.
>>8739299so true king
>>8738701I've been trying to run wd14 swinv2 for 2 hours now, with no success. I want to die.
>>8739321What's the tag for those hands? In this case against wall works but what about if you're on a bed?
>>8739320the fuck are you doing, just use taggui as any normal sane person and download the model from there
>>8739323
>>8739320
Don't even use that shit, use this.
https://github.com/Jelosus2/DatasetEditor
>>8739324that's a new one to me, i'll have to try this
>>8739218Does 102d always refer to the custom version? Is it really better than other ones?
>>8739353
Nta
They have slight variation, so best to test yourself which one works best for you. Final and custom are the most recommended I think.
>>8739349Use Eva as your tagger. Dataset Editor is far faster than any other tagger I've used.
>>8739353i like this one https://huggingface.co/minaiosu/arca3514236951/blob/main/naiXLVpred102d_final.safetensors
>>8739359the tagger is not new to me, it's clearly the best. i'm just using it manually edited in to reforge, and boorudatasettagmanager for other stuff which, so far, has done the same thing in less clicks with better UI/UX than anything else i've tried. it's also constantly getting updated for almost 3 years now i think, those russians make some good enthusiast programs
>>8739364
>boorudatasettagmanager
I switched from that to this one and I prefer it, even the UI. I feel like I can see the pics easier in it, but otherwise the things it does are the same. The tools like fast autotagging, putting a white background on your pics automatically etc., are the reason I've stuck with it instead.
After sex selfie with or without male in the same frame?
>>8739289Queen Bee could lay off 99% of their staff with this technology.
>>8739372>to cuck or to be cucked, that is the question
>>8739388So chat, you want to be the bull or the cuck?
>>8739389Alternating consumption of both perspectives so I don't desensitize
>>8739391Hmm yeah sure, I can do that with just a simple change on the text
What does NAI use internally to be better than stable?
>activity for once
>all shitposting schizo shit
Well played, chat. Thought something good had happened.
Is there a name for that shiny skin effect that's usually on boobs? I don't like it
Can anyone educate me on the difference between all those different things?
>>8739423
Boobs/breast shine? I know what you mean but idk if there's a name for it
>>8739424
https://stable-diffusion-art.com/samplers/#Evaluating_samplers
>>8739424this is quite literally the thread that NAI shit does not belong in at all, retard.
A bit of an unusual question, but how have you guys managed complex architecture and indoors scenes? I think that my options are reduced to controlnet over an existing image, and hope for the best.
>>8739519define complex architecture and which kind of indoors scenes
Is there any japanese site that accepts selling AI stuff? Or subscriptions? Fanbox doesn't
grifters are not welcome
>>8739519why vaguepost
>>8739523Why would I share my pot of gold with you?
>>8739555
>pretending there's not a ton of people in the business
You ain't got shit, brokie. Merely fishing for reactions. Have a (you), it's the best you'll ever get in exchange for your work.
>>8739519(detailed background:1.5)
>>8739556My trips are worth more than your whole house, faggot. Fuck back off to civit you dumb grifter.
>>8739519The lad has a muscle on his arm I’ve never seen before
>>8739522
Actual buildings that are not just jumbled nonsense. Interiors that actually make sense. Depth being an actual thing.
The only way I have found to mitigate this is to use images like this one, and then modify it to suit my purposes.
>>8739573Oh I see yeah, I don't have anything like that
>>8738855
>>8739613why is the editfag like this
>>8739629because he's based
>>8739218How do I start producing these kinds of images? Newfag that would like a bit of spoonfeeding please
>>8739648
Regional Prompting.
Controlnet.
img2img
Inpainting.
Praying.
Let's say I like one artist style but I don't like how the body shapes are, so I use another style to generate a pic and use that pic in controlnet with the style I like, would it be consistent enough?
>>8739657You could try prompt scheduling to get the body shape of artist 1 and the style of artist 2 on top.
>>8739657
define "consistent enough."
and why don't you just like, actually try it.
also you'd (probably) be best off using i2i with controlnet tile.
here's a functional example.
https://files.catbox.moe/fd4hze.png
the lowres used as input for the upscale: https://files.catbox.moe/9kz4be.png
>>8738708I gotta try doing a proper Mangamaster lora with this!
>>8739662So I could use a high denoise value to make sure the colors and shading aren't exactly the same yeah?
>>8739652You don't need all that stuff for a basic bitch group sex pic.
>>8739666No, of course not. You can leave it all to luck, Satan. But if he is autistic, he is gonna want a very specific position and angle. And that's when he is gonna need all those.
>>8739629just inpaint
>>8739665
pretty much, yeah. This sort of thing is pretty standard for my general lora testing since if a lora can't heavily influence style from something off-style at 80% denoise then there's a problem somewhere.
of course, some styles will be completely ineffective if the input is way too off-base (think going from heavy textured painterly to trying to do a flat digital style), but that just means that you start being aware of and selective on the style you gen in to transfer over.
another "trick" to make the style transfer more effective is to do a transfer, then take the output image and run it through 1:1 i2i, again, on different seeds (with the same denoise/controlnet values). Do NOT use the same seed as the input image, that fucks the noise up really bad (well, you can try just to see what happens, just be aware that it is not reflective of how it would work on other seeds).
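(A sketch of that two-pass trick in pseudo-workflow form. img2img() is a stand-in for whatever UI or API you actually use, and the .35 weight / .45 end step values echo the example quoted later in the thread; everything here is illustrative, not a fixed recipe.)

```python
# High-denoise i2i with tile controlnet, then a second 1:1 pass on a DIFFERENT seed.
def restyle(img2img, source, style_loras, seed_a, seed_b, denoise=0.8):
    cn = dict(controlnet="tile", cn_weight=0.35, cn_end=0.45)   # keep it weak, end it early
    first = img2img(source, loras=style_loras, denoise=denoise, seed=seed_a, **cn)
    # same denoise/controlnet values, new seed; reusing the input seed wrecks the noise
    second = img2img(first, loras=style_loras, denoise=denoise, seed=seed_b, **cn)
    return second
```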
>>8739613Thanks bro, much more attractive now.
New thread >>8739695
I think we need a thread for every anon here.
>>8739519
>A bit of an unusual question, but how have you guys managed complex architecture and indoors scenes?
I don't, it's either blurry background, halftone background or gradient background for me. Real artists do it too for a reason, and it's not because they hate architecture, it's because a complex background serves no purpose in a scene depicting characters fucking and the bare minimum of props to suggest a certain scenario will suffice.
If you really want to have a complex and coherent background, you should copy what artists do in the rare case they implement those and photobash it either with a pic you got from the Internet or with something you genned with a model that hasn't had its ability to gen scenery destroyed by training on millions of 1girl, simple background pics.
>>8739713Right, it still won't make sense but will get you more background detail at least. You can also use canny controlnet and sketch out the walls of the room, it's like eight lines. Maybe add a few blocks for furniture. Then mask out the central area where characters are supposed to end up in.
I can't believe that worked (almost).
>>8739832Very limited, but a few phrases and words are available.
>>8739836it can sort-of write anything, it's just super unreliable. I remember prompting "Justice" when inpainting Hasumi's shirt, and it actually helped.
>>8739837Well, I'll be buggered by a baboon. You are right.
>>8739652
not to say that what you are suggesting is a bad idea, but that image is kind of a perspective fail...
>>8739672Should I leave the controlnet weight at 1?
>>8739911Wouldn't it be faster to try out three values than to ask? nta I use 0.85 and an earlier end step at 0.9 otherwise it fucks up colors, but it depends on which controlnet for which checkpoint
If I have a dataset with a bunch of images with transparent backgrounds, should I use the transparent background tag or black background?
>>8739946transparent can cause issues, you should fill them with something. i don't think it's a huge deal what color, personally i would do a bunch of random ones so it doesn't overfit
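(A minimal Pillow sketch of that suggestion: composite the RGBA images onto random flat colors so the trainer never sees alpha and doesn't overfit one background color. The paths and the color list are examples.)

```python
import random
from pathlib import Path
from PIL import Image

colors = [(255, 255, 255), (0, 0, 0), (128, 128, 128), (210, 225, 255)]
out_dir = Path("dataset_flat")
out_dir.mkdir(exist_ok=True)

for path in Path("dataset").glob("*.png"):
    img = Image.open(path).convert("RGBA")
    bg = Image.new("RGBA", img.size, random.choice(colors) + (255,))
    Image.alpha_composite(bg, img).convert("RGB").save(out_dir / path.name)
```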
>>8739889It is just an example I made a long time ago to help anons who came with that kind of question...
>>8739911
just try stuff.
I keep controlnet at as low a weight and as early an end step as it can withstand since that gives it more chance to transfer shit over. the example given was .35 weight and .45 end step.
>>8739913
nvm disregard, I thought he was talking about upscaling with tile CN
>>8740061
Honestly the worst idea for pose transfer. You want a controlnet that doesn't see colors, otherwise you'll get a ton of style leakage. Canny, depth or ideally anytest.
>>8740084canny sounds awful for pose transfer
>>8740084
>Honestly the worst idea for pose transfer
good thing no one mentioned anything about that and everything in the comment chain was about style transfer.
>>8740110The original ask was to gen anatomy with one artist, then transfer that body to a new gen. >>8739657
>>8740116
anatomy != pose
also, note:
>so I use another style to generate a pic and use that pic in controlnet with the style I like
that's style transfer.
>>8740083Nice, though a true head out of frame would have been hotter imo.
>>8740128Idk about this one chat
>>8740136hot. Now the tits are in the center of the screen and your mind can better imagine her lewd face. Anon's nitocris gen exploits the same principle.