/g/ - Technology






File: adt68.jpg (3.8 MB, 3608x2798)
>UIs to generate anime
ComfyUI:https://github.com/comfyanonymous/ComfyUI
SwarmUI:https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic:https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next:https://github.com/vladmandic/sdnext
Wan2GP:https://github.com/deepbeepmeep/Wan2GP
InvokeAI:https://www.invoke.com/

>How to Generate Anime Images
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io
https://making-images-great-again-library.vercel.app/
https://neta-lumina-style.tz03.xyz/

>Generating Anime Videos
Guide:
https://rentry.org/wan22ldgguide

>Anime Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
https://openart.ai/workflows
https://www.seaart.ai
https://www.liblib.art/
https://rentry.org/adtsampler

>Anime Misc
Local Model Meta:https://rentry.org/localmodelsmeta
Share Metadata:https://catbox.moe|https://litterbox.catbox.moe
Img2Prompt:https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Samplers:https://stable-diffusion-art.com/samplers
Txt2Img Plugin:https://github.com/Acly/krita-ai-diffusion
Online metadata viewer SD/NovelAI:https://spell.novelai.dev
Catbox/Metadata Userscript: https://gist.github.com/catboxanon/ca46eb79ce55e3216aecab49d5c7a3fb

>Inpainting Guide from an Anon
https://files.catbox.moe/fbzsxb.jpg
>>106520607

>Neighbours
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>>>/r9k/aiwg/
>Local Text&Image
>>>/g/lmg
>>>/g/ldg
>>>/vp/napt
>Cloud Text&Image
>>>/g/aicg
>>>/g/sdg/

>Maintain Thread Quality https://rentry.org/debo

Prev: >>107298473
>>
File: melodykiss.png (2.43 MB, 2016x2160)
>>107320355
ANCHOR:
>/adt/ pixiv:
BakerAnon:https://www.pixiv.net/en/users/110320313
Nekotwins:https://www.pixiv.net/en/users/93453238
RSetsuu:https://www.pixiv.net/en/users/91156181
tamzaiy:https://www.pixiv.net/en/users/38224130
匂い:https://www.pixiv.net/en/users/76374114
Otto:https://www.pixiv.net/en/users/106407853
owphoenix:https://www.pixiv.net/en/users/1661492
cornholioanon:https://www.pixiv.net/en/users/1035594
kunitsune: https://www.pixiv.net/en/users/1154481
Encolpe : https://www.pixiv.net/en/users/1528323

*Anchor your Pixiv and CivitAI here to add it in the next thread.

>/adt/ CivitAI
https://civitai.com/user/Nyan666
https://civitai.com/models/1922023/obmirsak
>>
File: file.png (5 KB, 337x84)
Why is e621 like this?
>>
>>107320393
At least on danbooru, tags are not supposed to be subjective, because you're supposed to use pools for that.
>>
>>107320393
Sounds like something that could be a pixiv tag as well.
>>
File: Blur_1000025168.jpg (48 KB, 720x251)
A new Invoke update is coming in late December!
Very important news: **multiple tabbed canvases!**
>>
File: 1759901379530778.jpg (120 KB, 820x1024)
>>107320355
Is it true you don't need VRAM anymore to generate anime images and videos? No wonder RAM prices have increased.
>>
>>107320561
That's more about text. For anime images it will be very slow. Videos - horrendously slow.
>>
>>107320561
No. Image models still rely heavily on VRAM. They're also pretty small compared to LLMs.
>>
File: ComfyUI_hgdf_03994_.jpg (891 KB, 2688x1536)
>>
File: ComfyUI_hgdf_03995_.jpg (625 KB, 2688x1536)
Might make a judgement call and say this merge is good to go live.
It's a vast improvement over v1 and I only replaced 1 model out of the 8 used.
>>
RTX 5060 ti 16gb.... good for sloppa ?
>>
>>107320601
>>107320613
How is the update going? Where are you going to release it?
>>
File: ComfyUI_hgdf_03996_.jpg (782 KB, 2688x1536)
>>107320693
easy
I'm running a 3060 12gb
30 steps dpmpp sde then 12 steps on the upscale takes roughly 2 minutes
>>
File: Sirius B 01.jpg (841 KB, 2000x3000)
>>
File: ComfyUI_hgdf_04015_.jpg (1.9 MB, 1792x2304)
>>107320710
It took over a month of testing new "trained" illustrious models to find one model that actually improved my merge, rather than make it worse.

It will be on civit, previous version is already there anyhow.
>>
>>107320393
This is a case that justifies doing so, since it's a tag that unnecessarily bloats tokens, but there are some cases where oversimplifying tags creatively neuters the model or makes it more rigid. I think it's necessary to have synonyms and a kind of thesaurus for tags. I can't remember which shitmerge it was that did nothing with the tag "slim" or "thin" but worked fine with the tag "slender"
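That thesaurus idea can be approximated today as a prompt preprocessing step. A minimal sketch, where the alias table is hypothetical — which synonym a given merge actually responds to has to be found by testing:

```python
# Map every synonym in a group to the alias the model is known to respond to.
# The table below is a made-up example, not a real per-model mapping.
THESAURUS = {
    "slender": {"slim", "thin", "slender"},
}

def normalize_tags(tags):
    """Rewrite each tag to its group's canonical alias, leaving unknown tags alone."""
    canon = {syn: alias for alias, syns in THESAURUS.items() for syn in syns}
    return [canon.get(t, t) for t in tags]

tags = normalize_tags(["1girl", "slim", "smile"])  # -> ["1girl", "slender", "smile"]
```

Keeping one table per checkpoint would also document which merges are deaf to which tags.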
>>
>>107320613
Nice. Post stuff in /e/ again sometime please.
>>
>>107320780
It's pretty tough to make new checkpoint merges with actual improvements, but yeah, share it when the new version releases! If I remember right, I already tried the one of yours that's on the anchor (Obmisark or something like that, no?), but I test a new checkpoint every day so I couldn't dig into it much.
>>
>tfw going to commit numerous days to a single set of a minor character from a niche game because I love her
>>
File: ComfyUI_hgdf_03997_.jpg (780 KB, 2688x1536)
>>107320816
>Post stuff in /e/ again

I've never posted in /e/ though...

>>107320830
Yea, it was. I had a whole different plan, but I couldn't find enough decent models to achieve it. I found one good model, so I decided to go through and test all 8 models used in the merge, found the worst of those, and replaced it with the new one.
>>
>>107320693
12GB is adequate for SDXL. I wouldn't get more than that just for SDXL.
>>
>>107320693
>>107320925
I think most cards are capable of SDXL Illustrious nowadays, as long as they have 8 GB minimum.
>>
>>107320930
Huh, you're right. I just monitored my VRAM use while doing a 1536x1536 inpaint with WAI NSFW and it never went over ~6,500 MB.
However, doing a 1536x1536 gen with a 2x hiresfix, at one point, jumped the VRAM use to 11,842 MB.
So, 8 GB will absolutely work for popular Illustrious 1.0-based models like WAI or Noob. However, more VRAM will be necessary for big hiresfix gens.
Hiresfix is absolutely not necessary for big gens, though, as one can simply do a plain upscale then inpaint the upscale in 1536x1536 sections for extra detail.
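For intuition on why the hiresfix jump above is so big, here's a sketch of the latent-size arithmetic, assuming SDXL's usual 4-channel latent at 8x VAE downscale in fp16 — treat it as an illustration, not a profiler:

```python
def latent_mb(width, height, channels=4, downscale=8, bytes_per_elem=2):
    """Rough fp16 size of one SDXL latent tensor (assumed 4ch, 8x VAE downscale)."""
    w, h = width // downscale, height // downscale
    return channels * w * h * bytes_per_elem / 1024**2

base = latent_mb(1536, 1536)   # plain 1536x1536 gen
hires = latent_mb(3072, 3072)  # after a 2x hiresfix: 4x the pixel area
print(f"{base:.3f} MB -> {hires:.3f} MB")
```

The latent itself is tiny; the point is the 4x ratio, since UNet and VAE activation memory scales with that area (attention worse than linearly), which lines up with the jump from ~6,500 MB to ~11,842 MB observed above.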
>>
>>107321044
And for LLMs, 12 GB is kind of a sweet spot for running Nemo-based LLMs at a decent quant with full effective context. Maybe something to keep in mind.
>tfw Nemo is STILL the VRAMlet king
>>
File: ComfyUI_hgdf_04016_.jpg (680 KB, 2688x1536)
>>
File: 137862812_p0.jpg (2.75 MB, 1536x1920)
>>107320880
Slopping out of love is the best kind.
>>
File: ComfyUI_hgdf_04018_.jpg (620 KB, 1664x2432)
>>107321112
nta but good to know, thanks
>>
File: 137862825_p0.jpg (2.96 MB, 1536x1920)
>>107321044
Most modern webuis will by default start tiling once they run out of VRAM. So you can technically run SDXL on a shitty 4GB mobile GPU, even if it'll be far from the optimal experience.
>>
>>107321190
>Most modern webuis will by default start tiling once they run out of VRAM
Tiled vae, not full-on tiling
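For anyone wondering what tiled VAE actually does: the decode runs over fixed-size latent chunks so peak memory depends on the tile size, not the image size. A toy numpy sketch with a stand-in decoder — real implementations also overlap and blend tile borders to hide seams, and the real VAE upsamples 8x rather than preserving shape:

```python
import numpy as np

def decode_tiled(latent, decode, tile=64):
    """Apply `decode` to tile x tile chunks of a (C, H, W) latent and stitch the results."""
    c, h, w = latent.shape
    out = np.zeros_like(latent)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[:, y:y + tile, x:x + tile] = decode(latent[:, y:y + tile, x:x + tile])
    return out

lat = np.random.rand(4, 192, 192).astype(np.float32)
full = decode_tiled(lat, decode=lambda t: t * 2.0)  # toy "decoder" stands in for the VAE
```

Only one tile's activations are live at a time, which is why a 4 GB card can limp through at all.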
>>
File: 1752939294071962.jpg (540 KB, 2048x2048)
>>107318417
Is it Nano Banana? With a reference image?

SDXL can't be so consistent in this position.
Qwen can do something close in t2v mode, but it obviously doesn't know Hoshikawa. In i2v mode from a standing reference image, it fails to make a natural pose.

Pic: genned a standing image in wai, edited with qwen (still failed to position knees apart naturally), ran through sdxl for upscaling.
>>
>>107321112
>Nemo
Use case for not using Opus 4.5 right now?
>>
>>107321687
I have 12 GB VRAM, can I run Qwen Edit? If yes, which quant version do you recommend?
>>
>>107321765
Use GGUFs. It'll offload as much as needed to main memory. You're just trading time for memory.
I have 16gb VRAM and get this with Q6_K quants:
>loaded partially; 13946.38 MB usable, 13942.39 MB loaded, 2210.96 MB offloaded, lowvram patches: 0
>>
>>107321798
With Q8 quants:
>loaded partially; 13870.38 MB usable, 13866.42 MB loaded, 6995.06 MB offloaded, lowvram patches: 0
It still works, but ~15% slower.
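The offload numbers follow from simple bits-per-weight arithmetic. A sketch; the bpw values are approximations for llama.cpp-style quant formats (block scales add overhead), and the 20B parameter count is just a hypothetical:

```python
def quant_gb(n_params, bits_per_weight):
    """Approximate size of a quantized GGUF weight file."""
    return n_params * bits_per_weight / 8 / 1024**3

n = 20e9  # hypothetical 20B-parameter model
for name, bpw in [("Q6_K", 6.56), ("Q8_0", 8.5)]:  # approximate bpw, assumption
    print(f"{name}: {quant_gb(n, bpw):.1f} GB")
```

Whatever doesn't fit in the usable VRAM budget gets offloaded, which is where the extra Q8 slowdown comes from.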
>>
>>107321798
Understood. Are there alternative models like Neta Lumina that are similar in requirements and can do editing? I'd like to have other options. I'm going to try your solution.
>>
>>107321826
NetaYume doesn't support editing, as far as I know. And it's severely undertrained: I genned a few images with it and haven't touched it since. Anyway, I think you can run anything with GGUFs. It'll just be slower once you hit main memory, and extremely slow if you hit SSD swap.
>>
>>107321860
Ok thanks, I will check that
>>
File: scene rotation test.jpg (882 KB, 1698x2664)
>>107321687
Yes, Pro, and yes, just a regular standing image for the char itself. It usually makes the "shadow" wide since I avoided saying panties on lmarena due to the filter. It's shitting itself btw, probably from global rate limits/demand. I actually manually narrowed the shadow to make it vaguely panty-like. One funny gen had a generic skin-toned region. One issue is it would depict the character sideways with knees together if I didn't add the second part, "body facing us (symmetrical view)".
Honestly, it doesn't really get what "knees apart" means either. To have a character manspread while sitting on a chair without a pose reference, you have to describe there being enough space to fit something between the knees without actually putting anything there. Even then it might mess up.
>>
is it possible to generate videos using 2 images? like, animate from image 1 to image 2?
>>
Emilia is love!
>>
>>107322284
Seems like Nano Banana is far from perfect. It also fails to preserve heterochromia from references.
>>
>>107320355
Thanks for including my Sigewinne baker anon. She's a cutie.
>>
>>107322548
None of this shit is perfect, nano banana is still pretty impressive.
>>
>>107322336
Ridiculously long legs
>>
>>107322284
Good testing. For GPUlets, Nano Pro could be a fast solution for some things. Seems like it's more uncensored than the original Nano Banana, which kept filtering or blocking any image with minimal lewd content.
>>
File: IMG_0976.jpg (33 KB, 680x545)
What do people here think about SPELLBRUSH?
>>
>>107324169
Never tested it
>>
File: 00060-2152957399.png (872 KB, 1216x832)
>>
>>107324169
What the fuck is that? It's an ad portal for 4 websites: one is a random out-of-stock tabletop game with a dead Amazon link, one is a video game, one is a portrait generator used by the aforementioned video game, and the last is an image gen partially designed by whatever team made the portal website.
>>
Found this info from 2ch, bros: https://huggingface.co/ChenkinNoob/ChenkinNoob-XL-V0.1?not-for-all-audiences=true

They're finetuning Noob (apparently as epsilon) to update its knowledge. No rectified flow or anything like that seems to be coming.
>>
>>107324395
Ewww EPS??
>>
>>107324449
I'm a total noob when it comes to NoobAI, but I know there are Noob enthusiasts here who might be interested. What's "ewww" about eps that isn't vpred? Isn't it still Noob?
>>
>>107324470
Not enough noise to cover the image in training -> composition and colors aren't learned properly. Then you solve this with some noise offset trickery and you get even more problems. It's just obsolete, nobody should train with eps.
Then there's vpred, which is hard to get right due to how easy it is to fry composition steps, so bakers may not be inclined to try, but even failed noob vpred bake >>>>>>>> shitty noob eps.
Finally there's rectified flow, which doesn't have any of the problems of both eps and vpred, but the bakas at chenkin have probably never even heard of it.
Whatever, not my resources to burn. Having an updated dataset is cool, but eps? It's probably not gonna be very useful.
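For reference, the three objectives being compared differ only in the regression target. A numpy sketch in the standard variance-preserving notation — toy numbers, not any model's actual code:

```python
import numpy as np

# VP noising: x_t = a*x0 + s*eps with a^2 + s^2 = 1.
rng = np.random.default_rng(0)
x0, eps = rng.standard_normal(8), rng.standard_normal(8)
a = 0.6
s = np.sqrt(1.0 - a**2)
x_t = a * x0 + s * eps

# eps objective: predict the noise. At very high noise levels (a ~ 0) this is
# nearly trivial, since x_t is almost all eps, so those steps teach the model
# little about composition.
target_eps = eps

# v objective: mixes signal and noise, so the target stays informative even at
# ztsnr-style terminal noise levels.
v = a * eps - s * x0
x0_rec = a * x_t - s * v  # x0 is exactly recoverable from a v prediction

# Rectified flow instead noises as x_t = (1-t)*x0 + t*eps and predicts the
# constant velocity (eps - x0).
```

The `x0_rec` identity is why a correctly trained vpred model can nail composition and colors from the first step onward.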
>>
>>107324675
Maybe they're playing it safe, considering how many issues Noob team encountered with training. I agree though, it's really hard to come back to eps after using proper models.
>>
>>107324675
Good explanation,<3
>>
>>107324395
>>107324449
It says it's trained on top of Laxhar/noobai-XL-1.1, isn't that vpred?
>>
>>107324882
Last vpred is 1.0, the only 1.1 is eps.
>>
>>107324882
noob 1.1 is eps
>>
>>107324893
Oh, weird. Is 1.1 based on 0.5 then? Because I think after that they switched to vpred.
>>
>>107324910
No, 1.1 continued training off of noob 1.0 eps. 1.0 eps was also converted to v-pred 1.0
>>
>>107324943
I see, decisions were certainly made there.
>>
>>107324943
>>107324981
and from my experience and other anons', 1.1 isn't great. if they were afraid of vpred, they probably should have continued from 1.0 or even 0.5
>>
File: untitled.png (3.12 MB, 2520x5000)
Regarding these posts >>107304739
And anon's model recommendation,
>>107305311
>>107306061
>>107306088
>>107306123
sharing a personal test. Still don't know what is causing the blue gradients in the result, but I'm satisfied and happy with the new models anons recommended.
>>
>>107324325
>>107324192
Evidently they make and support the models for niji journey
>>
>flux.2 is out
What do we do now, Chroma bros?? It had so much promise!
>>
>>107325499
>24b text encoder
its dead on release lmao
>>
>>107325547
Built for Big API Nodes Cock
>>
>>107325547
Supposedly it fits in 24 GB VRAM and 64 GB RAM.
>>
File: 00008-1095945952.png (2.43 MB, 2016x1344)
>so many shitmerges offering the same slop on Civit
wake me up when checkpoint makers unify their resources and money to bake one giant unified model, or start making decent inpaint and outpaint models...
>>
>>107325728
It's kinda sad using hardware of that caliber to produce this kind of anime slop. Pic related is Flux 2 anime. I'd much rather roll seeds at 0.5 seconds per gen at high resolution with SDXL on that hardware than sit around waiting for Flux.
>>
>>107325929
You shouldn't expect good anime from base local models honestly. It's always shitty.
>>
>>107325061
could you post the comfy workflow? i understand what you're doing but i don't know how to use the models to do it
>>
>>107325929
I said it before to someone: the model is too large and has a worse license than Qwen Image, so I see no point in using it.
>>
another sloppy model release, yet /adt/ remains unaffected, as the peak of anime/2d models already exists.

genuinely, if you could improve anything out of noobai, what would it be?
>>
>>107326177
feet and hands definitely need to be improved
also proportions being improved in general would be nice
>>
File: Chen candy.jpg (577 KB, 1408x2048)
>>
>>107326177
its still a pretty simplistic model when it comes to compositions. its great for 1girl and simple 1girl 1boy coomslop which is honestly most of anime arts, but sometimes it would be cool if it understood prompts better, like those proprietary models do.
>>
>>107326177
> genuinely, if you could improve anything out of noobai, what would it be?
updated dataset
handle higher res better
works with other samplers
100k USD in training or using an LLM encoder for better prompt understanding
>>
>>107325885
Shitmergers don't have resources they could pool.
>>
>>107326177
Better prompt adherence, get rid of clip and the shitty sdxl vae.
>>
>>107326177
I'd make Laxhar and Neta Lumina have a kid together
>>
File: peeb.jpg (2.6 MB, 1792x2304)
>>107326177
Need to replace CLIP with a better text encoder
Rouwei was able to do it, but Rouwei is also ugly as fuck

>>107326248
>handle higher res better
Noob v-pred can already do high-res if you train it at high-res because of ztsnr. Look at Seele for example
>>
File: Untitled.jpg (202 KB, 1905x1536)
>>107325977
I'm using Forge, not Comfy. I placed all the models from the example into the models\ControlNet folder. After that, I opened ControlNet, selected the Canny preprocessor, and clicked the refresh icon in the model options to make the new models appear.
Someone in the last thread suggested not using the preprocessor, which I tried and it worked fine, but that approach didn't suit my needs because I wanted to preserve only the lineart and replace everything else, so for my specific case, I needed to use the Canny preprocessor.
>>
>>107326285
I gave up on sdxl and switched to netayume-lumina lol.
>>
>>107326331
ah i see. i'll try to figure it out. thanks anyhow!
>>
>>107326339
I haven't tried neta that much since I don't want to have to write entire essays just to prompt it. Also haven't seen that many great neta gens as well, a lot of them look very compressed and artifacty
>>
>>107326404
I usually do 28 steps with euler and linear quadratic. Also, the compressed look is because you need a lot of negative prompts, plus quality seems better with natural language than booru tags.
>>
Yes!
>>
>>107320576
not even true. don't try to be an expert on things you know nothing about
>>
>>107322327
https://firstframego.github.io/
>>
File: 00071-3506188030.png (2.43 MB, 1856x1280)
>>
>>107326593
are they gonna BUMP POOSIES?
>>
>>107326603
Yes, after they date a bit.
>>
File: 00004-186946236.png (2.13 MB, 2528x1696)
Lazy to inpaint right now, but liked how this gen turned out while testing raehoshiIllustXL_v80
>>
File: 128338560_p2.png (962 KB, 768x1122)
>>
File: 1759126108304125.png (3.42 MB, 1248x1824)
Flux 2. Decent prompt adherence, i'd say around as good as Qwen.
>>
>>107327282
It better be, the text encoder is at least 3x larger.
>>
Any Wan'ers here?

Do any of you know how to prevent unwanted mouth flapping in wan2.2? Especially for anime gens. I've had so many good gens ruined by unwanted bad mouth movement.
Don't say negative conditioning because that doesn't work. The chinks have baked constant talking into the model.

I can't believe nobody has made a lora to fix this issue.
>>
>>107327418
i've seen it explained this way: it can't differentiate between training data that has mouth movement and data that doesn't, and it doesn't have the ability to decide whether or not a character should be trying to speak. it's just the way the chinese fortune cookie crumbles.
>>
>>107327282
This is very cute, but it requires significantly more VRAM and RAM. Imagine if an anime lodestone had invested what was put into Chroma into anime instead, and now Flux 2 released. All that money and effort would be partially wasted or obsolete now.
>>
>>107327436
It's not obsolete; Flux 2 can't do NSFW. It's even more cucked than Flux 1.
>>
We need a proxy, a GPU proxy that connects us to some GPU from one of those chinese corpo labs. I deserve free 1024GB VRAM so I can gen 6k resolution anime in 0.2 seconds flat because I said so.
>>
File: aaaiiieee.gif (456 KB, 750x609)
>>107327418
i've started testing it today. but i only have 16gb vram so i'm really limited. i've tried really hard to make this girl spank this boy with the left hand (her right) and it just doesn't obey. as far as i can tell, the model will simply refuse to do some things, maybe because making her lift her right hand would cause the bodies to move in a way it can't predict? she does it with the other one. on top of that, they also don't do slapping/hard hitting, the girl just gropes or gently touches the region after the hand reaches it. i tried with another image too

i also had bad experience with penetration and handjobs (the hand moves while holding the penis, instead of sliding over it). i tried multiple things, penetration maybe works if you describe it enough but it just seems that the model lacks proper porn training AND it ignores the textual orders and does whatever it wants. maybe if you can generate 300 videos, you can find one that does exactly what you want but at that point it becomes too resource intensive

also my best gen for this scene in particular was done with q5 even though i've also tried q8 for many prompts afterwards

futacons really are the most oppressed race widepeeposad

tl;dr - i think videogen has too many flaws and/or requires too many resources to use

i also tried generating with a starting image and ending image. but it diffs the images and only animates the difference. but that's probably the best way to animate something in a particular way

here's the video i was talking about: https://litter.catbox.moe/wd2sd0e6cl4vvjyr.mp4

have you tried wangp also? it seems like it might be better for videogen but i only used comfy so far
>>
brb downloading some more ram
>>
>>107327622
>Nico smugly knowing that she is the superior idol to noted buttslut Iori
>>
File: ComfyUI_11224_.png (3.12 MB, 1232x1800)
Love Livers get out
>>
>>107327889
Her eyebrow position suggests that her buttplug is uncomfortable.
>>
File: Powerfull.jpg (362 KB, 688x847)
Powerfull...
>>
File: media_1764119801.png (1.53 MB, 768x1280)
>>
File: fixed.png (774 KB, 688x847)
>>107328198
>>
>>107328261
ESL here, what is 'chirp'?
>>
>>107328287
The sound a smoke detector makes when its battery needs to be changed.
>>
>>107328287
https://files.catbox.moe/m5oa7r.mp4
>>
>>107328300
Kek
>>
Might sound retarded but if Anytest is such a good controlnet then why the hell isn't it on Civit? Why do you have to download it from some scuffed github repo instead?
>>
the little amount of hype or promise for this new model is bleak. diffusion really fell off
>>
>>107328897
Also what even is ControlNet? Each model is like 2gb so is it basically a checkpoint without VAE and CLIP since those already come from your checkpoint? If so why doesn't the checkpoint just do both jobs at once? Why need a separate model? And if not, how do you train a ControlNet model? How do you build the dataset?

Someone spoonfeed me, links are welcone if you don't want to typing on the keyboard
>>
>>107328907
You mean Flux 2? Thing is, it's a resource hog, and Qwen is already a very good model. Flux 2 is just more of the same, and the demo images look like ass, not just for anime but for every other style too.
>>
>>107328897
Nip reasons.
Prease undelstand.
>>
>>107328897
I would consider civit to be the scuffed site tbqh famalam
>>
>>107320902
>I've never posted in /e/ though...
Really? Then someone else with that surreal style has been posting in /e/ from time to time.
>>
File: 00032-966029978.jpg (695 KB, 1600x2048)
>>
File: 00030-3755661425.jpg (761 KB, 1600x2048)
>>
File: media_1763632550.png (1.13 MB, 768x1280)
>>
File: 00036-3941037282.jpg (683 KB, 1600x2048)
>>107329542
>>
>>107329115
How would you search for models in github without running into tons of unrelated stuff?
>>
File: 00001-3624863143.png (1.23 MB, 1024x1024)
>>
File: 00047-1543546224.png (2.23 MB, 1280x1856)
>>
File: 00141-2502830925.png (1.43 MB, 1024x1024)
>>107331293
Fuck, wish I could do that.
>>
File: 00176-979101957.png (1.62 MB, 1024x1024)
>>
>>107331159
You don't; lurk these threads and wait for anons to link, the github search is unusable for this.
>>107326339
How long are your prompts? Do you use some kind of LLM? Keep hearing you need to write novels for it
>>
>>107326177
Body composition tags (pic rel), body fat tags, and more specific age tags, at least for lolis and mature women.

It happens to me all the time: when I put "x character as mature woman", the age and body composition vary from seed to seed. Same thing if I put, for example, the tag "adolescent" or "loli": the age varies from like 11 up to a thin adult woman's body, or it associates being young or teenage with being flat-chested with long legs.
>>
>>107331492
Most of the tags are there. And I mean, do you really want to see girls with an "inverted triangle" body shape? Pear-shaped figure, petite, skinny, loli, narrow waist, curvy, wide hips, a bunch of e621 shit too. Also, using negpip/negs is very strong if you get something you don't like.
Yeah, consistency can be an issue for sure, but there are plenty of tags for this imo. Thing is, different artists are gonna draw mature females (or lolis or whatever) very differently from one another; it's not as simple as it is with realistic pics.
>>
Are there any recent resources on controlnets? I think that's what I need anyway.
Trying to make some RPG portraits but my autism demands they all be facing the same direction so it lines up nicely on my UI, but the output varies wildly and getting them into a three-quarters view while not looking at the camera is impossible since apparently that tag just doesn't exist.
>>
>>107331744
Uh, looking_to_the_side ?
looking_at_viewer in negatives?
>>
>>107331744
>>107331764
Maybe looking_ahead instead of looking_to_the_side.
>>
>>107331764
>>107331774
looking_away is the tag for that, but the three quarters view is still a gamble.
>>
>>107331744
I'd just use Anytest and start with the strength at 0.3 to 0.4 and crank it down until it stops reliably genning the pose you want, then crank it up juuuust a little more so it reliably generates that three quarters view while not super closely copying the lineart/shape.
>>
>>107331744
Sdxl doesnt know up down left or right or directions inside the gen
>>
File: 1750252260941959.jpg (900 KB, 1248x1824)
>>107331403
>How long are your prompts
Just a few sentences.
Neta is not as good as the noob when it comes to the sheer variety of styles or nsfw, but it handles several subjects extremely well.
>>
>>107331529
>inverted triangle" body shape?
Yeah, girls who swim competitively have broad shoulders, or like those stoic knight mommy types who trained their bodies for combat. The typical cutesy petite body type gets boring sometimes.
>>
>>107331529
>Thing is, different artists are gonna draw mature females (or lolis or whatever) very differently from one another,
Ok, but don't you think it would be fair, intuitive, and universal to standardize those types of physical attributes without relying on the artist? Because by that same logic, instead of putting "sailor uniform" you'd have to put the Sailor Moon artist and reroll seeds until it gives you a sailor uniform. Plus it helps us separate physical attributes from the artist's exclusive traits, giving us way more control. Like in BakerAnon's case, who gets feet he personally doesn't like, but they come in the package with the artist tag.
>>
File: stellaris portrait.png (2 KB, 512x512)
>>107331744
openpose controlnet should work well for that.

xinsir's openpose or union-promax controlnet works well for most SDXL, though I use laxhar/noob_openpose one for NoobAI/Illustrious which seems to work better maybe.
>>
>>107332033
There are Flux and Qwen ControlNets, can I use those with SDXL?
>>
>>107332014
Yes, and it can be really frustrating. Like the face/shading of the artist? Cool, but now it also changes the anatomy in a way I don't like. And the solutions I know of are either to inpaint and edit every time (fuck that), train some ultra-autistic lora (theoretically), or throw shit at the wall and hope that after some weight adjustments and adding some other artists, the mix works the way I like.
Still, making them completely standardized might end up being too boring, with low variety. For me, prompting it like "shading in style of x, eyes in style of y, anatomy in style of z" or something like that would be perfect. But yeah, I don't think that's ever gonna happen.
>Because with that same logic instead of putting "sailor uniform" you'd have to put the Sailor Moon artist and reroll seeds until it gives you a sailor uniform
Not sure if I fully understand, but depending on the model, that sorta happens. The way uniforms (including serafuku) are genned often depends on the artists you prompt if you aren't prompting characters. Expressions and hair styles are affected too.
>>
Grok status ?
>>
File: ComfyUI_temp_afhmj_00029_.png (1.79 MB, 1024x1024)
>>107332073
no it has to be for the same model architecture, though different fine tunes of the same model type (like base SDXL vs Pony/Illustrious/Noob) are usually OK.
>>
File: latest.jpg (54 KB, 700x394)
>>107332149
>making them completely standardized might end up being too boring, having low variety.
Problem is they're already standardized since you can't control bodies unless you artist tag hunt. Would be KINO if NoobAI 2.0 let us deconstruct artist tags and pick what we want, like taking Violet Evergarden aesthetics but ditching those long fat faces.
>>
>>107332202
Nice XYZ, never could do that
>>
File: isnotflux.png (855 KB, 993x1509)
Well...it's not Flux2 but a new finetune from NetaLumina...why do 3d sloppers have way more news and advancement than us...
>>
File: iwantobeleive.jpg (2.17 MB, 3264x8876)
Aaaaand this model:

NEWBIE-AI/NEWBIE-IMAGE-V0.1-EXP-MODEL-REPO
Basic Information:
NewBie image v0.1 exp is a new architecture model that uses the Lumina image 2.0 arch as its research target.
Anime Type (we chose a neta lu2 model trained on lumina arch to further research this type of task)
>Text Encoder:
Google/Gemma3 4b it
Jina Ai/Jina Clip v2
>Networks:
Next DIT 3.5b (modified from 26 layers to 36 layers)
>VAE:
Flux 1 Dev VAE
Pretrain Dataset: full dan + 1m e621
Used 8x h200 GPUs training for four months (23000 h200 hours total)
Restructured text data using XML format
The model is currently in the training phase (60% complete).
Overall progress is approximately 80%.

THE MODEL IS EXPECTED TO BE OPEN SOURCED ON HUGGING FACE ON DECEMBER 31, 2025.
https://huggingface.co/NewBie-AI/NewBie-image-v0.1-exp-model-repo

>My schizo take:
released around the same time as the latest NoobAI, and the name is suspiciously similar. Did the diffusion gods answer my prayers? NoobAI x Lumina merge?
>>
>>107332486
Fun fact: LAXHAR is in charge of supporting a lora trainer for it
>>
>>107332486
>>107332605
First promising local model in a year.
>>
File: file.png (171 KB, 679x894)
>try googling for depth controlnet, since that seems to be the level of detail im after
>no actual explanation on how to use it, just example images
>no explanation on how to create the depth image, just "here download this example image"
>also each page is around 2 years old and still referencing sd1.5
Finding any info about literally anything AI related is like reading a fucking Fandom wiki for an obscure h-game that only has a half finished list of items and nothing else

>>107332033
Boutta just try this or go with my initial idea which is a 3D model from Koikatsu and just mspainting over it what I want then i2i-ing this bitch.
>>
File: image(1).jpg (29 KB, 222x212)
>>107332616
According to what they're saying, it understands natural language at a greater level than Neta, but each gen on 12 GB VRAM takes around 200 seconds.
pic not mine
>>
File: moreshilling.jpg (1012 KB, 1914x1616)
>>107332745
More shilling because it's therapeutic
>>
>>107332721
Depth + Lineart combo never misses. You need to run 2 CNs, but ProMax lets you reuse the same model without doubling your VRAM usage, so it's whatever.
>>
>[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
this is why open sores always loses
the level of churn, changing perfectly fine APIs just because fuck you, always destroying parts of the ecosystem because nobody has the time to waste chasing changes and doing work that adds zero functionality to their extension
I can't wait for an actually good, proprietary UI to come out for local diffusion models, the Microsoft Windows of inference if you will.
>>
>>107332780
>Depth + Lineart combo never misses
Yeah that's great buddy, but you're doing the exact thing I'm complaining about.
>>
>>107332721
>>107332780
Also if you're using WebUI or Swarm you can always preview the result of each filter, so you know what the model is actually interpreting when you send the Lineart and Depth filters
>>
>>107332801
see >>107332795
>>
>>107332795
Sorry, I'm ESL, hope it helps another anon then
>>
>>107332721
https://rentry.org/comfyui_guide_1girl#controlnet-pose-transfer
>>
Nano Banana is amazing because you can make niche shit like this
>>
>>107332886
Yeah, and less robotic. Generally SDXL gets the hands on the belly right but everything else ends up neglected, making a rigid figure with hands on the stomach, while the angle of the neck and how much the torso and legs bend are left unfinished. It's a really good model for doing SFW controlnet
>>
>>107332486
CLIP? Are they retarded? Just keep Gemma 3 4B, it's more than enough.
>>
>>107332886
You leave Kaho alone.
>>
>>107332984
That gen looks pretty stiff to me.
>>
>mistakenly let ComfyUI update
>no longer starts
ah


its going to be one of those days
>>
>>107333369
my routine for updating comfy:
1/ only do it once in a blue moon because it sucks
2/ run a script to backup workflow folder and some custom config in custom nodes and save a list of the currently used custom nodes
3/ delete everything (comfy, venv etc)
4/ reinstall from scratch
python is a massive piece of shit and fuck every person involved in making it become a popular platform
not all of them, but some of the woes of every ML program are uniquely python's, like how custom nodes can conflict with one another because they require different, incompatible dependencies/subdependencies
in javascript each dep gets its own isolated deps: thing a requires A 1.1, thing b requires A 1.2, they each pack their own copy and neither affects the other.
in rust the compiler would rename symbols for you
fuck python yo
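my backup script from step 2 is basically this python sketch (the `user/default/workflows` and `custom_nodes` paths are my guesses at the default comfy layout, adjust to yours):

```python
import os
import shutil


def backup_comfy(comfy_root: str, backup_dir: str) -> list[str]:
    """Save workflows and a list of installed custom nodes before nuking comfy.

    Paths are guesses at the default ComfyUI layout; adjust to taste.
    """
    os.makedirs(backup_dir, exist_ok=True)
    saved = []

    # workflows usually live under user/default/workflows
    workflows = os.path.join(comfy_root, "user", "default", "workflows")
    if os.path.isdir(workflows):
        shutil.copytree(workflows, os.path.join(backup_dir, "workflows"),
                        dirs_exist_ok=True)
        saved.append("workflows")

    # record which custom nodes were installed so they can be re-cloned later
    nodes = os.path.join(comfy_root, "custom_nodes")
    if os.path.isdir(nodes):
        node_names = sorted(
            d for d in os.listdir(nodes)
            if os.path.isdir(os.path.join(nodes, d))
        )
        with open(os.path.join(backup_dir, "custom_nodes.txt"), "w") as f:
            f.write("\n".join(node_names))
        saved.append("custom_nodes.txt")

    return saved
```

then after the clean reinstall you just re-clone whatever is listed in custom_nodes.txt and copy the workflows back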
>>
>>107333369
Comfy portable? I lost my Gallery button 2 months ago in an update, I miss it :(
>>
>ENSD 31337
What is /adt/'s verdict on this? Meme or not?
>>
>>107333479
No, whatever disaster of an install I did on Linux over a year ago without being familiar with how Linux worked in the first place. I just never bothered to fuck with it or fix it because it still worked. At least until I accidentally hit update.

>>107333456
Python is 99% of my woes at the moment. I have no idea how I'm going to fix this shit through pyenv or pytorch or whatever all this bullshit is. Fuck Python
>>
>>107333498
Use Comfy portable, Comfy Anon recommends this version too
>>
>>107333506
Is there a linux portable?
>>
>>107333456
Python programmers just don't know any better than flailing around and throwing shit at the wall until the app stops crashing. They don't know there's a better way. Same reason why they choose a dynamically typed language for large projects. I love Comfy but geez.
>>
>>107333523
Oh shit anon, checked and there is no Linux portable version of Comfy.
>>
>>107331159
It's Huggingface not Github.
>>
>>107333554
>I love Comfy but geez.
I used to love comfy because comfyanon showed restraint. I hate what it's become
>>
>>107333891
>because comfyanon showed restraint
????
when was that? I remember him being an obnoxious autist ever since he started shilling his ui on sdg
>>
Looks like Ani awoke from his shota-induced coma.
>>
>>107333989
got a link to the collection?
>>
>>107333855
Shhh!!! you ruined the joke!
>>
>>107333989
iirc he forgot to remove his trip before schizoposting
>>
>>107333554
Had the same issue, bit the bullet and moved all my models to an external drive and rerouted the paths. Took a weekend but now I'm not scared of deleting Comfy and reinstalling it
>>
>>107332745
>12gb VRAM takes arround 200seconds
Christ... glad local is getting new shit but 30 secs a gen already feels slow. Unless the images are actually nearly perfect every gen it's gonna be really frustrating.
>>
>>107333492
Meme. It sometimes helps with ancestral samplers by giving more randomness with the same seed, but it's not a universal improvement. Try it if you're feeling lucky, but don't expect miracles
>>
>>107333492
Complete meme. It was initially suggested to replicate NAI 1's ancestral sampler behavior; completely irrelevant now
>>
>>107332745
How much slower is it compared to normal lumina?
>>
>>107334385
This could be the SDXL 2.0 we've been waiting for and *inhales* IF IT ACTUALLY DELIVERS AND ISN'T SNAKE OIL LIKE PAST MODELS
>>
>>107334428
They are around the same. I have the same GPU as the example pic and my times are in that range
>>
>>107334429
it won't be.
it's going to be underbaked to shit and there is going to be next to no one doing any post-training (read: loras) on it because it's just too fucking heavy.
SDXL isn't the most used because it's the current "best," it's the current "best" because it has an acceptable compute cost and is fairly light as long as you have a 30 series or newer.
We're not getting off of SDXL until nvidia stops being jews, and that isn't going to happen because OAI just pulled probably the worst market manipulation tactic ever seen in any market with the RAM supply.
>>
>>107334473
>it's going to be underbaked to shit
What makes you think that?
>>
File: QWEN-Anime-AIO-L_00014.jpg (382 KB, 832x1216)
>>107334429
https://civitai.com/models/2122738?modelVersionId=2440915 this is an update of some Qwen anime finetune, but the results are sub-SDXL tier (pic related, I can't run this shit). idk if the solution is waiting for some model to fall from the sky that fixes all our problems, or doing what they did with SDXL: marry a model and everyone agrees to build on it
>>
>>107334522
Because everything always is. Sometimes by design (it's a lot easier to get a desirable result when undertrained compared to overtrained, since you can always train a model more but you can't train it less), sometimes by running out of budget, and sometimes (well, a lot of the time) by being inept (which can manifest in all sorts of forms, the most common of which is dropout schemes).
Again though, next to none of this really matters, since if you're not using an xx90 card you're not going to be able to use the model as anything more than a curiosity you give up on, because it just won't give anywhere near enough benefit for the additional compute cost.
>>
>>107334446
So will I get similar speeds to the normal one on my 6gb gpu? Which was 4s/it.
>>
>>107334647
What I mean is, to get a model with the versatility of SDXL, that model would have to come out now and we would still be waiting at least 2 or 3 years.
>>
>>107332333
*yawn*
Do these people have zero self awareness about the shit they upload? How is this supposed to be appealing enough to make anyone want to download it?
>>
>>107334681
There were viable SDXL finetunes within a few months. naiv3, while SaaS, only took 4 or 5 months. PDXL, while being fried garbage, took 6 months. Illustrious 0.1/1/1.1/2/2.1/3 were trained somewhere in spring 2024, so under a year.
Again, this isn't a matter of a model being competent or not. It's a matter of common household compute being able to run it in any acceptable manner. We are largely beholden to nVidia for a new model to actually get fleshed out. Technically it's possible someone could release a middle-of-the-road architecture that is a bit heavier than SDXL but better at everything and still runs acceptably on 30 series and up, but everyone is trying to grift off SaaS, and they have actual real GPUs, so there's no reason for anyone to bother.
>>
File: 03808-1450525165.jpg (744 KB, 1792x2304)
>>
File: 03782-3975516334.jpg (750 KB, 1792x2304)
>>
>>107334948
She can show her feet while eating, can't she? It's not like they are busy.
>>
>>107334793
>>107334948
My man, long time since I saw your gens. Were you banned again? Curious what BakerAnon did to get jannied.
>>
>>107333369
Almost 3 hours later, I'm finally back to being able to slop. Still have to reinstall all my custom nodes.
FUCK.
PYTHON.
>>
>>107335012
Congrats anon. Ok so I see a man and a snake, and my slop brain immediately thinks that's Snake from Metal Gear, right?
>>
>>107335056
I just lazily prompted solid snake biting a python just to see if I kept getting CUDA errors, so yaknow, no effort
>>
>>107335078
Isn't there a Linux equivalent of WinPython? A portable Python that you reroute into your venv so Comfy always uses it?
>>
>>107335103
There's miniconda I think, but I just use pyenv. Because comfy updated and broke on launch I had to update my Python environment, which caused a cascade of everything else breaking and needing to be reinstalled on top of comfy.
>>
>>107335134
Sorry to hear that, anon. Well, good that you can gen again!
>>
File: 03806-2021756503.jpg (778 KB, 1792x2304)
>>107335007
the usual three day
>>
>>107334754
It would be cheaper and easier to hire a bunch of hackers to leak NAI 4.5 and its Vibe Transfer model than to rebuild everything that's been put together on SDXL all this time, imo
>>
>>
>>107335298
What series is she from? Are you the same anon who has genned her all these days? Is she your waifu? Really curious
>>
>>107335311
Not that dude but Emilia from Re:Zero, she's a half elf who looks similar to a witch who almost destroyed the world centuries ago so nobody likes her except the protagonist.
https://rezero.fandom.com/wiki/Emilia
>>
>>107332486
Announcing a model release in advance is peak hubris. It's going to be obsolete before it even uploads.
>>
>>107334793
welcome baker, the thread was getting too muh tech talk without your gens
>>
File: 03787-2129222857.jpg (689 KB, 1792x2304)
>>107335519
>too muh tech talk
>/g/
kek
>>
what's the best model for controlnet sketching? i'm curious how far i can go with extreme poses using it. i've heard illustrious is the best for inpainting, does that apply to controlnets?
>>
File: 03814-1509235238.jpg (707 KB, 1792x2304)
>>
>>107335609
Yeah, but in fact a lot of the time controlnet doesn't cut it when a model is hardwired, so you can have the best controlnet model on paper, but if the SDXL checkpoint is overtrained to interpret the tags in a certain way, there's no controlnet that's gonna override it.
Regarding the models, try them out and pick whichever; if you have time, check how each one reacts, because each interprets differently.
For the problem of checkpoints that override controlnets because of their overbaked stuff, the solution is to lower the CFG so the controlnet gets more relative strength.

Start with Xinsir's ProMax
>>
>>107333506
comfy portable is just a venv + python exe bundled together
venvs do not fix some of the things that break comfy on update because what breaks comfy is your extensions (custom nodes) wanting different versions of the same dependencies and causing breakage for one another. It's a uniquely python problem, it doesn't exist in javascript (you import a lib in your node_modules, it comes with its own node_modules, its sub-dependencies do not interact with the rest of your program), it doesn't exist in rust (symbol renaming) or go (when people break API there they tend to create new module paths), it's python, it's dogshit, never write extensible software in python, it sucks, it's cursed, if you write software in python it should be fully featured from day one and not require extensions to be usable (I can't believe anyone could live in comfy without something like at least the impact pack, the tag complete extensions etc)
>>
>>107335950
ani has been saying this for years and he was spurned for it. be careful what you say or the python mafia will silence you
>>
I'm having a similar issue with EzratForge, a hyper-bloated fork of ReForge. I can't get the Python environment to function properly, yet I managed to get NeoForge working almost like a miracle thanks to GPT
>>
The model is finally out! https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
>>
>>107336678
>yet another photoreal model
ok i guess
>>
>>107336777
It can do anime, and seems to know a few characters out of the box
>>
File: 20251114_173750.jpg (281 KB, 1200x830)
Having a hard time genning anything that features a couple.
Concepts from one character keep bleeding into the other character and vice-versa... I tried using "BREAK" but it doesn't seem to do much
>>
>>
>>107336678
Could be potential SDXL replacement for anime. Just needs a danbooru finetune on top of it
>>
>>107336021
>Ani
>silenced
Why are you lying? I wish it was true.
>>
>>107336903
It's apparently faster than SDXL as well: 2-3s per image on a 3090.
>>
File: sus.png (929 KB, 1088x1400)
>>
File: 1747322049183951.png (1.4 MB, 896x1152)
>>107336958
I'm getting 3.86it/s on my 5090. Very fucking fast
>>
>>
File: 1764190196543.png (1.22 MB, 872x1384)
Is there any reason NoobAI just produces these weird explosions? Am I using it wrong?
>>
>>107337501
still have to wait for the danbooru tune but it's already GOAT as a base model
>>
>>107337574
That isn't even the base; it's a distilled turbo version, which is usually worse.
>>
>>107337542
which BA girl is that? those bleeding halos are cool
>>
>>107337589
true but that also means it isn't quite faster than sdxl base.
>>
>>107337620
At least its better than chroma
>>
File: 00053-2254528395.png (1.92 MB, 1536x1536)
>>
>>107337657
Niger
>>
File: 1755944365629706.png (1.95 MB, 1008x1504)
Freedom Gundam on z-image. Sort of accurate
>>
>>107337619
Tsurugi
FYI - 'blood halo', 'liquid halo', and 'melting halo' are all booru tags you can use.
>>
>>107337731
Thanks
>>
>>107337856
for me? it's the pencil i found along the way.
>>
Has anyone tested the character knowledge of z-model yet? Or how censored it is?
>>
>>107338180
ANOTHER model?! ANOTHER ONE?! Listen here, youngster back in MY day, we had SDXL and we were GRATEFUL for it! We didn't need these fancy "Z models" or whatever snake oil they're peddling this week!
>>
File: images (3).jpg (18 KB, 576x324)
NoobAI and Illustrious? NOW we're talking! Those are REAL models, built on the FOUNDATION of good old fashioned SDXL architecture! They've got SOUL! They've got CHARACTER! They understand what QUALITY anime generation means!
>>
File: 1+yoshihiro+shimazu.jpg (26 KB, 500x281)
But THIS? *waves hand dismissively* "Z model" , probably trained on 47 images scraped from someone's DeviantArt account! Probably can't even do hands properly! Probably gives everyone the same soulless AI-generated face!
>>
File: images (5).jpg (21 KB, 481x636)
>>107336903
>>107336678
You young prompt engineers are always chasing the NEXT SHINY THING! "Ooh, new model!" "Ooh, different architecture!" Back in my day, we MASTERED our tools! We knew every quirk of SDXL! We could coax a masterpiece out of it with our EYES CLOSED!
>>
File: zomfyui.png (2.8 MB, 1080x1920)
>>
File: images (4).jpg (23 KB, 738x415)
Kids these days... no respect for the classics...
AND DON'T EVEN GET ME STARTED ON COMPATIBILITY
All my LORAs? My embeddings? My crafted workflows? You expect me to throw that ALL AWAY for your experimental "Z model"?!
TO THE TRASH IT GOES!
>>
I was BORN WITH SDXL AND I WILL DIE WITH SDXL! They'll pry my 1024x1024 base resolution from my COLD, DEAD HANDS! When I'm six feet under, they better bury me with a hard drive full of SDXL checkpoints, because that's ALL I'M TAKING TO THE AFTERLIFE!
>>
this is a bit chatgpt, but I agree with it.
>>
Mark my words , in six months, NOBODY will remember "Z model"! But SDXL? SDXL is ETERNAL! SDXL is LEGEND!

Now get off my lawn before I generate a 30 step Euler A gen of you running away!
>>
what caused this?
>>
real women
>>
>>107336958
FASTER?! Since when did SPEED matter?! Art takes TIME! I let my SDXL renders cook for 45 steps with DPM++ 2M Karras and I SAVOR every second! You zoomers want everything INSTANT!
>>
And another thing, you're ALL going to come CRAWLING back when you realize "Z model" can't do [insert specific anime style] properly! Meanwhile, I'll be sitting pretty with my NoobAI workflow that I've PERFECTED over months!
>>
>>107338679
Have fun being stuck with clip and a 4channel vae. Kek
>>
>>107338707
>clip
you don't need more
>4channel vae
just upscale bwo
>>
>>107338707
Go ahead, chase your shiny new toy. We'll see who's laughing when your "Z model" gets abandoned next month!
SDXL is a WAY OF LIFE! A PHILOSOPHY!
I WILL STICK TO THE CLASSICS!
>>
File: media_1762574545.png (1.32 MB, 768x1280)
>>
File: 00003-3094481985.jpg (795 KB, 1600x2048)
>>
>>107338751
B-but anon, it's got better prompt adherence!
>>
File: ComfyUI_temp_ozggz_00005_.png (3.42 MB, 1248x1824)
>>
File: media_1762054606.png (3.03 MB, 1152x1920)
>>107335311
Emilia from Re:Zero. I am generating her a lot. She is in my top 3 waifus. First is my wife Megami Yokatta (OC), 2nd is Isis Remnant.
>>
adhere to this
*grabs crotch*
>>
>>107338838
NEW ARCHITECTURE MEANS NOTHING IF THE FOUNDATION IS ROTTEN!
>>
File: ComfyUI_temp_ozggz_00008_.png (3.01 MB, 1248x1824)
>>
>>107338838
SKILL ISSUE! Learn to prompt properly!
>>
>>107338868
This quote is so fukin good!
It annihilates jewish bible xD
>>
>>107338861
GOTEM
>>
Some of you need to REMEMBER YOUR ROOTS! We didn't spend YEARS training LoRAs, building datasets, and optimizing workflows just to abandon ship the moment some research lab drops a new toy!
>>
You want to know what TRUE innovation looks like? It's called Illustrious! It's called NoobAI! BUILT ON THE SHOULDERS OF GIANTS! Built on SDXL! Not this experimental model nonsense!
>>
>>
>>107338679
nevermind specific anime style, it can't even gen reimu
another meh release only targeting photoreal slop
>>
File: nbp-logt.jpg (1.01 MB, 1544x1144)
>>107338902
nice
>>
File: ComfyUI_temp_ozggz_00006_.png (3.24 MB, 1248x1824)
>>107339049
>>
>>107339047
Just tested it... character knowledge is absolute dogshit. It can do nsfw at least.
>>
>>
>>107339430
So it's still in "just one more finetune bro" status?
>>
This is some kamikaze stuff. What happens to poor NetaYume, who was about to release v4 of Neta Lumina, now that this model is out? Months of cash and work down the drain? How unforgiving this tech stuff is
>>
>>107339819
some of the members that trained it made z. it's whatever
>>
>>107339819
NetaYume is legit better than the z model, lmao.
>>
>>
>>107339049
Did you go for that crossover or was that just the AI gacha doing its thing?

>>107338831
>>107337720
Nice.
>>
File: 03811-1347627240.jpg (707 KB, 1792x2304)
>>
File: 03816-3391475475.png (3.62 MB, 1792x2304)
>>
>>
I haven't checked in here in awhile
is the poop dick schizo still at large?
>>
File: 1738551093252650.jpg (209 KB, 1360x768)
>>107341390
>the poop dick schizo
He's behind me... isn't he...
>>
>>107320355
I just started reading about this yesterday
I'm using ComfyUI. So Illustrious, NoobAI, and Pony are all built on SDXL, so what's the big difference between them? Can they share the same LoRAs or are they aimed at one checkpoint specifically? Currently I've downloaded mostly LoRAs labeled for Illustrious and I'm using the checkpoint WAI-Illustrious-SDXL. What do you people use?
Also a question: are local models the standard or is there another API ecosystem? Because for LLMs no one uses the local ones anymore
>>
>>107341494
illustrious and noobai are close enough for mostly compatible loras, noobai was trained from illustrious
novelai is the major paid service for anime and anime smut
>>
>>107341494
NoobAI + WAI are products of illustrious
Pony is different and can only generate horses
hope this helps
>>
File: 03803-395919035.jpg (832 KB, 1792x2304)
https://files.catbox.moe/hp2or7.png
https://files.catbox.moe/izz9p2.png
>>
>>107341494
Some Pony loras work with Illu checkpoints, so don't dismiss them outright. I don't know what the underlying rules are or what % of Pony loras work, but I can confirm some do.

>>107340551
Cute Lanlan.
>>
>>
>>
>>107335621
so cute T_T
box please
>>
File: cui__14942_.png (1.17 MB, 832x1216)
>>
File: 03801-4133232063.png (3.72 MB, 1792x2304)
>>107342068
https://files.catbox.moe/grm32c.png
>>
>>107320355
does anyone have another img2prompt?
The one in OP's is 404
>>
>>107343075
>The one in OP's is 404
no?
>>
>>107343075
This is better for booru tags
https://huggingface.co/spaces/SmilingWolf/wd-tagger
>>
>>107343089
y-yes..?
>>
Asking here also...
Have any of you had this sort of issue with artifacting before? I'm on ComfyUI, using 'illustriousXLPersonalMerge_v30Noob10based' as the model with '4x_foolhardy_Remacri' for upscaling, in case that's relevant info...
It's really annoying me since everything else about it seems to work super well. Though I wish there was a way to run isolated sections of my workflow so I didn't have to keep swapping tabs to use the tag info loader when trying different loras
>>
>>107343169
not well known, but you can also use LSNet to get artist tag information, something wd14 can't do
https://huggingface.co/heathcliff01/Kaloscope2.0
https://github.com/spawner1145/comfyui-lsnet
there's also a nice huggingface space here if you don't want to install it and fear le chinese:
https://huggingface.co/spaces/DraconicDragon/Kaloscope-artist-style-classifier
it works strangely well and is very useful if you want to know which artist tag from danbooru is most similar to a picture; if it's from an actual danbooru artist (or genned from their tag in noobai) it will correctly identify them.
>>
File: ComfyUI_01698_.png (1.42 MB, 1024x1280)
>>107338679
They said this about SDXL in 2023, it was worse because SD1.5 had all the loras and shitmixes. But things change...
>>
>>107331744
>>107340242
Catbox
>>
Shhhh, thread is going to sleep...
>>107343455
You didn't attach the image bro.
>>
File: ComfyUI_00025_.png (414 KB, 716x594)
>>107343535
>>107343455
i knew i forgot something
anyway, now my UI is spitting out 'reconnecting' and 'failed to fetch server logs' despite running literally fine last night, and the only thing I changed was adding the Custom-Scripts extension for the show text node so I could load the tags from lora metadata and not need to remember them all the time
>>
Thread #69:

>>107343546
>>107343546
>>107343546
>>
>>107343559
I had splotches like this when I was using dpm++ 3m with too low step count. What sampler/scheduler/step count are you using?
>>
>>107343581
dpmpp_2m_sde, 'normal' scheduler, 28 steps
Copied off another anon's catboxed image
>>
>>107343600
These settings should be fine. If you catbox any problem image (along with base image you used for upscaling), maybe someone can dig deeper idk.
>>
>>107340494
that was prompted
>>
>>107344616
OK that makes more sense.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.