/g/ - Technology

Discussion of Free and Open Source Diffusion Models

Prev: >>108085473

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>>108090101
>Anima
>No Qwen Image
Troll bake. I shan't be posting.
>>
>>108090111
I mean, yeah but the obvious tell is the mental melty rentries. of course he'd fuck with the model links too
>>
https://edit-yourself.github.io

Bruh..
>>
File: anima.png (3.54 MB, 1216x1824)
>>
>>108090111
>I shan't be posting.
Hah, if only.
>>
>>
>>108090111
>>108090121
>>
What settings do you anons use with anima? I feel like I'm doing something wrong... The anatomy is iffy.
>>
>>108090323
pretty much the same as what i used with illust and noob desu
>>
File: ComfyUI_01090_.png (933 KB, 1024x1024)
>>
I have: 4080 (16 GB VRAM) + 32 GB DDR5 RAM
What am I missing out on by not getting a 5090 (32 GB VRAM) + 64 GB DDR5 RAM, besides faster speed?
>>
>>108090101
mid fagollage
too few 1girls
>>
File: Anima_00015_.png (1.69 MB, 832x1216)
>>
File: 1745522707096187.jpg (682 KB, 1280x1792)
>>
File: 1761381143308019.png (3.42 MB, 1536x1536)
1girl, standing
>>
File: 1756109804427719.png (3.44 MB, 1536x1536)
i love 1girl standing
>>
Alas... there are but few places for the 1girl to stand
>>
File: 1739381800806146.png (3.58 MB, 2016x1120)
>>
File: 1743696931967674.png (3.15 MB, 2016x1120)
>>
File: 1757558989741826.png (3.79 MB, 1536x1536)
>when the 1girl standing gets mad
>>
>>108090277
lainbox?
>>
File: 1girlsitting.png (1.12 MB, 896x1152)
>>
>>108090554
No.
>>
is chroma klein usable yet
>>
File: Anima_00035_.png (3.17 MB, 1248x1824)
>>
>>108090595
why would it be?
>>
File: Anima_00039_.png (1.33 MB, 832x1216)
>>
File: anima.png (3.22 MB, 1216x1824)
>>
>>108090595
The furtroon cult in chromacord love to shill their shit aggressively. You would be hearing about it when it's ready. No need to ask.
>>
File: 1744174116287568.png (2.91 MB, 1216x1824)
>>
File: 1758777431055528.png (3.35 MB, 1216x1824)
>>
What's with the slop spam? Is this a raid?
>>
>>108090685
>>108090707
>>108090725
like the style
>>
>>108090781
Most posters are either schizos or trolls (sometimes both), no need for an external raid.
>>
File: anima.png (810 KB, 832x1216)
>>
>>108090781
it's C-level Openai and Google employees coming here to shit the thread because they fear its power
>>
File: 1754177393635435.png (3.11 MB, 1216x1824)
>>108090785
ty anon
>>
>>108090794
>catbox down
curse this Wired
litterbox?
>>
>>108090822
>catbox down
It's like every two days, what the hell is going on.
>>
File: Anima_00041_.png (2.05 MB, 1645x1600)
>>
chan come back
>>
File: out.png (893 KB, 768x1344)
>>108090929
>>
Anyone know where I can get the flux2 VAE without having to sign up for it? I'm trying to train a Klein lora but the VAE from comfy is giving errors

>>108090250
Nice
>>
>>108091023
https://huggingface.co/Comfy-Org/vae-text-encorder-for-flux-klein-9b/blob/main/split_files/vae/flux2-vae.safetensors
>>
File: 1436713903844.jpg (24 KB, 250x241)
>check out lora on civitai
>check the user submissions at the bottom to get an impression
>literally 1000+ submissions by the same single user (who isn't even the author)
>mfw
>>
>>108091033
Sank you
>>
>>108091034
>casually browsing loras on civitai
>T8STAR
>SARA PETERSON
>T8STAR
>SARA PETERSON
>T8STAR
>SARA PETERSON
>>
>check out lora on civitai
First mistake.
>>
File: ComfyUI_10800.png (2.89 MB, 1440x2160)
>>108090528
Just get a 6000 Pro instead of a 5090 at scalper prices. It's 3x the VRAM for twice the price.
>>
But seriously, why does he upload literally every single one of his gens to civitai? You're only supposed to upload the good ones, not every single seed.

https://civitai.com/models/1967399/school-days-hq-style-il
>>
File: QwenImg_00021_.png (2.63 MB, 1920x1440)
had a crack of genning the same person over and over, but got bored within 30 minutes
dunno how you freaks do it, are you dead inside?
>>
>>108091102
>had a crack of genning the same person over and over, but got bored within 30 minutes
>dunno how you freaks do it, are you dead inside?
The pedophilia helps
>>
>>108091096
It's just jeets uploading shit to farm buzz.
>>
>>108091076
>T8STAR
no idea who that is

>SARA PETERSON
blocked for a while, who the hell views his loras as anything but spam
>>
>>108090528
Pretty much nothing. The ram can help for video gen, but that's kind of it.
>>
>>108091236
What kind of help? Like, quality?
>>
>>108091251
Better quality, longer videos.
Not worth it, wait for next gen.
>>
>>108091076
>>108091213
>(((SARAH)))
>(((PETERSON)))
lmao, 0% subtlety. Dude might as well use a merchant pfp while he's at it
>>
>>108091213
T8star spams workflows with little information that mainly links to his youtube channel.
>>
>jeetivai
>>
>>108091291
I see, a member of the furk cult
>>
File: 462901971177229.png (1.39 MB, 1216x848)
>>
cursed thread of benis
>>
>>108091342
I miss final fantasy games with romance and hot girls
>>
File: 442949241395236.png (1.63 MB, 1216x832)
>>
>>108090781
>10 images in an hour is spam
>>
File: 823695210007777.png (1.62 MB, 880x1168)
>>108091348
Well the girls are still hot.
>>
>>108090101
I'm trying anima, the results are interesting but it's my first time using comfy. Can I do img2img upscale with comfy? The base outputs of anima are lacking some details and can be a little melted; I want to try upscaling to see if it fixes the problem.
>>
>>108091439
Yes. Get the denoised latents from the first sampler, use an "Upscale Latent By" node, pass that to a second sampler (can use anima again or a different model), and use < 1.0 denoising strength.
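A tiny sketch of why the < 1.0 denoise keeps the composition: diffusers-style img2img only re-runs the tail of the noise schedule, so lower strength adds detail on top of the upscaled latent instead of regenerating it. The function and numbers are illustrative, not ComfyUI's exact code:

```python
def img2img_schedule(num_steps: int, strength: float) -> list[int]:
    """Return the step indices a second-pass sampler would actually run.

    With strength 1.0 every step runs (full re-gen, composition lost);
    with e.g. 0.5 only the last half runs, refining the upscaled latent.
    """
    steps_to_run = min(int(num_steps * strength), num_steps)
    return list(range(num_steps - steps_to_run, num_steps))

# 20 steps at 0.5 denoise -> only steps 10..19 are re-sampled
second_pass = img2img_schedule(20, 0.5)
```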
>>
>>108091387
FF hags aren't hot.
>>
>>108091387
>>108091342
Can you share a box of that painting style?
>>
>>108091387
maybe but the lack of romance/harem makes the sex appeal dry
>>
>>108091091
That's wonderful! Now, can you go fuck yourself somewhere else?
You fucking waste of space.
>>
>>108091594
you remind me of old grandmas behind windows looking at younger people all day every day and bitching about them
>>
File: Klein_Edit_00212.png (2.28 MB, 2048x864)
>>108091606
Oh... let him be mad, it don't bother me any.
>>
>>108091439
Anima has issues upscaling above 1.5k I believe
>>
1girls that aren't anime almost always have huge hands and feet, I'm starting to think all these models were trained on trannies
>>
I wouldn't choose Anima as the only model for my gens. I would put SDXL in the HiResfix and Adetailer pass, taking advantage of the fact that they share the same booru language.
>>
I need controlnets for Anima.
>>
>>108091728
literally why. just type shit out
>>
I need controlnet for Klein
>>
>>108091731
Tiled upscaling, img2img edit.
>>
>>108091756
>tiled upscaling
Why not SeedVR?
>>
just wait for full release
>>
>>108091762
For extra details.

>>108091772
When? At least tongyui said when to expect baze.
>>
>>108091642
noticed that as well
>>
File: test_03.jpg (921 KB, 2160x2688)
>>108091642
>>108091818
You can always use a hand detailer/stop prompting for trannies.
>>
File: o_00281_.png (1.53 MB, 896x1152)
>>
>>108084746
>>108090039
I am continuing my blog about anima lora training.
So I trained again with adamw (the optimi version), but that also came out shit, having next to no effect on images. This prompted me to mess with prompting and try different prompts, and it turns out prodigy did work on my initial attempt (well, not really, but kinda). It seems that, perhaps because my captions are gemini natural-language ones only, the lora works iffy with sdxl-style tag-only prompts at times (my initial test prompt was like that); natural-language prompts, or supplementing tags with NL, seem more reliable. (But again, I had some tag-only prompts work fine later on, so I am not 100% sure what exactly is going on.)
And as for the kinda part: despite running 4k steps (2k with batch size 2, which was usually enough for me in SDXL loras), it comes off like it is very underbaked/underfitting; it certainly has elements from the source style, but it is not there yet. Looking at the epochs, they don't look like they are converging into anything imo, so I think more steps are needed? Maybe I will try 8k or even 10k and leave it running overnight. Another alternative explanation, which is not really likely but maybe possible is that 0.05 weight decay was too much? Then again I don't think 0.05 is such an extreme value and I set that because the dataset has some unusual stuff in it that I don't want the AI to pick up. Maybe I'll lower it to 0.02 but I don't know.
1/2
>>
File: anima lora epochs.jpg (3.21 MB, 9856x1152)
>>108091971
Another thing is that apparently I fucked up and put lr_scheduler = 'linear' in the wrong place in my toml; it's possible that it didn't take effect and it trained with the default one, constant? (Can't verify 100%, but it's likely.) I usually train prodigy with cosine and diffusion-pipe doesn't have cosine so the next closest thing is linear. Maybe that will help to pick up finer details with low lr.
Haven't run a training run with compile, but it seems to run 30% faster with it in my brief testing; no idea about the quality impact though. I am considering using it for the next run, but it feels like I am starting to introduce too many independent variables. I am going to state again that it's twice as slow as SDXL training per step. Also, the quality of the prodigy outputs was a lot closer to the source style than the adamw optimi ones, possibly noteworthy since some people hate prodigy.
Also here are the epochs of the lora. First one is without lora.
2/2
>>
What accounts should you block to unslop your civitai experience? I've blocked Sarah Peterson so far
>>
>>108091976
>prodigy
i always see people claim this is the best optimizer all the time but it gave me nightmarish results for flux klein
whats up with that
>>
>>108092048
>people claim this is the best optimizer all the time
retards who don't know how to tune the optimizer; it's the best when you don't know what you are doing because it automatically scales the learning rate
>>
>>108092048
same, maybe I'm just rarted but the results I got with it were always shit
>>
>>108091971
>Another alternative explanation, which is not really likely but maybe possible is that 0.05 weight decay was too much? Then again I don't think 0.05 is such an extreme value and I set that because the dataset has some unusual stuff in it that I don't want the AI to pick up. Maybe I'll lower it to 0.02 but I don't know.
0.05 is huge. A "normal" value is 0.01

>>108091976
>I usually train prodigy with cosine and diffusion-pipe doesn't have cosine
Prodigy adapts lr and expects a constant schedule without warmup. Anything else is screwing with expectations built into it
>>
File: 1712086406949577.jpg (1003 KB, 1280x1731)
Can someone give me the final redpill on generating original manga? Is it still just not practical at the moment?

Assuming I want to use original characters (not characters from existing series), would I have to draw (or model) them myself before they could be useful in a generative pipeline? I would imagine there is still no way text2image can maintain consistency for a character it doesn't know with just tags.
>>
What trainer supports muon optimizer? Has anyone tried it? The one Kimi is trained with.
>>
Has anyone had any luck genning with the klein 9b BASE model? I just trained a couple of loras on the base and the likeness is like 80% there using the distilled model. I wanted to try the base instead and see if I could find those missing 20%, but I just can't seem to find the proper combination of nodes and settings to get a clean gen.

if someone has a klein 9b base workflow i'd appreciate it.
>>
>>108092048
>>108092063
>>108092064
I didn't claim that it's the best optimizer ever at all but I said it worked better than adamw in this specific case (neither really hit the mark though). I also had good experiences with it back in SDXL days.
I haven't trained any klein loras yet. But I can say it is usually worth messing with prodigy parameters. These are what I typically use:
lr = 1
lr_scheduler = 'cosine'
betas = [0.9, 0.999]
weight_decay = 0.1 - 0.01 (depends on dataset)
decouple = true
d_coef = 1
use_bias_correction = true
safeguard_warmup = true
and slice_p = 11 if you need to lower vram use, with a minimal hit to optimal-lr calculation accuracy.
Also don't use warmup steps with prodigy.
>>108092067
>0.05 is huge.
I have seen successful prodigy loras being trained all the way to 0.5. No that wasn't a typo. 0.05 isn't huge at all.
>Prodigy adapts lr and expects a constant schedule without warmup.
The without warmup part is correct but constant part is false.
>https://github.com/konstmish/prodigy
>As a rule of thumb, we recommend either using no scheduler or using cosine annealing...
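For anyone wanting to drop those settings straight into diffusion-pipe, here is the same list in its [optimizer] TOML form. This is a sketch: the key names mirror the prodigyopt arguments listed above, but whether diffusion-pipe forwards every key through unchanged is an assumption worth checking against its example configs.

```toml
# Sketch: the Prodigy settings above as a diffusion-pipe style [optimizer]
# block; weight_decay is dataset-dependent (0.1 - 0.01 per the post).
[optimizer]
type = 'Prodigy'
lr = 1
betas = [0.9, 0.999]
weight_decay = 0.01
decouple = true
d_coef = 1
use_bias_correction = true
safeguard_warmup = true
```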
>>
>>108091971
Re: Anima captions, I saw this post from the model creator the other day, FYI:
https://huggingface.co/circlestone-labs/Anima/discussions/9#69812bd9511f2d67952084ae
>For the captioning, every image has multiple caption variants and it trains on all of them. Full tag list, tag list with dropout, tags followed by caption, caption followed by tags, short caption only, long caption only.
>Loras are usually very light finetunes so probably a wide variety of captioning styles could work. It's not like you have to exactly follow how the base model was trained.
I've never touched training myself though.
>>
>>108092078
There was a paper but can't find it.
>>
>>108092121
>it worked better than adamw
It didn't work better than adamw, it worked better than UNTUNED adamw. You may also want to decrease beta2 ([0.9, 0.999] -> [0.9, 0.995] or [0.9, 0.99])
>>
>>108092078
It's over; local is not ready for that, and neither is Nano Banana Pro nor NovelAI. Maybe the cloud services can achieve it somehow, but you will have to edit a lot yourself
>>
>>108087770
Also found this from an Nvidia user:
https://huggingface.co/circlestone-labs/Anima/discussions/15#6980829da76c6621376c3035
>On a bf16 supported GPU (4060Ti 16GB), it's only about 3x slower compared to SDXL and is expected as this model is more compute intensive.
Which matches my experience, so thankfully not just an AMD thing.
>>
>>108091971
>>108091976
i would try updating https://github.com/tdrussell/diffusion-pipe/issues/507
>>
>>108092126
I am considering changing captions in the dataset, enhancing it with tag only captions and mixed captions.
But I don't think a rank 32 lora would affect the fundamental prompting meta of the model that much, no? (Also bearing in mind that I am not training the text encoder.) I might get around to testing this but doesn't seem like a high priority issue. Useful to keep in mind for future loras though. I captioned this dataset for ZIT lora training so that's why its natural language only. (Couldn't train anything besides deformed slop for ZIT but that's a different story).
>>108092146
Well it's possible that tuned Adamw would beat it. Though each run takes 4 hours, so I am not sure if I want to spend a day or two minmaxing parameters.
>you also may want to decrease beta2 ([0.9, 0.999]->[0.9, 0.995] or [0.9, 0.99]
0.999 comes from older SDXL runs where I found it worked better for what I was training. What you are saying is possible, I might change it.
>>108092180
On 3060 it's only slightly slower.
Wth is going on?
>>
>>108092185
I know about that lol I set it up and started messing with it before that commit so had to fix it on my own.
>>
>>108092190
For completeness sake these are the adamw_optimi parameters I used:
[optimizer]
type = 'adamw_optimi'
lr = 2e-5
betas = [0.9, 0.99]
weight_decay = 0.05
eps = 1e-8
Copied from main_example.toml.
>>
>>108092190
>I am not sure if I want to spend a day or two minmaxing parameters
You can also just look at what LR prodigy picks and use that on adam (for that just set the scheduler to constant and wait for it to stabilize)
>>108092190
>0.999 comes from older SDXL runs where I found it worked better for what I was training
were you doing training runs that long? reducing beta2 speeds up training because lora training runs are too short
You can also get a llm to add a cosine scheduler to the train.py code; it's a pretty small and simple addition
>>
>>108092121
>I have seen successful prodigy loras being trained all the way to 0.5. No that wasn't a typo. 0.05 isn't huge at all.
You only increase weight decay if it's having trouble generalizing. Read up on what weight decay does

>The without warmup part is correct but constant part is false.
>no scheduler
What do you think constant means?
>>
Kinda sad. /sdg/ is using 1girl in the OP to pull new users. Hope it works, but the genners there are wildcard-sloppy and lack taste. Probably just bumping; here they wouldn't need wildcards tho.
>>
>>108092078
>here is still no way text2image can maintain consistency for a character it doesn't know with just tags
The consistency is good with illustrious or noob but it's complet ass at complex settings and understanding anything else than keywords.
>>
>>108092175
I don't have any problem with editing, in fact I enjoy that. I'm just trying to figure out what the best practice is for sustaining character consistency between panels.
I'm wondering if I could create my characters and backgrounds in 3D (like in Blender), then I could use that for posing and composition, then pipe it to image2image in ComfyUI for the manga aesthetic. I'm just still not sure if it would be consistent enough.
>>
File: 972963063.png (1.48 MB, 784x1472)
>>108092254
>1girl in the OP to pull new users
you mean indians?
>>
Can I use a 5070ti?
>>
>>108092306
sure, go ahead
>>
>>108092285
Can you post some examples? Specifically of an original character in multiple poses, compositions and angles, and the character maintains his/her proportions, style, clothes, etc.

If WAI can do this, I would be amazed.
>>
>>108092306
no it's forbidden
>>
File: 78.jpg (2.04 MB, 2360x3600)
>>108092220
>You can also just look at what LR prodigy picks and use that on adam (for that just set the scheduler to constant and wait for it to stabilize)
So the problem is that the terminal logs show learning rate as 1.0. (It's clearly not as it's not fried to oblivion.) I think it's bugged? I don't know what learning rate prodigy picked here.
>were you doing that long of training runs? reducing beta2 speeds up training because lora training runs are too short
Well no, not long runs. 0.99 vs 0.999 comes from A/B testing where it performed better. Could be just luck of the draw I suppose. Maybe it was a mistake to roll with it but I guess the effect wasn't that drastic?
>You can also get a llm to add cosine scheduler to the train.py code, its a pretty small and simple addition
I thought about that too, but since prodigy only displays 1.0 regardless of what learning rate it picks, it would be hard to tell if it is functioning properly until after training.
Maybe I can just run Adam for a while to observe if it works properly though.
>>108092246
>You only increase weight decay if it's having trouble generalizing.
Fail to generalize by overlearning undesired details, yes. The dataset has unusual images like this, that's why I am afraid of lowering it. Maybe I am excessively paranoid from SDXL days and anima would cope better with this kind of stuff, but I don't know.
>What do you think constant means?
I know what it means, did you read what they wrote about cosine? It is constant OR cosine.
>>
File: help.png (33 KB, 581x693)
HELP!
I already,
updated the node,
re cloned the repo,
opened the UI in both Chrome and Firefox,
updated ComfyUI,
updated the Python dependencies in the Comfy portable update folder.

Why can’t I see anything inside the Resolution Master node? I love this node :(
>>
>>108092078
Continuing >>108092285 : the problem is the model being too consistent, tending to produce the same faces with just a different combo of haircut, hair color and eye color. You can limit the problem with some tricks like makeup or special traits (good luck with mole rng); lip thickness tends to change the style way too much, and having a different eye shape like in your picture is a nightmare. Also some mixes will refuse to make certain haircuts.
Outfit consistency is 25%; you will have to fight details constantly. Backgrounds can be rough, and multiple characters with regional prompting will drive you mad as it fucks with quality. Forget about prompting specific stuff in multiple koma.

>>108092316
you are ok with nsfw?
>>
>>108092324
>I don't know what learning rate prodigy picked here.
I'm almost sure the training code outputs a file that can be visualized with tensorflow that shows the learning rate it picked, either way you can also ask a llm to add a print to the prodigy code so prodigy itself prints the learning rate it estimated
>>
>>108092316
https://files.catbox.moe/yp105n.jpg
>>
>>108092114
i trained 100 loras for klein and only one was usable (with base and distill)
something is seriosuly wrong with that model
>>
>>108092391
yeah i think the base -> pipeline has been fucked with. but i just realized that using lora + reference image fixes all issues with likeness. however, it seems to fuck with anatomy a lot. would probably work better with a lora with fewer steps but i cant be assed to test it out before pressing submit.
>>
Anyone trained usable loras for Z Image yet? My first attempt with stock settings kinda failed (likeness not coming out well in ZIM, but for some reason the lora works MUCH better when you use the turbo distill lora with it). I'm now training with a much lower learning rate as suggested and am at 7k steps without the samples showing any signs of collapsing, and the samples look pretty good, which says nothing because they also looked good in the first failed attempt. Weird model
>>
>>108092338
Turn off nodes2 in settings.
>>
>>108092423
>Anyone trained useable loras for Z Image yet?
they came out borked for me, for datasets that worked fine on zit
>lora works MUCH better when you use the turbo distill lora with it
have not tested this yet, cant really believe it
>I'm now training with a much lwoer learning rate as suggested
by whom?
>>
is there even a core difference between all those trainers out there? i dont mean the GUIs or what the configs look like, but the actual training. shouldnt it be the same for all of them when you pick the same optimizer, lr, resolution etc?
>>
we all agree klein is king right?
>>
>>108092472
for editing, yes, no doubt.
>>
>>108092459
>have not tested this yet, cant really believe it
I've just doen this one lora so far, but it happens in that case. With the turbo lora the output looks more slopped, but the likeness is much closer, idk why as I'm a techlet
>by whom?
onetrainer discussion page, and at least one anon in an earlier thread said that Z Image seems to prefer lower learning rates, but idk yet since the lora is just training atm
>>
>>108092471
>but the actual training
Yes, they are all the same; the differences are only in the options you have (like focal frequency loss, EDM2) and in how they handle stuff like bucketing. In the end they all just do a forward pass and propagate the gradient of the loss in the backward pass. That doesn't mean the extra stuff some offer (like EDM2) doesn't improve the final result; it does, and those features are the reason to pick one trainer over another.
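To make the "they all do the same core loop" point concrete, here is a toy version with a single weight and a hand-derived gradient. Pure illustration, not any trainer's actual code; real trainers differ only in the extras wrapped around this loop:

```python
def train(lr: float = 0.1, steps: int = 100) -> float:
    """One-weight 'model': every trainer's core loop has exactly this shape."""
    w = 0.0                            # weight; the optimum is w = 3
    for _ in range(steps):
        pred = w * 2.0                 # forward pass
        grad = 2 * (pred - 6.0) * 2.0  # backward pass: d/dw of (pred - 6)^2
        w -= lr * grad                 # optimizer update (plain SGD)
    return w

# train() converges to w = 3.0 no matter which "trainer" wraps this loop
```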
>>
File: 1768705512746987.png (3.54 MB, 2016x1120)
>when uire waifu is sexting
>>
>>108092534
Holy shit a woman. Respectfully thank you for the confirmation of the information.
>>
>>108092538
>Holy shit a woman
are you sure?
>>
>>108090291
this
>>108089616
is peak, and I would love MOAR
>>
>>108092527
hmm i see. is one of the trainers clearly better/worse than the others in that regard? i dont know which one i should stick with to get the best results
>>
>>108092445
Modern Node Design (Nodes 2.0), thanks, I deactivated it and now it works!
>>
>>108092369
Yeah, that seems possible. It ends around 3.3030093618435785e-05, but I am seeing the value change throughout the training: it starts around 4.7568460104230326e-06, then 1.2724359294224996e-05, then 2.4597773517598398e-05 before the final 3.30...e-05.
Not sure which one I am supposed to plug into Adam.
>>108092556
OneTrainer and musubi are more feature rich than others. OneTrainer has actual UI so you may want that.
Ai-toolkit is popular since it is simple but pretty barebones functionality wise imo and I dislike it due to that.
>>
File: 1761926207361679.png (116 KB, 1124x945)
does a node exist that is able to check if other nodes are bypassed and is able to change the input accordingly?
>>
>>108092630
yes
>>
sorry, is this /adt/? why all the anime spam? go to your own general
>>
>>108092637
This is my general however, I own it.
>>
>>108092623
>I am seeing values change throughout the training though
you need to set scheduler to constant, else it will show a wrong estimate
>>
>>108092644
this
>>
File: rosa02.png (1.04 MB, 1152x896)
>>
File: ComfyUI.png (829 KB, 1917x1008)
ComfyUI:
Settings Keybinding Simple Mode
Holy f*ck, now Comfy has a Forge like UI.
Please add an inpaint or img2img UI!
Thank you, Comfy!
>>
>I love idiotisation of ui :D
absolute state
>>
>>108092679
lucky boy
>>
>>108092658
It's increasing though, which is what puzzled me. Linear is supposed to decrease it (assuming it took effect) so something weird is going on.
>>
File: o_00302_.png (1.51 MB, 832x1216)
>>
>>108092680
huh
looks comfortable
>>
File: 1747653580216176.jpg (464 KB, 1536x1536)
>>
how tf is video getting better so much faster than image models?
https://litter.catbox.moe/jqlbv23jc8o1w11t.mp4
https://litter.catbox.moe/zdynkggghf20jiko.mp4
>>
>>108092752
local?
>>
>>108092689
No, this is a real improvement, not having to build nodes anymore. Now the only thing left is for Comfy (and I'm mentioning it now, @ComfyAnonymous) to make ready-to-use templates for txt2img, hires fix, and ADetailer in ComfyUI, plus inpaint, img2img, and ControlNet.
You know how good it is to finally move away from Haoming02, which still struggles with memory management and slow new-model updates, with extensions half broken on Neo and Classic.

Please, @ComfyAnonymous, keep updating this Simple Mode! I don't want to return to *orge Neo!
>>
>>108092759
this is for sure a giant 100B+ model. So hopefully in 10 years
>>
>>108092768
8GB will still be the VRAM standard in 10 years
>>
>>108092693
>Linear is supposed to decrease it
yes, and if prodigy feels the resulting learning rate is too small it will increase it. The learning rate number is a multiplier: if you set lr to 1 and prodigy estimates 1e-4, it stays there; if you didn't train enough and set the lr to 0.5, prodigy will re-estimate the lr to 2e-4 to compensate. That's why you need to set it to constant to get an estimated lr to use on adam
>>
>>108092680
>thumbnails broken
>>
>>108092775
if consumer hardware still exists
>>
>>108092779
also, to add to that, you should do this on multiple seeds, prodigy will estimate a different value each time but they will be close enough so you will have a better estimate/range of learning rates to use, you don't need to wait the whole training, only wait for it to stabilize, then get the value, stop training, and start again on a different seed to get a new estimate
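The multiplier semantics from the posts above, sketched as plain arithmetic. The function is just the product described by the anons; the claim that the stabilized estimate transfers directly to AdamW is theirs, and `d` stands in for Prodigy's internal estimate (an assumption about prodigyopt's internals, worth checking against the repo):

```python
def effective_lr(d: float, lr_multiplier: float = 1.0, schedule: float = 1.0) -> float:
    """Effective Prodigy step size: the configured lr multiplies the
    internally estimated d (and whatever the scheduler contributes)."""
    return d * lr_multiplier * schedule

# With lr = 1 and a constant schedule, the stabilized estimate itself is
# the number to copy over to AdamW:
adamw_lr = effective_lr(3.3e-5)
# Halving the multiplier makes prodigy re-estimate d upward to compensate,
# so the product (what actually matters) lands in the same ballpark:
same_ballpark = effective_lr(6.6e-5, 0.5)
```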
>>
yes actually holy fucking shit. train a klein lora of a character and then combine it with a reference photo of the same person. i've never seen likeness this good before.
>>
>>108092680
how do i get this shit?
>>
>>108092807
pay for the Creamulum package
>>
>>108092802
>combine it with a reference photo of the same person
you mean telling it to put replace the face in the image generated with the lora with the face in the reference?
>>
>>108092807
>Settings Keybinding Simple Mode
Go to Settings
Go to Keybinding
Search for "Simple Mode"
Create a Keybinding for Simple Mode
Activate it!
Say goodbye to *orge Neo!
>>
>>108092680
Test it right now, this is so good!
>>
>>108092819
>use the person from image 1, but do not copy her pose.
and then just write your prompt.
>>
>>108092680
if this gets an inpaint UI as good as the forge one we're so back
>>
>>108092842
why would they make another one? they would just reuse the one they already have which sucks
>>
>>108092779
>>108092794
I see. I might switch to AdamW. (And I will use the normal one rather than the optimi version; maybe that one has something suboptimal going on, not trusting it.) Would you suggest any other parameter changes >>108092213 for Adam? Also, would constant, linear or (slopped-in) cosine work best with AdamW here?
>>
File: w26-bekw.jpg (744 KB, 1600x1600)
>>
>>108092921
's good
>>
>>108092759
Local 8xH200 computer.
>>
>>108092802
do you need solid likeness with the lora to begin with or does it also work when the results with lora look borked? could you post a comparison?
>>
File: 1765235990710943.jpg (1.62 MB, 2496x1824)
I fucking hate Anima!
It's good enough to be interesting. It gives nice compositions.
But it sucks at details and hires passes. HiRes passes introduce banding and hurt details as much as they help. Upscaling 1MP images with seedvr might even be better.
Guess which is upscale and which is hires pass.
>>
>>108092918
always cosine for the scheduler (preferably with a 10% warmup), never constant; worst case scenario (can't add cosine) use linear.
Is that the default lr for batch 1? if you are doing batch >1 you should also scale the lr accordingly, by sqrt(batch size), so it becomes lr*sqrt(batch size): 1e-4 for batch 1 = 1e-4*sqrt(8) for batch 8, i.e. 0.00028 or 2.8e-4
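both rules sketched in plain Python (function names are made up for illustration; a trainer would consume the schedule multiplier per step):

```python
import math

def scale_lr(base_lr: float, batch_size: int) -> float:
    # square-root LR scaling: lr * sqrt(batch size)
    return base_lr * math.sqrt(batch_size)

def cosine_with_warmup(step: int, total_steps: int, warmup_frac: float = 0.1) -> float:
    # LR multiplier in [0, 1]: linear warmup, then cosine decay to 0
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

print(scale_lr(1e-4, 8))  # ~2.83e-4, the 2.8e-4 quoted above
```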
>>
>>108092951
Have you tried tiled?
>>
>>108092951
left is upscale because its not COOKED like right side
>>
>>108092951
the dataset is good but the arch is shit. A proper klein finetune will beat it
>>
Was lora training also such a crapshoot with no one knowing good settings back when XL came out?
>>
>>108092951
>it sucks at details
It has seen few 1024px epochs. Chill, it's just a preview; details will most likely improve in the final release.
>>
>>108092951
nigger you're using a 512px model. wait for full release
>>
Why are there three /ldg/ threads right now? None of them is at the image limit or bump limit. And is there any local generator that can approach the fantasy realism of Sora?
>>
>>108092680
Now Comfy is comfy? Cool. Add an img2img inpaint ControlNet window when right clicking an asset, with a canvas like Invoke; Swarm is bloated and its prompt syntax sucks.
>>108092912
>>108092842
Better if they did it like InvokeAI, they have the SOTA img2img inpaint Controlnet UI.
>>
>>108092962
Wrong. See how the black edge lines are noisy and uneven everywhere? Those are artifacts introduced by the hires pass.
>>
>>108092969
more or less. the real problem is that people don't know how stuff works. once you get good hyperparameters you shouldn't have to change them for different datasets or concepts, but people don't know that, and once they feed in a shitty dataset or do some weird crap and it doesn't work, they start to fiddle with hyperparameters
>>
used int w8a8 for the first time for my training. normally use fp8. its twice as fast but results look solid. huh
>>
>>108092979
>>108092982
I don't want to wait. I hate it right now for teasing me with possibilities.
>>
>>108092951
Make a second pass or high res (not sure what the difference is in Comfy) with SDXL.
>>
>>108092956
Seems to be for batch size 1, yes. I am doing batch 2 though I intend to get lr from averaged out prodigy runs.
I was planning a 5 % warmup.
>>108093008
GPU?
Sounds normal if it's 3000 series or earlier.
>>
>>108093024
I tried. It kinda ruins the "feel." Maybe I should try with lower denoising.
>>
>>108092630
There are tons of custom "switch input" nodes with different logic, be it a manual switch or something else. I've only ever used rgthree's "switch any" node, it uses the first non-muted/non-bypassed input
>>
>>108092950
i'm training right now, i'll post some comparisons later tonight. the lora was "good" and trained with excellent images, but since it's klein the likeness still wasnt to my satisfaction. the reference photo just seems to fill in blanks and work as a static reminder for the model to accurately draw the face and body shape etc. i used a naked full body image as the reference, so it didnt just polish the face, it also polished nipple and genital detail perfectly.
>>
>>108093038
Yes, it ruins the expression and feel of the init image. In Swarm I sometimes used a 0.2–0.4 denoise with SDXL and a 1.5× upscale with the Omni upscaler or latent, but values lower than 0.4 produce blurry images in latent. Also, Anima sometimes generates better scenes at 1536×1024, but it can clone characters or cause body horror.
>>
File: 1754705978075976.jpg (586 KB, 1536x1536)
when ure maid grows a horn
>>
>>108093071
For whatever it is worth, Noob can go to a lower denoising strength than other SDXL checkpoints without blurring the image as much when upscaling, in my experience.
Speaking in general, haven't upscaled any anima images yet.
>>
>>108093028
5070 ti honestly

fp8 at 1024 takes fucking hours for zit/zim/klein
>>
>>108093065
interesting, will test when i'm back home
>>
>>108093100
That's indeed surprising. Blackwell has great fp8 acceleration so int8 beating it is interesting.
>>
Rate my Snake Oil Workflow
1st pass Anima
2nd pass NoobAI-Flux2VAE-RectifiedFlow
(¬‿¬)
>>
>>108093141
Vpred? Radio company?
>>
>>108093127
maybe because its partly spilling into ram for fp8 which slows down training? int fits comfortably into 16gb even at 1024. just guessing...
>>
>>108092078
>>108092175
I know this is /ldg/, but I've actually had success making a test original webcomic with NBP. The trick is feeding in a good character reference and, optionally, the previous panel. The bigger problem is that while NBP can stick to style & character design most of the time, it can be frustratingly dumb/stiff about understanding certain parts of the prompt and sometimes certain character aspects (e.g., huge wings, exact heights, etc.). Safety cuckery also gets in the way at weird times. This is compounded by the fact that iterated edits introduce noise and Gemini can be prone to playing games with aspect ratio & framing, which makes it difficult to composite in changes. You need an unlimited source of API NBP, preferably >1K res, to get anything done and that's harder to come by. Ultimately it was too frustrating for me but YMMV.
>will you post it
No.
>>
File: o_00307_.png (2.25 MB, 1152x896)
>>
>>108093170
Both are 8-bit data, so the size difference should be minimal, and a minuscule amount of offloading shouldn't halve the speed. Either it's genuinely that fast despite the intuition or something else is going on.
Maybe activations being kept in (presumably) bf16 hurts fp8, but I am surprised the margin is that large.
>>
i just want to remind everyone that you can plug your monitors into your motherboard and save precious vram by running your DE on your igpu and system ram
>>
>>108093231
pretty sure it was like 2 gb difference
have to double check
>>
File: 1769879944245407.png (149 KB, 1041x946)
why does my workflow crash after a couple gens even if I have nodes for clearing vram
>>
>>108093234
what will this extra 400mb enable me to do
>>
>>108093264
all you need is clip, the model, vae and a sampler.
>>
>>108093264
do you sell that?
>>
>>108093264
Because you downloaded a jeetflow from civit
>>
>>108093274
gpu will work faster
>>
>>108093274
it will give you 4% more vram for free
>>
>>108093277
Not really...
>>108093281
I made it from scratch and I'm white so die.
>>
File: 1511667108879.png (298 KB, 512x512)
>>108093234
My MB has DP connector but my cpu has no igpu. Bummer.
>>
>>108093264
frontend issue
>>
File: 1099492998961170.png (1.3 MB, 1248x1824)
>>
>>108093310
I wonder if I can separate the workflow into multiple and then have them run sequentially but pass variables through but I somehow doubt it would fix it.
>>
>>108093264
your comfy is likely getting OOM-killed because of system RAM. comfy will try to offload models to RAM when it needs to make room in VRAM, and it's really bad at making sure you actually have enough RAM to offload to.
>>
>>108093264
looks comfy
>>
>>108093264
post your gen
>>
>>108092767
bro we have enough advertisement itt already. you are being a useful idiot shill and comfyorg needs to die
>>
>>108092752
diminishing returns

>>108093264
can't figure anything out of that screenshot
but try running the comfy launcher with "--disable-pinned-memory" to see if it fixes it
>>
>>108092680
still looks way less comfy than auto. Now you can tell this is an ad even without them but 2 things here REALLY give it away.
>>
File: o_00312_.png (2.31 MB, 1152x896)
>>
>>108093343
Do these nodes not do anything to fix it? I've got 32gb system ram and 12gb vram. Thing is, I can just restart comfy after every gen and it runs fine, so it must be keeping stuff loaded. I just don't know why it would with those nodes. Maybe a launch argument could fix it but none that I've tried.
>>108093350
They're...not amazing, but it's hard to troubleshoot and make things better when i have to restart constantly.
>>>/gif/30237957
>>108093378
I think I tried it before but I'll try it again
>>
File: 1755296935014888.png (33 KB, 439x317)
>>108093478
>Do these nodes not do anything to fix it?
oops image
>>
>>108093478
Monitor memory use; don't blindly trust that anon's claim that it's an OOM.
I presume you are running it from a terminal; if not, do that and see if it outputs anything useful before the crash.
>>
File: 1671413801458.jpg (3.54 MB, 7488x1924)
Retarded comparison, loras on heavily quantized vs bf16, but Zbase nonetheless feels clearly better than K9base at adhering to the desired style. Might not be the best example, but I tested more.
Interestingly, the Zbase lora doesn't fuck up the Turbo gen, although it doesn't do anything either. Can't be assed to train a separate Turbo lora anymore.
>>
>>108093478
>>108093488
>>108093494
If this is wan it will use around 30 gigs of swap with 32gb sys memory.
If you are a w-ndows user make sure there is no cap on pagefile and enough space on storage.
>>
File: mmm.jpg (244 KB, 880x862)
>>108090101
I'll be blunt, any scat models worth a shit?
>>
>>108093521
no, photograph your turds and train a lora on it
>>
>>108093512
which style do you want to emulate? I like the ZiT gen and Klein 9b fp8 more, although they're more standard/slopped
>>
File: IMG_3693.jpg (41 KB, 532x231)
>>
>>108093264
>>108093478
vidya genning with 32gb ram is painful, use --cache-none
Pros: everything immediately unloads upon completion, including models, text encoders and vae so no need for custom nodes mumbo-jumbo
Cons: it'll re-load clip and re-encode your prompt every single time, even if you don't change it
>>
File: 1740530904687.jpg (3.29 MB, 4403x3648)
>>108093507
>A blonde elf girl with short hair, black headband, and glasses, wearing a tight blue high-slit bodysuit with gold trim, white thigh-high gloves and boots, holding a wooden staff topped with a glowing blue crystal orb; she’s posing playfully with one hand on her chin, smiling slyly, full-body portrait
>>
File: 476457738029262.png (1.53 MB, 912x1136)
>>108092752
What's the model called?
>>
>>108093592
coming to society near you
>>
File: asdfasdf.png (20 KB, 480x254)
Can someone point me to the best nsfw img to vid model with these specs? Yeah my shits pretty ancient but I don't really game anymore
>>
File: 1125169746243249.png (3.16 MB, 2496x1216)
>>108093507
>>108093644
I find Klein 9B loras pick up on style really well actually. Maybe it's just that particular style or dataset that's acting weird.
>>
>>108093747
Video gen with 3GB of VRAM is not really an option.
>>
>>108093747
probably only left with wan 2.2 5b at a low quant. not sure if it will work though
>>
>>108093770
rip. and wan is censored right? sorry new to this. I've been using sites like eternalai and a2e but its not good enough
>>
>>108090101
you didn't include ace step 1.5.

Do I need to start a separate thread for the only at home music model that works?
>>
>>108093747
You can't really use your gpu to gen images, can you?
>>
>>108093787
>wan is censored right?
yes but the least censored. people made loras already to overcome it so it's #1 at porn
>>
>>108093800
I can do EternalAI, Envision, and A2E. A2E is the best by far and I've used WAN 2.6 on there but its too censored. I just use the standard A2E model on there. Can do 15 sec but it ain't perfect.

>>108093806
And loras are like set video templates right?
>>
>>108093850
>And loras are like set video templates right?
no, it's like an unofficial patch for the model, like dlc or a mod
>>
>>108093752
>maybe
Probably yeah.
>>
File: 59073465029.png (739 KB, 1653x767)
#1: txt2img Anima
#2: tag image to send to positive prompt
#3: refine with SDXL using positive tags from #2
Doubts:
Is the tagger correctly routed in step #2?
Why, when the gen finishes, does the positive prompt appear empty even though everything is routed and the tagger worked?
>>
>>108093906
turn off nodes 2.0. this is sickening to look at
>>
>>108093885
Thank you very much. So I should just go with wan and use a mod or whatever? lol
>>
File: Flux2-Klein_01275_.png (1.8 MB, 1024x1024)
burn your dung cakes, saars
>>
>>108093933
Did this happen because of Nodes 2.0?
>>
>>108093953
you can find a lot of the deleted ones from civit on civitai archive. look around with the filters set to wan. 2.1 loras are compatible with 2.2. civit has online generation too but you need buzz tokens and it refuses NSFW stuff in general
>>
>>108093957
the prompt input isn't using the string. find a more explicit string-to-conditioning node
>>
>>108093953
your system is not powerful enough to generate videos, don't waist your thyme
>>
>>108093995
You might be right... When I do wan on a website it works but when I try wan on huggingface it doesn't. At least not 2.2 animate
>>
>>108094033
>You might be right
not might, he's right. even if you somehow managed to get the model loaded in system ram. it would take you something like an hour just to gen 5 seconds of video.
>>
>>108093592
>microsoft joins the trani bullying
Based
>>
>>108093478
Get this node so you don't have to restart comfy with different args everytime you switch to a workflow that's not RAM hungry.
>>
>>108094126
to basically never cache, put how much RAM you have as your threshold so it always cleanups.
>>
File: 1758583728276793.png (885 KB, 763x1023)
Kinda funny what happens when my prompt rewriting node fails and outputs an error
>>
File: images.jpg (11 KB, 225x225)
Randomized Regional Prompt: does this exist? Instead of manually painting or positioning masks, you just give each region a percentage threshold of the canvas area, e.g. Region A = 30-40%, Region B = 35%, and the placement gets randomized automatically each gen. So you control how much space each region takes but the algorithm decides where to place it. My main idea is to batch out tons of varied compositions and fight the SDXL centered-character syndrome.
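don't know of a ready-made node for it, but the placement logic is easy to sketch (plain Python; `random_region` is a made-up helper, and a real implementation would turn the rectangle into a mask for the regional-prompt node):

```python
import random

def random_region(canvas_w, canvas_h, min_frac, max_frac, rng=random):
    """Pick a random rectangle covering min_frac..max_frac of the canvas area."""
    frac = rng.uniform(min_frac, max_frac)
    area = frac * canvas_w * canvas_h
    aspect = rng.uniform(0.5, 2.0)              # random width/height ratio
    w = max(1, min(canvas_w, int((area * aspect) ** 0.5)))
    h = max(1, min(canvas_h, int(area / w)))
    x = rng.randint(0, canvas_w - w)            # random placement on the canvas
    y = rng.randint(0, canvas_h - h)
    return x, y, w, h

# e.g. Region A = 30-40% of a 1024x1024 canvas, new spot every gen
print(random_region(1024, 1024, 0.30, 0.40))
```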
>>
>>108094239
there's 1000 ways to skin a cat anon
>>
>>108094239
Here is a crazy idea, just use anima with character position as {wildcard}.
>>
>>108094239
Use math nodes.
>>
>>108092752
Local models have been abandoned and are actively fought against; saas models miss 80% of the tools because no fun allowed, and saas providers are afraid to scare away the paypig with stuff like negative prompts, diffusers, and manual cfg
>>
>>108094278
buy comfy credits. subscribe to comfycloud
>>
File: 381234942130169.jpg (2.56 MB, 1664x2464)
>>
>>108094293
NBP?
>>
>>108094252
>skin a cat anon
How
>use anima with character position as {wildcard}.
about
>nodes
Nyo?
>>
>>108094292
>comfycloud
I wasn't aware that existed, so we now have both models and front ends as a service... With nvidia delaying the next consumer gpu for 2030 the future is grim.
>>
>>108094293
pretty
>>
>>108094220
>>
>>108094351
this is what you get for supporting the fennec faggot. he is also in on the circlestone licence so he can make revenue off of other people's fine-tune and loras. the schizo saying it's going to fuck over grift jeets is retarded too so don't pay him any mind
>>
File: 415953884318829.jpg (2.12 MB, 1664x2432)
>>108094300
Klein
>>108094352
thx
>>
>>108094430
FIX HER FEET I WANT TO GOON
>>
>>108094430
big ass birds
>>
>>108094421
But I'm still using a forge version that hasn't received an update in 8 months
>>
>>108093789
>Do I need to start a separate thread for the only at home music model that works?
Yes please. That would be extremely comfy (in the original meaning of the word)
>>
>>108094371
reminds me of winamp
>>
>>108094442
based retard
>>
honestly? klein might be the ultimate model
>>
>>108094483
they just need to fix the anatomy.
>>
>>108094483
Fully expect nothing to beat it for 2+ years kek, especially if all the ai hype goes to shit
>>
>>108094503
while something might beat it, people wont have computers able to run whatever comes next.
>>
File: file.png (31 KB, 894x285)
>>108090101
>https://tagexplorer.github.io/
An OpenAI employee made tagexplorer...
>>
>>108094421
>train a good anime base model
>release with a noncommercial license so people have to pay to use it commercially
>NNOOOOO YOU CANT DO THAT IT HAS TO BE APACHE 2 LICENSE
>ok sure, i'll change it to apache 2
>uno reverse card
>original model dev now makes no money, since everyone can run it for free
>lora trainers and slopmergers lock down their finetunes and shitmixes and charge money to access them
>some of them end up making money
>base model dev is now broke
>grifters everywhere constantly trying to sell their custom shitmix because they think they can get rich off it
people are unironically advocating for this like it makes any sense
>>
File: 469813591623725.jpg (2.37 MB, 1664x2432)
>>108094439
disgusting, but also understandable
>>108094441
actually they're normal sized birds and small ass people
>>
>>108094508
Which is effectively nothing beating it, the chinese released a model that requires 160gb vram to run, it certainly shits on klein, so what? I can't run it = it effectively doesn't exist.
>>
>>108094525
are you the schizo?
>>
>>108090323
anatomy breaks down if you go above 1mp, so use a hiresfix pass with latent upscale if you want higher res images.

>>108090572
the shitbox site is down or something.

style: @abe yoshitoshi, 1990s \(style\), serial experiments lain, official wallpaper,original, visual novel, game cg, official art, ethereal, dark, gloomy.
subject:
a girl iwakura lain \(serial experiments lain\) who has pale skin, wearing schoolgirl costume with dark grey jacket, white shirt, red bowtie, pleated green tartan skirt, long black socks, brown leather shoes,backpack, headphones. she's walking on the sidewalk in her neighborhood. one hand up, middle finger, smug. speech bubble next to her head saying "FUCK NORMIES".

background:
ethereal, ghostly japanese suburb street,white sky, white theme, with sidewalk, houses, utility poles with power lines and transformers. the colors of the background are partially inverted, pale and unnatural. colored shadows, red shadows.
>>
>>108094525
when has this happened before? there have been plenty of open source licensed models and less than 0.5% of people in the community try aggressive grifting
>>
>>108094508
You can run bloatshit like Chroma on 12GB card. Klein is basically a nothingburger to run.
>>
>>108094525
so how is it better if people have a price tag slapped onto their lora without their consent?
>>
>>108092078
I was unironically thinking of making compositions with anima and then using klein to edit it into manga panels with dialogue as an experiment. Not sure how well that would work though.
>>
File: 19903607364708.png (1.38 MB, 832x1232)
>>
File: 540481782259532.png (1.8 MB, 1024x1024)
>>
>>108094525
>people working on local are trying to scam each other while giant corpo lock the market for ever
it's over
>>
File: Flux2-Klein_01319_.png (551 KB, 704x768)
>>
>>108094646
They don't have a "price tag slapped onto their lora". They have to release the lora with the same noncommercial license. Then any platform with commercial use rights (lets say Civit for example) can also allow using the loras. It's already like this with other models, for example you can use Flux with Civit on-site generation and it lets you select any of the loras to use with it.

I have literally never seen people say "I can't believe Black Forest Labs is stealing people's loras and selling them".

BTW PonyV7 and Newbie have basically the same kind of noncommercial license, it's just that they were complete flops so nobody even bothered complaining about it.
>>
>>108094729
>noncommercial
if somebody has to pay, it's not a noncommercial licence. the noncommercial part means you do the work for the author while only he benefits from the profit. it's disgusting.
>>
Fresh when ready

>>108094778
>>108094778
>>108094778
>>108094778
>>
>>108094784
thanks for the steamy shit
>>
>>108094525
>people
no its unironically one guy hes just a schizo who samefags
>>
>>108094770
Ok so you're just retarded. The owner of any work released publicly under a noncommercial license can also grant licenses to specific entities with whatever terms they like (e.g. commercial use rights, but you have to pay). This is how fucking everything works, from open-source software under copyleft licenses to AI models with open-weights noncommercial licenses.
>do the work for the author while only he benefits from the profit
Congrats, you've learned how these things work. GPL is a de facto noncommercial license: OSS projects using that license benefit from community contributions, and then the owner of the software can sell commercial use to companies in order to make money.


