/g/ - Technology






Trying to Piss You Off Edition

Discussion and development of local image and video models and UI

Prev: >>106575437

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: 00201-1863256780.png (3.64 MB, 2048x2480)
I have stabilized a style most likely
>>
>>
>>106577885
>You don't seem to understand what a base model is, it's a model made to have as little bias as possible and know of as many concepts as possible so it can be used as a BASE for further finetuning
can someone repost the anti-chroma anon's post, the one saying "you are here" that ended with "we can save it with a finetune though" or something like that
>>
>>106577893
so just slopstyle?
>>
https://www.alphaxiv.org/overview/2509.07295v1
>>
File: 1751741009725793.png (3.82 MB, 1536x1536)
>>106577796
>NOOOO YOU CAN'T JUST TELL THE MODEL WHAT YOU WANT AND GET WHAT YOU WANT USING A HIGHLY CONCISE FORMAT!!! YOU HAVE TO BOOMERPROMPT!!!

>>106577809
I find chroma most useful for actual artistic stuff (which none of the chromaschizos can even conceive of, all they do is generate asian waifus and feet), it's just disappointing that it's so much weaker than it should be. The ROI on prompting effort is much lower than Noob or Qwen, but it is true it can do things no other local model can do at the moment.

>pic
The truth is that we're going to free ourselves from this shit by baking our own models from scratch, using techniques like those in https://huggingface.co/KBlueLeaf/HDM-xut-340M-anime. Architectural changes and training optimizations are going to make it possible to train fully unslopped, uncensored, DEBLOATED models with Qwen-level comprehension and far fewer parameters, with local or rented GPUs on a budget <$10k very soon. It may already be possible.

>>106577820
>>106577885
>Pony and Noob are large finetunes with TONS of sexual positions.
>Chroma is a base model, like SDXL, Flux, QWen, as such it's not specifically focused on anything but knows some of practically everything
This makes no sense whatsoever. It was trained on e621 data right? That data has sex position tags does it not?? So why doesn't chroma?

>>106577829
Nothing wrong with that, but it should have been trained on tags alongside/interchanged with the captions. Seems it wasn't.
>>
>mat1 and mat2 shapes cannot be multiplied
>mat1 and mat2 shapes cannot be multiplied
>mat1 and mat2 shapes cannot be multiplied
>mat1 and mat2 shapes cannot be multiplied
>mat1 and mat2 shapes cannot be multiplied
AAAAAAAAAAAAAAAAAAAAa
>>
>>106577943
>another wall of text
what's wrong with you dude?
>>
Once again, there's never been a successful model that retagged *booru images with NLP slop.
>>
>>106577906
Desperately moving the goal post again

It was always presented as a base model, a de-distilled and de-slopped uncensored version of Flux Schnell, which is also a base model, just like SD1.5, SDXL, SD3, Flux, Qwen, Wan

And just like with all these models, if you want it to be really good for a specific concept, you need finetunes/loras
>>
>>106577941
> A 1.5B-parameter model with RecA achieved state-of-the-art results on image generation benchmarks like GenEval (0.86) and DPGBench (87.21) with only 27 A100 GPU-hours
sounds impressive? they show 0 images though, wtf?
>>
File: stop.png (72 KB, 216x234)
>>106577960
>just 2 more finetunes bro
just let it go, it's over
>>
>>106577948
> he can't matmul
ngmi
>>
File: file.jpg (3.37 MB, 4096x4096)
>>106577897
giwtwm

>>106577878
thanks but it came out a turd

>>106577886
yeah forgot what the prompt was but i think it did have it in there
>>
File: file.png (1.44 MB, 816x1232)
>>
>>106577991
>it came out a turd
yeah, the proportions are all fucked up lol
>>
>>106577979
He always posts off-topic images in the thread, same pattern, no deviation, and you people keep fucking replying to him
>>
>>106578000
>same pattern no deviation
you mean the walls of text? + the overusage of the word "shill"?
>>106577943
>>106577527
>>106577373
>>
>>106577943
>This makes no sense whatsoever. It was trained on e621 data right? That data has sex position tags does it not?? So why doesn't chroma?
What part of focus don't you understand?

If you train on a shit ton of images with no particular biases, then it will not learn particular biases as well as if you train on a shit ton of images with particular biases

It's not rocket science; the model learns through pattern recognition and repetition. If one training run has 50k images of fetish X and 5 million images of other stuff, and another has 200k images of fetish X and 1 million images of other stuff, the latter model will learn fetish X much better
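The numbers in the post above can be sketched as a toy calculation, under the (simplified) assumption that how strongly a concept is learned tracks its share of the training data:

```python
# Toy illustration of the dataset-bias argument above. Simplifying assumption:
# learning strength for a concept scales with its share of the dataset.
def concept_share(concept_images, other_images):
    return concept_images / (concept_images + other_images)

base_model = concept_share(50_000, 5_000_000)     # fetish X is ~1% of the data
focused_tune = concept_share(200_000, 1_000_000)  # fetish X is ~17% of the data
# The focused run sees the concept roughly 17x more often per epoch.
```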
>>
>>106578000
:c

what do you mean "YOU people"???
>>
File: 1750492740684917.png (2.89 MB, 1344x1728)
>>106577959
>>106577943
I'm not a baker, but here's what seems obvious to me:
Make three caption sets:
>Pure NL captions
>NL captions that are infused with tag keywords by telling the VLLM to make sure to include the image's tags in the NL description
>plain tags
Then when training, just include each image three times, one time for each version of the caption. Or concatenate the caption sets in a randomized order.

Why wouldn't this work? Why don't bakers do this?
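A minimal sketch of the triple-captioning scheme described above. The field names (`nl`, `nl_with_tags`, `tags`) are hypothetical placeholders, not any real trainer's dataset format:

```python
import random

# Sketch of the three-caption-set idea: either emit each image three times
# (one entry per caption style), or emit it once with the captions
# concatenated in a randomized order.
def expand_dataset(samples, shuffle_concat=False, seed=0):
    rng = random.Random(seed)
    out = []
    for s in samples:
        caps = [s["nl"], s["nl_with_tags"], s["tags"]]
        if shuffle_concat:
            # one entry whose caption concatenates all three styles
            rng.shuffle(caps)
            out.append({"image": s["image"], "caption": " ".join(caps)})
        else:
            # three entries per image, one per caption style
            out.extend({"image": s["image"], "caption": c} for c in caps)
    return out
```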
>>
local
chads
eatin
gud
>>
there is no excuse for a model as big as chroma to not fit in every drop of knowledge from every booru out there with plenty of room to spare. it could fit 4 SDXLs in it. it's simply bad
>>
still down to train an interesting lora to post here

>>106577496
as long as the artist style is not shit, sure
>>
>>106578006
Autistic people connect the dots: the one shilling for non-free shit has no reason to be in this thread. He's just rattling the cage hoping to trigger people
>>
>>106578006
>He always post off topic images in the thread
kek, never posted anything not AI generated here

>the overusage of the world "shill"?
can't be overuse when it's 100% correct
>>
>>106578014
Yet chroma learned a ton of concepts that weren't in Schnell. One obvious example, it can do genitals. How is learning basic sex positions harder than learning genitals, which weren't in Schnell at all??
>>
I can't believe there is still someone here desperately trying to convince people that Chroma is the future. It's done. It's shit.
>>
File: G0wjL1vXEAAt-a_.jpg (1.1 MB, 2048x2048)
>>
>>106577967
I shared the overview link. click Paper up at the top.
>>
>>106578034
see, you can argue without making a wall of text, that's way more pleasing to the eyes, thank you
>>
>>106578027
80s, 90s, 2000s movie / tv show styles are always appreciated, also not that hard to caption
>>
File: 1730803754554847.png (1.82 MB, 3419x1041)
>>106578045
oh ok, thanks
https://arxiv.org/pdf/2509.07295
>>
>>106578027
im uploading a couple of datasets if you give me a sec ill post em
>>
>>106578026
You are so retarded it's not even funny

Sad that you comment so much but know absolutely nothing about AI training
>>
File: facts.png (812 KB, 944x1337)
>>106578065
>90s, tv show styles are always appreciated
>>
>>106577998
>>106577943
Please post in /adt/ we need you
>>106577893
you too also!
>>106578022
please post your good gens in /adt/!
>>
>>106578027
https://www.mediafire.com/folder/enj83lxxnq1ih/datasets
>>
File: WHERE ARE THEY.jpg (142 KB, 931x484)
>>106578026
>there is no excuse for a model as big as chroma to not fit in every drop of knowledge from every booru out there
he said on reddit that the artist tags were gonna be there, it didn't happen
>>
rocketjeet is so desperate, it's pathetic
>>
File: 1742907517617710.jpg (692 KB, 1344x1728)
>>106578095
/adt/ endorses API models, so no.
>>
>>106578074
>nooo, you don't understand, SDXL (3.5b) had no issue getting all the booru tags, but Chroma (8.9b) just can't do itttttttt
either this anon has an IQ of 50, or we're dealing with lodestone himself; there's no way he's not trolling, right? I refuse to believe someone can be this dumb
>>
>>106578035
NTA, but just with that example: almost every image depicting sex is gonna have genitals, while only a small subset might depict a particular sex act. Anyway, I think a lot of chroma's issues come from the unconventional way it was trained more than from the dataset.
>>
>>106578065
https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/tree/main
has 1980s and 2000s lora that work

>>106578096
dataset usually means with captions but i'll try. they seem a little too abstract for even chroma but i guess we'll see
>>
>>106578142
>dataset usually means with captions
i can upload mine if youd like but theyre really just shitty joycaption "danbooru-like" tags. i wouldnt think theyd do well with chroma
>>
>>106578128
>SDXL (3.5b) had no issue getting all the booru tags
Just stop lying, they didn't get all the booru tags, the FINETUNES of SDXL did, because they were FOCUSED on booru content

This is so tiresome
>>
>>106578156
i trained a booru only dataset and it worked fine. send em
>>
>>106578160
Chroma doesn't know a single booru tag, and you find this normal? this motherfucker trained the model with booru images
>>
>>106578142
>has 1980s and 2000s lora that work
I'm sure they work, but how effective are they, given how old they are and all the training that has happened since?

I mean initially Flux loras worked well, but that quickly changed as training progressed and the underlying models diverged
>>
File: lodestone booru.png (27 KB, 767x204)
why do chromakeks not even understand their own model? the fact that it was trained on booru images/tags but hasn't learned any artist tags or characters is quite concerning. perhaps anti-chroma 'schizos' were right that it was trained wrong and schnell is a mess of a base model
>>
>>106578168
>trained the model with booru images
Captioned by Gemini

Please stop being so goddamn retarded
>>
>>
>seedream paid jeet force gets clowned on in the thread
>unprompted random seethe spam about local models and chroma in the next thread
ooooooooooooooooooooo im nooooticiiing
>>
File: read retard.png (204 KB, 1722x898)
>>106578183
you are so fucking retarded it's exhausting
https://www.reddit.com/r/StableDiffusion/comments/1j4biel/comment/mg81j11/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
>>
>chroma is shit
>t-the seedream shills caused this!
LMAO!
>>
>>106578176
>>106578209
ok, so the furry did train on tags? then what went wrong??
>>
>>106578162
okay all are uploaded now, the garmash ones are fresh
>they seem a little too abstract for even chroma
i was surprised at how well noob took to them so i can only hope chroma will be absolute kino, if its anywhere as good as what i saw anon post when flux dev loras were new
looking forward to seeing yours
>>
>>106578184
>visual representation of all the errors holding me down from training flux

y-you too..
>>
chroma can't do artist tags:
>IT WASNT TRAINED LIKE THAT ITS A BASE MODEL NOT A BOORU FINETUNE!!
it was trained on booru art:
>WELL IT WAS RECAPTIONED TO BE MORE ACCURATE TO NATURAL LANGUAGE
he said himself he preserved the tags:
>SEEDREAM SHILL!!!!
>>
>>106577883
>Trying to Piss You Off Edition
Anons are still falling for it
>>
>>106578210
This goy knows whats up
>>
>>106578209
>noob and illustrious above pony because of "thousands of artist" tags
absolute bullshit
>>
>>106578210
kek, I blame Comfy a little bit though, he didn't implement it right and his ego is too fragile to admit it
https://github.com/comfyanonymous/ComfyUI/pull/7965
>>
>>106578252
do you still use the weird chink tagmine spreadsheet website kek does anon still update it?
>>
File: reani.jpg (169 KB, 832x1216)
one eight four

i got banned so many times for showing just a BIT of panties, just to clear the air, to announce to the room so to speak :p

aaaaaaaaaaaaaaaaaaaanyways
>>
>it was unironically the pony baker defending chroma's lack of artist tag this entire thread
hooollyyy shit that's pathetic
>>
>>106578224
>he said himself he preserved the tags
Well he clearly didn't, or he preserved very selectively

Go ask him in the Discord channel or on reddit if you are considering suicide because your favorite booru tags aren't in Chroma
>>
Speaking of chroma, Chroma radiance is now officially on ComfyUi
https://github.com/comfyanonymous/ComfyUI/pull/9682
>>
>>106578267
>comfy can't fix the ~100 or so lines of code for chroma
>can implement 1200 lines for Seedream 4 overnight
maybe seedream DID cause this after all
>>
>>106578275
>Well he clearly didn't
he promised he would've preserved them though ;-; >>106578209
>>
>>106578278
how do we use it? i have the gguf already downloaded but it errors in nag workflow
>>
>the absolute backpedaling
massive defeat for chromashills today
>>
>>106578270
Didn't think the mods were that anal, particularly for stylistic stuff

I mean practically all of /a/ would be banned otherwise
>>
>>106578269
yes, i still use it. I don't know if it's still being updated
>>
>>106578278
Is it good doe?
>>
>>106578119
But your electricity and internet are also pay-as-you-go API SaaS services. Stop with the dumb ideology and post your excellent gens in /adt/.
We need you!
>>
>>106578278
Are there quants available yet?
>>
>>106578303
catbox next time bb <3
+post part of the gen cropped or somethin hehe
>>
>>106577983
>mat1 and mat2 shapes cannot be multiplied (2x2304 and 2816x1280)
I hate image generation
>>
>>106578288
He broke some promise made in some reddit post half a year ago, you should sue him for releasing this free model without your favorite booru tags!
>>
>>106578269
>>106578305
I actually made a wildcards file with them in it so I can roll different pony styles with my seed.
>>
>>106578278
and we train it ... how?
Chroma is shit without training it.
>>
>>106578285
Shut the fuck up already chud
>>
>>106578325
>in some reddit post
you mean the official chroma announcement post? he only made 2 posts, one where he announced chroma, and the 2nd one is when he finished it
>>
File: 1737270552163287.png (2 KB, 129x71)
>>106577948
just do the calc manually bro
>>
>>106578328
By ignoring it. Radiance is a meme for now. Also normal loras work on it anyway. So train on base and use later on any other chroma version.
>>
>>106578339
OMG! He can't get away with this!
>>
File: ComfyUI_14369.png (2.43 MB, 1440x1080)
>>106576546
>a whole dataset based on Flux gens
Yikes! I added about 20 synthetic images to my dataset and it slopped up my LoRA something fierce. Too bad they didn't share any prompts/settings for their images, I want to see how they were using Flux. It can either look really good with the right settings, or be very limited with just the basic options in use.
>>
>>106578278
Has it finished training or is it still ongoing?

Also what's with the Chroma 2k thing, what's that about?
>>
File: based.png (396 KB, 526x526)
>>106578352
>He can't get away with this!
based
>>
File: ComfyUI_00008_.png (3.63 MB, 1336x1952)
>>
File: hmmm.png (38 KB, 220x221)
>>106578325
>He broke some promise made in some reddit post half a year ago
when will he refund the guys who supported him on ko-fi because they believed there would be artist tags though? that's false advertisement
>>
God bless ComfyUI API nodes
>>
File: 00252-213288062-ad-before.jpg (356 KB, 2048x2480)
>>
>>106578355
The only legitimate use case is if there aren't enough real images of, for example, a specific object; that being said, it's still not worth it due to the massive slopping
>>
File: 00246-213288060-ad-before.jpg (619 KB, 2048x2480)
>>
File: 00241-213288058.png (3.63 MB, 2048x2480)
>>
>>106578437
It should read
>don't get mad
>get better
prompt adherence issue?
>>
>>106578476
Seems that way. They can't all be winners, plus this prompt is ass; going to make something else
>>
>>106578400
Perhaps if they ask for refunds

You didn't pay shit, so why are you complaining? Get a life
>>
>>106578168
Point me to a single Flux-based model that can do artists.
>>
Is there an "official" Chroma 2k workflow? Using the original official Chroma workflow with it shows awful artifacts when I go to higher resolutions (2k by 1k, for instance).
>>
>>106578482
>You didn't pay shit
how do you know that? you saw it in your dream?
>>
>>106578493
How many steps?
>>
>>106578489
Are you assuming that flux can't learn artists?
>>
>>106578499
Tried 26 and 50.
>>
>>106578498
>saw it in your dream
dont fall for the subtle shill tactics, they are trying to plant a seed. stay vigilant, localbros
>>
>>106578489
that's basically the problem with all modern models. They're all empty. If you wanna prompt something you have to train it yourself.
They're shit.
>>
>>106578512
kek, that wasn't intended I swear!
>>
>>106578515
Seedream just works
>>
File: facts.png (1.37 MB, 1024x1024)
>>106578515
>that's basically the problem with all modern models. They're all empty. If you wanna prompt something you have to train it yourself.
>They're shit.
amen
>>
File: file.png (101 KB, 841x485)
>>106578172
the loras work fine. i use them with flash and HD chroma.

only some flux loras work but most do indeed not work properly anymore even if they do influence the output.

>>106578219
shit hoster, see image. use gofile or catbox. the fact anyone still uses mediafire in 2025 is surprising
>>
>>106578525
for your generic stock images sure, but let's see some art by greg rutkowski
>>
>>106578498
>how do you know that?
lel
>>
>>106578499
Never mind I think it was the lora I was using.
>>
>>106577787
Imagine being so retarded you can't even understand the advantages of boomer prompting. It doesn't fit into your tiny head that a model that can only understand basic tags is a nightmare to steer when you want any real control over the output. SDXL/Noob and all booru-based models suffer from terrible prompt bleed and poor prompt following.
>>
chromakek copeymelty
>>
>>106578525
>Generic slop, three generic artstyles
>Seedream just works
I mean, it's slightly better than GPT piss filter and one pose, but that's a VERY low bar

Enjoy your fisher price toy
>>
You xboxtards don't understand the brilliance of our PS3 goodness!
>>
>>106578218
>furry
you answered your own question.
>>
>>106577897
api is getting mighty based. this would take 10 hours on local
>>
Sega does what nintendon't!
>>
>>106578441
I do still use synthetic images in my dataset (mostly detailed close-ups), but they had to be extremely scrutinized before only the best of the best got further touched up in PS, and they're weighted much lower than the rest of the images. Just going full Flux sounds like a recipe for disaster.
>>
>>106578539
please excuse my retardation
schauer: https://files.catbox.moe/1popxh.zip
garmash: https://files.catbox.moe/y52mpi.zip
shejtano: https://files.catbox.moe/y38p84.zip
>>
>>106578505
Not without a massive Chroma style finetune; obviously it would have to be orders of magnitude bigger. Its lack of artist knowledge is baked very hard into the model, and the same applies to Qwen etc.; anything greater than 2B parameters is pretty hard to teach from scratch. SD 3.5 did it right in that it knew its artists from the start, though it had its drawbacks.
>>
https://youtu.be/ppDnmrgtHBk
>>
>>106578505
>>106578609
Anyways, it's not that Chroma, or Flux, can't learn artists. But attention is overwhelmed by prompt following. Unless an architecture handles styles separately, it will only learn styles through LoRAs and tunes that focus solely on those styles. Flux is already SOTA at styles that have been trained by the community, and obviously so is Chroma.
>>
>>106577941
I'm not smart enough to understand any of the details, is this a big deal or something? you just linked this paper without elaborating further
>>
File: well.jpg (4 KB, 146x175)
>my dad works at lodestones
>>
how organic
>>
you retards, my team is better than your team!
>>
only one model generates at 4k though
>>
/ldg/: 132 / 28 / 1
/adt/: 31/ 138 / 6
>>
>>106578567
Neta Lumina handles plain tag prompts and has less concept bleed than chroma.
>>
File: 1751576523149611.png (372 KB, 1494x1076)
>>106577941
https://reconstruction-alignment.github.io/
>We introduce Reconstruction Alignment (RecA), a resource-efficient post-training method that leverages visual understanding encoder embeddings as dense "text prompts," providing rich supervision without captions. Concretely, RecA conditions a UMM on its own visual understanding embeddings and optimizes it to reconstruct the input image with a self-supervised reconstruction loss, thereby realigning understanding and generation.
that's impressive wtf
>>
>>106578653
Yes, also it takes the quintuple to generate
>>
>>106578651
if I just spammed every retarded seed variation I get I could fill this thread up with images too
>>
>>106578589
>>106578605
They're more like
>Stadia is superior to owning a console
>>
>>106578632
He's just going to do this until his caretaker takes his internet away
>>
>>106578632
roaches never die
>>
>>106578651
adt is 144 / 33 why would you lie about something so easily verifiable kek
>>
I can't take your model seriously if you don't package it into an sft
>>
>>106578653
>>106578673
The model is not based on Flux.
>>
>leave for 2 weeks
>thread is still chromaseethers vs chromekekkers
no new releases lately?
>>
>>106578752
i'm slopping hard atm with sneedream it's quite fun throwing artists/styles at it to see what it shits out
>>
>>106578310
>But your electricity and internet are also pay as you go API SaaS services.
im adding this to my arsenal of truth nuk3s for when some faggot ITT gets mad at me suggesting renting a GPU
>>
also nice to see civit finally added a chroma category
>>106578773
i thought seedream was saas? is there an open weights release?
>>
>>106578662
>using visual understanding encoder embeddings as dense "text prompts," providing rich supervision without captions
This is fucking genius. Shit like this is EXACTLY what I mean when I say we have so much low hanging fruit and so many more optimizations to find
>>
>>106578784
>i thought seedream was saas?
it is
>>
Imagine having a life so empty you spend 14 hours a day in a general shilling some off topic service and arguing with anons in a thread for LOCAL image gen. It's almost as if the person doing this suffers from some sort of disability.
>>
>>106578784
>i thought seedream was saas?
it is but it's fun to see what non local models can do. don't see the point in limiting yourself to one side of it unless you're just a gooner.
>>
>>106578799
can you explain further, I'm sure it's a genius idea but I can't visualize it, what do they do exactly to get this improvement
>>
>>106578324
>mat1 and mat2 shapes cannot be multiplied
>I hate image generation
you downloaded the wrong version of a model or text encoder
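A minimal PyTorch reproduction of the error being discussed: matmul requires the inner dimensions to agree, and a text encoder that doesn't match the checkpoint emits conditioning of the wrong width (the 2304/2816 shapes mirror the error quoted earlier in the thread):

```python
import torch

# The inner dimensions of a matrix multiply must match: (n, k) @ (k, m).
# A mismatched text encoder produces embeddings of the wrong width
# (e.g. 2304 instead of 2816), which fails exactly like this:
cond = torch.randn(2, 2304)       # conditioning from a mismatched encoder
weight = torch.randn(2816, 1280)  # layer weight expecting 2816-dim input
try:
    cond @ weight
except RuntimeError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (2x2304 and 2816x1280)

# With the encoder the checkpoint was built for, the shapes line up:
out = torch.randn(2, 2816) @ weight  # shape (2, 1280)
```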
>>
why are people talking about slopdream online here? just ban these trolls
>>
>>106578693
it was obviously a joke. how do you have more images than posts
>>
Am I the only one who felt actual anger when they saw they added chroma to civit?
I don't know why that model upsets me so much. But the feeling is real.
>>
>>106578828
>now some are trolling hard with it
about pretending that chroma is the best model ever because it can do boobs and vagene?
>>
>>106578784
>i thought seedream was saas? is there an open weights release?
its a shill campaign
anons discussed it in good faith when it first released, then some time passed, now some are trolling hard with it for some reason no idea why
>>
File: 0_00112_.mp4 (905 KB, 832x480)
Good morning
>>
>>106578270
yeah the difference is anons hate you
>>
>>106578876
gm based /ss/ anon
>>
>>106578817
>can you explain further, I'm sure it's a genius idea but I can't visualize it, what do they do exactly to get this improvement

Imagine ML models that can understand images but suck at generating them. This new method, Reconstruction Alignment (RecA), fixes that by using the model's own visual understanding as a super-dense prompt.

Instead of relying on crappy text captions that miss most image details, it makes the model reconstruct images using its own semantic understanding. The result appears to be better image generation and editing, and it only takes 27 GPU-hours to apply.
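A rough sketch of what a RecA-style training step might look like. The `understanding_encoder`/`generator` modules and the MSE loss are illustrative placeholders, not the paper's actual code:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the RecA idea: condition the generator on the
# model's own visual-understanding embedding and reconstruct the input.
# Module names and the MSE loss are stand-ins, not the paper's API.
def reca_step(understanding_encoder, generator, image, optimizer):
    # 1. The model's own visual embedding acts as a dense "prompt";
    #    no text caption is involved.
    with torch.no_grad():
        dense_prompt = understanding_encoder(image)
    # 2. Ask the generator to reconstruct the input from that embedding.
    reconstruction = generator(dense_prompt)
    # 3. Self-supervised reconstruction loss realigns generation with
    #    understanding.
    loss = F.mse_loss(reconstruction, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```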
>>
>>106578901
that's not him (i am he), all big tit Latina milfs made with WAN lightning just look the same lol

>>106578933
oh and important to note: this is only relevant for improving multimodal models, not pure text-to-image models
>>
File: 0_00115_.mp4 (1.04 MB, 832x480)
>>106578901
>>106578945
post more big titted latinas
>>
>>106578849
It's just the new anti-Chroma trolling flavor. Before it was Qwen, now it's Seedream.
>>
>noooo seedream doesn't belong here!
Seedream belongs here, I will clear up any misinformation and miscommunication.
The OP states "Discussion and development of local image and video models and UI". ComfyUI, as linked in the OP, by default has Seedream as one of the available options. When first opening ComfyUI you are greeted with a pop-up requesting you to sign up and add tokens. One of the models you can spend tokens on is Seedream. The fact that ComfyUI prompts you to use Seedream before ever mentioning SDXL or Chroma suggests that it's an integral part of ComfyUI, and by extension a core part of ComfyUI discussion.
>But the OP says LOCAL image!!
True, but it also says "and UI". Discussion of ComfyUI includes the discussion of any and all components of ComfyUI, including the locally run code in the comfy_api_nodes/nodes_bytedance.py file. This file is contained locally on my device, and runs locally in my ComfyUI installation. Seedream discussion fits this thread as it falls under discussion of ComfyUI.
Conclusion: You are allowed to post Seedream outputs as long as they are generated using ComfyUI API nodes.
>>
Every post he makes is an admission to his suffering and losing
>>
>>106578954
>post more big titted latinas
i'm in my "small girls" phase right now, so i'm posting on /b/ for obvious reasons. glad to see you're still around and passionate for video
>>
>>106578884
>anons
Again it's only you
>>
File: 00140-2869718969.png (1.24 MB, 1344x768)
>>
>>106578973
It's really quite odd isn't it. Odd and sad.
>>
>>106578962
Seedream is an API locked paypig model. Nano banana at least is free to try and prompt. Truly a wonder why they would pick the chinkshit model over the free one.
>>
>>106578962
This

Neet with anti Chroma obsessive compulsive disorder just can't stop
>>
>>106578986
talking about google anything is a fast road to being called an indian around these parts
>>
Found an incredible artist but her style has evolved so much I'm afraid my vramlet model won't be able to handle her entire body of work at one time
>>
>>106578954
SLOP SLOP SLOP
How many "Speed LoRAs" and "_fast" shortcuts did you use with your "Torch compiled" "fp8_scaled" model?
>>
>>106579004
Show it to me and I might help you
>>
File: ComfyUI_00093_.mp4 (810 KB, 640x640)
>>106578997
rightfully so
>>
>>106579016
https://x.com/motkaambu/media
>>
>>106579023
I'm not high enough for this shit
No
>>
>>106579031
Kek
>>
>>106578933
>Instead of relying on crappy text captions that miss most image details, it makes the model reconstruct images using its own semantic understanding.
so you don't need to captions images anymore?
>>
File: comfy698.jpg (1.09 MB, 1280x1280)
>>
>>106579004
>Found an incredible artist but her style has evolved so much I'm afraid my vramlet model won't be able to handle her entire body of work at one time
And by artist I mean "girl on instagram" and by
"her style has evolved" I mean she's aged and her face/body changed

>>106579041
the abstract says "providing rich supervision without captions"
this is where my ML knowledge ends. if the only purpose of captions is indeed supervision, then yeah there's no longer a purpose

remember though that this just makes the text-to-image as good as its image-to-text. apparently image-to-text mogs text-to-image in pretty much every multimodal LLM so this is a very welcome change
>>
>>106579070
model?
>>
>>106578662
>>106578799
https://huggingface.co/collections/sanaka87/reca-68ad2176380355a3dcedc068
They published their models. Anyone want to try them for us?
>>
File: comfy332.jpg (887 KB, 1280x1280)
made a mistake and posted it in a dead thread, oops
>>
>>106578799
>this is EXACTLY what I mean when I say we have so much low hanging fruit
I don't know who you are, but you stole that quote from me.
>>
File: 1756987484784285.jpg (1.32 MB, 1744x2240)
Which model can I use to make backgrounds and scenery without a subject being the focus like all these noobai/illustrious 1girl models?
>>
File: 1739674824474746.png (89 KB, 1780x870)
>>106579083
>worse than Kontext
I'll pass, but this method seems promissing, I really believe we'll get Nano Banana's level with this shit

SPRO to unslop + this to make it good = Qwen Image Edit Ultimate <3
>>
>>106577883
Hey can I ask a really stupid question?
What UI do Image-To-Text app use?
I'm looking at gemini, qwen or joycaption, and I don't understand if their suppose to be run with Stable-Dif, Comfy, something else. Or they are just their own stand alone UI.

>I'm having problems installing so I'm trying to figure out if i'm doing something really basic wrong.
>>
I know the micropenis pun model has taken a lot of QIE's thunder, but qwen image edit is still pretty good desu.
>>
>>
>>106579135
use the "no humans" tag, put 1girl, 1boy etc in negatives. use NegPip instead of the default negatives implementation.

>>106579137
it beats Kontext on three of those columns though, and seems it might be less slopped/better at styles? Also they note in the paper that they could have probably gone further with Bagel, but basically say they ran out of money.

>SPRO to unslop + this to make it good = Qwen Image Edit Ultimate
Not sure if this would work for Qwen Image. It seems to be for multimodal models only. Or maybe there's a way to get it to work?
>>
File: 1727012159112235.mp4 (579 KB, 640x640)
Animating some frazetta paintings
>>
>>106579202
>it beats Kontext on three of those columns though,
oh yeah I'm fucking blind, I should get some sleep lol
>>
File: comfy1231.jpg (922 KB, 1280x1280)
>>
>>106579135
eg:
>scenery, no humans, an abandoned factory building, rust, grass, vines, flowers, sunlight, day, red sky, clouds,
>by takamura kazuhiro, by sushio, by sadamoto yoshiyuki, very awa, absurdres, best quality, masterpiece, ultra-detailed, amazing composition, watercolor \(medium\),
>(jpeg artifacts, text, watermark, cropped, censored:-1)
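For anyone scripting their gens, tag groups like the ones above can be assembled programmatically. A minimal sketch, assuming NegPip's convention of folding negatives into the positive prompt as a `(tags:weight)` group with a negative weight; the helper name here is made up, not from any extension's API:

```python
# Sketch only: builds a prompt string in the style of the example post.
def build_prompt(scene_tags, artist_tags, quality_tags, negpip_tags, negpip_weight=-1):
    """Join tag groups, then append NegPip-style negatively weighted tags."""
    positive = ", ".join(scene_tags + artist_tags + quality_tags)
    # NegPip puts negatives inside the positive prompt as a weighted group
    # instead of using the separate negative conditioning.
    negpip = "(" + ", ".join(negpip_tags) + f":{negpip_weight})"
    return positive + ", " + negpip

prompt = build_prompt(
    ["scenery", "no humans", "an abandoned factory building", "rust"],
    ["by takamura kazuhiro", "by sushio"],
    ["absurdres", "best quality", "masterpiece"],
    ["jpeg artifacts", "text", "watermark"],
)
print(prompt)
```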
>>
File: 1732338371098346.mp4 (707 KB, 480x832)
>>
File: 00278-3250109216.png (2.81 MB, 1248x1848)
>>
File: file.png (1.97 MB, 832x1488)
>>106578168
>Chroma doesn't know a single booru tag
no, it knows many. what it does have trouble with is the artist tags that some people would like to use
>>
>>106579202
>Not sure if this would work for Qwen Image. It seems to be for multimodal models only.
maybe it can work if you use the visual text encoder?
>>
>>106579247
>no, it knows many.
show me some tags it can do (action tags and artist tags)
>>
File: comfy12309.jpg (717 KB, 1280x1280)
>>
>>106579208
Based
>>
>>106579249
I think you're right.
>>
File: 1745697272355017.jpg (2.41 MB, 2040x1664)
>>106579202
>>106579235
Whenever I don't prompt for a 1girl and use "no humans" it seems the background quality is greatly reduced, while if a 1girl is there somewhere suddenly the details are pretty sharp and make more sense. Without it the backgrounds almost always have a 1-point perspective and look like early SDXL generic crap.

I think I need better prompts too, but thanks for the tips.
>>
>>106579110
>>106579232
>>106579259
Some Flux LoRA?
>>
>>106579208
moar
>>
>>106579213
>>106579202
>>106579137
there's a demo
https://huggingface.co/spaces/sanaka87/BAGEL-RecA
so far, not seeing great style results. but the demo space might have bad configs
>>
>>106578386
Shit gen
Shit asuka style
Grow up
>>
>>106579362
>but the demo space might have bad configs
I've seen this cope enough times to know the model is dead on arrival.
>>
My fellow /ldg/entlemen, I recently tried out NAI v4.5 to see how proprietary models are doing and the vibe transfer feature is actually pretty good. Now I want to replicate something similar using local models. I've a few questions for the sages of this thread
1. Is IPAdapter the same thing / good enough? If so, is there a good write-up anywhere? Haven't been able to find a lot
2. If not, does local even have anything similar?
3. If so, does anyone have a comfy workflow I can look at?
Yeah that's pretty much it, I just want vibe transfer at home
>>
File: 1750791885221213.jpg (489 KB, 3029x1754)
>>106579362
>https://huggingface.co/spaces/sanaka87/BAGEL-RecA
bagel is such a shit model though, if they do this shit on QIE then maybe that can be interesting
>>
>>106579135
>>>/ldg/
>>
>>106579388
all local one-shot style transfer options are shit, however style loras absolutely destroy vibe transfer.
>>
File: 1733082335851599.png (2.06 MB, 3133x1323)
>>106579393
keeek
>>
I hate Chroma
I hate Comfy
I hate Asuka fag
I hate SeeDream
>>
>>106579395
Yes, I'm posting in it currently.
>>
>>106579423
Landscape diffusion general
>>
File: 00148-3061477033.png (2.49 MB, 1248x1824)
>>106579421
>made the list
wow, I am going to finally download chroma and then spin up some api nodes in celebration. cheers.
>>
>>106579429
Long dead and they never posted any tips or helpful comments (I read the threads in full).
>>
>>106579388
IPAdapter is superior but you have to dial in the settings. The right settings for you, only you can find.
>>
>>106579398
True, but vibe transfer doesn't just copy style. I'm just curious if someone has tried training a pipeline similar to the one NAI is using, given the information we've been given
>>106579453
Does IPA only copy the style or also the "essence" like VT does? That's what I'd like to see as you can just grab style loras otherwise like >>106579398 said
>>
>>106579443
Didn't they post some nice hudson river school stuff ?
>>
>>106579393
desu, qwen edit might not benefit as much because it's already so bloated and powerful. I'm more interested in this being a way to power up smaller/debloated models
>>
I've just installed kohya ss but when I try to run it a cmd prompt just opens and then closes immediately. Anyone know the fix?
>>
>>106579459
You'd have to define "essence". I used to throw a dozen or so images into it and it'd work quite well, really well in fact. The other anon is right in that no one really gives a shit about it anymore and will train a lora instead.
Desu 1.5's IPA is miles better than XL's the last time I used either.
>>
>>106579138
ComfyUI, for example: https://github.com/1038lab/ComfyUI-JoyCaption
>>
File: 00320-83642732.jpg (707 KB, 2048x2480)
>>
File: ComfyUI_00020_.png (3.11 MB, 2560x1440)
trying to make some new desktop backgrounds
>>
>>106579487
Thanks, Ran...
>>
>>106579501
nice one
>>
>>106579480
>1.5 IPA is miles better than XL
Damn, well I'll see if I can find some workflows online and just try it out, thanks. Loras are great but if you have 3 or 4 of them, they have a tendency of deep-frying your image, not to mention that you (or someone else) has to spend a few hours cooking up a good one. Maybe NAI will open source their VT pipeline/models in a year or two...
>>
Guys how do the models just KNOW asuka?
>>
>>106579512
>Maybe NAI will open source their VT pipeline/models in a year or two...
it's not really good enough to really care, noob clears nu-NAI anyway other than text
>>
>ran took everything from me
>>
seedream stole my will to prompt locally...
>>
I honest to god have no idea why you're all shitting yourselves over seed dream. What even makes it good?
>>
File: 244jt6-678755358.jpg (107 KB, 716x716)
Ehemmmmmm
>>
*yawn*
>>
File: file.png (2.6 MB, 832x1488)
>>106579257
> action
have "drawing (action)", one of the few *action* tags in the boorus as far as I can tell

>artist
I literally just said these are some of the tags it has trouble with.

The flip side of it having learned many booru tags from the multiple boorus that went into it is that it indeed hasn't learned all of them. Would be nice to have a model that did. It didn't learn as many as one might hope, though probably more than any model so far.
>>
>>106579628
SaaS is evil but (you) download their leaked models as fast as (you) can, and
(you) despise cloud services yet (you) download their models from hugging face, which is literally a cloud service.

(you) are a joke, this is the real /sdg/
>>
>schizos out
>>
File: file.png (2.89 MB, 832x1488)
>>106579515
like miku and a few others she is all over the internets even if you don't ingest *booru or anime-leaning parts of social networks or whatever as training data specifically
>>
>>106579649
>tranny hands
>>
File: WanVideo2_2_I2V_00376.webm (1.09 MB, 1024x1024)
How to stop yapping?
>>
Best way to train a lora locally that isn't kohya? Kohya ss just crashes on start up for me
>>
File: 0_00084_.mp4 (1.18 MB, 1024x576)
>>106579511
thank you anon
>>106579656
give her a face mask
>>
File: ComfyUI_00247_.mp4 (700 KB, 480x832)
>>106579656
prompt more about facial expression
>>
>>106579690
OneTrainer, literally just used it to train a lora
https://github.com/Nerogar/OneTrainer
>>
File: 00343-1768283568.jpg (815 KB, 2048x2480)
>>
File: ComfyUI_00012_.png (671 KB, 640x640)
>>106579694
forgive the crap video it was just a quick example
>>
>>106579690
>Kohya ss just crashes on start up for me
I'd look into that.
>>
>>106579704
Whoa momma, very nice. Model?
>>
>>106579712
ChromaHD with a self made lora
>>
>>106579703
nice, ty
>>
>>106579621
>What even makes it good?
you can gen 4k quality natively it seems, no upscaling. prompt adherence is also very good, and they actually wrote a paper on their algorithm and why it's so good

the aesthetic superiority is subjective as always
>>
File: veeunus-spongebob.gif (2.34 MB, 498x361)
>>106579714
>Chroma model
>>
File: G0ws4dxXcAAL2Aw.jpg (3.76 MB, 4096x3072)
>>
File: 2loras_test__00023_.png (1.45 MB, 1024x1024)
some day those signs will actually say something, hopefully by 2030
>>
>>106579714
Is chroma "finished" already? I was under the impression that it's still training. Also, is it a style or character lora? How much vram/time does it take to train one?
>>
File: 00081-2784931192.jpg (190 KB, 1248x1824)
>>106578036
i tried the model and hated it. it's just a broken POS model that was obviously trained wrong and is very finicky with its settings. I'll just stick to using sdxl finetunes.
>>
File: tall.jpg (1 MB, 1024x4464)
>>
>>106579656
have her suck something
>>
File: 2loras_test__00030_.png (1.4 MB, 1024x1024)
>>
>>106579811
HD is the finished version but they are still making different versions. I'm using a 5090 and it takes me 8 hours to train
>>
File: 00103-3456784782.jpg (164 KB, 1824x1248)
>>
>>106579858
8 hours on a 5090 is fucked, training an SDXL lora takes like 1-2 hours on my 3090, I shudder to think about how long Chroma would take. That's probably why there are only a handful of loras out on civitai
>>
>>106579858
hey wait a second i didnt make this post. where did my post about --fast being still relevant for fp16 text encoders even when using Q8 go??
>>
It's just like, why use Chroma when SDXL does the exact same thing but faster?
>>
>>106579881
better prompt adherence and can do text
>>
ok weird i guess spam filter discarded it, first time i had that happen on the evasion site

anyways, --fast matters if you use Q8 with a fp16 encoder

video not using --fast on Q8: 302 seconds exactly every time
video using --fast on Q8: give or take 250 seconds. --fast introduces a much larger time range that can vary between 240 to 280 seconds but its always faster than not using it

unnoticeable difference in prompt adherence. nothing like going from fp16 to fp8 scaled for t5xxl
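For reference, --fast is a ComfyUI launch flag, so the comparison above is just toggling it at startup. A minimal launch sketch (the main.py path is assumed to be your ComfyUI checkout root):

```shell
# baseline run, then the --fast run being benchmarked above
python main.py
python main.py --fast
```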
>>
>>
File: file.png (516 KB, 1024x1024)
>Pruned Qwen Image in half (10B params). It needs a lot of training to be useful, so I decided to make it a pixel space model. Patching pixel space with 32x32 patches. Samples are the current 10B latent version and the pixel space version. Both will need a lot more training.
https://xcancel.com/ostrisai/status/1966987356357226612
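The pixel-space patching he describes is essentially just a reshape: cut the raw image into 32x32 patches and flatten each into one token, no VAE latent involved. A minimal numpy sketch (shapes are illustrative, not taken from his actual code):

```python
import numpy as np

def patchify(img, p=32):
    """img: (H, W, C) array -> (num_patches, p*p*C) matrix, one row per patch."""
    h, w, c = img.shape
    assert h % p == 0 and w % p == 0, "dimensions must be divisible by patch size"
    x = img.reshape(h // p, p, w // p, p, c)  # split both spatial axes into patches
    x = x.transpose(0, 2, 1, 3, 4)            # bring the patch grid axes to the front
    return x.reshape(-1, p * p * c)           # flatten each patch into one token

# a 1024x1024 RGB image becomes a 32x32 grid of patches, 32*32*3 values each
tokens = patchify(np.zeros((1024, 1024, 3), dtype=np.float32))
print(tokens.shape)  # (1024, 3072)
```

This also shows why pixel space needs heavy retraining: each token is a raw 3072-dim pixel vector rather than a compressed latent.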
>>
>>106579868
>8 hours on a 5090 is fucked
Totally depends on the number of images and resolution

Have you ever trained at all ?
>>
File: ComfyUI_00139_.mp4 (479 KB, 640x640)
>>106579898
>xcancel
>>
>>106579885
>better prompt adherence
Control nets exist and are extremely good. Most of the actual practical uses of Chroma do not even demonstrate the need for its prompt adherence over SDXL either. It's all just 1 girl stuff.

>can do text
Benchmaxxing useless shit.
>>
File: 00113-2287984840.jpg (168 KB, 1824x1248)
>>
>>106579919
you should go back to doing lolis with bulges
>>
>>>/b/939770328
>>
File: 00119-1035844509.jpg (184 KB, 1824x1248)
>>106579925
not in the mood to get another 3 day vacation. Even posting on /trash/ got me banned, and /b/ is full of compressed low res slop gens. Not risking it. just got banned on /v/ and got a warning the other day on /aco/ for posting young lunafreya.
>>
>>106579818
>w-w-w-Would
>Would
>Would
>Would

https://www.youtube.com/watch?v=fZVDNPODfeY
>>
>>106579990
you should be banned for linking to /b/ in an AI thread and showing me adults instead of a sexy kid

>>106579993
>not in the mood to get another 3 day vacation again.
ok so just replace the boards 4chan org with k1w1 dot st in your url address bar but with an i instead of a 1 so its the name of the fruit/new zealander
>>
>>106579993
fair enough bro cool gens and style tho
>>
File: 00122-3559916170.jpg (163 KB, 1824x1248)
>>
>>106579789
model?
>>
>>106579901
>training an SDXL lora takes like 1-2 hours on my 3090
Yes. Yes I have. I don't know how many images other people use, but I haven't gone above 100 yet. I train at 1024 pixels and rank 32-64
>>
i regret replying to 106579993 because it's obvious he was trolling :/ talking about compressed slop when he posted a sub-200kb jpeg, and when i gave him a solution to his only problem he didn't take it. so obviously he could have just left it at "not in the mood" but because he's a left winger he needs to invent problems :/
>>
I like Chroma.
>>
>>106579898
interesting attempt but i wonder if he can train it enough to confirm his setup is working or not (or retrain if it isn't)
>>
>>106579881
i can do 2girls interacting with each other with chroma instead of just 1girl with sdxl+snake oil
>>
File: file.png (2.18 MB, 832x1488)
>>106580121
it works better.

but wan (1 frame for images), hidream, qwen and maybe others are probably quite a bit stronger still. they unfortunately also have less interesting 2girls overall
>>
>>106580082
seedream 4
>>
>>106580136
i wish someone would give qwen the chroma treatment
>>
>>106580141
you try this? https://www.reddit.com/r/Bard/comments/1nfl7tx/comment/ndxhrmc/
too much effort for me
>>
So I want to finally try Qwen out: is Qwen + lightx2v 8step the go-to if I want to use LoRAs?
>>
>>106578997
yeah, there is a reason for that anon. iChuds won
>>
Alright, I really want to get something good out of Chroma, since Qwen is so slopped for photoreal nsfw at the moment, but goddamnit I just can't get a decent gen to save my life.

Yes it's a skill issue, yes I'm a fag. Can someone share a box of a decent Chroma-HD-Flash image or recommend some settings?
>>
>>106580121
pure skill issue if you can't do 2girl with noob
>>
File: 00020-1100356185.png (2.78 MB, 1824x1248)
>>106580108
not trolling, here is the catbox if it makes you happy.
https://files.catbox.moe/4o5oaq.png
https://files.catbox.moe/k7p27d.png
https://files.catbox.moe/pafows.png
https://files.catbox.moe/d2593f.jpg
https://files.catbox.moe/kjw6xv.png
https://files.catbox.moe/w7n9xh.png
https://files.catbox.moe/51xixp.png
https://files.catbox.moe/s2wuas.jpg
https://files.catbox.moe/0h5tgj.png
https://files.catbox.moe/knmk31.png
>>
>>106580187
Whats the last prompt you used?
>>
>>106580190
i burnt myself out on anime a long time ago
>>
>>106580160
i haven't, though i have found a couple ways to scam infinite credits though such as LMArena and Yupp.
>>
>>106580187
Flash tends to suck ass on its own and I get better results with Hyper-low-step at 1.00 and flash lora at 0.4
>>
go back to the cloud threads and leave this one to the grown ups kek
>>
>grown ups
>512x512
little baby boy soiled his diaper~~
>>
>>106580160
uncensored chinese api sounds mighty tempting.. hard to resist the siren’s song
>>
>>106580216
>>
>>106580213
kys
>>
>>106580195
>here is the catbox if it makes you happy
well i was trolling with my post so now i feel bad for making you put in that effort. i hope the other anon appreciates what you shared and replies thanks to you as well
>>
>>106580205
>LMArena
yeah that's what i've been using tried yupp yesterday but didn't get too far into it
>>
File: file.png (2.75 MB, 832x1488)
>>106580147
would be nice but you need someone/some entity with even more money to spare than lodestone (who mostly funded chroma himself; donations have only covered the smaller part of expenses so far)
>>
christ
>>
>>106580228
>who mostly funded chroma, donations only covered the smaller part of expenses until now
Is lodestones just ultrarich?
>>
>>106580222
you started it not me i said what i wanted to say already lol
>>
>>106580237
Furries seem to be rich, for some weird reason
>>
>>106580141
Not local though.
>>
>>106580222
it's alright, tonight i have the right amount of energy to slop up some gens and effort post. going to keep cooking.
>>
File: file.png (2.62 MB, 832x1488)
>>106580237
the wealth of the ultrarich is far more insane, even if you just look at the liquid assets they could easily expend on a sustained basis.

unfortunately we're not getting them to drop good uncensored NSFW models on the public so far.

but he clearly isn't poor
>>
>>106580251
He will attempt to discuss it here regardless because he is a faggot
>>
>>106580241
>Furries seem to be rich, for some weird reason
there's just a lot of rich people out there bro. some of them are furries. 3.5% of men are pedos. are you a golem who thinks the forbes list of billionaires is all the billionaires on the planet?


