/g/ - Technology

Authenticated Figure Edition

Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107102952

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Neta Yume (Lumina 2)
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://neta-lumina-style.tz03.xyz/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: 1000011617.webm (554 KB, 1024x1024)
>>107103884
>>107108437
Wanted to add it, but even a single video degrades the overall image quality because of compression
>>
File: 2437.jpg (73 KB, 1000x1000)
>>107108429
There are things chroma doesn't understand, e.g. a hotel bellman cart
>>
File: ComfyUI_00020_.png (1.11 MB, 1024x1024)
>>
Blessed thread of frenship
>>
>>107108383
>>107108394
Referencing my question here since I asked it right at the tail of the thread. If anyone has hit issues like this, I'd love to hear what worked for you.
>>
>>107108768
I'm not sure what you mean by "it looks incoherent"
>>
>>107108437
Why didn't you include my image >>107107984 in the collage OP faggot? Who do you think you are?
>>
>>107108904
Like the defined details within the frame, say: the eyeshadow, lips, strands of hair. It rapidly loses the detail from that first frame and does that wobbly motion blur. I'm not sure how to better describe it, but I notice it on almost all wan2.2 i2v gens I've done, both the base workflow and with smooth mix. Using ComfyUI desktop on Windows 10 with a 4090 24GB if that helps.
>>
File: 00020-117623596.png (1.59 MB, 1080x1920)
>>107108949
lol mad
>>
>>107108985
What model and lora you use for those thiccallisious edits bwo? I need to know ;_; I asked on adt too
>>
>>107108995
>he doesnt know
Holy newfag NTA
>>
>>107108995
wai NSFW, this lora for the new psg style https://civitai.com/models/1923265/new-panty-and-stocking-with-garterbelt-regular-style?modelVersionId=2176786

i have no idea what you mean by edits but good luck bud
>>
>>107109002
I'm not glued to these threads 24/7 you cunt
>>
>>107109004
Thank you anon, I really liked the thirsty Stocking stuck to the car windshield, could've been an actual anime frame
>>
>>107108982
by base workflow do you mean with lightx2v loras? maybe try res_2s and bong tangent, just a thought
>>
>>107109005
unironically NGMI you should kill yourself
>>
File: ComfyUI_00261_.png (1.18 MB, 1280x1120)
>>107109022
>>
File: 00601-2244519855.png (2.22 MB, 1824x1248)
>>107109013
oh man that one wasn't even using the psg lora, half of that style came from the carwash concept kek
i forget which one that is, later ill dig up the specific concept from the hash since i can't find it through search.. but yeah, just mix concept loras with different styles and facial reactions, you get some fucking wild mixes like picrel
>>
>>107109039
So you just prompted psg stuff in wai with a car wash lora, sans the psg lora? Interesting, your shit's quality, you should upload to civitai some time (so I can lazily copy the seed and prompt :3)
>>
>>107109062
shit actually thats a good point, heres my civitai i made recently https://civitai.com/user/sneedingonmyligma420
ill just dump them there, less effort.
>>
File: tbh-creature.gif (132 KB, 220x220)
>>107109068
Bookmarked
>>
>>
>>107109015
Yeah, at first I thought it was them, but using the smoothmix model and workflow with no lightx2v loras even enabled, it's still there, albeit less so.
>>
>>107109150
based
>>
File: ComfyUI_00110_.png (2.83 MB, 1024x1312)
I really like how I asked chroma to do a "comic book cover of marvel's Carnage character" and it literally did a plastified photograph (even tho it fucked up the letters). It also did it in some very old-timey 50s comic book style, from when Carnage hadn't even been invented yet. Thank you chroma, very cool.

/blogpost
>>
>>107109170
smoothmix has it baked in
>>
File: 1732258049477027.png (1.2 MB, 1152x896)
>>
File: 1749967284990749.png (1.34 MB, 1152x896)
>>107109318
>>
>>
>>107109335
Sniff it up janny
>>
>>107109370
jannies, street, dragged
>>
Did the internet die? Did enslopification kill it?
>>
File: 00062-1247422989.png (1.37 MB, 1024x1024)
>>
Is any model good at nose picking?
>>
File: 1754775644911087.png (2.76 MB, 1344x1728)
>>107109411
>>107109335
I really like your art and I save them with love
>>
>>107109490
who nose?
>>
>>107108486
Just use a boomer prompt to describe it.
>>
>>107109411
He will clean it up in Valhalla now
>>
>>107109501
I bet the Sniffer knows.
>>
>>107109512
lmao

>>107109497
thank you for your support
>>
>>107109507
wdym?
>>
https://vocaroo.com/1b2S4GZqf3Xe
I've learned so much about SongBloom.

SongBloom is very different from suno and udio, at least as far as we know.

The biggest takeaway is that you're riding a razor's edge (and possibly should just try more seeds) to keep the model from blurting out a verbatim sample of the source audio.

I think this is primarily controlled by cfg, but perhaps not; dialing it in is delicate.
>>
File: 00004-1625842642.png (1.67 MB, 1000x1000)
>>107108437
You are a faggot. You shit on Neta Lumina but defend the 2nd or 1st most spammed failed model in history, which is Chroma.
Qwen objectively mogs it, WAN txt2img at 1 frame mogs it, even base Flux mogs it.

You can keep spamming this garbage shilling general for eternity, but you must know that you are also a shill.
And a shill of the worst overhyped overtrained and overshilled model in history: Chroma

I expected more from you.
Newfag
You losted

>t. antichromaschizo, anticomfyschizo, antinetaschizo, antisageattentionschizo since mid 2025
>>
>>107109237
That makes a lot of motherfucking sense. Thanks I'll adjust things accordingly. You saved me another day of schizophrenically analyzing and adjusting settings.
>>
>>107109675
My post is easily the top music of this entire thread, hands down.
>>
whu
>>
File: 1girl Certification.png (505 KB, 1111x1939)
Are you guys gonna get your 1girl Goon Cert?
>>
File: ComfyUI_06868_.png (1.63 MB, 1200x896)
>>
File: ComfyUI_06869_.png (1.45 MB, 1200x896)
>>
File: ComfyUI_06874_.png (1.54 MB, 1200x896)
>>
File: ComfyUI_06879_.png (1.57 MB, 1200x896)
>>
My sister showed me an ai image on her phone today. It's over.
>>
>>107109335
is that base ΣΙΗ model or some mix?
>>
>>
We are literally one year away from locally (and automatically) generating personal Computer Chronicles updates on the week's tech news.
>>
>>107109887
comfy gen
>>
File: WanVid_00024.webm (634 KB, 720x960)
just a toosday night
oh shit it's wednesday morning
>>
>>107109926
Thanks
>>
>>107109967
can it do boiling water?
>>
File: ComfyUI_06894_.png (1.2 MB, 1184x880)
>>107109813
>>
what's a good comfy node to browse images in folders and then send them somewhere?
>>
that time of year when the GPU doubles as a furnace
>>
I generated a 1girl, I hope you are happy. I'm not posting it, he looks like a troon.
>>
>>
>>107109748
kino. reminds me of that soviet concept tank
>>
File: ComfyUI_00003_.png (1.47 MB, 1024x1024)
idk, Chroma can make first-rate slop.
>>
File: file.png (261 KB, 2075x886)
not sure what i'm doing wrong, but i am trying to use wan 2.2 5B and every time i try to generate a video, say 5 seconds long, it falls apart after like half a second and just turns into a blur of mushed bright colorful streaks.
i checked out other people using this same model and their videos look fine.
i am using the default json node setup in comfy ui that came with it, any idea where i might be fucking up?
>>
>>107110254
You need length 81 and fps 16.
Plus 5b isn't really great.
It's more like an experiment that debunked its own premise.
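(if I have the math right, frame count = seconds x fps + 1, so 81 frames is ~5 seconds at 16 fps; at 24 fps the equivalent would be 121)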
>>
>>107110277
Oh fuck, ignore what I said, 2.2 is 24fps.
I dunno, it's bad, but it shouldn't be that bad.
>>
>>107110282
*2.2 5b is
I don't really recall what workflow I used when I experimented with it either.
>>
File: 1745301698687005.png (1.53 MB, 880x1176)
>>
>>107110254
anon, it's the vae, you need to use the 2.1 vae
>>
File: 00027-2911149530.png (1.77 MB, 1080x1920)
>>107109076
they're up :)
two new tips: set your quality tags at the end of the prompt, and don't use masterpiece. it twists the style.

>>107109150
shih muh dih that space gas better stop actin up

>>107109335
>>107109411
>>107109497
>>107109520
thank you for your service
>>
>>107110302
No, 5b needs the 2.2 one (the 5B model ships with its own higher-compression VAE, so the 2.1 one won't match).
>>
>>107110317
ohh then i have no idea
>>
File: demeterlo.png (2.49 MB, 1600x1224)
>>
Any new wan speed boosts lately?
>>
File: 00031-2660807875.png (2.11 MB, 1080x1920)
>go looking for janny lora
>find this bad piece of bunny meat instead
zamn i forgot arthur was HOT CHICK HEAVEN
>>
>>107110345
Literally what mineshafts can look like :)
>>
File: test.webm (911 KB, 640x352)
>>107110302
i tried 2.1 vae but got an error: Given groups=1, weight of size [16, 16, 1, 1, 1], expected input[1, 48, 31, 22, 40] to have 16 channels, but got 48 channels instead

>>107110317
yep doesn't work with 2.1

here is the video example, it's a photo of a girl with the prompt "girl dancing". i am super new at this so maybe i am screwing up something obvious?
my gpu is an rtx3060 with 8GB vram so i don't think i can run any bigger parameter models, or is there some better model i can use? i need mainly image to video, to make clips around 5 to 10 seconds
>>
:(

I don't want a rabbit drawing, I want a song about the best German ever.
>>
>>107110366
Kino
>>
File: ComfyUI_00004_.png (774 KB, 1024x1024)
imo I just won ldg of the day award.
>>
>>107110366
This is a long shot, but maybe it's just a really bad seed? Do this error or similar errors keep repeating? Maybe paraphrase the prompt and try again? Sorry, I have no concrete idea what's off with your gens anon.
>my gpu is rtx3060 with 8GB vram so i don't think i can run any bigger parameter models,
You can run them but the speed will be shit.
>i need mainly image to video to make clips around 5 to 10 seconds
Stick to 5 secs if you want coherency.
>>
>>107110383
>You can run them but the speed will be shit.
how shit is shit?
>>
>>107110366
Kind of reminds me of Loab.
>>
>wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors
>wan2.2_i2v_A14b_high_noise_lightx2v_4step_1030.safetensors

How do I use them? They don't show up in the diffusion model dropdown list
>>
File: ComfyUI_00006_.png (1.78 MB, 1024x1024)
>>107110381
>shit.
>>i need mainly image to video to make
>>
>>107110412
only one way to find out
try the Q3_S model, maybe it works on 8GB
>>
>>107110412
I mean I have the 3060 12gb, which has 50% more VRAM and bandwidth than yours, and it still takes a few minutes to shit out a video with the most aggressive "optimizations" (aka quality-degrading shit like distill loras).
fp16 is out of the question so I used Q8 as well.
>>107110448
I disagree, Q3 will be too shit, and these aggressive cope quants don't really run faster than Q8.
>>
>>107103884
moar
>>
>>107110433
They are loras, you load them as you would load any lora, to their respective models. Also
>model dropdown list
No idea what you are referring to.
>>
>>107110469
they are like 27GB, I thought they were the wan model.
>>
>>107110477
Then they baked the loras into the model?
Load them as you load the normal wan.
Restart your UI if they don't show up. (If this is comfy you can just press r too)
>>
Looks like the video as prompt is out but is there a workflow anywhere? I cannot seem to find one.

https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/Video-as-prompt
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Video-as-prompt

>>107110433
Have you refreshed comfy? Also apparently you still have to use the 2.1 lora on low, kek, see https://huggingface.co/Kijai/WanVideo_comfy/discussions/105
>>
How to deal with system lagging/oom during the frame interpolation stage?
>>
>>107110503
add a clean vram node before you start that stage.
>>
>>107110506
That fixed it, thanks.
>>
I'm getting weird results with flashvsr, I can't tell if it has added/removed frames or greatly changed parts of the video. Anyone else experienced this?
>>
>>107110595
Took this from seedvr2 setup and it seems to have fixed it.
>>
>>107109218
>Did plastic wrap
>Didn't do hands or feet
Sad
>>
>>107108437
>tranny baker includes a literal ad for a shit product and some faggot's celeb obsession into the collage but not a dozen other different normal gens
Quite something when there are more mentally ill retards in these generals than the average poster on /lgbt/
>>
soo, is chronoedit good? anyone manage to test it? how does it compare to qwen? in the process of downloading it right now
>>
how long till we can run capabilities like grok imagine or sora offline on our gayming pcs? 10 years?
>>
>>107110746
It's absolute trash, worse than kontext
>>
>>107110769
hmm, i mean even if they ban ai today, the current models aren't going anywhere, but i don't even see that happening, because if one country bans AI it will instantly fall behind all the others that don't (like china), which means no country will ever do anything beyond the occasional NSFW model restrictions.
I am trying to imagine how the online art landscape will look a decade from now. will everyone just be making pictures and videos for themselves to look at? because since it will be so easy and cheap for everyone to make, how could anyone gain a following in such an environment?
>>
>>107110769
>>107110797
kys furryshit
>>
>>107110797
>A.I vtubers will totally be a thing, regardless of the current stigma.
oh absolutely, real vtubers are cooked
sure, they are "real" but to you as a fan that does not matter, you will never meet them in real life and if you want her to know you even exist you need to make a $20 dono TTS
while an AI vtuber can stream 24/7, and most importantly they can instantly reply to every chat message you send them. plus, while the irl vtuber is pretending to be an anime girl, the AI tuber is a "real" anime girl, as in she only exists as the anime girl with no normie identity.

>>107110807
we don't discriminate here, generate whatever makes your little 0.5inch pp hard
>>
>>107110807
if those gens were good enough for me to trigger a schizo, then i take furryshit as a compliment. for once.

>>107110811
kek i hadn't even thought of it to that extent until you detailed it like that. Man they are F U C K E D for sure.
Which honestly brings up another question: what website would be willing to host things like that, given there's no way it'd last 2 seconds on youtube with its fucked rules and lax banwaves lately. All it takes is one funny fella goading the a.i into saying the nigger word to topple an entire channel.
>>
To the guy who replied to me about the lack of example images regarding eq-vae last thread >>107104387, it apparently has a civit page too, which I stumbled on today:
https://civitai.com/models/2071356
>>
I'm not a tech guy, is it possible to change the text encoder of a model without needing to train from scratch?
>>
What does ModelSamplingAuraFlow shift value actually do? I've tried it out and I can see an effect, it's like it adds that stippled look, blurs things a little, colors get sharper, you 'smooth out' some of the tendency for non-ancestral sampling to produce artefacts. In some ways it's like a weird grainy alternative to ancestral sampling, but not quite the same. It's also a bit like turning up cfg, but again, not the same.

When you turn it below 1 you start getting something more like the rough-looking gens that I associate with low-step Uni_PC sampling. The gens also start to wash out a bit.

Do you guys use it? I guess it's one more tool in the toolbox.
>>
>>107110894
pretty much no.

the theoretical translation layer to convert text encoder a to text encoder b is currently probably actually more complex than the whole training of the text encoder AND image/video model.
>>
>>107110820
well we have an example of it already with neurosama, which is an AI vtuber: her creator makes serious bank and she is super popular, and he's on youtube and everything
but if youtube won't accommodate them someone will absolutely create a platform that will, because this is a billion dollar waifu market
but i can't see platforms like youtube or twitch banning them since AI tubers will be money printers and youtube/twitch etc are for-profit companies, so they will either go with the flow or become irrelevant
>>
File: Untitledsdgsgsdg-1.mp4 (3.8 MB, 562x1000)
Hell yeah, starting to get those seamless loops working with editing.
>>
>>107110963
nice. got a workflow for that blud? one thing i really wanted to pull off with wan is making 3 second gif loop type gens, save time and accomplish the same end goal of fappin'.
how much editing do you have to do with your method?
>>
>>107111024
Any first frame last frame workflow should work. They all suffer from degrading quality. Add a Color Match node between vae decode and video save.
Then the editing is done in davinci resolve; it has Smooth Cut, an automated effect that blends frames and opacity between the two clips. Depending on the severity of the quality degradation it works great or just ok. Then some manual masking of the original image and perhaps some reverse clips.
>>
>>107111054
>They all suffer from the degrading quality
fuck. ill give it a shot sometime, thanks.
>>
>>107110312
Thanks a lot for posting these anon, i'm not on PC right now but I'll try to do them justice when I get back home in a couple days
>>
what's the best hardcore txt2img? Is it the bigasp models or chroma?
>>
>>107110360
>>
>>107110894
Ignore that idiot, you need a serious finetuning budget but it can be done (and is being done, see rouwei) without spending millions.
>>107110909
Dunno about the aura flow one specifically, but model sampling changes the denoising behavior. Higher values let it spend more time on the composition of the image rather than finer details. Flow matching models actually benefit from this a bit. Single digit values are fine.
Set it to 1 to disable it if you don't care.
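For reference, the shift is (if I remember the formula right) just a remap of the flow timestep:

t_shifted = shift * t / (1 + (shift - 1) * t)

shift > 1 drags the schedule toward the high-noise end so more of the step budget goes to composition; shift = 1 is a no-op.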
>>107111108
Chroma is too unstable and schizo for me so I would go with bigasp despite being more limited.
>>
>>107109675
You look like this and say this
>>
File: 00079-3652864623_cleanup.png (1.63 MB, 1080x1920)
>>107111100
anytime, ill be remaking them at a higher res/better settings in a moment too.

>>107111119
Yes.
>>
>>
>>107111337
me on the right
>>
>>107109150
>>
which models do people use to make those short anime sex videos? like a 5 second loop of an anime girl riding dick?
>>
Man I want to make smut but i can't afford a 24gb vram gpu. What sort of hell is this?
>>
>>107111108
wan
>>
>>107111415
Early next year 24GB cards will be available with nvidia's mid-cycle refresh.
Or just get a used 3090.
>>
>>107111398
Wan 2.2 with nsfw loras probably.
>>
>>107111514
will there be a 5070 with 24GB?
>>
>>107111571
Who knows.
>>
>>107111571
ye, supposedly and will probably be unobtainium
>>
>>107111624
useless bitch
>>
>>107111571
There have been credible leaks (as far as hardware leaks go, so above MLID-tier garbage) discussing the super refresh in early 2026, so yes, it's likely there will be a 5070 Ti Super with 24GB VRAM.
It's possible their plans change, however.
>>
>>107111686
correct, either it will happen or it won't, those are the only two possibilities
>>
any idea why when I try to gen video with res_2s and bong tangent the sampler just hangs indefinitely? for images it works
>>
>>107111692
no problem for me but I use res3m/bong
>>
>>107111692
Maybe they just don't work with videos.
Not all sampler/schedulers work with all models.
>>
>>107111562
>loras
fuck man, i really have to take the time to learn all this shit like what loras are and how to use them
>>
>>107111750
there's people talking about using it with wan all the time
>>
>>107111686
probably not gonna be cheap tho
i got to start saving up my welfare buxx starting today so i can get it when it drops
>>
>>107111692
do you have enough paging file?
bongmath, if you use it, also makes the first step take a lot longer than the next ones
>>
>>107111768
Post a catbox and hope that one of those people is around to tell you what might have gone wrong.
Are you also sure about the res_2s + bong tangent combo? Res4lyf samplers aren't a monolith.
>>
How do I add more samplers and schedulers to these?
>>
>qwen t2i then chroma i2i
any anon has experience with it? is it any good?
>>
>>107111790
you can't
>>
>>107111810
That's gay, why not?
In forge it's as simple as adding them to a txt file and it'll download them.
>>
Holy, did not expect to max out vram for images in comfy. What are the options to alleviate the load, tiling?
>>
>>107111834
>2160p

>((Flux))
>>
File: 1731004420112652.png (9 KB, 370x254)
>>107111790
stop using kijai's nodes. you might have to cook a huge plate of spaghetti with this
>>
>>107111834
use multigpu node to offload the model to ram
>>
File: ComfyUI_03887_.jpg (701 KB, 2304x2304)
>>107111834
3840x2160.. found your problem.. gen half of that size (or less) and then upscale
>>
>>107111886
Oof disregard I suck cocks.. wasn't DyPE snake oil?
>>
>>107111844
>>107111886
>he doesn't know of dype

>>107111867
Does any multigpu node work, offloading automatically?

>>107111855
Kijai's nodes have given me the best results so far. But I'll try that one out, thanks.
>>
>he dyped up
>>
File: 00000-2217333902.jpg (1.12 MB, 2560x2048)
>>
>>107109861
there's a mr morris lora on civit
>>
Strange error, can't seem to find a proper answer, I'll backtrack I guess.

>>107111867
Oh I see now, I'm retarded. I'm so used to the kijai stuff doing things automatically.
>>
>>107109705
>Are you guys gonna get your 1girl Goon Cert?
I got laid off so I actually might if it's free kek. I don't put certs on my resume anyway, I just have a section on my site for them, but I would totally do it since I'm unemployed, and for the memes
>>
>>107111898
this one
>>
>>107111898
oh I assure you he knows all about dype
>>
File: 177.jpg (910 KB, 857x579)
how do you prompt for a handjob on XL models? when I prompt handjob or jerking off a penis, bitches come out sucking dick 95% of the time
>>
Wtf is going on? How am I supposed to get xformers working with this?
>https://github.com/kohya-ss/sd-scripts/tree/sd3
>The command to install PyTorch is as follows: pip3 install torch==2.6.0 torchvision==0.21.0 --index-url https://download.pytorch.org/whl/cu124
pip install xformers installs torch 2.8 which breaks everything.
No worries we will find the specific xformers version that is compatible, right?
pip install xformers==0.0.29.post2 --dry-run and pip install xformers==0.0.29.post3 --dry-run don't try to change the torch version, so they seem compatible... right?
 RuntimeError:
The detected CUDA version (13.0) mismatches the version that was used to compile
PyTorch (12.4). Please make sure to use the same CUDA versions.

Fuck. What can I even do here? Try to force an earlier torch version to COOOOOMPILE for the current CUDA for who knows how long and pray that it doesn't throw an error???
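One thing I might try before giving up (untested): the xformers README installs from the pytorch wheel index, so pinning the version there should pull a prebuilt cu124 wheel instead of compiling against my system CUDA 13:

pip install xformers==0.0.29.post3 --index-url https://download.pytorch.org/whl/cu124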
>>
File: 1748827925487360.png (2.57 MB, 1328x1328)
Emu 3.5 status?
>>
File: 00001-292229446.jpg (2.21 MB, 2560x2048)
>>
>>107111998
yeah what's going on? I want to create a fake tech support discussion and waste the time of everyone and I write like that?????
>>
Do anons use this
https://huggingface.co/Comfy-Org/Qwen-Image-DiffSynth-ControlNets/tree/main/split_files/model_patches
or this
https://huggingface.co/InstantX/Qwen-Image-ControlNet-Union
?
>>
>>107112026
I am surprised because I am following the instructions on the repo itself and it seems partially broken.
It appears I didn't account for you being a useless illiterate monkey, my apologies.
>>
>>107111976
That one did great, thanks.

>>107111979
Does he know why results are like this?
>>
File: flux_krea_00005_.jpg (1.09 MB, 2160x2160)
Not only beautiful, but very smart too.
>>
>>107112107
brainiac looking bitch lmao
>>
>>107112044
yeah I'm sure gonna help if you insult me??? weird why don't I want to help??? retarded USELESS people don't want to help???????

stop being so fake anon, you have better things to do in life than playing the dumb user
>>
>>107112107
i'd advise not to use the fp16 accumulation, it does increase speed but visibly fucks the quality imo
>>
>>107112122
on wan the trade off is acceptable (+20-30% speedup for me), on image gen, not really
>>
File: 00002-2319075882.jpg (2.55 MB, 2560x2048)
>>
Fucking schizos man.
>>
File: flux_krea_00009_.jpg (1.86 MB, 2160x3840)
Does flux not do anything other than a 1:1 aspect ratio, what the fuck is going on?

>>107112122
I really feel the difference in speed; I'll do some tests to see how big it is.

>>107112111
It's not even the worst.
>>
<schizos_can't_XML>
I don't get it, there must be more than one person, how do they keep going nonstop 24 hours straight for weeks? Any answer from the jannies?
</schizos_can't_XML>
>>
>>107112164
It does, but you need to hover around 1 million pixels. Try 712x1424. This is not specific to flux; don't rawdog such resolutions.
>>
Fuck it just using sdpa for now. Hopefully good enough.
>>
>>107110312
>set your quality tags at the end of the prompt, don't use masterpiece
i also stopped using masterpiece and tend to put lower than 1 weight values on the other quality tags. putting them at the end sounds sounds like it might free up more of the early prompt clip attention and direct it to the style tags.
do you have any other experience on which quality tags work and which dont? e.g. best quality, absurdres vs hires
>>
>>107112181
Dype gave me the impression it could handle these large sizes. You're stuck at 1mp no matter what?
>>
>>107112122
Seconding the other anon who said that --fast is awesome on wan. Didn't notice any degradation between using it and not using it, and the speedup is essential

If you're spending more than one minute generating an image, well I think that large image models are a waste of time (the future of imagegen is videogen of 1 frame) so we diverge on fundamentals

>>107112173
Evading isn't difficult. You just punch through cloudflare's bot detection. The zoophiles who run the evasion site talk about it openly. There is nothing the moderators or jannies can do. AI has changed the social contract on the Internet. Anonymity only worked when it was mostly humans using the internet

And I say this as someone who uses the aforementioned platform to share beautiful children itt occasionally

That being said, the evasion platform put up a "no automation allowed" banner. My thoughts are that letting people shitpost on 4chan manually is one thing, but letting it be automated starts to enter Computer Misuse Act territory, and more people than just antis will start to care about their site if they catch wind of a de facto botnet-for-hire running on the clearnet
>>
<schizos_can't_XML>
Why do they do this to us? What did /ldg/ ever do to you besides trying to create quality discussions, do model tests, and help anons make their first local gens?
</schizos_can't_XML>
>>
>>107112293
<schizos_can't_XML>
But then we have to wait for them to get bored? i don't get it, what's the point?
</schizos_can't_XML>
>>
>>107112293
maybe it's just me but i notice degradation with wan too, and it sucks cause i'd love the speedup
>>
<schizos_can't_XML>
Is there any way to contact the Jannies or mods? I wanna at least hear what they think if they even know about all this
</schizos_can't_XML>
>>
File: 00003-3660239168.jpg (1.94 MB, 2560x2048)
>>
>>107111686
i was eyeing the 5060 16gb version, but if true i will def wait till next spring for the 5070 24gb so i don't have to run these gimped amputee models but can actually run the full parameter versions
>>
File: AnimateDiff_00004.webm (3.93 MB, 784x1200)
fp16 model gen
looks pretty nice and works on 3090 too with multigpu node
>>
@comfy
Maybe it would be a good idea for comfyui to have an option to precompute all prompts first: when enabled, it would run every queued text encode up front, which ensures a HUGE speedup since you can then throw the text encoder out of VRAM forever.

Prompt computation gets hugely faster since the text encoder stays loaded in VRAM and processes everything in one go.
Then, afterwards, the diffusion model also stays permanently in VRAM, computing the images one after another.
You never waste time swapping from RAM (or God forbid, disk) to VRAM, TWICE for every single generation.
This is a huge, double-digit % speedup for most newer models. It would also allow people to load older models at maximum precision.
>>
>>107112436
supporting multi-stage workflows for this would be quite an endeavour
>>
>>107112473
If a text encoding job is next in line, load the text encoder and do all such jobs across the queued workflows, otherwise continue as normal
This automatically handles even multi-stage workflows
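in pseudo-torch the whole idea is just two passes over the queue (a rough sketch, not actual comfy internals; the loader/encode names are made up):

import torch

def run_queue(jobs, load_text_encoder, load_diffusion_model):
    # pass 1: only the text encoder lives in VRAM; encode every queued prompt
    te = load_text_encoder()
    with torch.no_grad():
        for job in jobs:
            job.cond = te.encode(job.prompt).cpu()  # park conditioning in RAM
    del te
    torch.cuda.empty_cache()  # the encoder never has to come back

    # pass 2: only the diffusion model lives in VRAM; sample every job
    dm = load_diffusion_model()
    for job in jobs:
        job.image = dm.sample(job.cond.cuda())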
>>
"oh yeah, this model can do nudi-"
https://files.catbox.moe/870k7n.png nsfw

Ok, not doing that again.
>>
>>107112425
how long does it take vs q8/fp8?
>>
>>107112524
fluxbros... NOT LIKE THIS!!!
>>
>>107112376
This is not gonna make it into the collage bub
>>
>>107112436
A simple "Process all prompts first" switch in the settings would be great.
I also don't see why this wouldn't be the default anyway; it would hugely speed up anyone who has more than one thing queued, which is especially true for the enterprise users comfyui wants to cater to anyway.
>>
>>107112555
Just beat it.
>>
File: 00047-3944818114.png (1.35 MB, 763x1032)
>>
File: 34637757.jpg (90 KB, 1280x720)
Anyone else feel that the chasing of parameters is hurting local in the short and medium term? The huge new models never take off because basically no one has the hardware to modify them (inb4 vramlet etc, just look at the reality: a new model appears, some loras get made, most of them on saas services because barely anyone has the hardware at home; finetunes/retrainings are practically impossible unless some autistic furry millionaire appears; once the novelty wears off most people stop caring and the buzz dies off). XL was a hit because most people interested could train stuff at home on their own GPU right away when it came out, whereas the hardware required for the new models won't become widespread enough for several years minimum; it's doubtful it'll even happen with the next GPU gen. Some new intermediate model or improvements to XL would have much more potential for the next few years, yet no one seems to be doing anything like this.
>>
Our cute sister general was deleted :(
>>
>>107112720
Why?
>>
>>107112539
the steps take the same amount of time as q8.
For me, the total gen time is 20% more than q8, because i only have 64gb ram and it needs to do some paging.
If I had 96gb ram, the gen time would be identical to q8
>>
>>107112713
I agree, we need to iterate on SDXL somehow, make it understand natural language better without losing its booru tag capabilities, and scale it up for 16gb gpus since that's the new normal
>>
>>107112736
wtf I have 128gb, I should do this
>>
>>107112783
yea, for me the extra time is spent switching models from high to low, it should really be super quick for you
>>
File: 00033-1107814688.png (2.33 MB, 1248x1824)
>>
>>107111337
Is this Abby Phillip?
>>
File: 1756915707734759.jpg (59 KB, 680x680)
>>107112746
Yeah work should be done on increasing the resolution and quality of output of existing architectures (well, mostly XL), like a better vae etc. Quality would increase a lot by that alone, because the model would then be able to tell that these pink splotches of color in the initial noise are supposed to be two separate fingers and not some Gigeresque body horror; it would also make crutches like controlnets more effective for the same reason. Yeah 16GB would be the target, it's not quite there but it'll be the new norm soon. basically if a model can't be loaded fully into a 16GB GPU without any humiliation rituals it won't take off for the next few years.
>>
File: 00040-1107814688.png (2.29 MB, 1824x1248)
>>
>>107112713
it's also snake oil fatigue I think. nlp isn't worth the memory rape either. that new encoder that does json format looks promising for solving the shortcomings of tags while being more granular than nlp. boomer prompting must die
>>
>>107112848
Ellie Ensley
>>
File: 00004-2699879500.jpg (1.7 MB, 2560x2048)
>>
File: QWEN_00113_.png (1.09 MB, 896x1152)
>>
Has anyone seen a workflow for using WAN VACE as a video edit model?
Not redrawing a new object, but editing an existing one in each frame, like Qwen Edit does for an image.
>>
who is going to rebake /adt/? I need my fix of cunny tranime
>>
>>107112713
>Some new intermediate model
Already exists and everybody ignored it for some reason. (hint: it's not lumina)
>>
is the thread dedicated to anime gonna come back?

>>107112901
you play bombergirl? i used to play it
>>
Why? Anon already posts anime here.
>>
>>107113004
there's too much of everything else. /ic/ has a thread dedicated to anime, i don't understand why we can't have one. the resources in the op were also really helpful to me
>>
File: 1761774105935020.png (1.37 MB, 1024x1344)
>>107113013
this
>>
>giant ani saggers
>>
>>107113013
>the resources in the op were also really helpful to me
They were the same as the ones in this OP.
>there's too much of everything else.
?
>>107113029
Why repost someone else's gens?
>>
File: 1760837488294702.png (1.58 MB, 1024x1344)
>>107113038
I'm not anistudio anon but I think his gens are really cute and diverse
>>
>>107113047
better than cumfart for sure. if only he made anistudio better than the webuis
>>
File: 00043-1107814687.png (2.29 MB, 1248x1824)
>>107112983
never played the actual gacha game. i only gave a shit about the girls because /v/ wouldn't stop making threads of pine.
>>
Do any of you guys actually understand how to properly use negative prompts? Mostly using sdxl models, but the only thing I've EVER had success with is "3d" to exclude western-style 2.5d stuff. All other attempts just do literally nothing except shuffle the image around unpredictably, like changing the seed.
Negative quality tags don't seem to improve quality, and using stuff like "deformed limbs" or "bad anatomy" either does nothing or actually introduces them into the image.
If I want a CAT instead of a catgirl, or exactly 1 boy and not multiple boys, catgirl or multiple_boys in the negative prompt do exactly nothing, I just have to gamble on the seed. And sometimes I can get the image I want, but it's always without negatives, and if I can generate it without negatives, why add negatives anyway? That will just degrade the image. What am I not getting?
>>
>>107113113
you want a jacket with a zipper instead of buttons? you want long hair but not twintails or a ponytail? want to get rid of hair accessories or a logo on a shirt? want to get rid of artist watermarks or text? it's the smaller annoying details that should go into the negative, not quality or dumb shit like hand fixes
>>
>>107113047
>>107113057
Ok.
>>
File: 00005-1081229731.jpg (1.26 MB, 2560x2048)
>>
>>107112969
I won't bake from my phone and I'll be back only in like 3 hours. Hopefully someone will do the needful by then.
>>107113004
As for me, I greatly prefer a thread dedicated to anime. I'm not really interested in anything else when it comes to AI gen, and this thread is practically useless for me.
>>
>>107113153
I think it was the dox image in the collage that killed the thread so let's not do that next time
>>
>>107113113
negative quality tags only really matter if you're not using positive quality tags, and even then your lora probably doesn't care either
maybe your model just sucks, dark_skinned_male in negatives seems to get rid of them just fine for me
>>
>>107113153
>I greatly prefer thread dedicated to anime
Why not one of the many already existing anime specific diffusion threads on other boards?
>>
>>107113205
>dark_skinned_male
I just use nigger since it's a more direct and efficient token
>>
>>107113136
Okay well I did just try to get rid of a girl's pink hair and it did make her blond. But if I want to get rid of characters or suppress unwanted things like with the catgirl example, that never seems to work. Maybe I'm just using it wrong then
>>
>>107113212
/a/ just deleted them and the rest are schizos on /h/ so no. it's been doing just fine on /g/ and nobody complains about saas either. we have a good thing going here
>>
>>107113246
use cat ears and cat tail in the negative and cat in the positive. maybe kemonomimi might work in the negs. dunno what model you are using
>>
>>107113252
>and the rest are schizos on /h/
There are anime threads other than /h/, why not one of them?
>>
>>107113246
if you're using a danbooru model try cat (animal) in positive.
>>
why is it so hard to make the background look real? is it because it has to make sense?
>>
>>107113274
Okay thanks, will try those. I was mostly just using the same models I was using for NSFW, so that may have been the problem if waillustrous is just too horny; even on autismmix pony I didn't get that many catgirls. I also tried juggernaut xl but that didn't really give me a nice anime style.
>>
>>107113290
what blue board has an anime diffusion thread other than /g/?
>>
>>107113305
>autismmix pony
>juggernaut xl
bro, just get noob or wai_nsfw already.
you are living in 2024. vpred will help with accuracy and the models I mentioned work really well with artist tags
>>
>>107113311
Who said anything about blue? Anon brought up /h/.
>>
blue bored
>>
>>107113252
>the rest are schizos
desu the entire website is full of schizos kek
>>
File: 00121-3940376617.png (1.07 MB, 792x1064)
>>
>>107113295
shitty XL 1girl models sacrifice backgrounds for booba
>>
we can leave the cope thread now >>107113313
>>
>>107112436
This sounds like a great idea
>>
File: ComfyUI_00171_.mp4 (347 KB, 640x640)
Wan experts, which sampler/scheduler pairs are the best? I have been trying res_2s/bong_tangent, which I heard is the best combo, but it is so damn slow and it doesn't work with the three-ksampler method either. I have used euler/simple and unipc/simple and they generate good results but aren't that prompt accurate.
>>
>>107112746
>we need to iterate on SDXL somehow, make it understand natural language better, without loosing its booru tag capabilites, and scale it up for 16gb
why try to bandaid and scale up a shitty old arch instead of using something modern
none of the current fix-up-xl models are actually any good in comparison
>>
>>107113295
That's why I prefer qwen over chroma: it produces backgrounds that are structurally more sound. I just can't stand the high-frequency noise in backgrounds from chroma, because it adds no information.
>>
>>107113443
euler/beta57 is pretty good for high detail
>>
>>107113506
It's either an XL upgrade or a new model that isn't as huge as the other new models, but neither is getting done, which is why it'll be XL for several more years minimum
>>
File: FluxKrea_Output_6226373.jpg (2.92 MB, 1760x2336)
>>
>>107113618
Some are still using SD1 even
>>
>>107112909
>ESL psyop
>>
>>107110190
nice
>>
>>107113443
euler/beta or beta57
>>
>>107113212
I've wanted an sfw-focused thread for a long time. As for other threads in general, edg is fine for posting sometimes I guess; hdg is a completely unusable shithole.
>>
>>107113618
Lumina 2 is already about as close to SDXL size as you can possibly get. The only real issue with it is that the inference for some reason isn't nearly as optimized speed-wise as SD 3.5 Medium's. Cosmos 2 has the same issue: both are 2B vs 3.5 Medium's 2.6B, but neither gens nearly as fast as it does.
>>
>>107113618
>why it'll be XL for several more years minimum
For the common anon. But we are not common anon.
>>
>>107113698
How's your finetune going?
>>
File: 1742534170253751.jpg (1.67 MB, 1248x1824)
>>107113443
For chroma i've been using res multistep beta for over a month now.
>>
>>107113707
Calm down.
>>
>>107113752
Well, how is it going? This is the reason why these models don't take off.
>>
>>107113625
prompt?
>>
>>107113765
To pin it entirely on finetunes is disingenuous.
>>
>>107113144
Why do you upload this weird shit
>>
>>107113789
base model can't be modified with consumer hardware = model won't take off
>>
>>107113790
nta but ill take a million "weird shit" gens over a single trite shitmix jeetmerge gen, you know the ones that are everywhere in this hobby
>>
>>107113792
Which of the meta changing finetunes were trained on a single consumer card?
>>
>>107113823
It keeps an ecology going, with people learning from each other and basing their work on others' works, which is why XL is so popular. Why isn't this happening for new models?
>>
>>107113846
You would be correct if there were no finetunes of models newer than XL but that is not the case. Again, which finetunes were trained on consumer hardware?
>>
>>107113857
Why aren't any new models taking off, why is no ecology for them arising? What's the reason?
>>
File: 1743209993167567.mp4 (1.04 MB, 512x640)
killer moves
>>
>>107113877
>still won't answer the question
Alright.

Anyways, XL took longer to "take off" than 1.5 so I'm not sure what your hold up is.
>>
>>107113908
There's been quite a few new models since XL, and pretty much all of them are dead in the water. What's the reason?
>>
>>107113915
>dead in the water.
In this thread anons post more newer-model gens than XL gens. I'm not even sure what your contention is anymore, other than "newer models are bigger than older models and require better cards", which is... self-evident and not even a "gotcha".
>>
File: ComfyUI_00038_.png (2.96 MB, 1248x1848)
>>
>>107113113
>>107113205
>>107113218
negative prompts aren't implemented properly in any UI. just zero out your negative prompt input and use NegPip to put your negs in the positive prompt:
https://github.com/hako-mikan/sd-webui-negpip?tab=readme-ov-file#instructions
https://github.com/pamparamm/ComfyUI-ppm?tab=readme-ov-file#clipnegpip

negpip will cause negs to actually get rid of the shit you don't want.
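e.g. you write the weight as a negative value directly in the positive prompt, something like this (going from memory of the readme, tune the number yourself):

(catgirl:-1.4), 1boy, solo, ...

the negative-weighted token actually gets suppressed instead of just reshuffling the image.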
>>
File: 1747237200355507.jpg (1.04 MB, 2016x1152)
>>
>>107113995
very cool anon
>>
>>107113716
Cats hate water, though. They are desert animals.
>>
File: 00049-899712552.png (2.17 MB, 1824x1248)
>>
>>107114158
a grave situation
>>
File: file.png (115 KB, 245x256)
contrastive flow matching is tight
>>
>>107112291
What's that? Something in your workflow?
Clearly not working, if so.
Otherwise, for Flux, yes. I have heard of some other models purportedly coping with it better, but none I have come across.
>>
File: WAN_22_I2V_INT__00019.mp4 (3.72 MB, 1120x1440)
>>107113568
>>107113652
>>107113716
Thank you!

Using ComfyUI with ROCm on Windows is not easy. There is no way to use any attention implementation other than Pytorch attention yet, and ComfyUI on WSL is so much slower that it is not worth the hassle.

So far I have optimized the system enough to make 5 sec 16fps videos in under half an hour. I'm on a budget Radeon GPU with 16GB VRAM btw.
>>
>>107114260
Nice. How many epochs until convergence vs the old method?
>>
File: file.png (130 KB, 767x431)
>>107114260
>>107114275
down and to the right how we like it

It's been training since Oct 21 and it's 1.5B, so actually bigger than the previous one that trained for months.
>>
>>107112823
>>107113716
Nice
>>107114260
What model is this?
>>
File: file.png (266 KB, 1162x676)
>>107114292
I do think the 3D Perlin Automagic Optimizer is good, I've been switching between it and AdamW.
>>
>>107114292
Just a steady decrease since the 21st? Very nice

>>107114308
CAME all day every day, but it's just loras for me. I wonder if CAME8bit could handle a finetune
>>
File: file.png (55 KB, 245x247)
>>107114318
Yes, learning is much much more stable. Which really goes to show people could have been training models from scratch all along; this is just a single 5090 with 1.5 million images. Imagine what an 8x GPU cluster could do: something like a 25 million image dataset on a 4-8B model in a couple of months.
>>
>>107114308
> 3D Perlin Automagic Optimizer
Wth is this, are you shitposting? Google returns nothing.
>>
>>107114347
>1.5 million images
Did you clean watermarks etc or do other image processing?
>>
>>107114356
I adapted Automagic by masking it with the same kind of procedural 3D noise texture you'd use to generate caves in a game like Minecraft. Basically you're soft-masking parameters individually on each layer, forcing the whole network to adapt: theoretically it prevents lazy networks from forming, and it plays into what they call the parameter lottery, meaning you find parameters that randomly specialize during discovery and rotation and get really good at certain tasks. It also acts as self-regularization, since parts of the model are masked, which allows them to be recycled later.
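the core of it looks something like this in torch (a rough sketch; real perlin noise and the automagic update rule are left out, and all the names are made up):

import torch
import torch.nn.functional as F

class CrawlingMask:
    # soft-masks one parameter tensor's updates with smooth noise that
    # drifts over time; upsampled random noise stands in for 3D perlin here
    def __init__(self, p, cell=64):
        coarse = torch.rand(1, 1, max(p.numel() // cell, 2), device=p.device)
        self.mask = F.interpolate(coarse, size=p.numel(),
                                  mode="linear", align_corners=False).view_as(p)

    def drift(self, shift=1):
        # roll the texture like wind so masked regions rotate through the layer
        self.mask = self.mask.flatten().roll(shift).view_as(self.mask)

# in the train loop: p.grad.mul_(masks[p].mask) before optimizer.step(),
# and call masks[p].drift() every 100 steps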
>>
>>107114347
Please write a rentry with the useful shit you learned along the way for us plebs.
>>
>>107114393
My dataset is well captioned for watermarks and I don't train on stock photo sites with layered watermarks.
>>
>>107114395
Did you come up with the name and am I gonna see this in an arxiv paper?
Sounds cool though.
>>
>>107114418
No, Automagic comes from Ostris (AI Toolkit); it's an 8-bit optimizer with signed parameters where each parameter has its own learning rate that gets automatically adjusted up and down. I took the basic concept but added a 3D perlin noise mask that works like a wind effect in a video game: every 100 steps it crawls and soft-masks parameters in each layer, which might be more effective than random masking. This is more of a random experiment and I'm an amateur.
>>
>>107114395
Can you use network/neuron drop for finetuning? could protect from overfitting stuff
>>
>>107114458
Yeah, the concept applies to all finetuning I believe, and it should prevent overfitting because different parameters are taught different lessons from the same dataset. I also randomly jitter my images, which guarantees patches don't see the same information twice. I think extremely large datasets are cope, and I also think they're full of redundancy and duplicates no matter what they say. All you have to do is look at a fairly unique million-image dataset to realize the massive possible variety.
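the jitter itself is trivial btw, roughly this (a sketch with torchvision; the +/-20px range is arbitrary):

import random
from torchvision.transforms import functional as TF

def jitter_crop(img, out_w, out_h, max_shift=20):
    # crop at a random +/- max_shift px offset from center,
    # clamped so the crop window stays inside the image
    dx = random.randint(-max_shift, max_shift)
    dy = random.randint(-max_shift, max_shift)
    left = min(max((img.width - out_w) // 2 + dx, 0), max(img.width - out_w, 0))
    top = min(max((img.height - out_h) // 2 + dy, 0), max(img.height - out_h, 0))
    return TF.crop(img, top, left, out_h, out_w)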
>>
new

>>107114476
>>107114476
>>107114476

new
>>
>>107114347
seems to be 80% there
>>
>>107111886
Chroma LoRA?
>>
>>107114478
>I also randomly jitter my images which also guarantees patches don't see the same information twice.
Jitter is great for loras too. OneTrainer comes with Random Flip automatically enabled which is pretty funny. They should call it "Just fuck my shit up" instead
>>
>>107114502
Flipping seems retarded to me because most images don't work mirrored but they can work with +/-20px.
>>
>>107111834
Going off of how deepshrink works, I think the width/height values and the dype exponent stuff should be proportional to the resolution you are targeting, but I dunno how it works.
>>
>>107114509
I genuinely can't think of any use case for flip
>>
>>107114530
I mean it's good when you don't have thousands of images.
It's helpful for training loras as long as there is no important asymmetrical detail.
>>
>>107114570
It sucks even then. Just flip + jitter crop the dataset and caption new images accordingly. Way better.
>>
>>107113964
Interesting, thanks, I'll have a look
>>
>>107114497
You know it
>>
File: 00188-3170155938.png (864 KB, 1144x672)
>>
>>107112524
Wtf kind of loras were you running lmao



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.