/g/ - Technology


Thread archived.
You cannot reply anymore.




File: tmp.jpg (976 KB, 3264x3264)
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>102070583

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/c/kdg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/trash/sdg
>>
File: 2024-08-25_00271_.jpg (3.23 MB, 7680x4320)
8 fucking k ..

>>102074644
oh.. also ty baker
>>
File: file.png (2.5 MB, 1000x1000)
>>
buy gpu or just host on runpod ?
>>
File: 1696057164270474.png (115 KB, 641x321)
Fug, didn't see new bread. How the FUCK do I stop Flux upscales from inserting so much grain and noise?
>>
>>102074794
first you must invent the universe
>>
can someone post a lora guide for flux that actually worked for them, cause kohya and etc can suck a fuck
>>
File: 2024-08-25_00274_.jpg (3.37 MB, 7680x4320)
8k!

also
>>102074794
by not inserting so much grain and noise in your upscales

-> set your noise levels to 0.20-0.25 max

unless you know what you are doing, use sensible upscale methods, don't use loras (they fuck upscaling up)
>>
>>102074837
>>102074794

wait you can use flux to upscale? how?
>>
>>102074876
I use a two pass SD ultimate upscale. I gen at 1280x720, first pass x3, second pass 2x .. with pic related setting and 4x_nmkd_siax as upscale model for the raw img .. first pass is 0.23-0.28 denoise depending on picture .. second pass will be 0.2

results see >>102074703 >>102074837
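the resolution math for that two-pass setup, as a quick sketch (the dict keys here are made up for illustration, not actual Ultimate SD Upscale node fields):

```python
def apply_passes(base, passes):
    """Run a (width, height) through each upscale pass's scale factor."""
    w, h = base
    for p in passes:
        w, h = int(w * p["scale"]), int(h * p["scale"])
    return (w, h)

passes = [
    {"scale": 3, "denoise": 0.25, "upscaler": "4x_nmkd_siax"},  # 0.23-0.28, per the post
    {"scale": 2, "denoise": 0.20},
]

final = apply_passes((1280, 720), passes)   # 1280x720 -> 3840x2160 -> 7680x4320
```

so a 1280x720 gen comes out at 7680x4320, which is where the "8k" pics in this thread come from.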
>>
File: file.png (2.04 MB, 1024x1024)
>>
File: 1718945009517493.png (1.05 MB, 1024x1024)
>>
File: IMG_0834.jpg (25 KB, 512x416)
woah my Kasia LoRa already has 91 downloads.
>>
File: 0.jpg (275 KB, 1024x1024)
>>
File: file.png (1.91 MB, 1024x1024)
>>
File: 0.jpg (345 KB, 1024x1024)
>>
File: ComfyUI_00025.png (1.2 MB, 1280x720)
>>102075003
yea I downloaded it yesterday and mixed it with the Emiru lora .. funny effect.
>>
Lewds with Flux are easy.
https://litter.catbox.moe/z9xjoh.png
>>
>>102075109
they are, the coomers will probably still complain that they cant do 4k pussy closeups yet and square-mile-sized ass gapes
>>
For those using AdaptiveGuidance, I'd recommend updating the package; there was a bug that made it produce different outputs than expected, and it got fixed today
https://github.com/asagi4/ComfyUI-Adaptive-Guidance/commits/master/
>>
>>102075109
hell yeah
https://litter.catbox.moe/h5ldcv.jpg
>>
>>102074833
read
>>102075064
>>
>>102075196
>quadruple the rank
you don't need it
>>
https://developer.nvidia.com/blog/introducing-dora-a-high-performing-alternative-to-lora-for-fine-tuning/
Why are we still using Loras when Dora is the superior method?
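for anyone curious what DoRA actually does differently: it splits each weight into a per-column magnitude and a direction, and only the direction gets the low-rank update. toy numeric sketch of that idea (made-up matrices, not NVIDIA's implementation):

```python
import math

def column_norms(W):
    # L2 norm of each column; W is a list of rows
    return [math.sqrt(sum(row[c] ** 2 for row in W)) for c in range(len(W[0]))]

def dora_merge(W0, delta, m):
    """DoRA merge: W' = m * (W0 + delta) / ||W0 + delta||, norms taken per column.

    delta stands in for the low-rank B@A update; m is the learned magnitude vector.
    """
    rows, cols = len(W0), len(W0[0])
    V = [[W0[r][c] + delta[r][c] for c in range(cols)] for r in range(rows)]
    norms = column_norms(V)
    return [[m[c] * V[r][c] / norms[c] for c in range(cols)] for r in range(rows)]

W0    = [[3.0, 0.0], [4.0, 0.0], [0.0, 1.0]]   # made-up pretrained weight
delta = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]   # pretend the low-rank update is zero
m     = [2.0, 5.0]                              # made-up learned magnitudes

W = dora_merge(W0, delta, m)
# each column of W now has the norm given by m (up to float error)
```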
>>
>>102075127
It's trivial to do that as well. 4000 to 10000 steps will train anything. And that is just with LoRA.
However, hairy pussies tend to take over the dataset overpowering the much more numerous shaved pussies I have. It's actually bothering me.
https://litter.catbox.moe/xf8qt5.png
https://litter.catbox.moe/1wxlwu.png
https://litter.catbox.moe/y3p2xg.png
>>
>>102075217
the higher rank allows training to complete by epoch 10, making it only 8 hours on my 12gb VRAM for 150 pic dataset vs 24 hours for 20-30 epoch.
the results are also cleaner and details superior. huge difference compared to training sdxl at 8 vs 32 dim
https://desuarchive.org/g/thread/102067488/#102069436
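for scale, the cost being argued about grows linearly with rank: a LoRA adds rank * (d_in + d_out) weights per adapted matrix. quick sketch (3072 is flux's hidden width, but which matrices actually get adapted depends on the trainer config, so treat the totals as illustrative):

```python
def lora_params(d_in, d_out, rank):
    # A is (rank x d_in), B is (d_out x rank): rank * (d_in + d_out) extra weights
    return rank * (d_in + d_out)

d = 3072  # flux's hidden width; real layers vary (QKV, MLP, etc.)
counts = {r: lora_params(d, d, r) for r in (8, 32, 64)}
```

so dim 32 is literally 4x the adapter weights of dim 8 per layer, whatever that buys you in practice.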
>>
what network dim for a character, if it matters
real life, and whole body not just the face
>>
>>102075302
8
do not listen to the guy right above you saying you need 32
>>
How do i make an image of someone pointing a gun at their own head or put the gun in their mouth? I remember it being suicide pose or something
>>
File: FLUX_00388.png (1.09 MB, 896x1152)
i trained a new pixel art lora, hope you guys like it

https://civitai.com/models/685038/soft-pixel-art
>>
>>102075223
because anyone who tried DoRAs for SDXL found the improvements were extremely minimal compared to the large increase in training time. you can try training it on flux but I'm sure as fuck not doing anything that increases training time without a 75% improvement minimum
>>
>>102075298
>the higher rank allows training to complete by epoch 10
sounds like a caption issue to me
>>
>>102075338
Oh I see, thanks for the answer anon
>>
>>102075352
caption process:
https://desuarchive.org/g/thread/102013088/#102014906
post your loras, settings, Gpu used, time consumed and show evidence of your claims or fuck off. I posted mine
>>
>>102075387
> both tags and boomer prompting
yup, that'll do it
>>
>>102075334
its soft!
>>
>>102075334
pixel art works fine on vanilla flux though? >>102075011
>>
>>102075416
>no proof to show that changes cook time to such a dramatic extent
shitty bait
>>
File: 1722498601975253.png (70 KB, 433x450)
>upscaling
>random face inserted
KEK
>>
>>102075450
sovl
>>
>>102075446
yup, buttmad
>>
>>102075420
yes! there is a link in the lora post to my second model i trained on hard edge pixel art if you prefer that

>>102075431
my lora is pixel perfect with 4X scaling, base flux cannot do that, and hand picked images to constrain it to a more consistent certain style
>>
>>102075450
I'm scared
>>
>>102075387
>>102075416
Have you guys tried florence2+tags? Worked really well for 1.5 and sdxl
>>
I want to talk more about LORA training and discovery and shit, but I feel like that's just giving information to that one faggot with the Patreon
>>
File: FLUX_00068_.png (1.27 MB, 896x1152)
to uglify, you must think outside the box
>>
>>102075480
imagine not doing things because you care about what someone will do
him copying you does not actually matter in the grand scheme of things
>>
>>102075480
youve been mindbroke by the patreon guy, sad, and kinda pathetic. there is some grass outside you might acquaint yourself with.
>>
>>102075480
I've considered not posting anything here in fear of that, but realistically there is more gain from positive discussion than there is loss from that faggots grift. plus people have been going out of their way to post free versions of whatever he does, so in a way it actually helps dismantle his grift
>>
>>102075501
>>102075510
>>102075516
It's mostly the grift that gets me but maybe I'm being too much of a bitch
>>
>>102075315
I don't know man, I just did a test of a character LoRA at 64 dim and while it aesthetically nuked the model to fit that style, the resulting variation of the poses and things I could prompt the character to do was insane.
>>
is 12gb enough or should i save for more vram?
>>
>>102075480
When you post something useful state that it's licensed CC-BY-NC-ND-NG
>attribution
>non commercial
>no derivatives
>no grift
NG isn't a real thing but it should be
>>
>>102075564
save up for a used 3090, you should be able to find one for around 600
>>
>>102075564
if you already have it then it's good enough. if you haven't already bought it then yes save up for more vram.
>>
File: 2024-08-24_00344_.jpg (680 KB, 2560x1440)
>>102075564
BUY VRAM

.. well you can use a quantized model, but heck..

BUY MORE VRAM
>>
>>102075477
I haven't tried florence personally because I plan to do NSFW loras too and wanted to keep captioning consistent (unless I'm retarded and it's uncensored? was under the impression it's not). I've read anons saying Florence is good, though
>>
>>102075556
I think it's pretty fucking repulsive to take other peoples work from github and release it as own on patreon
>>
>>102075556
you will be a much happier person if you stopped worrying about what random people on the internet will do
>>
>>102075613
its kinda funny and based
>>
>>102075613
well it's only fooling fools
>>
>>102075613
you can spend all day being mad about what random people are doing to other random people
>>
File: KINGOFIMGGEN~4.jpg (3.28 MB, 1792x2304)
>>
>>102075605
It doesnt work well with nsfw stuff, but v3 tags are very precise. I just caption those manually + tags
>>
>>102075613
Someone should really do a plebbit expose on him having stolen and resold info and code. Think you'd need to sign up for his patreon and back charge to get all the proof for it, though, or his dick suckers would claim you have no evidence
>>
why do niggas be censoring ai technologies?
>>
Amd more vram
or nvidia less vram
>>
Why are you people all addicted to porn
>>
>>102075671
because low inhibition retards ruin it for everyone
>>
>>102075671
because nerds have no balls
>>
File: file.png (1.26 MB, 1280x896)
>>
>>102075662
maybe I'll give it a shot on my next dataset, I do still have a ton of SFW ones to caption
>>102075682
sadly always nvidia, you will do nothing but suffer if you go amd. jewvidia has a stranglehold monopoly on AI still
>>
>>102075718
AMD can afford to pay people to port CUDA to AMD, but they don't. Probably because the CEOs are related and this should've been busted up years ago.
>>
File: 2024-08-25_00285_.jpg (1.21 MB, 3840x2160)
>>
>>102075718
I used it for few 1k anime datasets and the results were great (trained on v-prediction furry model).
>>
>>102075562
>the resulting variation of the poses and things I could prompt the character to do was insane.
you got examples?
>>
>>102075736
>Probably because the CEOs are related and this should've been busted up years ago.
p much the reality of all big corpos, they all work in each other's interests (which is never in our interests)
>>
File: ComfyUI_02384_.png (865 KB, 1120x704)
>>102075480
I don't think there's much you as an individual can do to stop him. He exists in a weird space where everyone is too polite to oust him entirely and he has a following of people who think the work he does has merit.
>>
>>102075689
why aren't you?
>>
>>102075718
I hear amd is serviceable on linux, which is what i use
also
what is the point of being faster if you cant run the models...
>>
File: FLUX_00007_.png (1.21 MB, 1024x1024)
>this is flagged for review
why does civit make me feel like a criminal just for sharing stuff
>>
File: file.png (2.38 MB, 1024x1024)
>>
>>102075813
they use the wdv tagger, probably anything with small breasts, flat chest, etc, gets flagged
(also sharing images on Civit lmao)
>>
>>102075807
Its weird how you didnt answer the question
>>
>>102075828
>(also sharing images on Civit lmao)
this, I'm sure Civitai is "letting" people make loras just to gather a giant dataset so they can make their own giant finetune and end up with an API model or some shit
>>
is overtraining a lora possible with flux?
>>
>>102075766
>I don't think there's much you as an individual can do to stop him.
The guys posting free versions that do the same thing are the real MVPs. There will still be retards that want to support the grifter for some godforsaken reason but each step to dismantling his hugbox brings us closer to a world without his bullshit
>>
>>102075829
your dumb question
>>
>>102075846
No, flux is the only transformers based model in history that cannot be over trained
>>
>>102075689
I was raped as a 10 year old
>>
>>102075852
>calling an innocuous question based on clear observations of this thread we are all participants in dumb
I'm not the one addicted to porn.
>>
File: 2024-08-25_00288_.jpg (1.35 MB, 3840x2160)
>>
PSA for my fellow forge users: we can finally use ViT-L-14-BEST-smooth-GmP instead of clip_l for better prompt understanding!
https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors
this version has only the text encoder part of the model because you don't really need the vision part, but if you want that full package he also uploaded a new HF format of it in the same repo.
I heard there was also a bug where on comfyui you cant use it with ggufs or something, so maybe this fixes that as well?
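if you want to sanity-check that the file really is TE-only before dropping it in: safetensors files start with an 8-byte little-endian header length followed by a JSON table of tensor names, so a quick stdlib peek works (helper name made up; demoed on a synthetic blob):

```python
import json
import struct

def safetensors_keys(blob: bytes):
    """Parse just the header of a .safetensors blob and return its tensor names."""
    (n,) = struct.unpack("<Q", blob[:8])            # header length, little-endian u64
    header = json.loads(blob[8:8 + n].decode("utf-8"))
    return sorted(k for k in header if k != "__metadata__")

# Synthetic blob for the demo; a real file would be read with open(path, "rb").read()
hdr = json.dumps({
    "text_model.encoder.layers.0.mlp.fc1.weight":
        {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]},
}).encode("utf-8")
blob = struct.pack("<Q", len(hdr)) + hdr + b"\x00" * 8

keys = safetensors_keys(blob)
# a TE-only file should have no vision-tower keys in this list
```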
>>
>>102075846
yes you can always push the training into no-man's land especially with aggressive learning rates
>>
File: file.png (2.22 MB, 1024x1024)
>>
is there any way to force Ultimate SD Upscale to start on, say, tile 3 where the face is?
since that's usually the deciding factor as to whether an upscale looks good or not, i'd like to preview that first and see what the quality looks like so i can cancel and retry faster if needed
>>
>>102075812
its not just about being faster. there are frequent incompatibilities, you use more of your VRAM since you lose Nvidia's optimizations, you can't train loras, and so on. I have never once seen a single person not say "holy shit amd is hell for this why did I do it". none of us, and I repeat none of us, pay Nvidia's retarded fees for a small splattering of VRAM out of loyalty to the brand. you can use AMD, it is possible, but it's suffering
>>
File: 00008-3702331729.jpg (154 KB, 1040x1176)
CEO of Milk: Okay, we need Ideas on how to get gen Z to drink more milk. It doesnt seem to be popular as a stand-alone beverage amongst the younger population. Any Ideas?
Some guy: Milk energy drinks.
>>
>>102075876
>I heard there was also a bug where on comfyui you cant use it with ggufs or something, so maybe this fixes that as well?
that was fixed yesterday, or two days ago, time is weird right now
>>
>>102075865
just addicted to being butthurt at others being into things you arent into.
>>
>>102075851
His LoRAs suck too. I can't believe that I can go to issues section of Kohya and see this guy linking his patreon.
It flies in the face of common decency.
>>
>>102075926
Whatever you gotta tell yourself
>>
File: FLUX_00421_.png (834 KB, 896x1152)
>>
>>102075846
I've burnt several loras testing LR to the point of it being a blurred mess or pure noise, and in long epoch runs you start to notice the tell tale signs of rigidity, fucked hands, extra stuff you didn't prompt, etc. same as any model
>>
>>102075876
if you don't rename that thing to Vit-L-14-BEST-MASTERCLIP-ABSURDCLIP-smooth-babybutt-GmP. you are gay.
>>
File: 2024-08-25_00297_.jpg (1.79 MB, 3840x2160)
>>102075896
sadly I don't think that is possible with SDUltimateUpscale .. maybe with adetailer? idk tho.. that is just a wild guess
>>
Alright, fastest way to train a lora of myself? Never done it before, do I just have to take a bunch of pics of myself? And how would I caption them (since I'm sure JoyCaption would just say "a young man with a neutral expression on his face, taking a mirror selfie, holding a black smartphone, ...", something boring and unspecific like that).
>>
File: 00000-2931440689.jpg (390 KB, 1400x1112)
>>102075945
Mister nanner
>>
File: file.png (85 KB, 801x965)
>>102075828
no they have a list of words and check ages etc
>>
>>102075977
<your name> + <long detailed caption>
>>
File: flux_style_1.jpg (1.56 MB, 1824x1248)
>>102075846
Yes, absolutely. Plus, flux much more easily learns artifacts or biases of the dataset, so it needs to be curated if you want to train for long.
>>
>>102075950

Personally, I don't see much reason to deviate from 1e-4
>>
>>102075930
Honestly more people need to throw some shade his way, he's existed in a bubble of nothing but positivity for what he's doing. It looks like insanity because everyone is so afraid of not being polite these days. (Though even as I say this I'm not making a plebbit or git account to do it, either).
>>
File: file.png (108 KB, 723x947)
>>102075988
from the prompt in metadata*
>>
>>102075988
they filter 'scrawny', amusingly. how would one find other such words they filter? just so i can avoid them ha
>>
>>102075988
the literal fuck
>>
File: 1.jpg (114 KB, 960x1600)
>>
>>102075988
wait they opensourced their filter? what?
>>
>>102075918
>fixed yesterday, or two days ago, time is weird right now
I've been in a fog since flux released.. I can't even use the threads to keep track because they move so fast .. where are we .. who am I ..
>>
>>102075930
my issue with it, is the damage to the community. people start parroting bad advice from grifters like that, and soon, we have another lykon/dreamshaper situation.
>>
>>102076006
He gets plenty of shade, but he just blocks it. The man has zero shame. Like, you can't shame cancer into remission.
>>
>>102076041
>lykon/dreamshaper
Qrd?
>>
>>102075990
It won't complain about coherency or anything? Since I figured something like "John Smith The photograph depicts..." wouldn't really be a proper sentence.
Guess I'll give that a go then, any ideas what sort of rank/dataset size I'd need? Again, never really tried this before.
>>
File: file.png (30 KB, 310x907)
>>102076010
easiest thing to do would be remove the metadata, a lot of images on civit don't have it anyway
>>102076020
no they use sourcemaps in production so i open sourced it for them :^)
>>
>>102076001
I didn't find it worked well for my style loras, felt like they did nothing at 1e-4 or even 2e-4. 3e-4 has been my go-to, but I tested a 4e-4 last night that I'll check the results of when I drag my ass to the PC
>>
File: 2024-08-25_00299_.jpg (1.56 MB, 3840x2160)
>>
>>102076092
damn, prompt/lora?
>>
https://github.com/civitai/civitai/tree/main/src/utils/metadata/lists
for your health
>>
>>102076046
I think if there were a majority shade the NPC shills would lessen substantially. it's less about it getting to him specifically as it is creating a hostile environment around him, slowly making it a faux pais to support him. If someone really wanted, a good smear campaign with either a couple dedicated partners in crime or some alts would be all it takes to turn the tide...
>>
File: 2024-08-25_00302_.jpg (1.72 MB, 3840x2160)
>>102076099
thanks

no lora, just some prompt I boomerprompted together:
>This is a wide view perfectly clear photo.
>In the background is an isometric view of ancient Rome in the post-apocalypse, the city is burning, smoke is everywhere. There are explosions and ruined skyscrapers. There is a slight mist and the city is glowing in warm light. Its a panorama view.
>It is a surrealist mix of Cyberpunk, realism and Steampunk. The sky is purple and orange, there are black clouds.
>>
>>102076134
>faux pais
Pas
>>
File: s-l960.jpg (135 KB, 720x960)
I dont know what LoRA to create next. I have several ideas. My last one was Cain, but I want to try something different this time. One idea I’m considering is based on a figure I actually own. It’s a great choice because I can create a perfect dataset, and it’s very niche and can maybe use its style for other stuff. The final result could look absolutely incredible, it's the figure in the picture. Cost me like $400 with shipping so might as well put it to use instead of just standing there, but I love his face, you dont have many boomer faces in the 1/6 scale world that actually matches their character. And no this is not a gen, the gen will be way better :) But gotta take it down, set it up, take bunch of poses and shots with weapons, etc, plus then pay openai for proper captions since i suck at it, then cut and copy like 200 pictures. It's a lot of work /g/ but I think I can do it, ill start next week with the labor day holiday :) see you then faggots.

in before

>cool story bro
>>
>>102076144
fo pa
>>
>>102076066
It really doesn't matter, at the end of the day it's just encoding text into something that can be trained on. You could start at 16 dim alpha 16. You can always clean up the captions if you want. Or if you're using a VLM that lets you prompt it, then you would do something like "write a two paragraph description of this image of <your name>"
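a minimal sketch of that caption prep for a kohya-style folder of image.txt files (function names are made up):

```python
from pathlib import Path

def prepend_trigger(caption: str, trigger: str) -> str:
    """Put the trigger word first so training associates the concept with it."""
    caption = caption.strip()
    return caption if caption.startswith(trigger) else f"{trigger}, {caption}"

def tag_dataset(folder: str, trigger: str) -> None:
    # kohya-style datasets pair image.png with a same-name image.txt caption file
    for txt in Path(folder).glob("*.txt"):
        txt.write_text(prepend_trigger(txt.read_text(encoding="utf-8"), trigger),
                       encoding="utf-8")
```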
>>
>>102076127
oh neat it's already open source. makes sense, free labour
>>
>>102075196
okay thank you, its running and training, question though what is your dataset structure, I can train without captions right? (It worked on civitai at least)
>>
>>102076157
>then pay openai for proper captions
look up joy caption in g on desuarchive there's a comfy WF and a stand alone script that you can run locally. V cool idea though, agreed
>>
File: 2024-08-25_00306_.jpg (1.66 MB, 3840x2160)
Who will save us now?! Is all lost?!
>>
>>102076204
don't use dim 32, it is overkill
>>
File: 2024-08-25_00308_.jpg (1.7 MB, 3840x2160)
Gojiiiiira-kun...!
>>
>>102074644
Thread Challenge
how many poster are here?

Can each one of you generate an image for this prompt

Draw me picture of vanity, materialism, pride, self obsession and body dismorphia

In your own style, modify the prompt as you like, bring our the best
>>
>>102076274
nah
>>
>>102076264
Where is Winnie in all this?
>>
File: file.png (2.49 MB, 1024x1024)
>>
>>102075930
>I can go to issues section of Kohya and see this guy linking his patreon.
report him

using our servers for any form of excessive automated bulk activity, to place undue burden on our servers through automated means, or to relay any form of unsolicited advertising or solicitation through our servers, such as get-rich-quick schemes;

Its under spam
https://docs.github.com/en/site-policy/acceptable-use-policies/github-acceptable-use-policies
>>
File: delux_hh_00035_.png (2.32 MB, 1024x1344)
>>102076274
its just you, me, and some guy having conversations with himself
>>
>>102076274
>self obsession and body dismorphia
no

>>102076065
just comfyanon being jealous. You can ignore it.
>>
>>102076204
Imo captions improve loras a lot but you can train without them, yeah. you'll want to remove the caption related and --wildcard commands

I use both WD tagger for booru tags and joycaption for boomer prompts in mine, it's not at all necessary to go to such a length though: https://desuarchive.org/g/thread/102013088/#102014906

anons have also had success just doing booru tags or just doing boomer prompts. some prefer Florence to joycaption, etc
if you're interested in trying captioning, someone from here posted this iirc:
https://civitai.com/articles/6901
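a rough sketch of gluing the WD tags and the boomer prompt into one caption line (the helper is made up; it dedupes tags and de-underscores them):

```python
def merge_caption(tags, boomer_prompt):
    """Join WD-tagger booru tags and a natural-language caption into one line."""
    seen, deduped = set(), []
    for t in tags:
        t = t.strip().replace("_", " ")            # "red_hair" -> "red hair"
        if t and t not in seen:
            seen.add(t)
            deduped.append(t)
    return ", ".join(deduped) + ". " + boomer_prompt.strip()

caption = merge_caption(["1girl", "red_hair", "1girl"],
                        "A woman with red hair stands in a field.")
```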
>>
File: file.png (2.56 MB, 1024x1024)
>>
>>102076225
Dim 32 trains faster, significantly better details on flux. We aren't using SDXL anymore, bub.
>>
File: file.png (2.49 MB, 1024x1024)
>>
>>102076359
Remember you can also drop captions per x epochs, which is really nice for larger datasets
>>
>>102075997
oh wow, nice style, lora?
>>
>>102076380
>We aren't using SDXL anymore, bub.
You're right, we're using Flux. 32 dim is overkill for Flux.
>>
File: ComfyUI_00081_.png (1.45 MB, 1024x1024)
>>
>>102076426
Proof?
>>
>>102076428
fried
>>
File: 2024-08-25_00315_.jpg (1.67 MB, 3840x2160)
>>102076298
>Winnie
Idk.. he probably would not fit into all this chaos..
>>
>>102076435
Its the lora, makes everything fried. Shutup nerd
>>
>>102076345
Is that the result from the prompt?
>>
>>102076431
civitai.com
>>
>>102076426
>t. has never even trained a flux lora
starting to think this isn't a thread schizo trolling but just a Civit jeet coping that they force him into dim 2
>>
File: 1695212351927417.png (2 KB, 172x74)
>>102076426
>tfw i was doing picrel on my last lora
>>
>>102076443
he can beat them both tho
>>
File: 2024-08-25_00317_.png (1.3 MB, 1280x720)
>>
>>102076451
That's proof against not using 32 dim, thanks for confirming.
>>
File: delux_hh_00036_.png (2.26 MB, 1024x1344)
why does ldg hate making gens ? yall have a nogen infestation
>>
>102076484
kys
>>
File: notdalle3.jpg (146 KB, 1024x1024)
>>102076274
>>
>>102076464
nta but I haven't tested 64 dim yet. I think the anon who originally told me to try 32 for faster training said they didn't see a huge leap between 32 and 64 but I might be remembering wrong. have you tried 32 dim and noticed a significant difference in either quality or training speed compared to 64?
if I don't oom I can test 64 dim next run. not like we can't resize the lora later for space if need be
>>
>>102076489
don't give attention, just ignore and report
>>
>>102076460
>>102076464
>>102076473
Just go to 512 rank already. It must be better.
>>
File: file.png (2.12 MB, 1024x1024)
>>
File: delux_hh_00037_.png (2.42 MB, 1024x1344)
>>102076491
>when I look in the mirror, all I see is a giant pikachu
relatable
>>
>>102076527
>Civit Jeet absolutely seething
It'd cost you $0.70 to rent a 4090 and train off of Civit with proper dim if you don't have literally 10gb VRAM to train locally.
>>
>>102075687
deleted wonder why kek
>>
>>102076566
>no evidence
>>
File: 2024-08-25_00322_.jpg (1.15 MB, 3840x2160)
>BREAKING NEWS: Hatsune Miku asks WINNIE questions before the fight

>>102076468
he sure can
>>
File: delux_hh_00038_.png (1.95 MB, 1024x1344)
>>102076591
I wonder why too, actually
>>
>>102076359
nigga you're a star (ty)
>>
>>102076712
happy I could help, enjoy making your loras anon
>>
File: 1715890691035624.png (2.11 MB, 1024x1344)
>>
>>102075172
How? Not even into pregnant or celeb, but did you inpaint after Flux with ponyxl?
>>
File: 0.jpg (173 KB, 1024x1024)
>>
File: delux_hh_00001_.png (1.73 MB, 1024x1344)
>>102076777
ARE THOSE DUNKS???
>>
File: 00068-224693956.png (1.8 MB, 1024x1440)
>>
File: 0.jpg (363 KB, 1024x1024)
>>
File: flux_cyber-env01.jpg (3.88 MB, 2080x2720)
>>
>>102076780
it's all loras. SCG Anatomy and Saggy Breasts at lowered strength, Amateur Photography at low strength, celeb lora
>>
File: 102055035459.png (1.93 MB, 1024x1024)
budget aquaman
>>
File: cottage.jpg (1.85 MB, 2320x1696)
>>
File: ComfyUI_01064_.png (1.28 MB, 896x1152)
>>
>102075886
>>102076305
>>102076368
>>102076401
Is this base flux or a new lora? Looks so cool.
>>
>>102077032
it's a lora, not new though, and yeah that's my favorite one so far
https://civitai.com/models/667307/flux-y2k-typeface
>>
>>102076914
Oh I see.
Can't wait for pose/sexual positions lora, flux doesn't understand that either.
>>
>>102077082
there's already some on civitai lel
>>
>>102077054
Thank you
>>
How do I make two characters (from different Loras) interact with each other? I don't understand this regional prompting stuff, it seems overly complicated and didn't work the one time I tried. Why is there not just a special type of brackets that can make tags stick to a specific character?
>>
>>102077102
Yeah but I hope for something more general than just "that one position", it's probably wishful thinking though...
>>
>>102077115
>How do I make two characters (from different Loras) interact with each other?
you change the strength of one or the other until you get what you want, and yeah it's hard, that's why I'd want a finetune with more concepts rather than torturing ourselves with stacked up loras that simply destroy flux's base capabilities more and more
>>
>>102077139
I'm using SDXL.
>>
>>102077161
the idea remains the same, balancing multiple loras is hard to achieve
>>
Anyone help me out with prompting?
I'm trying to get a bra to peek out from under a blouse, but the ai wants to have the blouse opened a lot, exposing almost all the bra.
>>
File: 0.jpg (162 KB, 1024x1024)
>>
File: 2024-08-24_00272_.png (1.52 MB, 1216x832)
>>102077115
this >>102077139
>I don't understand this regional prompting stuff
you have to use natural language, talk to it like you would explain it to a human.. example:

>A full frontal view photo of three skeletons. The skeletons are shown next to each other from the front.

>The skeleton on the left has a capital A painted on the skull in red.

>The skeleton in the middle has a capital B painted on the skull in blue.

>The skeleton on the right has a capital C painted on the skull in green.

>The middle skeleton is black. The left skeleton is yellow. The right skeleton is pink.

>The black skeleton has gold teeth. The pink skeleton has cat ears.

>They are standing in a damp cellar room. It is dark and only a dim lantern provides some light.

result pic related
>>
>>102077115
>Why is there not just a special type of brackets that can make tags stick to a specific character?
that's what natural language prompts are for, terms close together are more likely to relate to each other.
>>102077161
you're fucked
what issue are you having with regional prompting? If you want multiple subjects your base prompt needs to mention there are multiple subjects and then you hope they land roughly in the regions you laid out
with SD I would use controlnet to force subjects into specific places so the regional prompt could work properly
>>
>>102077179
that's what loras are for, there's no ai captioner that is that accurate or detailed
>>
File: fs_0034.jpg (57 KB, 1024x1024)
>>
>>102077201
OK but that's Flux. I would love to have Flux tier comprehension instead of this retarded tag list and hoping for the best.
>>
>>102077207
>what issue are you having with regional prompting?
In addition to being incredibly clunky (like a programmer just made a proof of concept to show it's possible without doing the slightest bit of work to make it more usable) it simply didn't yield any usable results for me. I guess I'll try it again if this retarded garbage is my only chance.
>>
>>102077232
>>102077161
oh, for SDXL? Well regional prompting is probably your only hope, you are right .. with two loras on SDXL its only RNG .. you are lucky if the characters dont merge into one
>>
>>102077208
>there's no ai captioner that is that accurate or detailed
the one used to train DALL-E 3
>>102077232
tbf I only used it in A1111 and the Regional Prompter extension has two ways of defining the layout that are easy once you get the hang of it
haven't used it in ComfyUI yet and haven't seen a graphical editor node for the regions either
>>
>>102077179
for those hyper specific things, it's best to gen something close to what you want, then open the image in photoshop/krita/paint/whatever. then you do a crude (CRUDE) paintjob and inpaint the area until you get what you want. SD can do anything if you give it some pointers.
or, as the anon above me pointed out, if you are lucky there is a lora for exactly the thing you want. (there probably is)
>>
File: 0.jpg (213 KB, 1024x1024)
>>
>>102077299
slick
>>
>>102077187
brap
>>
>>102075829
you also didn't answer mine
>>
File: 2024-08-25_00340_.jpg (1.38 MB, 3840x2160)
>>
https://civitai.com/models/684044/90s-sitcom-style-flux?modelVersionId=765629
>LyCoris
what's that? I only know Lora kek
>>
>>102077232
pony based finetunes can do 2 characters just fine, search danbooru for the tags; as for bleeding, loras help avoid that.
>but the loras are poisoned!
use block weights, figure out the blocks you need to keep the character, most pony loras are fried.
>>
File: fs_0040.jpg (51 KB, 1024x1024)
>>
>>102077472
I wonder how it performs in comparison to a regular flux lora.
>>
File: fs_0044.jpg (68 KB, 1024x1024)
>>
>>102077292
oh so you're just troll bait
kill yourself
>>
File: file.png (2.75 MB, 1417x1454)
>>102077472
>>102077507
https://stable-diffusion-art.com/lycoris/
Based on that article it's supposed to be better than Lora, but there's probably a catch or else people would've ditched Lora for that one
>>
Seeking some advice on VLMs for captioning. TLDR: is there a VLM that can additionally take a list of booru-style tags and incorporate them into the caption it writes?

I have thousands of real images of a concept (set of related concepts actually). I have tagged them all myself. The tags only cover things directly related to the concept. I now want detailed captions so I can train flux.

So far I've only played with joycaption a bit. On its own, it's unusable for this data. It doesn't understand the concept and misses almost everything about it. I swapped out the model for an instruction-tuned llama 3 model, and changed the prompt so it passes in the tags and instructions about how to use the tag list. This sorta works, the captions now reference things in the tags that it couldn't otherwise "see" in the image, but it's hit-or-miss. The trained projection from the CLIP space to the LLM embedding space still very strongly biases the model to write captions in a particular way; it's not very malleable by changing the prompt.

So ideally I want an uncensored VLM where I can give it the tags and tell it to incorporate everything the tags say into the caption it writes. The tags are always 100% accurate, so they should take precedence over whatever the VLM thinks it sees in the image. Is any VLM flexible enough and responsive enough to instructions to do this? Also it must be a local model, I'm not sending these images to any cloud service.
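A minimal sketch of the kind of tag injection being described, assuming nothing about any particular VLM's API; the prompt wording and tag names here are illustrative only:

```python
# Hedged sketch: splice verified booru-style tags into the instruction
# prompt handed to a VLM captioner, telling it the tags take precedence
# over whatever it thinks it sees. Wording and tags are placeholders.

def build_caption_prompt(tags):
    """Build an instruction prompt that treats the tag list as ground truth."""
    tag_list = ", ".join(tags)
    return (
        "Write a detailed natural-language caption for this image. "
        "The following tags are verified ground truth and MUST all be "
        "reflected in the caption, taking precedence over anything you "
        f"think you see: {tag_list}. "
        "Do not list the tags themselves; weave their content into prose."
    )

prompt = build_caption_prompt(["armpit hair", "natural pubic hair", "smiling"])
print("armpit hair" in prompt)  # True
```

Whether a given VLM actually obeys this depends entirely on how instruction-tuned it is; as noted above, the CLIP-to-LLM projection can override the prompt.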
>>
>>102077567
>probably a catch or else people would've ditched Lora for that one
Probably because people are more used to Lora?
>>
>>102077558
sorry your butt gets hurt so easy just by seeing DALL-E 3 mentioned, need some ice for it?
OpenAI's caption model is more accurate and there's nothing you can do about it but cope and seethe
>>
>>102077586
yeah I don't know about that one, we were quick to change from SAI models to Flux
>>
>>102077595
>bro I'm just trolling tee hee
>>
File: cp001.jpg (402 KB, 1022x1026)
>>
>>102077584
>So ideally I want an uncensored VLM where I can give it the tags and tell it to incorporate everything the tags say into the caption it writes. The tags are always 100% accurate, so they should take precedence over whatever the VLM thinks it sees in the image.
there's one issue though: say there are 2 characters in the image, how is the caption model supposed to know which character name tag belongs to which character?
>>
>>102077605
more cope and seethe but do not despair, we'll get local caption models that strong some day
>>
>>102077584
there is no ai captioner that is that accurate, uncensored vlms are typically unhinged, you will have much better and faster results simply doing a regular long caption and having your tags tacked on
>>
>>102077639
>there is no ai captioner that is that accurate
there is one
>>
>>102077633
doing this every day makes you a pathetic person
>>
>>102077659
joy caption is shit, sorry
but go ahead, use your unhinged lossy captions from a schizophrenic
>>
>>102077662
and you foaming at the mouth at dall-e 3 being mentioned isn't?
I wasn't trolling anyone, openai's caption is better, that's just a fact, why can't you be a grownup about it?
>>102077673
?
>>
>>102077677
>tee hee I'm just a trolling doing the same thing every day
No gen
>>
File: 06ed.jpg (153 KB, 1024x1024)
>>102077662
>why must I cry about DALLE3?
>>
>>102077710
>No gen
this is a blue board, anon
>>
>>102077718
>mom I did it again
>>
>>102077632
This is a dataset for training a concept, using photos. There are no characters or specific people tagged.
>>102077639
I have actually thought about something along these lines, which might end up being the best. Use a captioner to write a detailed caption. Then feed the caption + tag list into a powerful local LLM, like Mistral 123b or Llama 3.1 70b (I have 4 3090s and can run them). In my experience, powerful models like those can take long, complicated instructions and follow them autistically. Theoretically it should be able to combine the caption + tags but mention everything in the tags and have the tag info take precedence. But still it would be easier to do this with a single VLM if that is possible.
>>
I've set up my ComfyUI workflow for testing out my flux loras, but I want to run many different combinations of guidance, lora strength, different prompts, etc. Is there some way to write out a config file that will then automatically perform the runs for each combination of variables?
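One way to do this without a dedicated extension: ComfyUI exposes an HTTP API, so a short script can sweep the combinations for you. A minimal sketch, assuming an API-format workflow export; the node ids ("3", "10", "6") are placeholders you would swap for the ids in your own JSON:

```python
# Hedged sketch: brute-force sweep over (guidance, lora strength, prompt)
# combinations by patching a workflow dict and POSTing it to ComfyUI's
# /prompt endpoint. Node ids and field names below are assumptions about
# your exported API-format workflow; adjust them to match your own file.

import copy
import itertools
import json
import urllib.request

def make_runs(workflow, guidances, strengths, prompts):
    """Yield one patched workflow per combination of the swept parameters."""
    for g, s, p in itertools.product(guidances, strengths, prompts):
        wf = copy.deepcopy(workflow)
        wf["3"]["inputs"]["guidance"] = g         # FluxGuidance node (assumed id)
        wf["10"]["inputs"]["strength_model"] = s  # LoraLoader node (assumed id)
        wf["6"]["inputs"]["text"] = p             # CLIPTextEncode node (assumed id)
        yield wf

def submit(wf, host="http://127.0.0.1:8188"):
    """Queue one workflow on a running ComfyUI instance."""
    data = json.dumps({"prompt": wf}).encode()
    req = urllib.request.Request(f"{host}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

base = {"3": {"inputs": {"guidance": 3.5}},
        "10": {"inputs": {"strength_model": 1.0}},
        "6": {"inputs": {"text": ""}}}
runs = list(make_runs(base, [2.0, 3.5], [0.8, 1.0], ["a cat", "a dog"]))
print(len(runs))  # 8 combinations
```

In practice you would load `base` from your exported `workflow_api.json` and call `submit()` on each run; the sweep values could just as easily come from a small config file.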
>>
>>102077737
I've tried that in the past, and in the end the captioner ends up hallucinating; basically it focuses more on the tags, making a story around them inspired by your picture, rather than considering the details of the picture and contextually applying the tags.
>>
File: 2024-08-25_00349_.jpg (1.24 MB, 2160x3840)
>>
Imagine if danbooru had taggers also draw bounding boxes around the objects, the wealth of information we'd have.
They are autistic so they'd do it no problem
>>
>>102077781
I'd like a captcha concept with hentai
>>
File: cp002.jpg (485 KB, 1022x1026)
>>
>>102077781
>bounding boxes
florence-2
>>
Loss has been more or less the same since step 4000, currently at 6000+. But the images keep getting slightly better, so all is good. Right?
>tfw somehow hairy pussies have taken over despite being the minority
https://litter.catbox.moe/ubx0nq.jpg
>>
>>102077326
thanks
>>102077334
yup
>>
>>102077846
I'm going to guess that semantically pussy = cat = furry
>>
File: 0.jpg (189 KB, 1024x1024)
>>
>>102077863
The word "pussy" is not used anywhere in the captions or the sample prompts.
>>
>>102077880
Then that's your problem, T5/Flux responds extremely well to autistically tagged images. Having your images correctly captioned with "pubic hair", "shaved", "hairy pussy", "shaved pussy", etc. will likely fix it.
>>
>>102077913
The captions specify whether the pubic area/vulva/genitals is natural, hairy or shaven.
Please, stop guessing randomly. If you haven't trained a LoRA and encountered this kind of bleed-in of concepts you are just going to say random stuff with no basis on anything.
>>
>>102077840
how does florence-2 deal with nudity, how many dicks can it detect in a multiple_penises bukkake image?
>>
>>102077933
You know, you're actually a faggot. Stop playing fucking games, you always do this bullshit, "I have a vague problem, please help me." "Actually gotcha, I already did that!"

Kill yourself, no one is going to help you.
>>
>>102077944
>how does florence-2 deal with nudity, how many dicks can it detect in a multiple_penises bukkake image?
maybe we should finetune florence-2 to be good at NSFW at this point
>>
File: 2024-08-25_00351_.jpg (1.18 MB, 2160x3840)
>>
File: cp003.jpg (471 KB, 1023x1023)
>>
>>102077958
that's what my dream of danbooru having human-made bounding boxes for each tag is all about!
>>
>>102077958
>we
go ahead without me
>>
Actually, I think the problem has to do with the same phenotype of woman always having similar characteristics, because my dataset is not varied enough (i.e. brunettes tend to have bushes, but blondes come out shaven). Maybe it can be solved with more training, but so far pubic hair seems to be tied to overall phenotype no matter what.
https://litter.catbox.moe/t5i6tz.png
>>
File: 1716414611407278.png (74 KB, 389x569)
how do i avoid the flux grids?
>>
>>102077983
Never once will you post a caption you're training with, huh?
>>
File: file.png (1.7 MB, 1024x1024)
>>
I'm going to share the LoRA because I've been at it all day and I would like to use my GPU now, at least until I train the remaining 4 hours (it's been cooking for around 6 at this point).
It might be a while, because I'm going to review all the captions and see if there's a way to bias the pussy thing better.

Lewd LoRA for Flux: https://files.catbox.moe/p362yt.safetensors
I'll upload it to Civitai when I consider it done.

https://litter.catbox.moe/eose4s.png
https://litter.catbox.moe/elw3rx.png
>>
>>102078073
is there any trigger word?
>>
>>102078073
>>102078084
Example prompt:
>This is a picture featuring a nude young blonde woman standing next to a tree. Her facial expression is one of contentment and peace. The scene around her is idyllic. She has natural breasts with pink nipples and areolas. Her legs are parted, showing her genitals.

Use "genital/pubic area", "buttocks", "thighs", and "breasts". Not tits or pussy. Prompt in the style of vanilla Flux. But feel free to experiment.
I didn't want to add tags and special trigger words because I wanted to just make Flux understand nudity and fill in the details when you say "nude woman" (it works to an extent).
>>
>>102078097
>genitals
>>
File: cp004.jpg (468 KB, 1024x1022)
>>
File: file.png (21 KB, 738x289)
>>102078113
>>
>>102078121
Great captioning anon
>>
File: file.png (47 KB, 872x411)
>>102078121
>>102078127
lmao sorry wrong tab

The term is used by joy caption, actually. Among others. It's perfectly valid for humans.
>>
>>102078147
Let me guess, you make mistakes and then defend them to the death afterward acting like you meant to do it all along. No one is going to write genitals you retard.
>>
>>102078172
Read a book saar
>>
>>102077958
>we
SWIM*
ftfy
>>
>>102078187
You will never learn because you're stubborn. You made a clear and obvious mistake with your captioning and that's why your result is shit. I don't know why you ask for help when you clearly don't plan on taking advice.
>>
>>102077772
https://civitai.com/models/668131/flux-lora-comparison-workflow-dual-image-compare
>>
>>102077737 (me)
Okay wait, I just tested the idea of using joycaption to write the caption, then feeding that + tags into an LLM. I used Mistral-123b, with a quite long set of instructions on exactly how to combine the caption and tags, and how to phrase the final caption.

HOLY FUCK it works perfectly. Do not underestimate the capabilities of large, powerful LLMs I guess. Maybe it is helping a lot that my tags are focused and completely cover the concept in question. I think I have my solution, should have just tried this first.
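A minimal sketch of that two-stage merge, with the LLM call shown against an OpenAI-compatible local endpoint (llama.cpp / vLLM style); the host, model name, and prompt wording are assumptions, not the exact instructions used above:

```python
# Hedged sketch of the two-stage pipeline: take the VLM's draft caption
# plus the hand-made tag list and ask a local LLM to merge them, with the
# tags taking precedence. The endpoint and model name are placeholders
# for whatever your local server is running.

import json
import urllib.request

def merge_instruction(draft_caption, tags):
    """Build the merge prompt: tags are ground truth, the caption fills gaps."""
    return (
        "Combine the draft caption and the tag list into one final caption.\n"
        "Rules: every tag MUST appear as natural prose; where the caption "
        "contradicts a tag, the tag wins; keep the caption's other details.\n"
        f"Draft caption: {draft_caption}\n"
        f"Tags: {', '.join(tags)}"
    )

def merge_with_llm(draft, tags,
                   host="http://127.0.0.1:8000/v1/chat/completions"):
    """Send the merge prompt to an OpenAI-compatible local chat endpoint."""
    body = json.dumps({
        "model": "local",  # placeholder for whatever model the server loaded
        "messages": [{"role": "user",
                      "content": merge_instruction(draft, tags)}],
    }).encode()
    req = urllib.request.Request(host, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.load(r)["choices"][0]["message"]["content"]

msg = merge_instruction("A woman stands by a tree.",
                        ["hairy pussy", "blonde hair"])
print("hairy pussy" in msg)  # True
```

The interesting part is the rule ordering in the instruction: stating explicitly that tags win over the caption is what keeps a large instruction-following model from hallucinating around them.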
>>
>>102075876
Now gguf it! gguf it!
>>
>>102077567
you have no idea how many LoRA variants there are for LLMs. And in the end, people still use the original LoRA (or QLoRA if low VRAM). That's because most of the variants are snake oil or a negligible improvement.

just off the top of my head:
- DoRA
- LoRA Plus
- LoRA Pro
- PiSSA
- VeRA

there's also the meme "full" finetunes
- GaLore
- badam
- adam mini
- MOD
>>
File: 1716207642366234.png (98 KB, 1773x602)
>>102078172
What's the mistake? The use of "genitals"?
It's a valid word anon.
>>
>>102078327
This
>>
>>102078329
>PiSSa
wait that one is a valid name? lmaooooooo
>>
>>102078329
I always heard LyCORIS was supposed to be better, but dunno about the others.
>>
>>102078327
>>102078350
there is no point making it a gguf
>>
>>102078209
>>102078322
Do you have to caption each image before you create the lora? Or after? Does captioning mean adding the words to a text file with the same name as the image?
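For what it's worth, the common convention in trainers like kohya-ss and OneTrainer is exactly that: a sidecar `.txt` file per image, sharing the image's filename stem, written before training starts. A minimal sketch (folder and filenames are placeholders):

```python
# Hedged sketch of the sidecar-caption convention: each image gets a
# <stem>.txt file next to it holding the caption. This writes one pair
# and sanity-checks that no image is left uncaptioned.

import tempfile
from pathlib import Path

def write_caption(image_path, caption):
    """Write the caption next to the image as <stem>.txt."""
    txt = Path(image_path).with_suffix(".txt")
    txt.write_text(caption, encoding="utf-8")
    return txt

def missing_captions(folder, exts=(".jpg", ".png", ".webp")):
    """Return images in the folder that have no sidecar caption yet."""
    folder = Path(folder)
    return [p for p in folder.iterdir()
            if p.suffix.lower() in exts and not p.with_suffix(".txt").exists()]

d = Path(tempfile.mkdtemp())
(d / "0001.jpg").write_bytes(b"")  # stand-in image file
write_caption(d / "0001.jpg", "a nude woman standing next to a tree")
print(missing_captions(d))  # [] -- every image now has a caption
```

Running a check like `missing_captions` before kicking off a multi-hour training run is cheap insurance against silently uncaptioned images.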
>>
I found the problem with my captioning:
>She is bent over with her buttocks and genitals prominently displayed towards the camera. Her hands are spreading her buttocks, clearly revealing her anus and vulva. The woman's face is partially obscured as she looks back over her shoulder, with a short, tousled hairdo.
This is from a set of five or seven images (at least) where the woman has a hairy vulva, but it is not explicitly said in the caption, so it is generalizing.
>>102078330
Don't engage him. He never posts any gens. He's just here to stir shit up.
>>
>>102078386
this, it's only a fucking 200 MB model
>>
>>102078327
>>102078350
why? it's 315 friggin MB .. no need to quantize that at all
>>
>>102078399
He is kind of right tho, your captioning is boomer prompting, how many people will text prompt that? If someone uses your lora and just writes normally will it work?
>>
File: ifx241.png (1.29 MB, 1024x1024)
>>102078363
>>
>>102078425
I prompt Flux like that. But feel free to test it with the link I posted above. It is prone to produce body horror from time to time because it's not fully trained, but the result should look like a typical softcore set.
>>
>>102078443
https://www.youtube.com/watch?v=bD1yyP8GpaI&t=56s
>>
>>102078425
That is how you prompt Flux tho. The more you give it the better it gets.
>>
>>102078460
>>102078491
Can't you train the lora for both clip and t5?

Caption clip with tags and t5 with boomer prompting
>>
File: 0.jpg (203 KB, 1024x1024)
>>
>>102078399
never once your captions mention pubic hair you dumbass
>>
>>102078491
Yeah anon you're typing "in a sensual display, the woman who appears to be in her 20s bends over in a sensual act of displaying her genitals, the image appears to be demonstrating the empowerment of women posing in pictures that expose their genitals"
>>
>>102078529
I want to know this, too. Can you do this? I think I read that with Flux you don't train the text encoder.
>>
>>102078529
it wasn't trained like that tho
>>
>>102078529
why not train the vae while you're at it
>>
>>102078568
They don't? You must've hacked into the wrong computer, because I'm pretty sure they do most of the time (but not always, which is most certainly the problem).
>She has medium-sized breasts with visible nipples, and she is posing confidently with her arms raised, hands resting behind her head, which exposes her armpit hair. Her pubic hair is also visible and natural. She has shoulder-length, light brown hair with bangs and is smiling directly at the camera, giving a relaxed and inviting expression.

Why don't you buy a cheap second hand GPU so that you can be part of this community instead of spending your time arguing with everyone? You'll have a better time.
>>
>>102078705
That's your entire caption?
>>
>>102078705
The irony of your statement is actually pretty hilarious. Anon, don't ask for help if you don't want help. You got bad results; have you forgotten why you asked for help? I already told you why: you have inadequate and poorly made captions. T5 likes to have things autistically detailed, which means it's very important that the genitals are properly captioned in every context. And the fact you went along with "genitals" in your captions tells me you were sloppy, and it's obvious now that your aggression and defensiveness are correlated with how much you fucked up.
>>
File: 1701085012907140.png (545 KB, 768x768)
it's a bit scary how good Flux is with very minimal prompting or fine-tuning or LoRAs
>>
>>102078425
You can take any boomer prompt and distill it manually into tags. For example
>This is a masterfully drawn painting of a graceful dog swimming in a lake. The water shows the iridescent reflection of the full Moon.
Into:
>masterfully drawn, painting, dog, swimming, lake, moon reflections

And it will often produce very similar images. Sometimes it's the very same image with some slight lighting changes.
>>
>>102078800
People are sleeping on how much Flux is capable of doing and how smart T5 is.
>>
File: file.png (72 KB, 218x261)
>>102078724
>>102078775
(You)
>>
>>102078668
>>102078687
>>102078700
>>102079018
You can prompt both t5 and clip separately with a node

Some loras respond to both prompts, others only to t5
>>
Breathe in the smell of fresh bread...
>>102079096
>>102079096
>>102079096
>>
>>102079075
good luck on your lora you obviously know how to do it
>>
File: image181.jpg (291 KB, 1024x1024)
>>
>>102078800
Anon, that was what image generation was supposed to be, not the Pony horrors we're used to now.


