/g/ - Technology

File: tmp.jpg (854 KB, 3264x3264)
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101937495

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
What a horny collage
>>
So is this just because Windows is shit?
>>
>>101940265
2 of them are mine, sorry.
>>
>>101940291
the Chinese girl and the tongue tattoo?
>>
File: 00036-3443444135.png (1.83 MB, 832x1216)
>>101940307
Close, the Tongue and the Elf.

>>101940213
Here's my quick try with flux Q8 GGUF
>>
>>101940288
ask chatgpt
>>
>>101940288
Try writing it like this. That won't work on Windows.

 d:\\ai\\flux\\
>>
>>101940322
not bad at all, like the draft style on that one
>>
>>101940355
Fuck those are meant to be backward slashes.

d:\\ai\\flux\\
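Since the confusion above is about backslashes in Windows paths, here's a quick Python sketch of why the doubled backslashes show up: a lone backslash in a normal string literal starts an escape sequence. Forward slashes are an alternative that most Windows APIs also accept.

```python
# In Python strings a lone backslash starts an escape sequence, so Windows
# paths are usually written with doubled backslashes or as raw strings.
p1 = "d:\\ai\\flux\\"        # escaped backslashes
p2 = r"d:\ai\flux" + "\\"    # raw string (a raw string can't end in a single backslash)
p3 = "d:/ai/flux/"           # forward slashes also work in most Windows APIs

assert p1 == p2
assert p1.replace("\\", "/") == p3
```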
>>
>>101940288
desu every time I get that kind of error I just copy-paste that shit into chatgpt or claude 3.5 and usually they figure out what's wrong
>>
File: 00038-2997886850.png (1.89 MB, 832x1216)
>>101940322
One more attempt.

>>101940356
Yeah not perfect but I do like the brush stroke style.
>>
File: ComfyUI_00208_.png (739 KB, 512x768)
>>101940402
>>
>>101940444
Nice he looks pissed off.
>>
>>101940444
https://www.youtube.com/watch?v=6Q-97r9zbL8
>>
How much VRAM do I need for Flux.1 Pro? I see people saying that Flux.1 Schnell is for people with less than 12gb of VRAM while Flux.1 Dev is for people with more than 12gb of VRAM but I don't see anyone saying anything about what Pro requires.
>>
>>101940468
>How much VRAM do I need for Flux.1 Pro?
0 because it's not a local model
>I see people saying that Flux.1 Schnell is for people with less than 12gb of VRAM while Flux.1 Dev is for people with more than 12gb of VRAM
they're both the same size
>>
>>101940468
>I see people saying that Flux.1 Schnell is for people with less than 12gb of VRAM while Flux.1 Dev is for people with more than 12gb of VRAM

They're exactly the same size.
Depending on the quant you can go as low as 8gb of vram.
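A rough back-of-the-envelope for why dev and schnell have identical VRAM needs at a given quant: both are ~12B-parameter models, and weight memory is just parameters times bits per weight. The bits-per-weight figures for the GGUF quants below are approximations (assumptions that include scale overhead), not exact format specs.

```python
# Rough weight-only memory footprint of a ~12B-parameter model:
# bytes ~= params * bits_per_weight / 8 (activations/text encoder excluded).
PARAMS = 12e9  # assumption: Flux dev/schnell are both ~12B parameters

for name, bits in [("fp16", 16), ("q8_0", 8.5), ("q4_0", 4.5), ("nf4", 4)]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB")  # fp16 lands around ~22 GiB
```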
>>
>>101940355
>>101940368
Yeah that's it. Looks like I have to clone the whole repo for it though.
>>
>>101940369
the state of /g/
>>
File: ComfyUI_00210_.png (573 KB, 512x768)
>>101940444
>>
File: 41.png (1.73 MB, 896x1152)
>>101940038
sorry, have a non reposted one
>>
>>101940498
We have always done things to make our lives easier. Before LLMs we were spending 5 hours to write a bash script that would save us 5 minutes.
>>
>>101940568
this
>>
>>101940568
there is a fundamental difference there that if I have to explain it to you then it is pointless to explain
>>
>>101940578
so you're full of shit I see.
>>
>>101940578
If you're not using the tools available to you to make your life easier then that's on you.
>>
>>101940596
>>101940601
I see, it is pointless to explain then.
>>
>>101940611
you just admited defeat by not providing any counter-argument
>>
>>101940611
If you can't articulate or argue your position then it is a weak position.
>>
is joycaption supposed to be slow as fuck running locally? it's barely using any of my gpu and the vram isn't over the limit
>>
>>101940620
>>101940621
Like I said, if I have to explain it it is pointless to explain.
>>
File: 1430573158.png (1.3 MB, 1024x1024)
>>101940556
word
>>
>>101940636
and like we said, if you don't provide any counter-argument then you lost the case
>>
>>101940644
is this a fucking court, nigger?
>>
>You need to agree to share your contact information to access this model

Are there any torrent/magnet links for Flux dev?
>>
>>101940650
it doesn't need to be a court to know this simple fact retard, if you don't want to argue then you're just full of shit, how hard is it to understand?
>>
>>101940650
Yes.
We sentence you to 2 months in /b/
>>
>>101940671
> if you don't want to argue then you're just full of shit
that doesn't follow logically but if I have to explain that to you then it is pointless to do so
>>
>>101940667
no need to go that far: https://huggingface.co/camenduru/FLUX.1-dev/tree/main
>>
>>101940678
oh it does, you claimed something, you now have the burden of proof, if you can't provide the proof then your take is dismissed and you can cry in the corner for all I care
https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)
>>
>>101940693
my god you're retarded lmao
>>
File: grid-0186.jpg (869 KB, 2304x1792)
>>
>>101940680
Is the flux1-dev.sft just a renamed safetensors file, or is it a different format?
>>
File: images (1).jpg (10 KB, 303x166)
>>101940705
>>
File: grid-0184.jpg (706 KB, 2304x1792)
>>
File: grid-0190.jpg (559 KB, 2304x1792)
>>
>>101940714
this nigga skipped a few classes of Philosophy 101
probably an American, I hear public school kids there only learn about logic when they go to college
>>
File: grid-0206.jpg (703 KB, 1792x2304)
>>
File: fp129w.jpg (370 KB, 1600x1200)
>>
why doesn't Comfy unload loras after applying them to the model
the more loras, the more VRAM it takes, but loras aren't extra weights; they modify the weights, so you can unload them after applying
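A toy sketch of the point being made: applying a LoRA means folding its low-rank product into the base weights, after which the factors can be freed. Plain-Python matrices, purely illustrative, not ComfyUI's actual implementation.

```python
# A LoRA of rank r stores two small factors A (r x in) and B (out x r).
# "Applying" it folds scale * (B @ A) into the base weight W, after
# which A and B can be discarded -- merged weights need no extra VRAM.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def merge_lora(W, A, B, scale=1.0):
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # toy 2x2 base weight
A = [[1.0, 1.0]]               # rank-1 factors
B = [[0.5], [0.5]]
print(merge_lora(W, A, B))     # -> [[1.5, 0.5], [0.5, 1.5]]
```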
>>
>>101940475
>>101940479
Okay so if dev and schnell are the same size what is the difference between them?
>>
>>101940800
schnell takes way fewer steps (4 -> 8) to work but the quality is worse
>>
File: ComfyUI_02938_.png (2.52 MB, 992x1456)
>>101940800
Schnell takes fewer steps (4-8 vs Dev's 20-30) to render an image, at significantly diminished quality compared to Dev.
>>
>>101940800
Dev is guidance distilled from Pro, Schnell is guidance and step distilled from Pro
>>
>>101940812
It should be said that schnell is by no means bad. Just nowhere near as good as dev
>>
>>101940820
Yeah it's a fully capable model
>>
>>101940778
so you want to overwrite the weights on the model with the lora weights? is it possible?
>>
File: 14259.jpg (39 KB, 400x400)
>>101940738
>>101940714
I can easily make you guys be friends again.
All I have to do is go to sleep.
When I wake up, the fight will have stopped, and you guys will be having fun talking to each other again.
>>
>>101940843
do I know you?
>>
>>101940857
yeah? it's me.
>>
>>101940857
Come on buddy you haven't recognized me? I'm Anon dude!
>>
>>101940873
I remember you now, you owe me $40.
>>
>>101940892
i've never met this man in my life
>>
>>101940843
Sorry but every single day there are 2 anons having a random autistic argument here, my theory is that 80% of the time it's the same 2 anons.
>>
File: ComfyUI_02161_.png (1.54 MB, 768x1344)
>>101940938
It's me, and (You).
>>
File: 00044-2014136760.png (1.2 MB, 896x1152)
>>101940812
Nice
>>
>>101940959
A sniper that close and you will still miss me, you know why? because you love me
>>
>>101940964
very clean, catbox?
>>
Someone was talking yesterday about a change to not get blurry preview with flux. Can't find his message anymore, someone know?
>>
>>101940977
Sure

https://files.catbox.moe/0mdz7z.png
>>
>>101940959
He deserves to die, letting a woman drive like that
>>
I have a 12gb 3060 running on a potato system with 16gb ram, can I load and gen from a flux model?
>>
>>101940984
Don't worry the guy has a whole slew of reddit posts he shills on this general for this exact thing

https://reddit.com/r/StableDiffusion/comments/1estj69/remove_the_blur_on_photos_with_tonemap_an/
>>
File: FLUX_00042_.png (1.09 MB, 896x1152)
>>101940992
I have that card, and that's what I'm doing
>>
>>101940998
are you retarded? he's talking about the blurry previews on comfyUi
>>
>>101941004
yeah but I've only got 16gb ram. Anyway, it's 180secs for a 1080x1080 gen, right?
>>
>>101940998
>>101941006
I think I found it again, at least the commit that implement it https://github.com/comfyanonymous/ComfyUI/commit/1770fc77ed91348f4060e3c0b040c1519d6f91d0
>>
>>101940984
it's that one anon: https://github.com/comfyanonymous/ComfyUI/commit/1770fc77ed91348f4060e3c0b040c1519d6f91d0#comments
>>
>>101941006
yeah im retarded and can't read, but i can help with that so thanks for clarifying.

https://github.com/comfyanonymous/ComfyUI/commit/1770fc77ed91348f4060e3c0b040c1519d6f91d0#commitcomment-145464339
>>
>>101941022
>thanks for clarifying.
you're welcome, and thanks for shilling my reddit posts for me, much appreciated
>>
>>101941017
probably not
I don't know a thing about pagefile, but I assume it's extremely slower than slow
>>
>>101941017
120 with nf4
>>
>>101941030
what can i say, im your strongest soldier miku-anon
>>
>>101941038
1 min 20 I mean
>>
>>101941022
How do I use it? I downloaded but don't see the option in the manager menu.
>>
File: Yayy.jpg (163 KB, 1165x1251)
>>101941048
>>
File: 00048-294079099.png (1.41 MB, 832x1216)
>>101940964
>>
File: ComfyUI_00234_.png (384 KB, 512x512)
>>
File: Capture.jpg (69 KB, 823x546)
>>101941057
>I downloaded but don't see the option in the manager menu.
it's here
>>
File: 00050-294079101.png (1.43 MB, 832x1216)
>>101941086
>>
File: 00051-294079102.png (1.3 MB, 832x1216)
>>101941157
>>
>>101939751
GGUF for 8 GB?
>>
>>101941180
https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main
>>
>>101941155
Doesn't work
>!!! Exception during processing !!! Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
>Please file an issue with the following so that we can make `weights_only=True` compatible with your use case: WeightsUnpickler error: Unsupported operand 10
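The error above is PyTorch's `weights_only` safe unpickler refusing an object outside its whitelist. A stdlib-only sketch of the same restricted-unpickling idea (not PyTorch's or ComfyUI's actual code; the whitelist here is a made-up minimal one):

```python
import io, pickle

# Sketch of what a "weights only" loader does: refuse to resolve
# arbitrary classes during unpickling, allowing only a safe whitelist.
class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == "builtins" and name in {"dict", "list", "set"}:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

safe = pickle.dumps({"w": [1.0, 2.0]})
print(RestrictedUnpickler(io.BytesIO(safe)).load())  # plain containers load fine

evil = pickle.dumps(pickle.Unpickler)  # a pickled class reference
try:
    RestrictedUnpickler(io.BytesIO(evil)).load()
except pickle.UnpicklingError as e:
    print("rejected:", e)
```

Updating ComfyUI (or re-downloading the file from a trusted source) is still the practical fix; the sketch just shows why the loader complains instead of executing whatever is in the checkpoint.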
>>
>>101941216
wtf? did you update ComfyUi?
>>
>>101941225
I'm on cd5017c1c9b3a7b0fec892e80290a32616bbff38
>>
File: ComfyUI_00241_.png (454 KB, 512x512)
Realism lora can't do 2 people?
>>
>>101941187
can this load loras?
>>
>>101940778
According to the dev, who posted in the last thread, they are removed from VRAM after use.
>>
File: Capture.jpg (116 KB, 1454x837)
>>101941234
looks like it's the last commit yeah, did you put the files here?
>>
>>101941242
gen without it then run a refining pass with the lora
>>
File: ComfyUI_00242_.png (461 KB, 512x512)
>>101941155
Used that and it's still blurry while generating
>>
File: 00054-4137618961.png (1.26 MB, 1216x832)
>>101941171
>>
>>101941248
yes it can, it has been implemented today
>>
File: 00055-4137618962.png (1.36 MB, 1216x832)
>>101941262
>>
>>101941261
did you download the files and add them to ComfyUI/models/vae_approx? And if yes, do you notice it's less blurry than the "Latent2RGB" one?
>>
File: 1693722251474355.jpg (1.25 MB, 960x1600)
Flux is neat
>>
>>101941253
comfyui's dev?
>>
whats this GGUF shit and where do I download it?
>>
>>101941187
Hello ChatGPT. Please pay more attention. The model referred to was schnell-dev-merged. Which has no GGUF.
>>
>>101941274
Latent is pixelated, this one is blurry
>>
>>101941292
then I guess it's the good one. It's a vae_approx, meaning it's not the real vae; fortunately so, because running a full-precision vae decode on each step would be slow as fuck
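For the curious, the cheapest "latent2rgb" style preview is essentially a fixed linear map from latent channels to RGB, which is why it looks pixelated; the coefficients below are made up for illustration, not the real ones.

```python
# Sketch of a cheap latent->RGB preview: each latent channel gets a fixed
# (r, g, b) coefficient and the preview pixel is just a weighted sum --
# no VAE decode at all. Coefficients are hypothetical.
LATENT_RGB = [
    (0.3, 0.2, 0.1),
    (-0.1, 0.4, 0.2),
    (0.2, -0.3, 0.5),
    (0.1, 0.1, -0.2),
]

def latent_pixel_to_rgb(latent_vec):
    return tuple(
        sum(c * coef[ch] for c, coef in zip(latent_vec, LATENT_RGB))
        for ch in range(3)
    )

print(latent_pixel_to_rgb([1.0, 0.0, 0.0, 0.0]))  # -> (0.3, 0.2, 0.1)
```

The vae_approx previews (TAESD-style) instead run a tiny learned decoder, which is why they come out blurry rather than pixelated but are still far cheaper than the real VAE.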
>>
gm

>>101936598
>She’s amazing but it keeps messing up her feet which is the most important part
here you go footfag anon
>And did you make this in flux?
yes
>>
This general is like a retards daycare.
>>
>>101941308
How did you get her skin like that?
>>
>>101941308
why does it blur the feet ;_;
>>
>>101941287
https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
leading to
https://github.com/city96/ComfyUI-GGUF
>>
File: ComfyUI_00601_.png (2.26 MB, 1536x1152)
for any other 4080 fags having trouble with using flux loras, I disabled nvidia system fallback and I can actually use them now without significantly slowing things down
>>
>>101941316
normies are flocking to faster inferences to gen 1 million waifus
>>101941308
hello xitter
>>
>>101941289
I cannot engage with comparisons of different models, as discussing operational capabilities without proper context might lead to misinterpretation and misuse of information, which could result in unethical applications of technology.
>>
>>101941331
if it had to fallback to RAM shouldn't disabling it cause it to OOM?
>>
Somebody needs to do GGUF quants of this:
https://huggingface.co/drbaph/FLUX.1-schnell-dev-merged-fp8-4step
>>
File: ComfyUI_00245_.png (476 KB, 512x512)
>>
>>101941353
nah, let it die
>>
>>101941322
lel
>>101941321
some anon posted this line yesterday
> Her skin texture is natural allowing us to see skin pores and imperfections.
seems to work well enough
>>101941336
howdy
>>
>>101941369
Isn’t it better? Schnell is so fucking boring.
>>
>>101941348
i don't really know why it works, but it does
>>
File: 00013-3343728313.png (1011 KB, 1368x1024)
ladies and gentlemen, the quality of flux
>>
How to use Comfy-like queue in Forge?
>>
comfyesque* queue
>>
>>101941411
that's the best part, you dont
there's an extension but i think it's broken after the updte for flux
>>
File: 1708103560625446.jpg (1.58 MB, 1600x1280)
>>
>>101941390
Nice will give that a try thanks.
>>
>>101941324
>https://github.com/city96/ComfyUI-GGUF
damn this shit is complicated af.
the fuck does this even mean?
>open a CMD inside the "ComfyUI_windows_portable" folder
>>
File: ifx70.png (1001 KB, 1024x1024)
>>101941407
soul
>>
>>101941447
I think this stuff just isn't for you, my man
>>
File: 00059-3284587715.png (1.42 MB, 1216x832)
>>101941273
>>
>>101941447
>the fuck does this even mean?
>>open a CMD inside the "ComfyUI_windows_portable" folder
why won't you ask bing chat for that kind of stuff? that's what I do and I learned a lot with it
>>
File: 00057-3284587713.png (1.21 MB, 1216x832)
>>101941468
>>
>>101941447
>damn this shit is complicated af
like fr fr my guy, me and skyler just tryna gen some skidibi. anyone know if i can do this on my ipad?
>>
File: FLUX_00050_.png (1.3 MB, 896x1152)
freaky
>>
>>101941431
Why is that the best part when users, for example, can't run several XYZ tasks at once? You're forced to wait for the previous task to finish.
>>
>>101941496
would
>>
File: 00088-1725358139.png (805 KB, 896x1152)
Maybe I'm fucking up but as a 8 vramlet gguf seems way slower than nf4
>>
>Try Flux
>It's most of what I could ask for for a base model
>Train LoRAs
>They're shockingly good for the effort put in
>Controlnets are even available
>Still find myself getting bored.

Guess it's time for me to go into hibernation and wait for finetunes and optimizations.
>>
File: watt.jpg (75 KB, 1704x602)
>>101941473
>>101941462
>>101941488
so which one do I need?
>>
>>101941517
Q8? Yeah it takes up more VRAM. I think it's best for 16gb vram; 12gb is ok but a bit slower.
>>
File: grid-0221.jpg (920 KB, 1792x2304)
>>
>>101941535
depends on your machine, how vram does your gpu have?
>>
>>101941555
24GB
>>
File: 00061-303767104.png (1.32 MB, 1216x832)
>>101941517
Also love her legs.
>>
File: Comparison_all_quants.jpg (3.84 MB, 7961x2897)
>>101941559
then go for Q8_0, it's the one with the closest quality level to fp16
>>
>>101941559
don't go for Q8, just run FP16
>>
>>101941548
When video games used to be cool
>>
>>101941544
Q4_0 actually
>>
With 12GB VRAM and 32GB RAM, which quant or fo should I go for for max speed if I want to use LORAs and for now don't want to mess with CFG >1 (does this affect RAM usage even?)
>>
>>101941574
but muh LoRas, all that shit aint fitting into my 4090
>>
Challenge for you guys: A person slicing something with lightsaber. Like a car or a person.
>>
File: grid-0222.jpg (463 KB, 1792x2304)
>>
>>101941586
loras don't take extra space, they modify the weights. if the interface you're using takes more VRAM because of loras then it is shit
>>
>>101941586
this, fp16 is too big if you wanna add some stuff on top of it (loras, controlnet...)
>>
>>101941597
How dare you slander comfy like that.
>>
With lora flux is starting to get good at nsfw prompt. Genned a few changing room footage with natural pubic hair.
>>
File: FLUX_00005_.png (1.88 MB, 1024x1024)
>>101941596
think that might be a spiritual successor to this
>>
can you use FLux with Automatic1111 yet?
>>
>>101941621
and you don't catbox one for us? you're a terribly selfish anon
>>
>>101941629
A1111 is dead anon, Forge replaced it
>>
>>101941641
Did the Ukrainians get him?
>>
>>101941638
My ip pool is small. I used 649568 and 651940 from civitai.
>>
File: 00064-2022153082.png (1.44 MB, 1216x832)
>>
>>101941641
Stop spreading FUD. Voldy is the best.
Comfy and Illyasviel are angry bickering troons.
>>
>>101940475
If Pro is not a local model, how is there a download for it at Civitai?

https://civitai.com/models/618692?modelVersionId=699332
>>
File: 00065-2022153083.png (1.48 MB, 1216x832)
>>101941686
>>
>>101940556
neat 1girl
>>
>>101941710
>Voldy is the best.
He's so good he still hasn't implemented Flux 2 weeks after its release
>>
>>101941721
it's 'training data', not an actual model
>>
>>101941721
Idk, and I think it's retarded, it's not a local model it shouldn't be there in the first place
>>
>>101941517
Fellow 8 vramlet here. Do you use ComfyUI_bitsandbytes_NF4 with bitsandbytes and this model? https://huggingface.co/sayakpaul/flux.1-dev-nf4
I want to try anything that might make this go faster.
>>
>>101941730
heckin BASED, we are so back a1111 sisters
>>
File: grid-0227.jpg (441 KB, 1792x2304)
>>
flux compatible lora isn't working with gguf dev0q4, it's having no effect at all. I've updated gguf today on comfy.
Something obvious I've missed? The workflow is at defaults; only the model, loader, and lora have been changed/added
>>
bitsandbytes is not usable on ROCm is it?
>>
File: FLUX_00059_.png (1.14 MB, 896x1152)
>>
Is using this with LoRAs a viable option for a VRAMlet?
https://huggingface.co/OlegSkutte/SDXL-Lightning-GGUF
>>
>>101942037
It is usable, and almost always was, but it always needed a fork. A lot of them went unmaintained, but AMD took it upon themselves to maintain one https://github.com/ROCm/bitsandbytes
>>
File: FD_00003_.png (1.08 MB, 1024x1024)
I'm making progress, finally got the LoRa shit to work.
>>
>>101942110
Thank you. I'm going to try and see if nf4 runs a bit faster than gguf on 8 GB.
>>
I just realized I want to make wallpapers with some nice gen settings I have. What settings would I change to achieve 1920x1080 and other potential wallpaper-y resolutions?
>thinking something for my PSP as well for the fuck of it would be rad so i may be doing weird resolutions
>>
>>101942037
you don't need to torture yourself with nf4, go for q4_0, that shit is the same size and better quality >>101941569
>>
>>101942171
Also you can try their upstreaming effort https://github.com/bitsandbytes-foundation/bitsandbytes/tree/multi-backend-refactor; it would be nice, after more than a year, to finally have a HIP backend merged.
>>
>>101942183
If it's faster, I don't care about quality, really. This thing is painfully slow.
>>101942196
thanks.

>Could not detect model type
sigh
>>
Is there a guide on settings for webUI?
I probably don't need to touch them much, but there are so many of them…
>>
good quality/quantity ratio for lora training?
i have literally thousands of decent period-appropriate pics, many (but not all) of which contain the thing(s) i want to train on
was thinking of running them all through joycaption and starting training, but would that pool then be too broad to capture anything of value? should i reduce it to 10-100, 100-250 pics or something else?

on a side note, how important is captioning anyway if i just want to enhance the detail on something the model can already generate?
>>
>>101942217
More is always better, especially if the set captures a diverse variety of descriptions. The most important part is ensuring the captions capture the purpose of the lora, i.e. if you're doing vintage pictures, make sure "vintage", the year, etc. are in each caption.
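That trigger-tag advice is easy to automate; a minimal sketch that prepends trigger tags to every caption file in a dataset folder (the tags and the `.txt`-per-image layout are assumptions matching common LoRA trainers):

```python
from pathlib import Path

TRIGGERS = "vintage, 1950s"  # hypothetical trigger tags for the lora

def fix_captions(dataset_dir):
    # Ensure every caption .txt in the dataset starts with the triggers.
    for txt in Path(dataset_dir).glob("*.txt"):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(TRIGGERS):
            txt.write_text(f"{TRIGGERS}, {caption}", encoding="utf-8")
```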
>>
>>101941064
get new material faggot
>>
>>101942183
>>101941569
>better quality
Based on what? Your image shows less RAM utilization, more speed, and variations in the image compared to the others that would be no different from using a different seed. Prompt adherence is basically the same as the other lower quants (it ignored the color of the skin, that's it).
>>
File: 1698362776101069.jpg (66 KB, 800x1170)
>>
>>101942301
>Prompt adherence is basically the same as the other lower quants (it ignored the color of the skin, that's it.)
lol, nf4 is the only one that has Miku making a different pose than the others

>Your herd mentality is tricking you into believing standing out is a problem.
You have no idea what the goal of a quantization is do you?
>>
I don't want to learn new diffusion stuff
>>
I'm tagging dataset that's made out of ultra painterly paintings and all taggers and vllms make at least 3+ mistakes per image. Shit takes like 2-3 minutes per image to fix it
>>
I have a 6gb vram and 16gb ram. Can I even run Flux?
>>
https://github.com/intel/AI-Playground

Intel beat nvidia and amd to the punch
>>
>>101942371
if it takes 2-3 minutes just write the captions yourself
>>
>>101942318
Based! tiny white peckers btfo!
>>
>>101942325
>the only one that has Miku making a different pose
And this makes the NPC uncomfortable.
>>
>>101942318
I used to gen a ton of these, kek
>>
>>101942373
Why bother? It can't do porn. Not well, at least.
>>
>>101942447
it got decent nude loras faster than XL
>>
>>101942447
It's cumming....

https://civitai.com/models/645697/facesitting-flux?modelVersionId=722322

https://civitai.com/models/655732/cum-on-face-flux?modelVersionId=733630
>>
>>101942373
On comfy, yes
I have yet to find a workflow that gives good results
>>101942458
It didn't, and are locked behind a patreon, fuck You.
>>
>>101942480
>It didn't, and are locked behind a patreon, fuck You.
it did and I'm talking about the available civitai loras, which patreon loras are you talking about?
>>
Another flux lora creator training on AI images (pony)

WTF is wrong with these people.

https://civitai.com/models/653944/female-solo-masturbation-with-pussy-lips-spread-fingering-while-on-her-back-with-legs-up-pov?modelVersionId=731604
>>
>>101942373
works on my 1660s with forge
flux-schnell-nf4, 4 steps
768x768 took 1 minute
1280x720 took 2 minute
don't try flux-dev, 20 steps is too much
>>
>>101942447
There is some alright lora like:
https://civitai.com/models/649568/asian-female-pubic-hair-and-armpit-hair-for-flux1-dev
https://civitai.com/models/651940/hidden-spycam-changing-room-flux
https://civitai.com/models/649762/onoff-for-flux-onoff
>>
>>101942196
What about this one? It's supposed to be ported to HIP:
https://github.com/agrocylo/bitsandbytes-rocm/
Unfortunately, none of them work. It's not like there was going to be much improvement, seeing as I get 40 seconds per iteration, but I was willing to try.
>>
>>101942513
Please understand, there simply no footage of masturbating women on the internet.
>>
>>101942383
It varies, sometimes I just have to edit it for 20 sec, sometimes I will be there for 5 minutes because even I have problems describing stuff in the image lol
>>
>>101942526
you can do on/off just with prompting but the lora does bring some understanding of nude bodies
>>
>>101942537
Oh my bad, understandable. Carry on.
>>
>>101942513
>RubberDollfetishman69 has entered the chat
>>
>>101942529
As I said, there are multiple forks that went unmaintained over time. This one's last commit is more than a year old; it existed because text models used bitsandbytes for inference for a while, and once they stopped using bitsandbytes for text gen, that fork stopped being updated.
>>
Wait, do LoRA's work with schnell too, or only dev?
>>
File: 165504_00001_.png (1.24 MB, 1024x1024)
>>101942342
Too late anon.
>>
>>101942578
>You're Sinone
way to dox that anon man.
>>
>>101942565
They work with both
https://huggingface.co/XLabs-AI/flux-lora-collection
>>
File: 00081-1441583567.png (933 KB, 1216x832)
>>
>>101942606
Try this one too to compare
https://civitai.com/models/650574/andrea-botez-flux
>>
File: 00082-3584382981.png (937 KB, 1024x1024)
>>101942632
>>
File: 00079-1441583565.png (938 KB, 1216x832)
>>101942663
>>
>write side-by-side clothed and nude images of a young girl with the side by side lora
>an image starts to form in the preview box
>awshit.jpg
>kill process, close browser, clean caches and temporary file storage
Holy shit flux
>>
>>101942663
Same prompt as this but this image turned out...disturbing. So Catbox.

https://files.catbox.moe/xatun6.png
>>
Good node for stacking an arbitrary number of LoRA?
>>
>>101942701
rgthree's Power Lora Loader
>>
File: ComfyUI_00616_.png (1.56 MB, 1536x1152)
>>
File: FluxDev_01624_.jpg (200 KB, 832x1216)
>>
File: grid-0240.jpg (586 KB, 2304x1792)
>>
>>101942715
This is better than the preview images
>>
File: 2024-08-17_00056_.png (1.95 MB, 1024x1280)
>>
>>101942712
That one has only up to four. I was thinking one that could have an unlimited number of rows, if that is possible with the UI.
>>101942681
Journos are going to have a field day with Flux once people catch on how easy it is to generate certain kinds of illegal porno with LoRAs that have nothing to do with it.
>>
Who the fuck uses this

https://civitai.com/models/647946/pyros-slurp-long-tongues-for-flux
>>
>>101942751
>That one has only up to four
not the Lora Loader Stack, use the Power Lora Loader, you can add as many as you want
>>
>>101942760
Imagine blowjobs…
>>
File: ComfyUI_00617_.png (1.63 MB, 1536x1152)
>>101942741
was a lucky seed, some of my other gens with the same parameters haven't looked as nice
>>
>>101942715
catbox......?

....very....

................................nice.........

oh my......


