/g/ - Technology


Thread archived.


File: tmp.jpg (1.22 MB, 3264x3264)
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101746235

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Kolors
https://gokaygokay-kolors.hf.space
Nodes: https://github.com/kijai/ComfyUI-KwaiKolorsWrapper

>AuraFlow
https://fal.ai/models/fal-ai/aura-flow
https://huggingface.co/fal/AuraFlow

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
File: FD_00019_.png (1.79 MB, 1920x1088)
Miku is powerful in this model. She creeps in everywhere.
>>
File: a56.gif (1.97 MB, 498x377)
>>101749472
>finally a nsfw lora train
>cunny
>>
official pixart bigma and hunyuan finetune waiting room
>>
>>101749472
Will it be tag like?
>>
File: file.png (168 KB, 256x256)
>>101749586
Wrong, Pixart's architecture trains like a dream and what you can do with it will scale with consumer hardware. Every couple years you can just retrain with more parameters and better captions on your own uncensored dataset. Every year we become less dependent on companies doing massive finetunes. Only a matter of time until someone does a full sharded model that uses something like Fairscale to make training even 12B possible on a 24 GB GPU. And if rumors are correct, we're going to see yet another 50% performance bump over the 4090 with 5090s.
>>
File: GUS4tANbwAA1LV9.png (76 KB, 950x746)
>>101749657
yes
>>
>>101749685
Do you really need the full 450W?
You'd probably get almost the same performance at 380-400W.
>>
>>101749683
I hope you're right anon, I'm tired of every new base model being retarded on purpose.
>>
>>101749685
OK.
>>
>>101749685
Oh? We can train LoRAs on a single 24gb card now?
>>
>>101749683
BIGMA HYPE
>>
File: FD_00021_.png (2.95 MB, 1920x1088)
>>101749685
Wait, you are doing this on a single 4090?
What happened to A100 minimum.
Is this Dev or Schnell?
>>
>>101749707
I am right. Local just has to spend more time but all we have is time. We're also very much in the early days of AI, what we're dealing with is rudimentary, we have multiple breakthroughs every year which change how everything works. The only thing holding us back is consumer hardware but there's going to be a company that will break the cartel.
>>
File: 6009388889.png (1.31 MB, 768x896)
>>101749697
I limit mine at 315 lol, iirc the performance loss is <5%
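For anyone wanting to try the same, the limit is set through nvidia-smi (needs root; the allowed range for your card can be checked with `nvidia-smi -q -d POWER`). A sketch, not persistent across reboots:

```shell
# Cap board power draw in watts; 315 W is this anon's setting,
# others in the thread report 380-400 W on a 4090 costs <5% gen speed.
sudo nvidia-smi -pm 1     # persistence mode so the setting sticks while idle
sudo nvidia-smi -pl 315   # power limit in watts
```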
>>
>>101749623
Can't wait for the bestiality and monster rape loras
>>
File: 1680721032717806.jpg (134 KB, 1080x1080)
>haven't used comfy in months
>do update through the manager
>seemed to update fine with no errors
>was about to praise comfy
>restart comfy to actually test things
>"RuntimeError: Found no NVIDIA driver on your system."
>i use AMD tho. this is on linux.
COMFYYYYYYYY WHAT HAPPENED FIX THIS NAOOOOO
>>
>>101749793
Did you run the nvidia_gpu bat?
>>
File: FD_00025_.png (2.36 MB, 1920x1088)
Every road leads back to 1girl gens. I need to stop myself.
>>
File: ComfyUI_00803_.png (1.09 MB, 1344x768)
>>
File: 1718319461684366.png (12 KB, 799x71)
>>101749807
i just run this script to run it. and it was a working installation. it now seems to think i need an nvidia gpu?
>RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

also again, i only updated through the manager
>>
File: 2024-08-06_00222_.png (2.98 MB, 1024x1680)
god bless it knows Junji Ito
>>
File: 1722617569961260.jpg (1.64 MB, 2576x1944)
I'm trans
>>
with that hairline?
>>
File: ComfyUI_00806_.png (1.17 MB, 1344x768)
>>
File: 2024-08-06_00228_.png (1.82 MB, 1024x1680)
>>
>>101749866
comfyUI spilled its spaghetti again
Many such cases
>>
File: ComfyUI_00807_.png (1.22 MB, 1344x768)
>>
File: FD_00146_.png (1.3 MB, 1024x1024)
>>101749896
I like trains
>>
File: 2024-08-06_00230_.png (2.3 MB, 1280x1280)
>>
File: FD_00032_.png (1.15 MB, 1024x1024)
>>101749885
It does and it improved this prompt by adding him.
This is sick, now I can get flat images easier.
>>
File: 26538.png (1.58 MB, 1208x728)
>>
File: ComfyUI_00809_.png (1.24 MB, 1344x768)
>>
Back from a long hiatus. Installing easydiffusion. What am I up to? I was used to 1111 with deforum, controlnet and shit.
>>
File: ComfyUI_Flux_4225.jpg (323 KB, 1360x1360)
>>
File: 2024-08-06_00232_.png (2.18 MB, 1280x1280)
>>101749970
nice
>>
File: FD_00035_.png (2.35 MB, 1920x1088)
>>101750015
Consider comfy. You don't have to learn how to spaget, just copy a workflow from an anon, but you can run the newest toy Flux.
>>
File: ComfyUI_00810_.png (988 KB, 1024x1024)
>>101750015
>>
send help pls...
>>
>>101750073
What you need?
>>
File: 2024-08-06_00231_.png (2.31 MB, 1280x1280)
>>101750073
where to, what kinda help?
>>
>>101750072
I'll just wait until it gets A1111 support, that's less of a hassle desu
>>
>>101750095
>>101750053
>>101749962
>>101749925
These are all excellent. Can I steal your prompt?
>>
File: [flux-dev]_00338_.png (882 KB, 1024x1024)
>>101750073
omw be right there
>>
Can anyone link me to a few guides on how and where to collect and make the best datasets for training Pony LoRAs?
>>
>>101750064
>>101750072
Where can I learn about Flux? I mean what makes it special. I'm seeing nice gens in last thread and this one but they have all high res. I don't know how long I'd take to generate something similar. I have a 3060 12gb, is it feasible?
>>
File: trumpvance2025_11.jpg (86 KB, 1000x1000)
>>
>>101750113
u get da images
u uz somtin culled a brain
>>
>>101750113
>read the OP
>>
File: 0fxcx.jpg (1.31 MB, 1536x1536)
>>101750037
nice anon
>>
>>101750083
>>101750095
>>101750110
see >>101749866 >>101749793
>>
File: ComfyUI_Flux_4213.jpg (300 KB, 1360x1360)
>>101750130

thanks anon
i like the lighting and contrast in your gen, may i request the prompt?
>>
File: Flux_00020_.png (1.39 MB, 1024x1024)
>wake up
>sprint from bed to computer
>start comfyUI

time for another 16 hours of prooooompting.
>>
>>101750144
dalle word salad
>screencap 1987 random film scene amoeba close-up clown gold and smoke hammerhead stag evil whale insect lizard scifi mould fungus fluffy mottled and fluffy clown creature lemur by shinya tsukamoto grimy thymus chlorophyll
>>
>>101750128
>read the OP
oops sorry I'm retarded
>>
File: 2024-08-06_00237_.png (2.09 MB, 1280x1280)
>>101750103
sure in order
>in the style of Junji Ito, a ghostly nightmare cat staring with big eyes, the cat is eating a giant centipede, in the background a city of the dead
>same
>in the style of Junji Ito, a ghostly nightmare cat staring with big eyes, the cat is eating a giant centipede, in the background a city of the dead
>in the style of Junji Ito, a ghostly nightmare cat staring with big eyes down a hallway
>>
>>101750119
Flux is a 12B-parameter model (~24gb in fp16) that was released a few days ago, shitting all over Stable Diffusion.
It's intensive, but we have already worked out how to run it on less vram.
It has excellent prompt coherence, and the vast majority of images you are seeing in these threads are flux outputs.
Yes it is feasible, it will just take you about 2 minutes per image. There's also a fast model (schnell) which can gen a good image in 4 steps.
Links:
https://huggingface.co/black-forest-labs/FLUX.1-schnell
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux
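The "24gb" figure is just the published 12B parameter count times bytes per weight; a back-of-the-envelope sketch (weights only — the T5 text encoder and activations come on top, which is why fp8 plus offloading is what makes 12gb cards viable):

```python
# Rough weight-memory estimate for a 12B-parameter model like Flux
params = 12e9
gib = 1024 ** 3

fp16_gib = params * 2 / gib  # 2 bytes per fp16/bf16 weight
fp8_gib = params * 1 / gib   # 1 byte per fp8 weight

print(f"fp16 weights: {fp16_gib:.1f} GiB")  # ~22.4 GiB -> needs a 24gb card
print(f"fp8 weights:  {fp8_gib:.1f} GiB")   # ~11.2 GiB -> a 12gb card, barely
```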
>>
File: Flux_00019_.png (1.4 MB, 1024x1024)
>>
File: 2024CG~4.jpg (220 KB, 1512x1160)
>>
File: Flux_00083_.png (1.01 MB, 1024x1024)
>>
>>101750119
3060-12G here, it's rough but worth it

schnell @ fp16 | 4 steps | 1024x1024
first gen for a prompt: avg 118.9s (n=31)
repeat prompt only changing seed: avg 34.0s (n=224)

dev @ fp16 | 20 steps | 1024x1024
first gen for a prompt: avg 216.5s (n=24)
repeat prompt only changing seed: avg 121.1s (n=139)

>learn more
https://blackforestlabs.ai/announcements/
https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/
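A quick sanity check on those averages (the gap between first and repeat gens is roughly the constant text-encode/load cost, which is why repeat-seed gens are so much cheaper):

```python
# Throughput from the 3060-12G timings above (seconds per image)
schnell_first, schnell_repeat = 118.9, 34.0
dev_first, dev_repeat = 216.5, 121.1

# Once the prompt's text-encoder output is cached:
print(round(3600 / schnell_repeat))  # ~106 schnell gens/hour
print(round(3600 / dev_repeat))      # ~30 dev gens/hour

# First gen of each new prompt pays a roughly constant extra cost:
print(round(schnell_first - schnell_repeat, 1))  # 84.9 s
print(round(dev_first - dev_repeat, 1))          # 95.4 s
```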
>>
File: FD_00044_.png (1.34 MB, 1024x1024)
>>101750166
Thanks Anon.
>>
File: 2024-08-06_00238_.png (2.07 MB, 1280x1280)
>>101750236
yw
>>
>>101750177
>>101750202
Thanks, I'm reading it now but 2mins per image seems quite high. I need to master my prompt capacities again to avoid wasting a lot of time.
>>
File: Flux_00683_.png (1.33 MB, 1368x768)
>>101750146
....48 minutes a gen ?
>>
>>101750099
Forge is being updated atm, might get it even sooner
>>
File: Flux_00065_.png (1.19 MB, 1024x1024)
>>101750256
42 seconds a gen
>>
File: FD_00045_.png (2.16 MB, 1024x1536)
>>101750254
>I need to master my prompt capacities again to avoid wasting a lot of time
Here's the thing: it uses natural language prompting. You just describe the image exactly, and that's what it will shit out. You don't need to worry about stacking tags, it just does what you tell it.
>>
>>101749646
>pixart bigma
September, according to the dev on Discord
hopefully they release something in the 4b range, could be the sweet spot
>>101749646
>hunyuan finetune
likely not going to happen, and if it does no one will care
>>
File: 2024-08-06_00244_.png (2.72 MB, 1280x1280)
>>101750299
ya, but it seems kind of important to give it the style information early; in my trials, when I had it at the end, the style often only got a slight influence
>>
File: FD_00050_.png (152 KB, 256x384)
>>101750254
Oh, it's very coherent at low resolutions too. You can gen faster by dropping the res. Picrel
>>101750319
Oh right, I always do the style first so I never noticed.
>>
>>101749685
DOA
I had hopes
>>
File: 3780.png (2.89 MB, 1600x928)
>>
>>101749685
>>
File: FD_00377_.png (1.51 MB, 1024x1024)
>>
File: 2024-08-06_00247_.png (2.54 MB, 1280x1280)
>>
>>101750142
first things first, restart your computer if possible
next, go to the comfy github page, pick a previous (but still recent) version of comfy, then 'downgrade' to that and try again
>>
Is there any comparison between the different versions of flux including pro?
>>
>>101750493
from last thread >>101749013
>>
>>101750493
You can just try yourself if this is still up:
>>101748561
>>
>>101750493
desu it's not that much of a difference, and we can still improve schnell and dev with finetunes; it'll easily surpass pro level within a few months
>>
File: Flux_00817_.png (819 KB, 1024x768)
>>
>>101750532
It's honestly within the wiggle room of random seed quality. The only thing that's very obvious is text: Schnell often jumbles up text or ignores it altogether.
>>
File: out-0.png (1.66 MB, 1024x1024)
this 21yo girl with glasses at Denny's was so shy and awkward by herself I just had to go up and say hello to her. and to my surprise she's actually very pretty, she's a freckled ginger lass of good Irish stock. Her little round spectacles are arguably fashionable, And she was very sweet, she is the very idea of beauty, utter perfection instantiated in a human person, rare and precious, with gorgeous freckles
>>
File: ComfyUI_30856_.png (1.37 MB, 1024x1024)
>>
File: sample.jpg (375 KB, 1024x1024)
>>101750557
same with pro version
>>
>>101750532
Interesting, how about gen time between the two public models?
>>
PSA: don't write just `sign reading "..."` or `sign saying "..."`; EXPLICITLY use the word "text", e.g.:

a sign with the text "hello world"
>>
File: prosample.jpg (328 KB, 1344x768)
>>101750543
that was schnell, this is Pro, forgot seed
>>
>>101750568
schnell can work fine at 4-8 steps, while dev starts to work fine at 20 steps and up, so you easily get a 2x speed increase; desu I don't mind waiting a bit more on dev because the quality is there
>>
>>101750566
What about dev?
>>
>>101750572
I wonder if dev can look this good and coherent.
>>
File: Sigma_12446_.jpg (2.12 MB, 2688x1536)
>>101750566
Interesting prompt. Testing Sigma 2k
>>
File: dev.png (1.22 MB, 1024x1024)
>>101750557
>>101750566
>>101750585
dev

>>101750598
>Interesting prompt. Testing Sigma 2k
Ancient boomer prompt, found from a pyramid I believe
>>
I noticed that increasing the guidance scale doesn't always mean "better prompt adherence": 3.5 works fine, but once you go past 4 it starts losing details. That's weird when you're used to CFG kek
>>
File: 2024-08-06_00256_.png (2.69 MB, 1280x1280)
>>101750619
3.5 is optimal for most prompts; just for some prompts lower is better for style following.. I wish they would explain a bit further
>>
>>101750628
>just for some prompts lower is better for style following
indeed, you get more diversity of styles, but it also starts to not follow your prompts as well unfortunately
>>
Is img2img upscale and inpaint a thing with flux?
>>
File: 2024-08-06_00265_.png (3.1 MB, 1280x1280)
>>101750647
>img2img upscale
if you've got insane VRAM, maybe.. on 24gb VRAM the max you can reach is 1536x1536, so roughly 2.4MP.. and it's mostly good at that res, though some subjects produce visible rastering artifacts
>>101750647
>inpaint
not yet
>>
>>101750572
>>101750596
fwiw i was prompting to make it look VHS and laserdisc
>>
>>101750666
>if you got insane VRAM maybe.. on 24gb VRAM max you can reach is 1536x1536
not true, I do 2x upscales w/ sampling on a 4080
>>
>>101750146
Literally me. It's like new form of video games, wtf bros.
>>
File: 2024-08-06_00266_.png (2.74 MB, 1280x1280)
>>101750684
Workflow pls, for me it crashed on out of vram (and my system with it)
>>
File: ComfyUI_02390_.jpg (771 KB, 1792x2304)
>>101750684
proof:
>>
>>101750405
Damn that shit looks delicious
>>
File: ComfyUI_02391_.png (1.45 MB, 896x1152)
>>101750698
just use a tiling upscaler like UltimateSDUpscale. Tiling was the original trick to avoid going OOM, same reason tiledVAE is a thing
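The tiling trick is just covering the big image with overlapping crops that each fit in VRAM, then blending the seams. A minimal sketch of the tile placement (hypothetical helper, not UltimateSDUpscale's actual code):

```python
def tile_coords(size, tile, overlap):
    """Start offsets for covering `size` pixels with overlapping tiles,
    so each tile fits in VRAM and the seams can be blended away."""
    step = tile - overlap
    starts = list(range(0, max(size - tile, 0) + 1, step))
    if starts[-1] + tile < size:   # make sure the far edge is covered
        starts.append(size - tile)
    return starts

# e.g. a 2048px side processed as 1024px tiles with 128px overlap:
print(tile_coords(2048, 1024, 128))  # [0, 896, 1024]
```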
>>
Flux makes me realize it's over for local models.
Is there such a thing as privacy when renting A100 from the cloud?
>>
does upscaling flux in fp8 produce the same quality as fp16?
>>
>>101750770
>Flux makes me realize it's over for local models.
>Is there such a thing as privacy when renting A100 from the cloud?
what? you can easily run flux-dev on a 3090 locally
>>
>>101749685
Godspeed, brother.
>>
>>101749685
Can I have the config file ?
I have 48GB VRAM to waste in training for flux (if it work)
>>
>>101750837
It's just not the same. You haven't run Flux until you have run it on at least an A100. It's a completely different experience.
>>
>>101750837
>easily run on a 3090 locally
>"loading lowvram mode" warning every time

No.
>>
>>101750861
>You haven't run Flux until you have run it on at least an A100. It's a completely different experience.
can you elaborate on that?
>>
>>101750861
just buy an A100 then, cheapest ones are like 8000 dollars.
what are you? poor?
>>
>>101750875
it loads in normal mode on a 3090 if you use the fp8 version, and if you have multiple gpu cards you can prevent that entirely; no need to go overkill with an A100
https://new.reddit.com/r/StableDiffusion/comments/1el79h3/flux_can_be_run_on_a_multigpu_configuration/
>>
flux 8-bit runs easily on a 4060 Ti 16gb, resolutions up to 1500
>>
File: fpk001.jpg (1.66 MB, 1024x3036)
not bad
>>
>>101750895
>the fp8 version
Do you mean Flux fp8 version or weight_dtype fp8_e4m3fn?
>>
>>101750945
>Do you mean Flux fp8 version or weight_dtype fp8_e4m3fn?
that's the same thing lol
>>
File: 00013-1805926587.jpg (186 KB, 1416x1288)
Post a cool plush
>>
File: 70.jpg (656 KB, 1536x1368)
>>
File: 439605330.png (1.39 MB, 1480x784)
>>
File: 2024-08-06_00276_.jpg (849 KB, 1536x2048)
>>101750757
yay thx that works! I was stupidly trying to just upscale, or img2img with an SDXL model.. it takes quite some time to upscale, but I guess that's the nature of genning such resolutions
>>
File: ComfyUI_02400_.jpg (731 KB, 1792x2304)
>>101751065
takes me like 3-4mins to gen the base and 2x upscale, using fp8
>>
File: fp005.jpg (402 KB, 1344x768)
>>
File: ComfyUI_00254_.png (515 KB, 768x1024)
Trump is the best waifu
>>
File: ComfyUI_Flux_4307.jpg (176 KB, 1024x1024)
>>101750987
>>
File: fp006.jpg (341 KB, 1344x768)
>>
File: 2024-08-06_00279_.jpg (1.83 MB, 2560x3072)
>>101751107
what tile size are you using? with 512x512 it took ages.. running 1024x1024 now and this one took 6 minutes for 2560x3072
>>
File: ComfyUI_Flux_4283.jpg (211 KB, 1024x1024)
https://x.com/ostrisai/status/1820829417595076623
>>
File: 61d8a50ef60d.jpg (131 KB, 1024x1024)
>>
>>101751155
Kek
>>
>>101751155
>fp8 lora training on a 3090
that should be the norm imo, fp8 gives almost identical output to fp16, so it should be cool to make the model more used to fp8 precision weights
>>
>>101751126
Nice
>>
File: 2024-08-06_00250_.png (2.82 MB, 1280x1280)
>>101751155
nice! The results are good to, dat ikea instruction set .. okay the door is open now
>>
>>101751229
lets see if it as good on fp8
>>
>>101751243
it should, because it's a fp8 lora training, so for the fp8 base model that's exactly what it wants
>>
File: ComfyUI_Flux_41.png (1.23 MB, 1344x768)
>check my ssd usage
>over 2TB written in 3 days
Being a vramlet is suffering
>>
File: opicc7w4i0hd1.jpg (127 KB, 878x868)
Ok now that's impressive, reflections are usually hard for image models, but flux nailed that shit as well
>>
File: fp007.jpg (373 KB, 1344x768)
>>
>>101750770
>>101750861
>Tortanic doomer retard enters the thread
>>
I'm going to kill myself
>>
>>101751326
DO A FLIP
https://www.youtube.com/watch?v=QibeKQ9W1UU
>>
>>101751258
that's not how it works
>>
File: Flux_00919_.png (1.47 MB, 1024x1024)
>>
>>101751275
imagine also being a ramlet. sad!
>>
>>101751349
it's more unnatural for a fp16 to have some of its weights overwritten by fp8 weights
>>
File: 2024-08-06_00281_.jpg (1.48 MB, 2560x2560)
>>
File: ComfyUI_02405_.jpg (713 KB, 1792x2304)
>>
File: SD3_130624_00003_.png (2.2 MB, 1280x1024)
hello
due to some silliness, i ended up deleting my sd-webui-forge venv, which was working fine (the so called "old forge" version: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/849 )
so now i'm going to just do a fresh install but should i go with regular a1111, forge, reforge or something else? i'm mainly looking for something that is actively updated to support latest models/etc (as far as webui can anyway),
>inb4 comfyui
i already have comfy set up and do use it for testing, but i'd rather use forge/something similar even if it doesnt have all the latest support (and i dont really like comfyui)

posting old sd3 gen
>>
>>101751399
>and i dont really like comfyui
I don't either, but it's not like we have much choice to run flux :(
>>
File: demos.jpg (720 KB, 1125x1828)
Bold of Lumina to release its model during the flux honeymoon period
https://github.com/Alpha-VLLM/Lumina-mGPT
>>
>>101751407
eh, flux is nice-ish looking but even on a 3090 24gb seems to be a pain in the ass, not to mention i'd have to clear a lot of space on my drive to fit models/etc
>>
>>101751432
>eh, flux is nice-ish looking but even on a 3090 24gb seems to be a pain in the ass
go for the fp8 anon, the difference is barely noticeable
https://imgsli.com/Mjg0MDEy
>>
File: fp009.jpg (346 KB, 1344x768)
>>
File: file.png (2.13 MB, 896x1152)
I'm hungry, give me images to make me go mad
>>
>>101751449
which one uses less memory, e4 or e5?
>>
>>101751399
It's ok to dislike comfyui because it is objectively shit.
>>
>>101751399
Forge is getting some big update at the moment, no clue when A1111 updates. I've been using online generators because I'm not gonna install comfy
>>
>>101751477
I think it's e5, but e4 seems to be the one with the better output quality
>>
File: fp010.jpg (364 KB, 1344x768)
>>
>>101751492
could you add a e4 and e5 comparison?
>>
So I notice that when making gens my vram usage hardly changes at all, just a few hundred megs when using comfy+flux. Unlike A1111+SDXL.
Is this a Flux or Comfy thing?
>>
File: ComfyUI_02408_.jpg (500 KB, 1792x2304)
>>
>>101751449
well i might try it, but the question is more about which webui i should install
>>101751482
i dont disagree
>>101751488
i saw that forge was doing that but if the previous update is anything to go by, they're basically making forge an "experimental" webui (whatever that means) so probably not very stable?
i might just go back to the "old forge" but it had some issue with upscaler model types which is what I was playing with
>>
>>101751501
Sure, do you have any prompt in mind?
>>
File: ComfyUI_00027_.png (866 KB, 1024x768)
Shit works pretty well, surprisingly fast gens for such a large model.
>A view of the inside of a huge vast factory which manufactures hamburgers. Feature hamburgers on a conveyor belt with an inspector checking them with a magnifying glass.
>>
>>101751514
could be the one from here>>101751449
>>
>>101751528
>surprisingly fast gens for such a large model.
that's because flux and shnell are distilled, if it was a "normal" model it would be way slower yeah
>>
File: fp011.jpg (411 KB, 1344x768)
>>
>>101751535
yes, its a me
>>
>>101751541
Still, I'm getting gens on my 3090 with the 8fp model that are not that much slower then prompting SDXL. Its kind of wild.
>>
>>101751511
Yeah, we'll see what the Forge update is about once he releases it. I just think it will actually be something usable, since he is asking what people want updated with controlnet and such
>>
>>101751503
>Is this a Flux or Comfy thing?
It's a Comfy thing, A1111 is way less optimised, that's why people are willing to go for that hellish spaghetti software shit, its backend is really really good
>>
File: 2024-08-06_00288_.jpg (452 KB, 2048x2048)
>>
File: fp012.jpg (367 KB, 1344x768)
>>
File: 85.png (2.24 MB, 1480x784)
>>
>>101751551
true true, flux checked a lot of my boxes, that's exactly what I wanted when I was waiting for a DiT savior. The only thing remaining is more trivia (whether it's about characters, celebrities or artist styles) and this model will be close to perfection; waiting for the finetunes to finish the job
>>
File: ComfyUI_00035_.png (804 KB, 1024x768)
>>101751575
Interesting. I just wish comfy's UI wasn't a giant pile of shit. Optimizations or not, I'll be hopping off the second webui supports flux.
>>
>>101751615
>I'll be hopping off the second webui supports flux.
I don't think I'll be able to do the same; comfyUI has multi gpu support, and A1111 and forge don't, that's a huge deal breaker for me
https://new.reddit.com/r/StableDiffusion/comments/1el79h3
>>
File: fp013.jpg (345 KB, 1344x768)
>>
>>101751535
Ok, so here's the comparison between fp16 and fp8_e4m3fn
https://imgsli.com/Mjg0MDEy

And the comparison between fp8_e4m3fn and fp8_e5m2
https://imgsli.com/Mjg0OTU4

I think e5 is worse, I wouldn't go for that one
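Makes sense: the e4m3/e5m2 tradeoff is exponent bits vs mantissa bits. Both are 8 bits per weight, but e4m3 spends more on precision and e5m2 more on dynamic range, which fits e4 looking better for weights that mostly sit near 1.0. A toy rounding sketch (not the exact OCP fp8 spec; no saturation or NaN handling):

```python
import math

def quantize(x, exp_bits, man_bits):
    """Round x to the nearest value representable in a minifloat with
    the given exponent/mantissa widths (sign implied, overflow ignored)."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    x = abs(x)
    bias = 2 ** (exp_bits - 1) - 1
    e = max(math.floor(math.log2(x)), 1 - bias)  # clamp into subnormal range
    scale = 2.0 ** (e - man_bits)                # spacing of representable values
    return sign * round(x / scale) * scale

# e4m3 (4 exponent / 3 mantissa bits) vs e5m2 (5 / 2) on a weight near 1.0:
x = 1.17
print(abs(quantize(x, 4, 3) - x))  # smaller error: finer spacing
print(abs(quantize(x, 5, 2) - x))  # larger error: coarser spacing, more range
```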
>>
File: file.png (264 KB, 883x1695)
>comfy bad
Are you intimidated by the spaghetti everywhere? Because it's an easily fixable problem.
>>
>>101751789
>Because it's an easily fixable problem.
the "easily fixable problem" is to use A1111 and run normal UI frontends. There's a reason spaghetti UIs aren't popular at all anon, no one wants that shit
>>
File: 2024-08-06_00294_.png (1.35 MB, 1024x1024)
>>
>>101751789
The developer being a shittalking faggot can't be easily fixed.
>>
File: ComfyUI_00056_.png (762 KB, 1024x768)
>>101751789
I enjoy it for stuff like my DAW, but I don't enjoy the workflow for image generation desu
>>
>>101751824
The problem with Comfy is he's stubborn, so when he's told the right way he stamps his feet and makes it shit instead. The same thing happened with Blender for years because the faggots wouldn't make left click the default.
>>
File: file.png (2.12 MB, 1536x640)
>>101751834
>>
someone you like talks shit:
>holy heckin BASED!

someone you dislike talks shit:
>shittalking faggot REEE

this is how zoomer brain actually works
>>
File: ComfyUI_00061_.png (863 KB, 1024x768)
>>
File: 2024-08-06_00300_.png (1.37 MB, 1024x1024)
>>
>>101751881
no, shittalking is cool... when it's based on truth, if you just shittalk and say nonsense you're just acting like a clown
>>
File: 8562200.png (963 KB, 768x768)
>>
File: Flux_01040_.png (888 KB, 1344x768)
>>
>>101751789
How did you make reroutes like this?
>>
File: 826263.png (1.31 MB, 1024x768)
>it recognizes the word shota
hm
>>
File: Comparaison.jpg (865 KB, 2693x1891)
For those wondering if loading the fp16 model in fp8_e4 mode gives the same output as loading a fp8_e4 model in fp8_e4 mode, the answer is yes, they give exactly the same result
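That's expected: casting to fp8 is a rounding step, and rounding is idempotent, so "cast the fp16 checkpoint at load time" and "load an already-cast fp8 checkpoint" land on identical weights. A toy sketch, with a fixed grid standing in for the fp8 value set:

```python
def round_to_grid(x, step=0.125):
    """Toy stand-in for fp8 casting: snap to the nearest representable value."""
    return round(x / step) * step

weights_fp16 = [0.3141, -1.007, 0.06]
# Path A: load the fp16 checkpoint with weight_dtype fp8 (cast on load)
path_a = [round_to_grid(w) for w in weights_fp16]
# Path B: save a pre-cast fp8 checkpoint, then load it (second cast is a no-op)
path_b = [round_to_grid(round_to_grid(w)) for w in weights_fp16]
print(path_a == path_b)  # True: both paths give the same weights
```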
>>
File: 57414173.png (1.2 MB, 504x1328)
>>
File: ComfyUI_Flux_66.png (1.61 MB, 1344x768)
So what guidance do you prefer to use? I feel like 3.0 is the sweet spot between good stylization and coherency. Anything above seems to make an image too distilled and boring, anything below often messes up the text and produces visible artifacts.
>>
File: Flux_00019_.png (1.45 MB, 1024x1024)
>>101751471
>>
Did anyone discover how to get sharp focused background on Flux?
>>
File: 903420.png (816 KB, 504x1328)
>>
>>101752004
I won't forgive them for scrapping Pepe
>>
>>101752016
you could use negative prompts
https://reddit.com/r/StableDiffusion/comments/1el3tnq/want_to_use_negative_prompts_with_cfg_1_on_flux/
>>
File: 451905.png (1.92 MB, 1200x776)
>>
>>101752016
Was struggling with this as well. I could get it to place focus on one object if I described "yada yada in full focus of the camera", but no matter how I phrased it, almost everything else stays out of focus. It really likes shallow focal depths.
>>
>>101751361
that's not how it works
>>
File: 231123123213.jpg (102 KB, 1158x751)
>>101752029
>https://reddit.com/r/StableDiffusion/comments/1el3tnq/want_to_use_negative_prompts_with_cfg_1_on_flux/
>>
>>101752029
Comments are literally saying it doesn't work for shitty dof. Hell the example result itself is crap.
>>
File: 77423336396.png (2.2 MB, 1568x776)
AI food is peak
>>
>>101752058
We need a clear background LoRA even before porn ones.
>>
File: fp014.jpg (333 KB, 1344x768)
>>
File: midjourney.png (2.31 MB, 816x1456)
Flux has a major weak point with styles, it tends to go into the generic clean brushed look and lacks the texture and art styles of Midjourney. I've been trying to recreate something like this image in Flux and it gives me something that looks like a render.
>>
>>101752115
yeah, that's its biggest weakness, it doesn't know many styles and seems to be very biased toward one specific generic style, like a finetune
>>
File: ComfyUI_00081_.png (1.31 MB, 1024x1024)
>>
>>101752149
All the run at home models are basically focussed on ‘a style’
>>
File: 2024-08-06_00307_.png (1.56 MB, 1024x1024)
reflections give you a whole canvas to gen something in flux
>>
>>101752175
that's not true for base models like SD1.5, that one had a lot of trivia in it and was willing to give any style you wanted
>>
File: fs_0189.jpg (70 KB, 1024x784)
>>
>>101752190
SD 1.5 had lots of trivia but little comprehension, it's the most gacha of the models, type in a string of random shit and hope you get something interesting.
>>
>>101752209
desu I liked when it was like that, the model shouldn't assume any style if you don't ask for it
>>
>>101752228
well you might like my Pixart 1.3B model then, because I have some of that LAION flair in my dataset, especially since it's hard to get the AI captioners to do style keywords, so things like "impasto" are inferred from raw search titles.
>>
File: 2024-08-06_00311_.png (1.63 MB, 1024x1024)
I love the movie ketchup blood FLUX creates 10/10
>>
>>101752248
I think you'd better continue the training of flux with your data, flux has insane potential, it just needs someone to finish the work with more trivia added into it
>>
>>101752209
I wish we'd go back to 1.5 times.... god it was so comfy
>>
>>101752115
Saddest thing about it is that some people deny this. Literally telling people that it is a skill issue
>>
>>101752297
I don't have the hardware to train Flux and I'm not going to piss away money renting, and I would like a model I can throw behind a paywall and monetize if I want.

>>101752314
Just train 600m Pixart on search engine titles, would only take a month to get old SD 1.5 with 2K capabilities.
>>
>>101752058
>>101752069
I really believe that flux has negatives baked in it, so it's not really possible to use negatives at all
>>
>>101752320
No one is denying it, but we're confident we could bring those styles back with some finetunes; it's way easier to add trivia to a model than to make it good at anatomy, lighting, prompt understanding... basically flux did all the hard work and it's up to us to finish the job
>>
>>101752329
I don't know how that makes sense, theoretically with negatives it should just be (positive_prompt - negative_prompt) = prompt
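for reference, that subtraction is roughly what classifier-free guidance does: the sampler runs the model on both conditionings and pushes the prediction along the (positive − negative) direction. a toy sketch with plain Python lists standing in for the noise-prediction tensors (the numbers and names here are made up, not any real pipeline's API):

```python
# Minimal sketch of classifier-free guidance with a negative prompt.
# eps_pos / eps_neg stand in for the model's noise predictions under the
# positive and negative conditioning; in a real sampler these are tensors.

def cfg(eps_pos, eps_neg, scale):
    """Guided prediction: eps_neg + scale * (eps_pos - eps_neg)."""
    return [n + scale * (p - n) for p, n in zip(eps_pos, eps_neg)]

eps_pos = [0.2, -0.5, 1.0]   # e.g. "a castle"
eps_neg = [0.1, -0.1, 0.4]   # e.g. "blurry"
guided = cfg(eps_pos, eps_neg, scale=3.5)
print(guided)
```

in Flux's distilled checkpoints the guidance pass is baked into the weights, which would explain why swapping in a negative prompt does so little there.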
>>
File: 374857931.png (1.25 MB, 768x1344)
>>
File: fp015.jpg (345 KB, 1344x768)
>>
File: 2024-08-06_00297_.png (1.28 MB, 1024x1024)
>>101752376
cuuuuuuuuuute!
>>
>>101747335
>>101747332
>>101747293
None of these are aesthetically appealing, and no, I'm not trolling. They all have an unnatural "cleanness" that makes them uncanny and doesn't suit the stylistic images at all. Every flux image has a weird plastic looking finish that makes it sterile and ugly. Not sure if you genuinely can't see that or have been looking at flux gens for so long you forgot what normal images look like. Go look at real art and photos for a hot second then look at your gens, maybe you will start to see it. The only pro of flux right now is prompt comprehension, it's aesthetically foul.
>>
File: 2024-08-06_00298_.png (1.13 MB, 1024x1024)
>>
>>101752411
keep coping, chang
>>
>>101752350
I know, maybe flux doesn't understand the concept of blur, that's why it can't be added to the negative prompt?
>>
>>101752440
he's not wrong tho, flux has no SOVL. maybe with loras
>>
>>101752411
airbrushed is the term i use, and you're right.
>>
>>101752462
negatives just don't work at all with flux, it fucks with the image
I don't think it was trained with conditioning dropout
>>
>>101752411
the problem, as always, is that it's overtuned on aesthetic images, but the flaws are what give images their appeal
>>
>>101752486
>negatives just don't work at all with flux
it does a bit though, look at the example it actually removed the cars >>101752029
>>
File: image.jpg (213 KB, 1024x1024)
GET DOWN MR. FORMER PRESIDENT
>>
>>101752490
>>101752411
what you're describing is a finetune, we didn't get a real base model, but something that is being finetuned to look good for normies prompts
>>
File: kerm.jpg (377 KB, 1344x768)
>>101752468
ong
>>
File: 1343214.png (1004 KB, 768x768)
"omg flux no nipples"
actually: it even makes nipples on the fucking back lol
>>
>>101752545
and back thumbs
>>
>>101752545
if your nipples look like that see a doctor
>>
>>101752529
anon, I don't know if you understand this yet, but very few people are going to front the money to finetune this, and you know who I don't trust to finetune? retards like Dreamshaper, makers of the biggest slop

Unless someone figures out how to full finetune it on a 4090 it's not going to happen. Maybe you'll get some low quality Loras.
>>
>>101752561
>anon I don't know if you understand this yet but very few people are going to front the money to finetune this
how? it only costs a few dollars per hour on a runpod, that's what all the /lmg/ finetuners are doing because they're used to giant models (we're crying about how big a 12B model is while the llm fags are dealing with fucking Llama3-70B)
>>
>>101752550
That's kinda just how thumbs are
>>
>>101752576
>somebody except me will do it
Anon, SDXL took thousands of hours to finetune and it's a tenth the size.
>>
>>101752589
>Anon, SDXL took thousands of hours to finetune
the fuck are you talking about? do you mean pretraining or a finetune? because a regular finetune on SDXL never takes this long
>>
>>101752589
source?
>>
>>101752616
Dreamshaper didn't cost $200 to make.
>>
File: 00000-3740610179.jpg (569 KB, 1411x1890)
>>
File: b5248ce1ccfb.jpg (197 KB, 1024x1024)
>>
>>101752623
source?
>>
>>101752607
I'm eagerly waiting for your finetune because it's cheap and easy just like you.
>>
File: ComfyUI_Flux_4083.jpg (167 KB, 1024x1024)
https://x.com/bdsqlsz/status/1820845650227122306
>>
File: IDrinkYourTears.jpg (106 KB, 1024x1024)
>>101752635
you seem to know a lot about the time it actually takes to do a finetune, are you Lykon by chance?
https://xcancel.com/Konan92_AI/status/1820518655450562588#m
>>
File: 84cce2db8e77.jpg (158 KB, 1024x1024)
>>
>>101752645
>fp8 training
good bye text
>>
>>101752668
kek'ed
>>
>>101752652
yes anon, it turns out finetuning isn't as fast and easy as you handwave it to be, again, I look forward to your Flux finetune, you obviously are an expert. Should be trivial, just $100
>>
>>101752679
the text works fine on flux fp8 though, but yeah, T5 fp8 is retarded as fuck
>>
File: 00001-3740610179.jpg (553 KB, 1411x1890)
>>
>>101752698
source?
>>
File: 2024-08-06_00320_.jpg (821 KB, 1536x2560)
>>
>>101752411
I agree. A lot of synthetic data there.
Hopefully it's possible to resolve with loras/finetunes without fucking up its comprehension too much. But I wish they did better styles from the model itself.
>>
>>101752718
actual experience fine tuning multiple architectures
>>
>>101752734
larping
>>
>>101752698
>yes anon
can you give more details on how much it cost to finetune SDXL for you for example?
>>
>>101752668
Can you set the card on fire? I can't make it gen the costume..
>>
>>101752702
there isn't that big a difference between fp8 and fp16 T5
>>
>>101752772
I'm excited to see your cheap and easy Flux finetune, sounds like it's barely an inconvenience for you to do, so I'll be delighted for you to prove me wrong and deliver something end of next week, that's like 300 hours, should be easy
>>
>>101752791
can you give more details on how much it cost to finetune SDXL for you for example?
>>
>>101752791
why are you retarded?
>>
>>101752812
I'm not the one handwaving the literal costs of finetuning a 12B model and saying it's going to be cheap and easy
Flux will have little to no full finetunes and any that happen won't be because of you.
>>
File: ComfyUI_00116_.png (1.25 MB, 1280x1024)
Cookin muh GPU
>>
File: 00002-3740610179.jpg (508 KB, 1411x1890)
>>
>>101752718
nta but as a tourist from /h/:
>pony finetune cost upwards of 30k
>shitty cascade 1b param finetune already blew through 12k on online training, still not fully trained
>furfags asking for min of 1k just to do a prelim basic test to 'check out' the flux arch

finetunes take a lot more training hours than you're giving them credit for, and you need to pay more than the 0.36 cents per hour spot prices if you're renting, because you can't have them booting you off mid-epoch. it is not at all comparable to llms.
>>
>>101752827
but I didn't say anything about that, retard, nor did the comment you replied to
>>
>>101752841
yes you did, you implied it won't cost thousands of dollars to finetune Flux, it's going to be thousands of dollars minimum to finetune Flux
>>
>>101752851
You're so retarded you can't tell multiple people responded to you?
>>
File: DontDoIt.png (2.14 MB, 3435x1654)
>>101752788
>there isn't that big a difference between fp8 and fp16 T5
Absolute delusion
https://files.catbox.moe/uqsii5.png
https://files.catbox.moe/olsf64.png
>>
File: 2024-08-06_00335_.jpg (508 KB, 2560x1440)
>>
>>101752864
Everyone who says Flux won't cost thousands of dollars to train is retarded. SDXL only got trained because people could dick around for free on consumer cards. Just finding the hyperparameters to even effectively train Flux will cost thousands. You dumb niggers don't realize how much of training is just watching things spin off to oblivion; you retards don't even realize how many people figured shit out so that you can just type in magic numbers from a guide for your overbaked lora.
>>
File: 00003-3740610179.jpg (498 KB, 1411x1890)
>>
>>101752889
the retard is raging out
>>
>>101752901
>oh no he said nigger he must be mad
Zoomers are very low T
>>
>>101752922
the retard is grasping at straws
>>
>>101752889
pixart and hyuanchingchong lost, get over it, chang
>>
it's time for quantization of image gen models if LLMs can do insane shit like this

https://x.com/rohanpaul_ai/status/1820835882598801643
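for anyone wondering what quantization actually does to the weights, here's a toy symmetric int8 round-trip in plain Python — not any real library's scheme, just the rounding error that the fancy LLM quant methods are fighting:

```python
# Toy symmetric int8 quantization of one weight row: scale to [-127, 127],
# round to integers (what would be stored), then dequantize back to floats.
# Assumes the row isn't all zeros. The residual is the precision you lose.

def quant_dequant_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # int8 values that get stored
    return [v * scale for v in q]             # dequantized approximation

w = [0.031, -0.74, 0.002, 1.27]
w_hat = quant_dequant_int8(w)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(w_hat, err)  # per-weight error bounded by scale/2 = 0.005 here
```

real schemes (GPTQ, AWQ, etc.) mostly differ in how they pick scales per group and which weights they protect, but this rounding step is the core of it.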
>>
>>101752941
I'm simply stating why you won't see any finetunes for Flux and how much of the SD/XL ecosystem relied on NEETs training on laptop GPUs.
>>
>>101752889
screencapped so i can watch them all cry in 3 months when still no finetunes are out. people think this shit just magically spawns out of nowhere, cryptojeet logic.
>>
File: 2024-08-06_00337_.jpg (407 KB, 2560x1440)
>>
>>101752941
>SAI cuck trying to pull a psy-op
Just take the L Prajeet
>>
>>101752876
>not the same seed
anon, are you mentally deficient?
>>
>>101752889
>Everyone who says Flux won't cost thousands of dollars to train are retarded.
why would it cost that much? it's only a few dollars per hour on a runpod
>>
>>101752962
flux isn't SAI, lmao. idiot
>>
Any reason to use other samplers over Euler for Flux?
>>
>>101752972
Lmao dumbass, back to matrix chat
>>
>>101752970
I'm going to be very generous and guess that finetuning on a single H100 runs at about 10s/iteration with a tiny batch size.
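running the back-of-envelope on that (10 s/iteration and ~$3/hr for a rented H100 are both guesses, not benchmarks):

```python
# Rough finetune cost estimate. Every input here is an assumption.
def finetune_cost(images, epochs, batch_size, sec_per_iter, usd_per_hour):
    iters = (images // batch_size) * epochs
    hours = iters * sec_per_iter / 3600
    return hours, hours * usd_per_hour

hours, usd = finetune_cost(images=1_000_000, epochs=2, batch_size=8,
                           sec_per_iter=10, usd_per_hour=3.0)
print(f"{hours:.0f} GPU-hours, ${usd:,.0f}")  # 694 GPU-hours, $2,083
```

even with generous numbers, a couple of epochs over a million images lands in the low thousands of dollars, before counting failed runs and hyperparameter sweeps.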
>>
>>101751777
thanks, yeah that looks to be the case
>>
File: ComfyUI_02451_.jpg (584 KB, 2304x1792)
>>
>>101752986
you sound really emotional over this LOL
-10 social credit, you fail glorious leader with shameful dispray
>>
>>101752836
>pony finetune cost upwards of 30k

That's the initial gpu cost, right?
>>
File: fp016.jpg (450 KB, 1344x768)
>>
does ComfyUI have something like A1111's seed variation?
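nta, but A1111's variation seed is just a spherical interpolation between the noise generated from two seeds, so any Comfy node that slerps two noise tensors gets you the same effect. the math, sketched on plain Python lists (real implementations do this on the latent tensor):

```python
import math

# Spherical interpolation (slerp) between two noise vectors, the trick
# behind A1111's "variation seed": t=0 gives the base seed's noise,
# t=1 gives the variation seed's noise, values in between blend them.
def slerp(t, v0, v1):
    dot = sum(a * b for a, b in zip(v0, v1))
    norm = math.sqrt(sum(a * a for a in v0)) * math.sqrt(sum(b * b for b in v1))
    omega = math.acos(max(-1.0, min(1.0, dot / norm)))
    if omega < 1e-6:                 # nearly parallel: plain lerp is fine
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.3, [1.0, 0.0], [0.0, 1.0]))
```

t plays the role of A1111's variation strength; in ComfyUI this usually lives in custom noise nodes rather than core, so check whichever node pack you're running.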
>>
>>101752963
doesn't matter, T5 fp16 would never think Michael Jordan is a white dude, that's a big reason to never use T5 fp8 retard
>>
File: Flux_00119_.png (1.27 MB, 1024x1024)
>>
>>101753084
never uh? wow, so you tried all the seeds, you fucking retard?
>>
File: 2024-08-06_00341_.png (3.47 MB, 2560x1440)
brutalist architecture on flux is the bomb
>>
>>101753096
then why are you asking me to compare with the "same seed"? you would simply say "ok for that seed it doesn't work but that doesn't mean it's the case for every seed" you retarded nigger
>>
>>101753018
>initial gpu cost
lol
Buying the hardware used would cost 10x more at least
>>
>>101753112
you... really think that's not something you should have inferred by yourself from me pointing out the seeds are not the same?
Anon, I'm serious now, not name calling, I think you should see a doctor about your cognitive decline.
>>
>>101753018
in his case I believe so, but if you consider the amount of hours that went into even his shitty, high-LR burnt training, I doubt he saved much money buying his own.. you also have to consider he clearly didn't test many settings and got 'lucky' on his bake (even then it's burnt to hell); testing for optimal training settings will eat up a lot more than you'd expect, and sometimes you won't see that something is wrong until you're halfway through the bake (and by then it's too late - this is why the WD finetuner troon has to keep throwing out his shit and has nothing to show despite half a year of work). a decently sized dataset on SDXL, on the best online rented hardware (with no interruptions), will shit out epochs at maybe four days each. then, in the case of SDXL, it takes iirc a bare minimum of 20-30 epochs to get a fully functional finetune... IF you found optimal settings. you could sink 12k in, get to epoch 10 and suddenly realize some setting fucked everything up and you have to start over. since flux is that much bigger than SDXL, I can only imagine the cost increases exponentially, since SDXL could at least be finetuned on some jerryrigged 3090s to test settings
>>
>>101753151
the fuck is this schizo talking about?
>>
>>101753171
anon... none of this is hard to follow, ask other anons.
I worry for you, anon.
>>
>>101753157
the nice thing about owning the hardware is you're no longer burning money (just time) with mistakes
>>
>>101753181
>I worry for you, anon.
you should worry for yourself instead, you're talking pure nonsense
>>
Another one ready to go...
>>101753017
>>101753017
>>101753017
>>
>>101753182
I'm also shocked there aren't more millionaire faggots doing weird shit with A100 builds. Really not that big of an investment to get an 8xA100 rig and start doing weird shit.
>>
>>101753190
anon, I pointed out the seeds are different
"ok for that seed it doesn't work but that doesn't mean it's the case for every seed" is what you should have thought then
what are you not understanding? I want to help you.
>>
>>101753182
agreed, but in the case of the pony faggot its wasted since he seems pretty clueless and doesn't do much settings experimentation
>>101753204
presumably because they're millionaire faggots who would only take up the task for the sake of profit and image AI hasn't proven profitable without potential lawsuit/legal risk
>>
>>101753217
this is easily resolved by doing an f8 grid with that prompt
>>
>>101753217
it's not that hard to understand: you claimed they don't have much difference, so prove it, you're the one with the burden of proof, you know that, right?
https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)
>>
>>101753230
in that case use fp10 idiot
>>
>>101753228
I dunno, they waste money on $200k Ferraris, what's an AI hobby rig? I mean I dick around but I'm no millionaire.
>>
>>101753230
that's beside the point, anon... please anon, seek help
>>101753242
you realize he claimed there was a big difference first, right? then tried to prove it but used different seeds, making his test null and void
why are you not asking him to prove there is a big difference?
>>
>>101753267
I'm not even him, stop lashing out.
>>
>>101751789
>SDXL resolutions
Anon, flux can work with any resolution as long as it's below 2k
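if you're feeding it odd sizes, it's still worth snapping to the latent grid; a sketch assuming a multiple-of-16 pixel grid and a ~2MP ceiling (both assumptions — check your own workflow's limits):

```python
# Snap an arbitrary resolution to a diffusion-friendly one: scale down to
# a pixel budget, then round each side to the latent grid (assumed 16 px).
def snap_resolution(w, h, step=16, max_pixels=2_048_576):
    scale = min(1.0, (max_pixels / (w * h)) ** 0.5)
    snap = lambda x: max(step, int(x * scale) // step * step)
    return snap(w), snap(h)

print(snap_resolution(3000, 2000))  # keeps aspect ratio, fits the budget
```

anything already under the budget and on the grid passes through unchanged.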
>>
>>101753275
okay but seek help anyway, you clearly need it
>>
>>101753267
usually when someone claims something, your job is to prove them wrong with a counterargument, not by saying "nuh uh", please anon, seek help...
>>
>>101753286
that's exactly what I did by pointing out he used different seeds you retard
>>
>>101753204
Look into the cost of an 8xA100 rig and the cost to run it, then compare that to the rental price; now you know why. Purchasing hardware for training literally makes no sense. You're not AWS, you're not amortizing it over 10 years. You want results and you want them sooner rather than later.
Inference is where it makes sense, because there you'd be buying used 3090s.
>>
>>101753314
are you mentally deficient? saying "that's not the same seed" doesn't prove that T5 fp8 and T5 fp16 are "not so different", you should consult a doctor, you sound mentally ill, I worry for you anon
>>
>>101753331
I don't have to prove that you fucking idiot, I just have to show his examples of it having big differences are bad
grow a brain
>>
>>101753347
>I don't have to prove that you fucking idiot,
oh yes you do, if you claim something as insane as "T5 fp8 and T5 fp16 are almost equivalent", the least you can do is back it up, or else you just sound like a clown
>>
>>101753374
whose quote is that, you worm brain
I said "there isn't that big a difference" in response to "T5 fp8 is retarded as fuck"
It is infinitely easier for you to show fp8 is retarded as fuck you fucking retard
kill yourself
>>
>>101751977
Thanks anon, I'll just keep the fp16 and switch using the weight, this will save space.
>>
>>101751925
i think those are Rgthree nodes
>>
>>101753102
nice


