/g/ - Technology


Thread archived.
File: tmp.jpg (1.25 MB, 3264x3264)
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101988457

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/g/sdg
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
File: FD_00092_.png (822 KB, 1024x1024)
I should go back to my old gens more often. I have them all saved and never go back to them. I genned some cool shit.
>>
File: ifx152.png (1.33 MB, 1024x1024)
>>
File: FD_00085_.png (1.48 MB, 1024x1024)
>>
File: ComfyUI_00938_.png (1.14 MB, 720x1280)
Combining the input with joycaption gives the most impressive results.
>>
File: ComfyUI_32547_.png (1.23 MB, 1280x720)
>A cinematic photo of a girl, she is cosplaying as Sailor Moon, including the Sailor Moon wig with long blonde twintails, she's attending an anime convention, she's surrounded by a crowd of illiterate Appalachian redneck mouth breathing cocksuckers
>>
File: FD_00174_.png (1.24 MB, 1024x1024)
>>101990275
Do you have that in a workflow or are you just doing it from huggingface?
>>
File: 2024-08-20_00210_.jpg (526 KB, 2560x1440)
>>
File: joycap.jpg (284 KB, 1681x881)
>>101990275
I used huggingface for that gen. I'm waiting for the models to finish downloading before I try it locally, but the workflow should be something like this I think
>>
>>101990048
This is cool. Good gen, Anon.
>>
>>101990291
>>101990291
>>
>>101990354
Ah yup I do this a lot. I often do a single gen, take that into Joy then take the Joy caption back into Flux and get a significantly better result.
>>
File: ComfyUI_32549_.png (1.38 MB, 1280x720)
>>
For those who wonder if switching from clip_l to VIT L is a good choice, here's a comparison:
>A realistic photo of a woman cosplaying as Sailor Moon is hugging a RTX 3090, a white subtitle text can be seen on the bottom of the image, it is written: "I knew I'd made the right choice in buying you"
https://imgsli.com/Mjg5Mzk0
>>
File: 2024-08-20_00213_.jpg (489 KB, 2560x1440)
>>101990357
thank you, I love my futuristic bullshit
>>
>>101990392
They're the same picture.
>>
>>101990411
if you consider an asian woman to be the same as a white woman then maybe yes it's the same picture I guess
>>
File: ComfyUI_32551_.png (1.38 MB, 1280x720)
>>
>>101990421
you really think one is more asian than the other?
>>
File: ComfyUI_32553_.png (1.4 MB, 1280x720)
>>
File: 2024-08-20_00219_.jpg (528 KB, 2560x1440)
I wish there was a Simon Stalenhag lora for flux already.
>>
>>101990471
nta but i think right looks significantly more asian. left looks like a white woman
>>
File: FD_00545_.png (927 KB, 1024x1024)
>>101990485
You can always make one. LoRA training is piss easy for Flux.
>>
>>101990452
this is cool
>>
>>101990354
>>101990421
There is basically nothing that can be inferred about the quality difference between regular clip_l and vit L from the pic you posted, and the race defaulted to when you didn't specify one in the prompt (1) isn't a marker of clip quality and (2) is unlikely to be something you can predict from choice between these two clip models anyway.
Show me several side by sides where one model is noticeably excelling at something specific in the image the other one is failing at. Here I see:
>VIT L better tiara, base clip stupidly put nvidia logo on it
>VIT L did the text a bit better, base clip fucked it up
>VIT L made her look marginally less asian, base clip more asian
>VIT L got the eye colour a bit off, base clip got it right
>VIT L better wrist, still slightly fucked fingers, base clip fucked up wrist
>VIT L the red blob in top left looks more naturally part of background bokeh, base clip looks weird
>VIT L she's more attractive
this tells me sweet fuck all.
>>
File: 2024-08-20_00221_.jpg (613 KB, 2560x1440)
>>101990507
I am too busy genning to collect Stalenhag pictures .. maybe later, but I am so fucking lazy right now
>>
>>101990559
I just spam pics on Civit for free buzz and train that way. My gpu couldn't do it.
>>
File: ComfyUI_32554_.png (1.38 MB, 1280x720)
>>
>>101990392
>text is better but still wrong
>card is wrong
Is this the power of schnell?
>>
File: ComfyUI_32552_.png (1.56 MB, 1280x720)
>>101990513
>>
File: ComfyUI_32555_.png (1.29 MB, 1280x720)
>>
>>101990610
>Is this the power of schnell?
it was flux kek
>>
>>101990507
Can you train flux lora on 16GB VRAM?
>>
File: 2024-08-20_00223_.jpg (730 KB, 2560x1440)
>>101990569
my 4090 will do
>>
File: 975.jpg (526 KB, 1344x1472)
>>
@101990690
d*bostyle
>>
>>101990392
Could you have chosen a non-shitty non-fried image to do the comparison on?
>>
>>101990626
no
>>
File: 00014-480976874.png (1.1 MB, 832x1216)
>>
>>101990643
I plan on getting the 5090 if it is a significant improvement over the 4090. At least 28GB VRAM.
>>
>>101990690
lmao them niggas gay af
>>
>>101990626
yeah soon
>>
>>101990626
No, sorry, only on 12GB or less.
>>
File: 2024-08-20_00232_.png (1.46 MB, 1280x720)
>>101990727
if they fucking only give the 5090 28gb (as the rumors say) I will go for a fucking workstation card
>>
>>101990779
I don't have 15 grand to waste on anime titties.
I do all the AI shit for my work though, I am hoping to convince them I need it.
>>
File: IamTheKing.jpg (11 KB, 225x225)
>>101990779
>if they fucking only give the 5090 28gb (as the rumors say) I will go for a fucking workstation card
that's precisely what they want you to do, go for the 10000 dollar card, mission complete
>>
I updated stability matrix and now there is no DPM 2++ Karas
>>
File: MarkuryFLUX_00150_.png (1.22 MB, 768x1280)
Genuinely flaborgoostered by Flux, even the fp8 quantized model is just giving bangers upon bangers
>>
File: 2024-08-20_00200_.jpg (483 KB, 2560x1440)
>>101990810
ya fuck em, but I want the VRAM, I needz it, especially when BFL releases their video model
>>
>>101990779
That is why 16GB is always the softcap on any non-90 class card, and the 90 class cards get a bit more.
It doesn't cost them that much to give out more, but it would eat into their workstation and enterprise AI card sales.

Monopoly is a bitch.
>>
>>101990779
They should do at least 32GB, just a 4GB increase is taking the mick!
>>
File: ComfyUI_32556_.png (1.09 MB, 720x1280)
>>
>>101990818
he pulled?
>>
File: 2024-08-20_00238_.jpg (883 KB, 2560x1440)
>>101990844
I just don't get why AMD and Intel are so incompetent that they can't get something done these days
>>101990852
32GB would be okay .. I would just get a 5090
>>
>>101990854
He got that tattoo from the terrorists that captured and tortured him.

He's finally free but will bear this scar for the rest of his life.
>>
>>101990806
The RTX 6000 Ada is 8 grand or so
>>
>>101990877
But a 4GB increase over 24GB, what can that even do? It's not worth it.
>>
>>101990877
AMD's CEO is nvidia's CEO's cousin, they only exist to protect nvidia from the law. And Intel is dead.
>>
>>101990897
>But a 4GB increase over 24GB, what can that even do? It's not worth it.
for flux it has 2 advantages: you can fully load the fp16, and if you had a 3090 it will be way faster
>>
>>101990897
that's almost a whole Q8 T5, anon!
>>
File: ComfyUI_32557_.png (1006 KB, 1280x640)
>>
>>101990908
I see
>>
File: 00021-2572727518.png (1.15 MB, 832x1216)
>>101990919
That's a pretty fun idea for a keyboard. Is this a thing?
>>
>>101990896
Not in dollarydoos
>>
File: 2024-08-20_00241_.jpg (772 KB, 2560x1440)
>>101990902
>AMD CEO is nvidia CEO cousin
literal nepotism .. but ya I heard that, they divided the market: AMD gets the good CPUs, NVidia the AI
>>101990897
for FLUX? nothing really, you already can load fp16 on 24GB, you could do higher res, but upscale is better anyway cause anything above 2MP genned in flux looks like shit.. with 32GB you could do animations at hires maybe tho, once that comes out
>>
>>101990946
>Is this a thing?
nta but yea, i see a lot of random and weird keycaps on sites like aliexpress
>>
>>101990946
Custom keycaps? Absolutely.
>>
>>101990960
one could hope some companies shelf them out once they upgrade to blackwell, but who knows
>>
>>101990960
highway robbery it is then
>>
Forge or reforge what’s the difference?
>>
>>101990971
I see!

>>101990977
lol cool
>>
>>101990969
Yeah I guess the extra 4GB might be able to make some future things just about work. Which is valuable, but gotta think about value vs cost. We shall see.
>>
File: ComfyUI_32559_.png (1008 KB, 1280x640)
>sex horse
>>
Man, this one specific prompt is slow as fuck and I don't know why.
>>
File: 00025-976036911.png (1.58 MB, 832x1216)
>>
>>101991017
average furfags keyboard
>>
Is 12gb vram good enough for flux?
>>
>>101991043
Good enough for using, good enough for training loras.
>>
File: ComfyUI_32560_.png (1 MB, 1280x640)
>>
>>101991055
keep hearing that but I've yet to see any evidence
>>
File: 2024-08-20_00246_.jpg (395 KB, 2560x1440)
>>101991043
yes with some tricks and slightly lower quality models
>>101991015
ya also the speed increase of a 5090 might still worth it comparing 3090vs4090 flux speeds

>>101991017
>>101991036
kek, never trust anyone with a pink keyboard
>>
>>101991064

Of what, training LoRAs? Even at my most conservative settings I don't really get below 18gb of vram. I don't see much hope for 12 and if it does come it will be so slow it would be quicker to mow lawns and save up for a new GPU.
>>
File: 1714696137738173.png (622 KB, 1024x1024)
civitai has a wojak lora. truly anything is possible now.
>>
File: ComfyUI_01910_.png (818 KB, 1024x1024)
.
>>
>>101991086
That's because it's just being figured out right now, maybe in a couple of days we will have some updates and workflows.

Even finetunes should be possible at 12gb
>>
>>101991125
Yeah sure dude, next we'll fine tune it on your calculator.
>>
>>101991098
unfunny, qa lost
>>
File: 1706543374422273.png (1.36 MB, 1024x1024)
>>101991098
but even more impressive, there is a slime girl lora, have a slime miku.
>>
File: ComfyUI_31947_.png (1.25 MB, 1024x1024)
>>101991064
I trained two on a 3060, taking about 12 hours each because I blindly used Adafactor and the default learning rate. They started looking really good around 4500 steps. If you find good parameters for flux so that training takes no more than 2000 steps, you'll be able to train a LoRA in 4-6 hours.
>>
>>101991137
I guess you were one of the ones saying flux can't work on GPU's under 24gb when it released. You probably have no idea what you're saying.
>>
tfw ended up downloading gguf all quant version
>>
>>101991098
does it do Sproke?
>>
if you don't mind waiting you can finetune Flux on 1GB of VRAM
>>
>>101991158
All my LoRAs so far have been less than 1500 steps, and the results are exceptional.
>>
>>101991160
The model is being tortured just squeezing into 24gb cards for LoRA training and finetuning, basically every conceivable trick was pulled to make it work. 12gb LoRA training isn't happening, I don't care how much false reddit positivity you radiate, there are hard limits to how much you can bust a model up.
>>
>>101991183
subject or just style loras?
>>
>>101991183
share your lora settings on catbox pls?
>>
>>101991181
You can even finetune on RAM.
>>
>>101991203
cpu cache, no ram needed
>>
>>101991158
Oh, and I used batch size 1, but with 12GB you can set it to 4, that would cut the time for processing one image in half.
>>101991183
I just copypasted kohya's command without changing parameters, the learning rate turned out to be way too low.
>>
>>101991217
Tape drive only.
>>
>>101991125
just the model weights in fp16 precision take up about 24 GB (12B parameters at 2 bytes each). Fine tuning also requires memory for activations and gradients, I would be surprised if someone can make it work on 24 GB. It's possible to offload to RAM or SSD, but that would be too slow to be useful.

Training a LoRA could load the model weights in lower precision, for example Q3_K, and only the lora weights need to be in higher precision. That should be possible with 12 GB VRAM.
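Back-of-the-envelope, weights-only arithmetic for a 12B-parameter model like Flux. The bits-per-weight figures for the GGUF quants are rough averages (assumptions, not exact spec values), and this ignores activations, gradients, optimizer state and the text encoders:

```python
# Approximate weight-only memory for a 12B-parameter model at various precisions.
# Quant bits-per-weight are rough averages, not exact format sizes.
PARAMS = 12e9

def weight_gib(bits_per_param: float) -> float:
    """Approximate weight size in GiB for a given average precision."""
    return PARAMS * bits_per_param / 8 / 2**30

for name, bits in [("fp16", 16), ("fp8", 8), ("Q8_0", 8.5), ("Q4_K_S", 4.5)]:
    print(f"{name:7s} ~{weight_gib(bits):5.1f} GiB")
```

fp16 lands around 22 GiB, which is why it just barely fits a 24GB card and why the quants are the only option below that.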
>>
File: 00031-3737855065.png (1.33 MB, 832x1216)
>>
>>101991194
Subject
>>101991198
Oh, not local, on Civit. Sorry Anon. All I changed were the repeats, which depends on my image set. 20 for 25 images, and the size to 1024.
>>
File: 1707674837885215.png (1.53 MB, 1024x1024)
>>101991152
>>
>>101991244
can civitai do bucketing?
>>
>>101991185
You can say whatever you want but there are always techniques to do it. The plan is to train half the model at once, then swap and train the other half.
>>
>>101991254
Yes, all the images were random sizes and aspect ratios.
>>
>>101991256
Yes, on a 24gb card.
>>
>>101991185
I just realised you said 12gb lora training isn't happening.


HAHAHAHAHAHAHAAHAHAHAHAHAHAHA!!

It can already be done!

Even 12GB finetuning will happen you fool.
>>
>>101991304
show me
>>
>>101989536
>Yeah so anyone who is finetuning flux in the future please add the names to any auto captions you make, it doesn't take long, you can select all images of the person in Taggui and simply add the name to all of them.
I'm assuming this would also be the case for styles and concepts, right? I remember reading that the important information that you want to be read in should be at the very front of your caption.
>>
>>101991290
Sigh, oh ye of little faith and knowledge
>>
>>101991304
NTA but every time I have tried to train one on my 4080 it OOMs and collapses. I could have gay settings, very likely, but I haven't been able to make it work.
I will be interested to see if it can though, but I do have my doubts.
>>
>>101991309
No need when even other anons already do it

>>101991158
>>
>>101991315
>Have faith bros! Kohya and CUMFY will save us receive many upvotes for their efforts
>The heckin' community can do whatever they put their minds to

Your endless optimism that someone is going to come along and cater to increasingly tighter and tighter vram restrictions through some kind of techno wizardry annoys me.
>>
>>101991323
Wait for the spammer guy to update his latest findings, he should get you on the right path.

https://www.reddit.com/r/SECourses/
>>
this lora only works on Q2, fuckin hell
>>
let the record show he refused to show me
>>
>>101991356
It's not faith, it's already been figured out and on the way
>>
>>101991356
>Your endless optimism that someone is going to come along and cater to increasingly tighter and tighter vram restrictions through some kind of techno wizardry annoys me.
nta but isn't that exactly what happened when flux was released? nobody could run it. then we could run it on 4090s. then we could run it on 16gb, then 12, then 8, then 6, etc.
it also couldn't be trained because it was distilled, then it could be trained but only on 2 a100s, then on a single a100, then on a 4090.
there's a pattern here.
>>
>>101991373
check your PMs
>>
File: 1692902426331051.png (1.12 MB, 1024x1024)
>>101991252
>>
>>101991391
where do i check my pms i logged into 4chan pass but i dont see it
>>
>>101991356
well thanks to comfy I can use q8 with none of the memory loading lag I get with forge, and with kohya I made a one piece nami lora that works with TEN IMAGES, no captions. just namiOnePiece as an instance prompt.
>>
>>101991416
>no captions. just namiOnePiece as an instance prompt
atrocious
>>
>>101991430
its probably bad but its worthwhile to retest all your assumptions when you start working on a new method
>>
>>101991442
you may have thrown out common sense with the bath water
>>
my flux gens have a colorless, de-saturated quality but I'm using vae, what could be causing this?

They look fine right up till they hit the VAE Decode node and then they lose all their color, I've tried flux1DevVae and ae.safetensors. I'm using flux1-dev-Q4_K_S
>>
>>101991416
>just namiOnePiece as an instance prompt.
:(
>>
File: 1720097925600334.png (1.17 MB, 1024x1024)
>>101991430
I was just testing making a basic, basic lora. to learn the app. and it actually worked with a handful of images. of course it would be better with proper tagging and a good image set.
>>
>>101991458
post catbox
>>
>>101991387
Yeah, and it's dumb and naive to expect it to go beyond that. It was already a miracle that it got that far.
>>101991382
2 more weeks trust the plan
>>
add this to the beginning of your prompts to make it look less glossy
>Movie still lomography, photo of beautiful woman, shadow on face, no light, misty, night
>>
>>101991467
>a basic, basic lora. to learn the app
share config pls
>>
>>101991495
why would I do that when I only generate men?
>>
>>101991498
It was the default lora preset, all I did was add my instance prompt and a link to the folder with my resized images, default epochs/etc, didn't change anything.

then I clicked the 3 buttons at the bottom in sequence to set the folders up and I waited, was fast cause it wasn't intensive settings
>>
>>101991518
*also I learned if you set the output to the image directory you get a recursive loop. don't do that.
>>
>>101991516
sorry i just assumed everyone was doing 1girls
>Movie still lomography, photo of homosexual man, shadow on face, no light, misty, night
>>
>>101991539
no, my men are straight
>>
can you train flux models with 16gb vram yet?
>>
File: 1713472372371330.png (1.27 MB, 1024x1024)
>>
>>101991543
doubt
>>
>>101991356
>bloo bloo bloo why dont't computers use 512MB of RAM
Get a fucking job holy shit
>>
>>101991477
It's not dumb and naive when you actually understand how it's done. Sorry, but it only seems impossible to you out of simple ignorance.
>>
Is there a shortcut key to bypass a node on comfy?
>>
You anons better still be here in a few days when everyone is making loras and finetunes with 12GB vram.

So that I can laugh at you.
>>
>>101991631
Even if through some miracle someone made it fit in 8 GB of VRAM, who the fuck is waiting 5+ minutes per image? At that point you seriously should go get a part time job and save up $700 for a 3090, it would be faster.
>>
File: 2024-08-20_00275_.jpg (570 KB, 2560x1440)
>>
>>101991671
I'm talking about lora training with 12GB vram
>>
I'm convinced anything over 50 images for a LoRA ends up worsening the quality overall.
>>
File: 2024-08-20_00273_.jpg (638 KB, 2560x1440)
>>
>>101991697
Yeah anon, I can't wait to see your shitty Lora that took 3 days with a dataset of 50 images. Get a part time job, it'll be faster.
>>
>>101991716
It can be done in 3-4 hours
>>
>>101991697
Better buy some buzz kid, because you're not training that LoRA on 12gb of vram, at least not at a speed that won't brick your computer for the next 12 hours.
>>
>>101991721
Yeah, I know you make shit Loras, they're all over Civit, overbaked on the same face, body type and pose. It's SDXL all over again.
>>
>>101991711
for xl lower does seem better
>>
>>101991657
The exact same anons who were saying flux was impossible to train 2 weeks ago are here right now and being very quiet about it.
They will not speak up when proven wrong. Nobody ever admits they were wrong, except that one guy who was arguing negative prompts are pointless.
>>
You anons are legit annoying, had to deal with your type a million times before, always acting like nothing can change, that nothing is possible and EVERY single time things change.

you were the same guys that thought the internet wouldn't be popular and that touch screens on phones was stupid and would never be popular.

And you probably thought Flux couldn't run on 12gb vram lol
>>
>>101991759
Exactly, typical humans.

Also respect to the negative prompts guy
>>
agi 2 weeks ago
>>
>>101991761
>anons
It's one guy
>>
>>101991711
picking the right kinds of images matters most, dont pick 50 images with wildly different styles (unless you want variety in aesthetics)
>>
>>101991739
Nope I have my own techniques to avoid same face.
>>
Every single comment in this thread is me btw.
>>
>>101991793
Captioning is important too, if you have lower quality images then capture it as low quality, if you have anime style images make sure you say it's anime etc
>>
>>101991761
and you're someone who thinks you can use your computer from 2010 to use Flux and all your optimism is based on OTHER PEOPLE doing work, it's "trust the science" bullshit
>>
>>101991798
I know that already because I'm you.
>>
>>101991816
I literally just put my pics into joycaption and saved the exact output to txt. That was my captioning process. My LoRAs are perfect.
>>
>>101991845
are they style loras perchance
>>
>>101991830
You sound like the ugliest sonofabitch I ever heard.
>>
>>101991849
No. 25 images of a real person.
>>
>>101991845
Yeah that sounds good, although what I said can still matter if you want to make sure the image looks like what you captions, anime, low quality screencap etc

But yeah I'm gonna use Joycaption too
>>
>>101991845
joy caption fucks up too often tho, you should double check them
>>
>>101991852
I'm just a mirror.
>>
>>101991845
>download joy-caption
>edit the script to loop the process in a folder
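A minimal sketch of that folder loop. `caption_image` here is a hypothetical stand-in for whatever joy-caption inference call you actually use; the loop just handles the files and skips anything already captioned so reruns are cheap:

```python
from pathlib import Path

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".webp")

def caption_folder(folder: str, caption_image) -> int:
    """Write one sidecar .txt per image in `folder`.
    `caption_image` is any callable taking a Path and returning a caption string
    (stand-in for the actual joy-caption inference)."""
    done = 0
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue
        txt = img.with_suffix(".txt")
        if txt.exists():  # don't re-caption on a second pass
            continue
        txt.write_text(caption_image(img).strip() + "\n", encoding="utf-8")
        done += 1
    return done
```

Point it at your dataset folder and pass in the real model call; the .txt naming matches what kohya-style trainers expect for captions.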
>>
>>101991877
perfection is stupid when the model itself isn't trained on perfection, you're not going to undo the model's miscaptions
>>
>>101991877
Yeah and add your own captions for better control of the lora. Like names and so on.
>>
>>101991884
and replace that weird character it uses in lieu of an apostrophe
>>
>>101991877
Works on my machine. I did check them. Every caption was correct.
>>
>>101991902
no such thing
it can be accurate or inaccurate
>>
>>101991887
my god, you're retarded
>>101991902
your shit must be real simple then
>>
>>101991893
I just did that with a powershell script to name the subject.
>>
For LoRA training, doing
>1 epoch, 800 steps
>8 epochs, 100 steps
is the same, right?
>>
>>101991926
steps are derived, not specified
I think?
>>
>>101991887
nice logic lmao
>>
>>101991920
your shitty 12 GB VRAM Lora can only tease out from what the model already knows, but good luck, I'm sure your images aren't overbaked garbage
>>
File: 1715856944622961.png (77 KB, 527x713)
does anyone know of a sampler that lets me randomize variation seed?
>>
>>101991920
>your shit must be real simple then
It's a human. Can't get much simpler and complex than that. It captured everything correctly.
>>
>>101991940
you waste your time generating an image every 5 minutes and hand curating a 50 image dataset all day and are always behind because you don't know how to pick and choose your battles
>no gen
>>
>>101991950
fuck are you talking about, retard
>>
>>101991917
>no such thing
>it can be accurate or inaccurate
Who gives a shit.
>>
File: cuc.png (668 KB, 1024x1024)
>>
>>101991974
I think it's time you start posting examples of your awesome Loras
or is this another one of those "I know what's best but in practice I don't do it"
>>
>>101991973
create all the imaginary narratives you want, it won't make you right
>>
>>101991950
You are being dumb, he's not the 12GB Vram guy, I am!
>>
>>101990155
So should I replace all my SD models with flux now?
>>
>>101991988
this is not what i expected from country grown vegetables
>>
>>101992000
If you aren't genning porn then yes
>>
>>101991994
>>101991995
>no gens
>>
>>101991993
I said wait a couple of days, it's in the process of being perfected. You will see!!!!!!
>>
>>101991936
Should have worded it differently. Let's say I set things as 40_folder and 1 epoch; would I get the same results if I set it as 10_folder and 4 epochs?
>>
File: 00038-1401398568.png (998 KB, 832x1216)
>>
>>101991993
you're a schizo, who do you think you're talking to
>>
>>101992028
what batch size
>>
>>101992028
repeats are not the same thing as steps are not the same thing as epochs
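The usual kohya-style convention works out like this (a sketch of the arithmetic, not any specific trainer's exact code): each epoch shows every image `repeats` times, batches divide that, and checkpoints are typically saved per epoch, so 40_folder×1 epoch and 10_folder×4 epochs give the same step count but not the same saved snapshots:

```python
import math

def total_steps(n_images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    """Kohya-style optimizer step count: (images * repeats) per epoch,
    grouped into batches, times the number of epochs."""
    return math.ceil(n_images * repeats / batch_size) * epochs

# 25 images, batch size 1:
print(total_steps(25, 40, 1))                # 1000
print(total_steps(25, 10, 4))                # 1000 -- same steps, 4x the epoch checkpoints
print(total_steps(25, 10, 4, batch_size=4))  # 252 -- batching cuts steps, not images seen
```

So the two setups train for the same number of steps; the practical difference is shuffling granularity and how many intermediate epochs you get to compare.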
>>
File: man.png (1.46 MB, 1024x1536)
>>101992006
cucumber flavored cardboard doesn't sound too bad
>>
>>101992035
do you just prompt for this pose? these sitting poses often fuck up when I try them.
>>
File: image.jpg (142 KB, 1536x1280)
cyberjews
>>
>>101992036
>no gen
>>
>>101992009
Alright thanks
>>
>>101992042
3
>>101992045
I guessed as much, thank you for confirming it.
>>
>>101992069
it takes 20 minutes per gen, please understand
>>
>>101991068
>>101990877
>>101990810

haven't tried the 3090, but I have a 4090 with overkill ram and cpu and gens take around 25 secs on average. It's not like genning on sd 1.5 of course, but mostly tolerable.
If they fucking come out with a gimped ram 5090 I'm certainly either buying another 4090 and go dual or a used ada or something.
Not for genning, but running actually decent llms locally, like bloom
>>
>>101992055
yeah they mess up for me too, especially when you say spread legs they often do the opposite lol. And sometimes their legs get chopped in half.

Prompt is sitting on the floor
>>
>>101991650
I think there's a hotkey list on his github page. I only remember ctrl+m for mute.
>>
>>101992110
>gens take around 25 secs on average
with CFG, right? So 12 seconds with the default workflow.
>>
File: it must be this big.png (1.52 MB, 1024x1536)
>>
>>101992151
Damn I have no chance then
>>
>>101992126
with a random workflow I found on the internet, couldn't be bothered tinkering around with comfyUI, I'm far too used to automatic1111. even getting flux to work rustled my jimmies.
>>
>>101992118
Yeah I know ctrl+m but I don't want to have to reconnect any nodes so bypass is better
>>
>>101992118
>>101992166
it's ctrl+b and it's so fucking obvious that it's that, I feel stupid
>>
File: 2024-08-20_00286_.jpg (329 KB, 2560x1440)
>>101992110
ya 4090 is decent for pure picture gens, but upscales could be a lot faster, but mostly you are right, VRAM is what counts for AI applications, so 5090 on 28GB will be just a disappointment.. also with inflation going wild I guess even a 28GB 5090 will be $2500-3000 in the first 6 months, so a used ADA really gets to be an option for a 4090+ADA dual setup
>>
So are T5xxl gguf quants worth it? Is Q8 significantly worse than the full fp16 model?
>>
>>101992185
>but upscales could be a lot faster
Ultimate SD Upscale for tiled upscales works with Flux
>>
Reminder to make sure that tiny sweat drops on the character's face are properly captioned, or just edit them out; a mere two droplets in 50 images were enough to brick my lora.
>>
>>101992185
I am kind of glad the 5090 will have 28GB so I don't have a reason to upgrade from my 4090 kek
>>
File: flux1.png (2.8 MB, 1200x1808)
Behold, the first image I created with flux!
>>
>>101992203
The 5090 is basically a statement from NVIDIA that they no longer give a fuck about consumer GPUs. And why should they?
>>
>>101992218
gem
>>
>>101992218
I look like that
>>
File: 2024-08-20_00315_.jpg (444 KB, 2560x1440)
>>101992197
thats what I am doing, but it still takes a bit of time, atm I am using 512x512 tiles to upscale and they run at 2 it/s .. if the 5090 could do these at 4-5 it/s that would be nice

>>101992226
kek, yaaa you can see it that way

>>101992226
if it's like the rumors, it's a statement: if you are into AI give us money, our consumer range is for gayming
>>
>>101992226
because if they want my fucking money they would have put in more VRAM
>>
>>101992185
I never upscale using comfyUI, I use chainner and it's pretty fucking fast desu. How long does it take and what rez are you scaling to? what model?
>>
File: F_fP1K3aMAAgMMO.jpg (233 KB, 2459x1378)
>>101992275
They don't care about you.
>>
>>101992311
Those fuckers
>>
>>101992311
they're essentially a monopoly so makes sense
>>
>>101992325
Maybe if AMD weren't retards there would be competition.
Fucking Intel of all companies is more primed for AI than AMD
>>
>>101992311
that uptick is insane
>>
File: 2024-08-20_00320_.jpg (832 KB, 2560x1440)
>>101992298
I use flux itself to upscale from 1280x720 to 2560x1440 with sd ultimate_upscale, takes about 120-180 seconds

probably chainner is smarter .. but I run a big queue of images most times and decide on the fly if a picture should be upscaled or not and cancel or not, also I am kinda interested in how flux upscales since it's still kinda new
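Rough tile math for why a 512px-tile upscale of 2560x1440 lands in that 120-180 s range. The step count is an assumption and overlap padding is ignored, so this is only a ballpark:

```python
import math

def tile_count(width: int, height: int, tile: int = 512) -> int:
    """Tiles a grid-tiled upscaler needs to diffuse (overlap padding ignored)."""
    return math.ceil(width / tile) * math.ceil(height / tile)

tiles = tile_count(2560, 1440)      # 5 x 3 = 15 tiles
steps_per_tile = 20                 # assumed sampler steps per tile
its_per_sec = 2.0                   # the 2 it/s figure from above
print(tiles, "tiles,", tiles * steps_per_tile / its_per_sec, "s")  # ~150 s
```

15 tiles at 20 steps and 2 it/s is about 150 seconds, right in the middle of the quoted range; doubling it/s on a faster card roughly halves it.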
>>
>>101992341
At the beginning of this year everyone in the business was talking about AI. Everyone wants to push AI SOMETHING in their product, it doesn't even matter if it makes sense.
>>
>>101992311
they are going to release the 5060 with 8gb aren't they? no way they release it with 12gb and then release the 5070 with 16...
>>
>>101992358
yeah i noticed, i'm in it, the thing that surprises me is that nobody is even trying to compete, like what the fuck man
>>
>>101992344
cool gen
>>
File: 1710514754802353.png (1.24 MB, 1024x1024)
now this is why loras are neat

yoji shinkawa style miku (MGS)
>>
>>101992365
only when the AI bubble pops and they can't get rid of their stock
>>
fucking fuck, was organizing the models folder on the server via winscp, moved the models folder into the folder containing the models folder, thought it would merge but it overwrote the destination models folder with the empty models folder from comfy, now all models and my custom tuned loras since sd14 are gone KEKW

trying ext4magic and extundelete

don't use winscp, it is the biggest piece of shit. plain linux mv refuses to replace a non-empty directory, but winscp just deleted the destination without prompting me about the delete when issuing the move
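for reference, this is the merge behaviour the move above should have had. a hedged python sketch (merge_move is my own name, not a winscp or comfy thing): walk the source tree and move files into the destination one by one, skipping anything that already exists instead of clobbering it.

```python
# Merge src into dst file-by-file, never overwriting anything already in dst.
import shutil
from pathlib import Path


def merge_move(src: str, dst: str) -> list[str]:
    """Move files from src into dst, keeping everything already in dst."""
    moved = []
    src_p, dst_p = Path(src), Path(dst)
    for item in sorted(src_p.rglob("*")):
        if item.is_dir():
            continue
        target = dst_p / item.relative_to(src_p)
        if target.exists():
            continue  # never silently overwrite an existing file
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(item), str(target))
        moved.append(str(target))
    return moved
```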
>>
>>101992358
the bubble is already bursting
>>
>>101992390
Oh Anon, the bubble has only just begun.
>>
>>101992375
we will probably get a 5070 super with 16gb, so we are in the refresh hell.
>>
File: 1722481301211893.png (1.27 MB, 1024x1024)
1.27 MB
1.27 MB PNG
>>101992373
>>
The base 5060 will have 32GB, have faith.
>>
>>101992409
oh yeah this generation is fucked, but it'll improve in the future when things stabilise and people realise AI is a cash sink
>>
File: shalom.png (1.41 MB, 1536x1024)
1.41 MB
1.41 MB PNG
>>101992064
>berg
took me a while
>>
>>101992425
it's not tho, I use it for work all the time and it has saved me lots of time and money. If you use it for gooning alone or even worse, having lewd chats, then yeah, it's a cash sink.
>>
>>101992311
>companies will save half a cent on components to save $1 million
>they don't care about 2.8B in revenue
>>
>>101992458
I think he means more about the billions all these corporations are spending, probably a lot of them are gonna lose money when competing with everyone else. That's my guess
>>
File: 2024-08-20_00323_.jpg (723 KB, 1440x2560)
723 KB
723 KB JPG
>>101992368
thank you
>>101992375
>>101992390
>>101992425
stock wise it's always a gamble, but internally in the tech industry it has only just started. it changed the server backbone requirements big time, every big corp and their uncle is preordering hundreds of thousands of blackwells just to power their new shiny AI-software-as-a-service things .. I mean didn't Elon just preorder a few hundred thousand blackwells, right?
>>
File: 1723796653648890.png (1.05 MB, 1024x1024)
1.05 MB
1.05 MB PNG
>>101992413
>>
File: mji134c2fpkb1.png (339 KB, 931x521)
339 KB
339 KB PNG
>>101992459
Not when that revenue costs them $4b
>>
>>101992422
it already leaked, it's 28gb
>>
>>101992472
openAI (Microsoft) are spending billions on DALL-E yet it loses to open source flux + loras.

many companies are all in on "AI" just as they pushed the NFT meme. AI is here to stay, but a lot of companies are not actually producing anything anymore. (SAI)
>>
>>101992493
>openAI (Microsoft) are spending billions on DALL-E yet it loses to open source flux + loras.
You think they haven't kept working on new image diffusion models?
GPT-4o also has image generation abilities, we don't know how good it is.
>>
>>101992509
>we don't know how good it is
always promising
>>
File: POP_~6.jpg (473 KB, 1304x1304)
473 KB
473 KB JPG
Good morning local diffusioners
>>
File: hell yeah.jpg (2.5 MB, 3072x2048)
2.5 MB
2.5 MB JPG
>yurocel
>>
>>101992482
you have misunderstood this chart, the cost of revenue here is the cost of all the revenue, it just happens to be positioned next to, and at a similar size as, the gaming revenue. with that said it wouldn't surprise me if their gaming margins are lower than their datacenter margins
>>
>>101992535
Jew truck?
>>
>>101992482
retard do you know what gross profit is? holy shit you people really are 90 IQ
>>
>>101992544
i mean that's kinda obvious, the lockdown is over and things haven't changed much in a couple years for top end gaming requirements
>>
>>101992482
this nigga CANNOT READ A SIMPLE CHART
>>
>>101992535
What's in the truck!?
>>
>>101992544
>>101992556
It doesn't even matter because the point remains they literally do not give a single fuck about you. You are not their main customer anymore. It's never been more over.
>>
>>101992509
the main problem with corporate AI is them neutering it because of DEI or people wanting censorship. I can use an uncensored Llama 3.1 model to say anything. ChatGPT will censor itself. DALLE will censor itself.

Open source will always, always win, even if Microsoft have billions in AI farms.
>>
>>101992576
typical aislopper
>>
>>101992581
sorry you're 90 IQ, your opinion is useless
>>
>>101992587
Enjoy your 28gb $4000 5090
>>
>>101992581
I don't think that's how it works though, they want every penny possible. And the future is unpredictable, the consumer market can grow, etc.
>>
>>101992595
sorry you're 90 IQ, your opinion is useless
makes sense why you're blackpilled though, you're too stupid to understand the world, without iPhones you wouldn't even be here
>>
>>101990224
I have never saved a single gen in all these years. They disappear almost as easily as they appear.
>>
>>101992602
see >>101992587
>>
>>101990224
I always like reusing old prompts in new models just to see what happens
>>
>>101992609
>not only can't I click right, I also think they don't care about $2B/quarter revenue
>>
File: FD_00512_.png (1.43 MB, 1024x1024)
1.43 MB
1.43 MB PNG
>>101992608
I have a 2TB HDD full of them. I need a NAS.
>>
Where do I set clip skip in comfy?
>>
>>101992583
you think the models OpenAI gives other corpos access to are the censored ones? No, they use the best they have; any restrictions on what can be generated are hammered out in the contracts.
It's different when they are offering a service to the public at large.
>>
>>101992660
You're wrong, they drink their own Kool-Aid.
>>
File: 1693852485941511.jpg (268 KB, 1024x576)
268 KB
268 KB JPG
>>
File: indeed.png (794 KB, 1024x1536)
794 KB
794 KB PNG
>>101992545
>>
File: ComfyUI_Flux_2.png (1.44 MB, 1216x832)
1.44 MB
1.44 MB PNG
Gotta love combining different character loras
>Nicolas Cage and Alex Jones fighting each other
>>
>>101992660
and this is why we need open source, cause we aren't corporations. I'm sure openAI have image models that generate tentacle porn all day, but we'll never see them. But flux/sdxl/pony exist and are free.
>>
>>101992671
sama loves dick and money too much for that to be true.
>>
>>101992676
I dont get it
>>
>>101992563
huh? i said margins, as in the profit per dollar of costs within that sector, i.e. selling a $100 card for $200 vs selling a $100 card for $250. we agree that they aren't going to prioritize gaming but you seem to be misunderstanding a lot of the basics...
>>101992598
nta, but they do. the issue is if you give a consumer card something like 48GB of RAM and still sell it at a "consumer" enthusiast price like $2k-$4k, you damage your business sales because suddenly a bunch of businesses swap to buying that instead of the $10k+ datacenter card. that's the real issue. yes, in theory with lots of competition they should sell the cards for exactly what they cost plus 1% margin; in practice supply and demand means they can crank the price and still sell almost as many for a greater return, at least to businesses.
>>
File: 1699251340226319.png (939 KB, 1024x1024)
939 KB
939 KB PNG
not for honor, but for youuuuuuuu
>>
File: ComfyUI_01147_.png (1.68 MB, 832x1216)
1.68 MB
1.68 MB PNG
Rad
>>
>>101992685
They're closeted puritans; it doesn't mean they're 100% bought in on the male gaze stuff, the racism stuff, etc. They might have their own personal models they train in their basement with the H100s laying around, but at the corposhlurpo it's them marching together in Puritanism.
>>
>>101992681
If you didnt tell me it was them I wouldn't have guessed. Its not a good likeness
>>
File: hmmmm.png (2.56 MB, 2048x1536)
2.56 MB
2.56 MB PNG
>>101992577
this

>>101992693
it's an israel jew truck, that's it, inspired by the cyberjews post
>>
File: 2024-08-20_00325_.jpg (907 KB, 1440x2560)
907 KB
907 KB JPG
>>101992645
yup got about as much, luckily 4tb of nvme is just 100 bucks these days and getting cheaper every day
>>101992608
sad, what I am actually doing right now is going thru my old gens and just pasting em into flux (even the schizo 1.5 prompts) and checking what turns out, some are awesome
>>
>>101992715
What is a cyberjew? Did I miss the establishment of a meme?
>>
>>101992694
It's bad business practice to bend your customers over a barrel; even enterprise customers don't like being gouged. If they're basically willing to take a slower card with 48 GB of VRAM because it better matches their needs, and Nvidia is actively ignoring and abusing them, they will remember that. And there ARE competitors that don't think of $2B+ as chump change.
>>
>>101992707
Nah, I believe in the greed of man.
>>
What is the best FLUX model for a lower VRAM card? I have an 8GB card and have been using the nf4, but gen times are over a minute per image
>>
File: ComfyUI_Flux_5.png (1.68 MB, 1344x768)
1.68 MB
1.68 MB PNG
>>101992708
Yeah, chara loras overlap each other really hard
>>
>>101992734
>It's bad business practice to bend your customers over a barrel
Tell that to every business everywhere for the entire history of humanity.
>>
>>101992740
You're completely wrong, I worked at one of these companies, the peons believe in it 100%. The peons are the ones working with such fervor to ensure the model won't say nigger.
>>
>>101992747
lmao
>>
>>101992761
Yes can you fucking believe it? I bet you're too young to even remember Sears. Or IBM. Or as Intel crashes as we speak.
>>
File: file.png (2.55 MB, 832x1216)
2.55 MB
2.55 MB PNG
>>
>>101992731
>What is a cyberjew?
I don't know why this question makes me laugh so hard. Anyway, I was referring to this post
>>101992064
>>
>>101992660
>>101992671
OAI definitely give the "uncensored" models to other businesses, very occasionally someone scrapes a big business's key with unfiltered Dall-E 3 and gens a bunch of very realistic but shitty lewds with it. The base model is much like any other in that it usually generates flesh horrors for genitals, and often weird lumps or gems for nipples, just due to the dataset and tagging stuff. Despite that it's still a very horny model by default.
Also, they don't really drink their own kool-aid, they very obviously barely give a fuck about filtering/"safety" from a lewds and racism perspective, they do exactly what they need to to avoid getting completely BTFO by journalists and public opinion. The best example of this is serving up ChatGPT as the main source of GPT for the general public, which refuses to engage with anything lewd or illegal or race related or whatever to the point where even turbonormies get frustrated with it, but if you scratch the surface and pay a pittance to generate with an api key (or scrape/proxy), you can trivially launch right into your isekai raceplay gangrape tentacle pregnancy fantasy with any basic 2 sentence jailbreak from the sys role.
>>
I can't get over the lack of control I feel with Flux compared to XL.
>>
>>101992784
Okay I guess I'm missing something because that's just a car
>>
File: file.png (2.64 MB, 1344x768)
2.64 MB
2.64 MB PNG
Fluxbros...
>>
>>101992779
I sure can't wait for the housing market to crash and for rent prices to come down and for fucking FOOD to be affordable. Thank you for your wisdom
>>
Another discussion heavy thread hits the bump limit so here we go...
>>101992797
>>101992797
>>101992797
>>
>>101992817
why is the discussion so heavy tho
>>
>>101992715
OK inspection of the truck is complete, you may continue...
>>
>>101992734
Agreed, and it may bite them one day, but it's not exactly "bending them over a barrel" to price according to supply and demand as long as you're genuinely trying to produce as much as you can. It might seem like stuff should theoretically be sold for something like "cost plus 10%" or whatever % is needed to cover risk in that business, but it's simply not true. If everyone wants the thing, and selling it for cost plus 100% results in people still competing to fight over the thing, then selling it for cost plus 10% simply promotes scalping, discourages/shuts out competition, prevents you from using those profits and reinvesting capital into further production, etc. Artificially limiting supply to fuck the supply/demand curve into giving you insane margins is terrible behaviour, but selling according to market demand is not really going to build any resentment.
Also, if anything they engage a lot in the opposite with consumers for PR reasons w/ the whole "get the cards into the hands of gamers" stuff; lots of deliberately selling GPUs in bundles or in prebuilts so that it's harder for crypto miners to buy a GPU without ending up with useless extra crap.
>>
>give the customers a 48GB card
Do you guys think businesses will only buy workstation cards or something? Make a cheap card with a shitload of vram and I guarantee you no ordinary person will ever see these cards in the light of day
>>
>>101992794
the sign in the background has "berg" at the end, which is a common jewish surname suffix. It took me a minute to realize why the anon wrote "cyberjew" with his image: he must have only prompted for a car and gotten a random jewish-sounding neon sign in the background unprompted. once the realization hit I laughed and was inspired to generate trucks with the colors of the flag of israel and the star of david
>>
>>101992882
Sounds like you need to go outside man
>>
>>101992865
It is bending them over a barrel to purposely underserve a market segment, essentially forcing a $2000 upgrade.
- 4090 (24 GB) is $1600
- A6000 ADA (48 GB) is $8000

We both know there should be a 4090 with 48 GB of VRAM for $2400
>>
File: ComfyUI_00056_.png (1.3 MB, 768x1344)
1.3 MB
1.3 MB PNG
I am an idiot. Only been messing with AI for a couple of weeks. That said...
I'm at the planning phase for a new machine. I'm fairly confident that the 50 series is going to disappoint (as someone mentioned above, less vram so you have to buy the enterprise cards) and I'd bet the top-end 50 card is gonna be over 3 grand. I saw the GN news vid mentioning a 48gb 4090 spotted in China priced at 2500 and it made me think.
Nvidia SLI is long dead, but is there such a thing as AMD SLI? You could buy two 7900xtx for under 2 grand with a bit of shopping around, could that be a sneaky way to big boi vram?
Another idiot question; I'm currently running a 1080ti, which does support SLI. What if I got 3 more secondhand 1080tis and ran a quad SLI setup? Has anyone tried that? Or would it be a terrible idea for other reasons?
>>
>>101992747
kek poisoned lora leaking on the other subject
>>
>>101992919
oh i thought you were talking about the businesses getting bent over a barrel by the margins being bigger. Yes, i agree completely, high end enthusiasts specifically get very fucked by this decision, people who want a card on the consumer pricing curve but further out into a 48GB VRAM range without jumping up to an $8k+ workstation card have no options, and are sacrificed in favour of not letting businesses have a cheap 48GB card. feel free to feel some resentment.
with that said, as >>101992875 points out, let's pretend they *do* try to sell a 48GB consumer card for $2500. businesses will buy this card in droves and do anything they can to get their hands on it for anything a dollar below $8000, and scalpers will grab whatever they can knowing the market value is what businesses will pay for it. next to none will find their way into the hands of actual enthusiasts, even with "sold in a gaming PC" tricks.
>>
File: image.png (541 KB, 1536x1024)
541 KB
541 KB PNG
>>101992893
>>
>>101993011
I just don't think it's that big of a deal, enterprise cards are more about efficiency, a 4090 with 48 GB of VRAM is going to be higher maintenance than an A6000 as well as being bigger. And going back to my point, if businesses actually want the 4090 with 48 GB of VRAM over the enterprise cards so bad, then it is Nvidia fucking them over.
>>
>>101991531
teehee, you silly sausage.
>>
>>101992743
q4 is good I think or nf4 v2
>>
>>101993151
Where can I get q4
>>
>>101993162
https://civitai.com/models/647237/flux1-dev-gguf-q4-q5-q8

repo link on the page should have it, q4_0 should be it, use it with the unet loader gguf node
>>
>>101993173
Its unusable on forge?
>>
>>101993249
forge works just make sure you have the encoder + clip l + vae selected
>>
>>101993329
I dont see either of those in the model files on the page
https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
>>
>>101993395
https://huggingface.co/city96/FLUX.1-dev-gguf/resolve/main/flux1-dev-Q4_0.gguf?download=true

they should be in order from q4 to q8
>>
>>101993451
Yes I see and I'm downloading now. I don't see the encoder or vae etc, only the q4 gguf
>>
>>101993478
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

vae

https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

4gb text encoder
>>
>>101993541
Thank you so so much
>>
>>101993548
np, yeah the models are scattered all over, the main comfy guide uses an all in one file (model/encoder/vae in one)

https://comfyanonymous.github.io/ComfyUI_examples/flux/
>>
>>101993559
Thanks again man. Which of each file do I download? I'm pretty sure I need the ae.safetensors but what about the clip
>>
File: musk.jpg (206 KB, 1728x864)
206 KB
206 KB JPG
elon musk gives you a guy who looks nothing like him while Elon Musk is pretty close. You'd think he would be well known enough to not have weird tokenization issues.
>>
>>101993597
clip l, and the fp8 t5xxl file, both are needed for the encoders

it would be nice if this was all in one place but yeah, you only need to grab the files once.
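if you'd rather script the downloads than click through the pages, something like this works with huggingface_hub. the gguf and vae filenames come straight from the links above; the clip/t5 filenames and the local_dir layout are my assumptions, so double check them against the repo pages and your UI's models folder before running.

```python
# Hedged sketch: fetch the four Flux files the posts above link to.
# NOTE: the clip_l / t5xxl filenames and the models/* target folders are
# assumptions (a default comfy-style layout); forge puts encoders under
# models/text_encoder instead. The BFL repo is gated, so a HF token may
# be required for the vae.
FILES = [
    ("city96/FLUX.1-dev-gguf", "flux1-dev-Q4_0.gguf", "models/unet"),
    ("black-forest-labs/FLUX.1-dev", "ae.safetensors", "models/vae"),
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors", "models/clip"),
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp8_e4m3fn.safetensors", "models/clip"),
]


def fetch_all(files=FILES):
    # import here so the plan above can be inspected without the package installed
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    for repo_id, filename, local_dir in files:
        hf_hub_download(repo_id=repo_id, filename=filename, local_dir=local_dir)


if __name__ == "__main__":
    fetch_all()
```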
>>
>>101992944
the quad sli 1080 idea is based, but unfortunately Comfy doesn't have multi-gpu support for inference (aside from offloading clip and vae to another gpu's vram)

maybe another client does
>>
>>101993638
Thanks again. And where do those go? Theres no clip folder in the models folder
>>
>>101993674
clip l and t5 in text_encoder, vae in vae, model in stable-diffusion I think
>>
>>101992695
Based. I love that song.
>>
>>101993819
i like this one
>>
>>101993763
Thanks so so so much again anon.
>>
>>101993846
np, yeah there are a billion variations of the same models and it's initially confusing what you need for the various guis.
>>
>>101993861
Thanks again. It's very confusing and I'm a newbie to flux. I'm very grateful and I hope you have a good day.
>>
>>101993914
you too, have fun genning
>>
>>101993673
It's my reluctance to throw away/sell old hardware that spawned the idea. That 1080ti has been so good to me for many years, I'd feel bad if it didn't get a new life doing something cool. And using it in a home media centre (my go-to for old rigs) seems like it would be a bit of an insult to the greatest GPU ever made.
I might do some digging around to look deeper into the quad SLI 1080 for AI idea, but I'm not savvy enough to figure out the issues that may arise.
I mean, the enterprise folk are running multi-GPU stuff all the time (admittedly not for genning pictures but still), so there must be a way.


