/g/ - Technology


Thread archived.
You cannot reply anymore.




File: icegun.png (1.4 MB, 1024x1024)
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101891791

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
>no collage
damn its really over isnt it
>>
File: ifx20.png (1.18 MB, 1024x1024)
>>
File: file.png (986 KB, 1344x800)
>>
>>101896290
makes me want to go back to a time before 4chan existed
>>
File: file.png (1.45 MB, 1344x800)
>>
File: ComfyUI_02207_.png (1.06 MB, 1344x768)
>>101896347
>pedo is now pretending that he's another anon to get people to stop

hahaha, you're so transparent
fucking idiot. get off this board
>>
>the day /ldg/ died
>>
File: 1712900101129336.png (2.66 MB, 1024x1024)
>>
>>101896411
kek, so i was right

again, just so utterly transparent in your feeble attempts
>>
>>101896360
Nice
>>
What caused the melty to get to this point? The pastebin? I can't fucking believe it. Almost 2 years lurking here, and one little phrase after flux's release started all of this. It's amazing how much one single seed can net you.
>>
>>101896466
im gonna let you in on a secret anon

i ain't even a janny. i just like to upset you.

just stop being a pedo and we'll be fine
>>
File: 1713401277945146.png (2.55 MB, 1024x1024)
>>
>>101896452
Anon simply posts cunny in all ai related threads all over 4chan. It's not even related to ldg. If you lurk in lmg or degen threads, he posts there all day too
>>
Pedo posters are all confirmed troons btw
>>
>>101896360
Funny it thinks the NES is the SNES.
>>
>>101896267
oops I'm dumb, anon, if you do this method you need to add
>accelerate launch "flux_train_network.py"
to the start
I forgot the "flux_train_network.py" part
I am still troubleshooting, but I think it's possible that the ="" part is not used anymore too...
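For anyone following along, the assembled command would look roughly like this. Only the `accelerate launch "flux_train_network.py"` prefix is the actual point; every path and flag below is an illustrative placeholder, not a known-good config:

```shell
# sketch only: the fix is prefixing with accelerate launch + the script name;
# all paths/flags here are placeholders, swap in your own setup
accelerate launch "flux_train_network.py" \
  --pretrained_model_name_or_path "/path/to/flux1-dev.safetensors" \
  --network_module networks.lora_flux \
  --output_dir ./outputs
```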
>>
>>101896498
>reddit spacing
Sounds like a prime janny material
>>
>>101896533
>i thought people who talked like this weren't real and all the neckbeard "while you were partying" memes were exaggerated wtf ew

anon what part of "i just like to upset you" don't you understand. it's all a game

anyway glad to see you stop posting bestiality and pedoshit now. welcome back to society
>>
>>101896551
where do you think "reddit spacing" came from? I'll give you a hint - it's not reddit.

4chan literally invented the enter key, before then everyone just spammed space until they reached a newline.

how new are you?
>>
>>101896557
It's a game where you are pissing and shitting your pants like a newfag tourist
>>
>>101896600
Yet you're the one who stopped posting your kids and bestiality

So it's a game I won.

You can try and insult my integration into "4chan culture", but I've been here far too long for that to really hurt me.
>>
File: lolle (1).jpg (252 KB, 858x721)
>>
>>101896598
>>101896615
kek deleted at the same time
>>
>>101896626
>stopped
>he still thinks he is in charge
Oh, sweet summer child.
>>
If there's anything you can learn from today, /ldg/, it's that ignoring the pedoposter doesn't work.

you just need to bully him into submission and then he'll start embarrassing himself by pretending to be other anons and stop posting

pretty fucking funny thread imo
>>
>>101896667
>reddit memes

Weren't you calling other people redditors 5 minutes ago? Please lurk more.
>>
Read the rentry and ignore him, he has 20+ hours of this
>>
>>101896667
>in charge
>4chan
fucking kek my sides
>>
he's literally writing paragraphs and feeling smug in his dead thread
>>
>>101896698
you guys didn't do a good enough job of bullying him off this board and just let him fester and shit up ldg threads. that's not my problem.

like i said, >>101896676 works
>>
>>101896734
How do we know you aren't that poster, what if this is your plan to get more attention, if everyone bullies you then you feel better or some weird shit.
>>
>>101896717
Sorry, I'm just doing a victory lap. Hopefully people start posting gens again now that the pedo has gotten so mad he stopped posting pedo shit and bestiality.

Also if this is you pedoposter, then get fucked. Now you're just whining over my victory. Buy more IPs.
>>
>>101896758
it is him, don't bother, he's falling apart
>>
>>101896366
cool gen
>>
Lol, janny is that mad.
>>
>The pedo schizo is still at it
Thank god for the other stealth general
>>
File: file.jpg (136 KB, 391x1300)
>>101896598
>Starts breaking down once he realizes he was being manipulated all along.
>>
>>101896543
I'm so fucking confused
from what I can tell the bmaltais GUI makes a .toml config file that adds commands that for whatever reason the flux lora training script can't parse, and so it just infinitely errors. I have no idea why it can't parse these commands, perhaps the format is slightly different in the .toml? I've got other stuff to do for now, but I'll test some stuff later tonight to see if I can get it working still. if anyone knows what the fuck the lora config file -should- look like to successfully run with kohya flux, please spare me and post an example because this is retarded
>>
File: 1720029726316069.jpg (69 KB, 675x699)
>>101896867
>bullying works
>>
It's wild trying to make something with XL after playing with Flux, the prompt adherence isn't even remotely comparable.
Feels like going from the latest fine tuned XL models to some random 1.5 model.
>>
File: 1703598491805865.png (2.65 MB, 1024x1024)
>>101896950
Some day people will go back to these models out of pure nostalgia.
>>
>>101896239
>add d*bo to the op
>thread now filtered
i was wondering where we'd gone kek
>>
>>101896516
maybe he's working for the seething artists
>>
File: 1709306800784933.png (1.29 MB, 1024x1024)
>>
>>101897000
Inside job?
>>
File: ComfyUI_00153_.png (1.29 MB, 1024x1024)
>>101896975
I love how Flux can into cars
>>
>>101897063
the corpos are too big to take down, so you tarnish the people using the services instead.
>>
I want Flux+Animagine uguu
>>
Cleaned out
>>
Using flux and all I can generate is women with big ol giant bolted on tits. Anyone know how to control breast size?

Prompt here:
https://pastebin.com/8F6wmvB2

From what i understand you want to do more explainy prompts rather than tag based right? Also distilled models can't have negative prompts? Lame af.
>>
>>101897219
flux only makes small breasts unless you schizo prompt for non binary women (check earlier threads for the prompt for this)
>>
How can I use flux to generate multiple images of a character it made? The face looks different each time. I just want different angles/poses.
>>
File: 00039-3831777697.png (1.68 MB, 1160x1496)
>>101897219
>>101897240
Am retarded forgot the image.

I guess i should have said more like implant looking or push-up bra.
>>
>>101897082
Agreed
The model's general ability to just "understand" is so good
>>
>>101897245
Try character sheet or expression sheet.
>>
>>101897274
Oh, I see. Then just generate multiple views of the same character in one go? How many views would I need to train it to be able to do the character in the future? Hundreds I'd imagine, right?
>>
File: Untitled-1.png (32 KB, 962x400)
>>101896893
I doubt it will be of much help to you anon, here's my modified toml with added arguments https://files.catbox.moe/b9ntxg.toml
My launch command is
x:\AI\kohya_ss\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no --dynamo_mode default --mixed_precision fp16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 x:/AI/kohya_ss/kohya_ss/sd-scripts/flux_train_network.py --config_file x:/AI/kohya_ss/kohya_ss/outputs/config_lora-20240815-021931.toml

It does something, fills 9.4 out of 12GB of VRAM, then starts filling RAM up to about 60% (I have 64GB), then I get this error, with both t5xxl_fp8_e4m3fn and t5xxl_fp16.
>>
>>101897303
I trained models on 1.5 with just 20 images and made a consistent character
>>
I want boob slider and every part of body's slider
and unique tags of 10000 different poses
fuck controlnet
>>
>>101897321
thanks anon! in exchange for your config/toml I gift you this for your error:
https://github.com/kohya-ss/sd-scripts/issues/1453

user posted the solution if you want to edit it in manually, however kohya hasn't implemented it yet
>>
File: x.jpg (215 KB, 1024x1024)
>>
>>101897245
Are you brand new to this?
>>
>>101897507
Oh this is neat. Is it flux?
>>
File: Capture.jpg (48 KB, 1312x345)
>>101896418
How do we hide replies to (Dead) posts on 4chanX?
>>
>>101897303
Flux LoRAs can train on 25 images in about 1500 steps from what I've seen so far.
>>
>>101897507
cute
>>
Hibernation mode
>>
>>101897584
>installing monitoring software to shitpost on a uzbeki basket weaving forum
>>
>>101897810
? I genuinely don't think I understood what you just said.
>>
>>101897584
recursive hide in options
>>
File: 00354-00109-4143080673.png (1.72 MB, 1272x1704)
These models seem smart but they've only ever learned to arrange patterns of pixels in 2D. Rendering an object from different angles or rearranging the joints of a human would require them to think in 3D and understand human anatomy, which they can not.
If you want humanoid#23523 from different angles, you have to teach the AI how humanoid#23523 is supposed to look from different angles.
>>
File: Capture.jpg (182 KB, 2520x687)
>>101897859
No I have this activated it's still showing the replies to (Dead) post
>>
File: 00540-01580-1017764070.png (2.25 MB, 1296x1664)
>>101897862
>>101897245
>>
>>101896979
considering how dead the thread has become, i don't think you're the only one
>>
>>101897876
ah, must be broken then. good luck, there hasnt been an update to 4chan-x in what, over a year now? maybe that other one 4chanXT works
>>
>>101897890
unfortunate, but goes to show how much of a nuisance he is kek
>>
>>101897903
there's really no point feeding him more attention by putting him in the op.
>>
>>101897914
newfag
>>
>>101897891
I just tried 4chanXT and it also doesn't work, oh well, thanks for your help though anon.
>>
File: fs_0066.jpg (76 KB, 768x768)
>>
File: download (92).jpg (248 KB, 1024x1024)
>>
>>101897966
last time I lurked like that I got arrested, nice try officer
>>
i don't get it
so this "debo" = the pedotroon that's been shitting up the recent threads? and is he german or californian? did he even mention that?
gonna need a qrd here
>>
>>101897507
Prompt?
>>
>>101897979
Please don't do that ever again, do you know how it feels to think someone is following you all the time?

Wait for VR simulated worlds to be a thing and stalk people there instead.
>>
>>101897988
Is he the toddler guy from /b? If so he's the reason I never go there.

Also there was some weird guy that constantly posted some weird creepy blue doll girls.
>>
File: 00009-642114346.png (2.58 MB, 1280x1920)
>>101897988
>so this "debo" = the pedotroon that's been shitting up the recent threads?
That's what I suspect and if you know his motives, it would make sense why he would do so. He's Californian.
>>
>>101897862
>>101897886
nice img desu
>>
File: 2024-08-15_00034_.png (2.08 MB, 1024x1024)
>>
>>101897988
He goes on hour long posting bouts and debo is active immediately after he gets banned enough
>>
>>101898139
What does one have to do to get range banned these days?
>>
>>101898153
No idea desu but he's been at this for weeks now
>>
File: 00035-3235286566.png (1.03 MB, 1024x1024)
>>
File: file.png (2.35 MB, 1024x1024)
>>
File: 00053-1057082095.png (1.07 MB, 1024x1024)
>>
File: 00059-2703047023.png (1.43 MB, 1024x1024)
>>
>>101898121
noice
>>
File: 00060-56130419.png (1.46 MB, 1024x1024)
>>
>>101898304
Every clothing model on clothes websites looks like this now.
>>
>>101898318
the ai model understands
>>
File: 00062-1427647445.png (1.36 MB, 1024x1024)
>>
>>101898318
Instead of hiring a model, gen a model wearing something, then just make the clothes from the image and sell that.
>>
why do these models always learn the glossy slop look as their default style?
>>
File: 00063-3242048947.png (1.4 MB, 1024x1024)
>>
>>101898345
shitty DPO
>>
File: 00068-2821997320.png (1.5 MB, 1024x1024)
>>
>>101898361
nice colors
>>
File: 00075-1701154594.png (1.05 MB, 1024x1024)
>>
>>101897886
Ultrawoman radically distorted American boys in the 1960s
>>
File: 00077-1888955011.png (1.45 MB, 1024x1024)
>>
>>101898400
Based

Burn motherfucker burn!
>>
File: 00079-3394792530.png (1.37 MB, 1024x1024)
>>
>>101898304
>no butt chins
how did he do it
>>
>>101898502
made them black
>>
>nf4 with loras works in forge but not comfy
back to forge then I guess
>>
File: ComfyUI_00062_.png (1.39 MB, 1024x1440)
It's only been a few days and the loras for Flux are brilliant
>>
>>101897862
>If you want humanoid#23523 from different angles, you have to teach the AI how humanoid#23523 is supposed to look from different angles.
Is anyone doing this? Training characters/objects with both input and desired outputs from different perspectives and then generalizing that?
>>
>>101898545
Jesus christ anon
>>
>>101898549
czechcasting has a 360° rig
>>
>>101898157
Is that real? Flux can really do this?
>>
>>101898545
whats the point of a flux lora if you're going to do a completely generic 1girl gen? use the prompt comprehension for something, at least
>>
>>101898545
More
>>
>>101898545
Even managed to capture her flat ass
>>
>>101898577
KEK they unironically are providing the perfect dataset for consistent human anatomy.
>>
>>101898583
even the "default" poses of flux outclass xl
>>
>>101898583
this
at least combine 1girl poses with something interesting, like at a weird place, or in a non-standard outfit

but nah, it's just 1girl in white void
>>
>>101898595
female only tho
>>
>>
>>
>>101898648
very, very nice
good feel to this, wallpaper material right here
>>
>>
>>101898674
:D
>>
File: 00081-3063437162.png (1.44 MB, 1024x1024)
>>
File: Untitled-2.png (111 KB, 1903x955)
>>101897354
I made it werk anoooon
But I doubt it's working as intended as I had to remove --split_mode, it was unrecognized.
40 hours is a bit too much.
Here's the command, without the toml file:
accelerate launch  --mixed_precision bf16 --num_cpu_threads_per_process 1 "x:\AI\kohya_ss\kohya_ss\sd-scripts\flux_train_network.py" --pretrained_model_name_or_path "X:/AI/ComfyUI_windows_portable/ComfyUI/models/unet/flux1-dev-fp8.safetensors" --clip_l "X:/AI/ComfyUI_windows_portable/ComfyUI/models/clip/clip_l.safetensors" --t5xxl "X:/AI/ComfyUI_windows_portable/ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors" --ae "X:/AI/ComfyUI_windows_portable/ComfyUI/models/vae/ae.sft" --cache_latents_to_disk --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --max_train_epochs 3 --seed 42 --gradient_checkpointing --mixed_precision bf16 --save_precision bf16 --network_module networks.lora_flux --network_dim 1 --network_args "train_blocks=single" --optimizer_type adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" --learning_rate 1e-4 --network_train_unet_only --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk --fp8_base --max_train_epochs 4 --save_every_n_epochs 1 --output_dir "x:/AI/kohya_ss/kohya_ss/outputs" --output_name flux-lora-name --timestep_sampling sigmoid --model_prediction_type raw --guidance_scale 1.0 --loss_type l2 --train_data_dir "X:/AI/kohya_ss/flux tags" --resolution "512,512"

Not gonna continue training as I'll need VRAM for work in Photoshop.
>>
>>
I wonder how SAI feels about all of Twitter now using Flux to generate images on Elon's dime.
>>
>>
File: 00085-2461671252.png (997 KB, 1024x1024)
>>
>>
>>101898436
One of these please
>>
File: 00087-3764503525.png (1.3 MB, 1024x1024)
>>
>>101898581
I guess those pictures were cherry picked, but you can quickly get decent gymnastic poses from Flux yeah
https://files.catbox.moe/nn223j.png
>A man participing in a gymnastics competition, he's doing a pose where his hands are on the floor and his feet are up in the sky
>>
>>101898545
Any that can help do a wider variety of body types?
>>
>>101898712
It's all okay, bro. Lykon said Flux was no threat to SD3, because it requires too much vram and it cannot be trained, bro. The community will rally behind SD3, bro. They have the name recognition. It will be fine, bro.
>>
>>101898712
Saar please do the needful and download sd3.1 and enjoy the latest dreamshaper merge
>>
File: 00095-146572191.png (1.39 MB, 1024x1024)
>>
ipadapter for flux?
>>
File: ComfyUI_00037_.png (2.1 MB, 1024x1440)
>>101898583
>>101898612
Relax, it was just to test the likeness of the lora with a simple gen. On further testing it's not too great with distant shots
>>101898760
People were experimenting with schizo non-binary prompts and getting success with more body shapes last night
>>
>>101898794
damn son
>>
have there been any good finetunes for flux yet? base flux gens look so fucking shit for anything besides memes, it's absurd.

>>101898794
this one looks quite nice, good gen.
>>
>>101898840
I think people are just fucking around right now. Good news is, not only does flux seems to be trainable, it's extremely responsive to training.
>>
>>101898712
It's insane how much more popular Flux got in less than 2 weeks than what SAI achieved in 2 years
>>
>>101897082
Can you even run Flux on a RTX 3060 with 12GB of VRAM? Assume I have lots more of normal ram.
>>
File: temp_xldey.png (1.07 MB, 1440x1120)
>>
>>101897862
>>101898549
Unironically, why isn't img2img training a thing?
>>
File: 1719925477556968.png (1.28 MB, 1024x1216)
nope, still generating cool knights and skulls
>>
>>101898861
All they had to do was release a good product and let their team cook.
I don't know why they refused to do the very clear thing they needed to do to succeed.
>>
File: 00097-4166286742.png (1.32 MB, 1024x1024)
do the loras work with forge? when i try it says loras for nf4/fp4 models are under construction
>>
>>101898767
kek
gotta love how people immediately doomed that it was 24GB and within what, a couple hours, that was cut down in half
>>
File: ComfyUI_31671_.png (1.31 MB, 1024x1024)
>>101898699
Btw I just copied and slightly modified the first command from https://github.com/kohya-ss/sd-scripts/tree/sd3
Some arguments could probably be changed to make it faster. Anyway, a 1500 steps flux lora can be trained in 25 hours on 3060 12GB which is kinda okayish.
>>101898872
You can even train loras.
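Quick napkin math on those numbers, taking the 25 h / 1500 steps figure above at face value:

```python
# back-of-the-envelope from the post: 1500 steps in ~25 hours on a 3060 12GB
steps = 1500
hours = 25
sec_per_step = hours * 3600 / steps
print(sec_per_step)  # prints 60.0 — about a minute per step at that pace
```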
>>
>>101898911
>when i try it says loras for nf4/fp4 models are under construction
I think that clear enough no? It's written it's under construction so it's not working yet
>>
File: ai pepe laughing.jpg (177 KB, 1024x1024)
>>101898925
>in 25 hours
rip my pc
>>
>>101898896
>Release dogshit censored handicapped retard model
>Make the license so restrictive no one can fine tune it or do anything with it
Bravo SAI
>>
>>101898896
Because they were laundering money and scamming investors
>>
>>
>>101898935
i mean.. are there any other models or whatever that you can do it with
>>
>>101898945
To be fair, the dev license is as bad as the SD3 one, yet there's a shit ton of dev loras on civitai now. When a model is really good, people won't care that much about the license
>>
>>101898861
i feel like SAI achieved more with 1.4/1.5 and then for some crazy reason they sabotaged their own models to the point of destroying the company. finally flux has arrived to move things forward from those days.
also conspiracy time, is it possible that black forest is a side venture from a big AI firm (maybe xai) so they can release better imagegen models with limited liability?
>>
File: 00104-1785701779.png (1.57 MB, 1024x1024)
>>
>>101898794
>experimenting
Oh? Anything that would help with more voluptuous manga styles?
>>
>>101898964
1.5 was Runway, not SAI
>>
File: 00105-1534159715.png (1.51 MB, 1024x1024)
>>
>>101898964
>i feel like SAI achieved more with 1.4/1.5 and then for some crazy reason they sabotaged their own models to the point of destroying the company.
1.5 was supposed to be cucked, Runway released it anyway and Emad was fuming and did everything in his power to make them remove their model from huggingface. SAI was always ultra cucked, they were just lucky Runway released the uncensored version and as a consequence made them popular in the first place. Emad is such a retard he didn't learn any lesson from this
>Hmm, everyone seems to love SD1.5 even though it's way more uncensored than what we wanted... OUR NEXT MODEL WILL BE MORE CENSORED THAT'S MY CONCLUSION
>>
>>101898984
whoa, pretty cool
>>
>>101898960
Didn't Civitai take down all the SD3 loras after they got their lawyers to go over the license? I wonder if the same will happen with Flux then
>>
>>101898996
so why did emad leave the company before the exceptionally shit SD3 came out and they still released it anyway. change of leadership didnt make their models any less shit post 1.5.
>>
>>
File: ComfyUI_13364_.png (988 KB, 1024x1024)
>flux1-dev-Q4_0.gguf
>>
>>101899013
It won't happen, that was a hypocritical move (but a move I liked nonenthless). To me, civitai just wanted to give a message to SAI:
>We have the power, and your SD3 model fucking sucks, at least make your license good so that this piece of garbage stays on our website"
>>
>>101899021
They fired Emad because the investors wanted someone to take accountability for losing so much money, and the one to blame was obviously Emad anyway, but he got replaced by someone just like him, and the investors are too retarded to understand it will lead them down the same path.
>>
>>101899031
Kek. But unironically, Llama.cpp is going to support image-outputting models one day, because multimodal is the future. A shame about the current state of things, but it's early.
>>
>>101899021
He didn't leave, he got booted out. SAI's finances are disastrous, this man doesn't know what to do with the investors' money
>>
>>101898947
I've kind of leaned into the schizo theory that SAI is a money laundering company that accidentally released a good model.
>>
>>101899047
>one day
Thankfully gguf is just a storage format kek, and dequantizing the legacy quants in pytorch is faster than waiting for the model to reload.
>>
>>101899052
>that accidentally released a good model.
It was indeed an accident, they didn't expect Runway to release the uncensored model, and they didn't expect the NAI leak. Those alone made their product way better than what was anticipated
>>
>>101898925
Can you please provide the full script so even a brainlet like me can follow?
>>
>>101899047
It supports it already:
https://github.com/leejet/stable-diffusion.cpp
>>
>>101899068
wait what the fuck? we can do GGUF quants on flux? that's insane! Fuck nf4 then, why not for something like Q5_K_M
>>
>>101899068
wtf
>>
>>101899087
Stable diffusion implementation is bad because the model is too small/CLIP is broken, but hopefully georgi pays attention to it now and not some third party devs.
>>
>>101899052
>>101899048
>>101899043
thanks, i didnt know that. i remembered SD 1.4 and 1.5 coming out with emad in charge but maybe im fucking getting it wrong.
i prefer to remember SAI as a company that actually did something great regardless of emad who does seem to be a bit of a tard, hopefully more guys like black forest (does anyone know who the fuck these guys are?) take the baton and run forward with it from here
>>
>>101899052
When you realize Stability AI is british suddenly everything makes sense. It's part of their culture to be like this. They are turbo prudes. They genuinely think it's their duty to release a censored product.
>>
>>101899068
Holy shit? It works with a GPU right? Is Q8_0 better than fp8? I should try this shit! Where do you download flux ggufS?
>>
>>101899087
>>101899068
Huh. I thought the only things that supported GGUF were Llama.cpp wrappers and vLLM. Good to see someone's doing this. Should make future multimodal integration in Llama.cpp more straightforward as well.
>>
>>101899107
>It's part of their culture to be like this. They are turbo prudes.
It's true, that's why Monty Python got in a lot of trouble when they made The Life of Brian, those men had serious balls to make such a film in the UK
>>
>>101899087
>read it as lejeet
>>
>>101899021
>so why did emad leave the company
He was chased out. There were attack articles in places like Forbes to help force him away. They worked.
>>
>>101898911
realism lora works with nf4 for me, dunno about others
>>
>>101899152
i did see those
>>
>>101899068
What the fuck? I have so many questions right now:
- How do you convert this shit to GGUF?
- What node should I install to get the Unet Loader GGUF?
- What's the speed? Can it run on a GPU?
- Do you feel Q4_0 is better than nf4?
>>
>>101899106
SD was developed by Runway, but trained with SAI's money, so it belongs to SAI, whose CEO was Emad. Emad didn't want to release the models yet, because you could use them to make porn, but Runway did it anyway. Emad fumed about it and went full "DO YOU HAVE A LOICENSE FOR THAT?!", but he got so much backlash that he then pivoted and pretended he supported it the whole time and was behind it. Also don't forget that time they tried to do a coup on the Stable Diffusion subreddit and official Discord server. SAI has always been garbage.
>>
>>101899106
>does anyone know who the fuck these guys are?
the inventors of latent diffusion and the people who made SAI
Emad was just the money man
they left, the company went off the rails
>>
I can see it in our future anons. Almighty 30b+ imagegen models with extremely efficient quantization making them run on 12gb of vram.
>>
>>
>>101899068
EXCUSE ME?? WE CAN USE GGUF ON FLUX? HOLY FUCK WE'RE SO BACK
>>
>>101899178
>gens shitty plastic 1girls with it
>>
File: Comfy_Flux1_dev_Q4_0_GGUF.png (1.27 MB, 2485x1109)
https://imgsli.com/Mjg3Nzg3

>>101899095
I only have F16 (duh), Q8_0, Q5_0 and Q4_0 working, I'll have to look into C++ stuff for the K quants.
Seems like there's very little loss if you do variable bitrate quants and don't fuck with the in/out/text layers + 1 dim tensors.
>>101899116
>>101899122
Currently quantizing and uploading, will post github/HF link when done. The key names are probably completely non-standard so pretty bad as a base for llama.cpp but I just wanted to get it working on 10GB vram
>>101899158
>What's the speed? Can it run on a GPU?
Yeah, runs on GPU. Actually compute bound on my 3080 instead of being bottlenecked by shuffling to/from memory. Down to under 1 min per image instead of the 3 I was getting.
>Do you feel Q4_0 is better than nf4?
I'd say yes since this is a variable bitrate quant but I also didn't play much with nf4
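The "variable bitrate, don't touch the in/out/text layers or 1-dim tensors" recipe is easy to sketch as a per-tensor decision. The tensor names below are illustrative placeholders, not the actual Flux state-dict keys:

```python
# Sketch of a variable-bitrate quant plan like the one described above:
# tiny/sensitive tensors stay at high precision, the bulk gets quantized.
# Name patterns are hypothetical, not the real Flux key names.
KEEP_HIGH_PRECISION = ("img_in", "txt_in", "final_layer")

def choose_qtype(name: str, ndim: int, default: str = "Q4_0") -> str:
    if ndim == 1:
        return "F32"   # norms/biases: negligible size, keep exact
    if any(p in name for p in KEEP_HIGH_PRECISION):
        return "F16"   # input/output projections: quality-critical
    return default     # big transformer blocks: quantize hard

plan = {
    "img_in.weight": 2,
    "double_blocks.0.img_attn.qkv.weight": 2,
    "double_blocks.0.img_norm.scale": 1,
}
for name, ndim in plan.items():
    print(name, "->", choose_qtype(name, ndim))
```

Since the 1-dim tensors and in/out layers are a tiny fraction of the parameters, keeping them at F16/F32 barely changes file size while avoiding most of the quality loss.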
>>
>>101899194
Yes.
>>
File: 1697305749703542.png (1.47 MB, 1024x1216)
>>101899194
and what's wrong with that
>>
>>101899068
I guess like nf4, gguf models don't work with loras, right?
>>
>>101899194
Good
>>
>>101899206
You guessed it, I'd be avatarfagging if they did.
>>
>>101898699
nice work desu
but 40 hours, fug me... better than nothing ig, hopefully it can be further optimized or there are settings that aren't so wild. I don't mind an overnight run, but jeez!
>>
>>101899198
>I only have F16 (duh), Q8_0, Q5_0 and Q4_0 working, I'll have to look into C++ stuff for the K quants.
>Q8_0
Can ou make a Q8_0 vs fp8 on a imgsli, I wanna know if we can get closer to fp16 with that one
>>
>>101899198
what's up with the clip-vit-large-patch14.bin?
>>101899206
if it becomes the standard way to run things like it is in the LLM community, people should be making loras work just fine
>>
What is GGUF?
>>
>>101899161
>>101899170
that's pretty interesting, i had no idea.
>>
>>101899198
>Currently quantizing and uploading, will post github/HF link when done.
Is there a github so that we can quantize those models by ourselves? And how to install the "Unet Loader (GGUF)" node?
>>
>>101899236
Great Gay Ultimate Furry
>>
>>101899236
>What is GGUF?
It's the standard quant format for LLMs (Large Language Models); unlike plain fp casts, the quants are heavily optimized and can give quality similar to the real deal at low bitrates (a 6-bit GGUF is almost equivalent to fp16, for example)
>>
>>101899236
GGUF is this thing that makes more advanced AIs run on consumer hardware at a better speed. but if you GGUF them too far they start being retarded and dont work properly
heres a quick explainer of the basic idea https://www.theregister.com/2024/07/14/quantization_llm_feature/
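To make the "quantize too far and it gets retarded" tradeoff concrete, here's a rough numpy sketch of what a Q8_0-style quant does: split weights into 32-value blocks, store one float scale per block plus int8 values. Illustrative only, not the actual ggml code:

```python
import numpy as np

BLOCK = 32  # Q8_0-style: quantize weights in blocks of 32 values

def quantize_q8_0(x):
    # one scale per block: the largest magnitude maps to the int8 range
    blocks = x.reshape(-1, BLOCK).astype(np.float32)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.round(blocks / scale).astype(np.int8)
    return q, scale

def dequantize_q8_0(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

weights = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, s = quantize_q8_0(weights)
restored = dequantize_q8_0(q, s)
# rounding error is bounded by half a quantization step per value
print(float(np.abs(weights - restored).max()))
```

Lower-bit quants like Q4_0 are the same idea with fewer bits per value, so each block has coarser steps — that's where the quality loss comes from, and why the sensitive layers are usually kept at higher precision.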
>>
llm bros are so smart i love you llm bros
>>
are GGUFs pronounced guh-uffs or guh-oo-fs like goofs?
>>
File: 1612419376786.jpg (207 KB, 692x1100)
Based ggoofer.
>>
>>101899278
gee guffs
>>
>>101899278
Guh goofs!
>>
>>101899198
What script did you use to quant a fp16 into a GGUF quant?
>>
>Gens suddenly 2x slower
>Can't figure out why
>Realise I left a Civitai tab open that's auto playing like 15 gifs and animated avatars
I hate that shitty fucking site
>>
File: ComfyUI_00225_.png (1.05 MB, 1344x768)
1.05 MB
1.05 MB PNG
While the LLM guys have suffered endlessly, desperately trying to compress and optimize their absurdly oversized models to be usable, we are just now reaching the point we have the same problem and get to leech off of all the progress they've made.
>>
File: temp_xldey.png (1.86 MB, 960x1600)
1.86 MB
1.86 MB PNG
>>
>>101899278
It's 'goo-goo-uufs'
>>
>>101899296
>While the LLM guys have suffered endlessly, desperately trying to compress and optimize their absurdly oversized models to be usable, we are just now reaching the point we have the same problem and get to leech off of all the progress they've made.
BitNet will save us all
>>
>>101899252
>>101899293
The quant scripts will be part of the repo. It uses the llama.cpp version of gguf-py instead of the pip one to quantize, hence why I'm pre-quantizing. The pip version works for inference but is missing the python-native quantization code.
>>
Can t5xxl be easily ggufed or that would require more work?
>>
File: 00103-1743312581.png (1.43 MB, 1024x1024)
1.43 MB
1.43 MB PNG
>>101899296
kek
>>
>>101899306
Lmao.
/lmg/ + /ldg/ memes are going to be real.
>>
what will happen to /lmg/ and /ldg/ when the two hobbies fuse and multimodal llms capable of image gen become the norm?
>>
>>101899313
So you're releasing the github repo that will make it work on flux soon? What about the GGUF node, will it be included in it as well?
>>
>>101899324
we will kiss
>>
>>101899204
the gen-to-gen coherence has to be solved before there can be serious applications. although i did manage to craft a full storyline with pony gens, every panel took hundreds of rolls, controlnet, and copious editing/inpainting. and still the consistency is so-so.
>>
>>101899324
Frankenmerge?
>>
File: poolsclosed.png (352 KB, 1024x1024)
352 KB
352 KB PNG
>>101899253
Kek
>>101899265
>>101899268
Will this affect/improve image quality over FP8?
>>
>>101899319
it will be easier to gguf t5 because it's a text encoder, and gguf was made for LLMs in the first place, so it's a much closer fit than imagegen models are for example
>>
>>101899343
>improve
No
>>
>>101899343
>>101899347
>Will this affect/improve image quality over FP8?
It will, Q8_0 is the same size as fp8 and is more accurate
>>
Please LoRA Anons share your kohya .json training script, particularly if you are training on 16GB or less.
>>
>>101899052
honestly, I don't even think it's money laundering, it's just, how do you ACTUALLY monetize AI in a way that makes sense?

Like, what could SAI realistically do? Set up another paid AI image gen online? Try to scam and upsell some even bigger investor?
Like seriously, I can't even think of anything you could do with generative AI that would make money, other than games/recreation.


And you just know those boomer investors don't even understand what AI is, and they're expecting a 100000000x return on investment because "muh ai"
>>
>>101899068
It has negative prompting?
>>
I already run fp16 but this should make things a bit faster at Q4, plus I might finally have some room to run an LLM at the same time on the same PC. Nice.
>>
File: file.png (112 KB, 1125x675)
112 KB
112 KB PNG
i don't think my system can run flux dev on forge, most of the videos say to just grab the fp8 and drop it on the checkpoints folder, so i did that and whenever i try, i always get this error on the console, any tips?
>>
>>101899319
https://github.com/ggerganov/ggml/pull/12
>>
>>101899366
nf4 has negative prompting, I don't think negative prompting depends on the format/quant of the model, it's something unrelated
>>
>>101899319
make sure you have consent before you gguf a model anon
>>
Comparison between F16 / Q8_0 / FP8. Seems pretty lossless https://imgsli.com/Mjg3Nzkx/0/1

>>101899319
It's supported in llama.cpp and I think they have embedding extraction support, might look into it at some point.
>>101899328
Obviously lol, what other use would it be.
>>
>>101899068
give us your workflow anon, I wanna get the settings and do a comparison between nf4-v2, fp8 and that Q4_0
>>
>>101899353
Yeah, q8_0 from what I understand is the lossless version of all the q quants so that doesn't need a k quant.
>>
>>101899365
in one sense, you're right, but look at it this way: what else is the tech sector working on that has actual potential for humanity? since the iphone came out, what have they done besides make their services increasingly shitty and predatory? other than medical/energy research and basic necessities, i would say AI development is the *only* thing we're working on collectively that is meaningful. thinking about it in terms of immediate monetization is myopic, not unlike Krugman saying the internet was just a fad that wouldn't make money.
>>
>>101899462
>Comparison between F16 / Q8_0 / FP8. Seems pretty lossless https://imgsli.com/Mjg3Nzkx/0/1
looks like Q8_0 is closer to Fp16 than FP8? How much bpw is Q8_0? I know that fp8 is 8bpw
>>
>>101899085
>>101899355
json is for the GUI, right? I only managed to make it work with command line >>101898699
>>
File: 00127-1709853816.png (972 KB, 1024x1024)
972 KB
972 KB PNG
>>
File: ComfyUI_00166_.png (1.72 MB, 1024x1024)
1.72 MB
1.72 MB PNG
>>101899462
>Comparison between F16 / Q8_0 / FP8. Seems pretty lossless https://imgsli.com/Mjg3Nzkx/0/1
cool stuff m8
keep up the good work, I love seeing these comparisons
>>
>>101899483
Yeah, GUI. I am far too brainlet for CLI.
>>
If anyone is curious about relative GGUF quant quality.
*MMLU is a standard benchmark that measures how much intelligence a language model has.
>>
>>101899462
that's so fucking cool, Q8_0 is way closer to fp16 than fp8, we're so fucking back!
>>
The only catch is how fast gguf is on GPU. HF with flash attention, and TensorRT too, will always be faster, so an equivalent quant on HF will always be preferable
>>
>>101899485
Brian Cox as castro?
>>
>>101899520
Though gguf is always great news for vramlets as it means stacking cheaper GPUs to run flux.
>>
File: 00130-671809673.png (783 KB, 1024x1024)
783 KB
783 KB PNG
>>101899522
kek
>>
>>101899462
Damn
>>
>>101899465
Here you go: https://litter.catbox.moe/l80e3j.png
>>101899482
Picrel are the sizes
>>101899520
Well, it's faster than memory offloading. 1.8 s/it for 1024 bs1 on a shitty 3080 with pytorch dequant kernels written at 4am kek.
A lot of smaller weights are in FP16 and dequantizing that is a no-op (instant)
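To put those sizes in context, here's a back-of-envelope VRAM estimate for a ~12B-param transformer like Flux, assuming the usual GGUF block layouts (8.5 bpw for Q8_0, 4.5 bpw for Q4_0):

```python
params = 12e9  # Flux transformer, roughly
bpw = {"fp16": 16, "fp8": 8, "Q8_0": 8.5, "Q4_0": 4.5}
gib = {name: params * b / 8 / 2**30 for name, b in bpw.items()}
for name, g in gib.items():
    print(f"{name}: {g:.1f} GiB")  # fp16: 22.4, fp8: 11.2, Q8_0: 11.9, Q4_0: 6.3
```

Text encoders and VAE come on top of that, so real-world usage is a bit higher.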
>>
>>101899554
>Picrel are the sizes
So it's the exact same size between fp8 and Q8_0? Are you sure about that? I've seen somewhere that Q8_0 is 8.5bpw compared to fp8's 8bpw. I could be wrong though, I'm just going off what I found on the internet
>>
>>101899504
Would be nice if it had fp8
>>
File: GGUF_Sizes.png (6 KB, 361x85)
6 KB
6 KB PNG
>>101899554
>>101899575
Image didn't attach
>>
File: 00098.png (1.27 MB, 1024x1024)
1.27 MB
1.27 MB PNG
is this anime?
>>
>>101899504
Now that we have gguf with exl2 we'll be eating good.
>>
File: Capture.jpg (131 KB, 2246x1399)
131 KB
131 KB JPG
>>101899575
>>101899585
Yes, Q8_0 is 8.5bpw, so it's slightly bigger than fp8 (12GB vs 11.9GB), but I don't care, it gives way more accurate outputs and is way closer to fp16, so we're winning hard on that one
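The 8.5 number falls straight out of the block layout, assuming the standard Q8_0 format of one fp16 scale per 32-weight block:

```python
# per block of 32 weights, Q8_0 stores:
#   32 x int8 values = 32 bytes
#    1 x fp16 scale  =  2 bytes
block_bytes = 32 * 1 + 2      # 34 bytes per block
bpw = block_bytes * 8 / 32    # bits per weight
print(bpw)  # 8.5, vs a flat 8.0 for fp8
```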
>>
>>101899365
>how do you ACTUALLY monetize AI in a way that makes sense?

Right now BFL is generating for Twitter. That seems like a pretty good way to monetize the service.
>>
>>101899600
Anime just means animation so yes.
>>
>>101899612
>Now that we have gguf with exl2 we'll be eating good.
exl2 is non-deterministic, I don't want my image to change even with the same seed, it would be a disaster for comparisons
>>
Any word for LoRAs working with NF4?
>>
File: ComfyUI_01376_.png (1.27 MB, 1024x1024)
1.27 MB
1.27 MB PNG
>>101899600
about as much as this
>>
File: fs_0148.jpg (513 KB, 3072x3072)
513 KB
513 KB JPG
>>
File: 00055.png (917 KB, 1024x768)
917 KB
917 KB PNG
>>101899194
>girls
ew... gross.
>>
File: ComfyUI_00171_.png (1.44 MB, 1024x1024)
1.44 MB
1.44 MB PNG
>>
File: 1696527221808.jpg (1003 KB, 2048x1024)
1003 KB
1003 KB JPG
>flux
hm kinda cute
parser seems great? i dunno how this works but i repeated an old bing prompt and got damn near the same thing, conceptually
i'd like to get a divorce from booru tags someday
>>
>Runpod upload speed is almost as shit as my home connection
That's what I get for picking the cheapest instance kek
>>
File: 00002-3761617876.png (2.43 MB, 1280x1920)
2.43 MB
2.43 MB PNG
>>
>>101899636
Would still be useful in cases where returning to the same seed isn't needed, and the speed gains would be worth it
>>
File: 00002.png (1.1 MB, 1024x1024)
1.1 MB
1.1 MB PNG
>>101899652
>>101899623
So this is anime too?
>>
>>101899683
what prompt do you use to give the character such a big package?
>>
>>101899278
gee gee you eff
>>
File: temp_mzgpt.png (2.26 MB, 1120x1440)
2.26 MB
2.26 MB PNG
>>101899684
insane
>>
>>101899690
>sovl vs. no sovl

Yes it's almost Dalle tier at prompt following (when the prompt is safe, that is), but the average image quality sucks.
>>
>>101899623
but its a still picture?
>>
>>101899636
Exl2 quantizes very poorly too. It uses calibration and overfits the calibration dataset violently. Don't use exl2, I know what I'm talking about. Stick to gguf.
>>
>>101899701
kek!
>>
File: 1498272904926.jpg (102 KB, 580x675)
102 KB
102 KB JPG
Man, we couldn't be more back huh. This summer has been insane for local models, from text gen to image gen.
>>
>>101899198
Did that slide image use the same seed & prompt for the generation? It's not really close enough.
>>
>>101898577
what like it spins around the actress?
>>
File: 1707629159915.jpg (1.01 MB, 2048x1024)
1.01 MB
1.01 MB JPG
>>101899734
yeah
it's cute but needs more mass artist/style training injection
i wish we could afford an mkultra machine and train an operative to move up the ranks of MS and leak their database
like the departed but ai
>>
>>101899750
>It's not really close enough.
Are you kidding? fp8 has a totally different outfit compared to the fp16. Q8_0 only has a different ribbon compared to fp16
>>
>he didnt see miku in various artstyles
>>
>>101899750
It did, I get the same difference with Q4_0 even on 1.5 with only a few valid layers to quantize, think the precision loss on the v (?) keys is messing with it.
>>
>>101899767
One could always use a lora and achieve much better results with Flux, and hopefully finetunes will help with this but yeah default styles are bad.
>>
File: 1711200209919.jpg (879 KB, 2048x1024)
879 KB
879 KB JPG
muscle mommies still pending it seems
>>
File: 00157-576912861.png (1.41 MB, 1024x1024)
1.41 MB
1.41 MB PNG
>>
File: 00003-1695770391.png (2.58 MB, 1280x1920)
2.58 MB
2.58 MB PNG
>>
File: 00003.png (1.94 MB, 1152x1536)
1.94 MB
1.94 MB PNG
>>101899702
what do you mean?
>>
>>101898940
honestly not even that bad
>>
>>101899810
oh my
>>
>>101899788
Yeah I need to get into experimenting with lora training soon. Would be neat if I could create some off the dall-e stuff I've already proompted
>>
>>101899810
would
>>
File: ComfyUI_00178_.png (1.4 MB, 1024x1024)
1.4 MB
1.4 MB PNG
>>101899710
>>
>>101899810
Please stop
>>
File: Capture.jpg (43 KB, 2109x419)
43 KB
43 KB JPG
>>101899198
>I only have F16 (duh), Q8_0, Q5_0 and Q4_0 working, I'll have to look into C++ stuff for the K quants.
Yeah, would be cool to test out Q8_K, I'm sure that one will be virtually equivalent to Fp16
https://github.com/ggerganov/llama.cpp/wiki/Tensor-Encoding-Schemes
>>
wtf is this gay shite

>cp spam all day
>tranny bulge

this general needs to just burn
>>
File: 103043-tmp.png (3.2 MB, 1536x1728)
3.2 MB
3.2 MB PNG
>>
File: 00162-4288512355.png (1.12 MB, 1024x1024)
1.12 MB
1.12 MB PNG
>>101899810
>>
>>101899198
>Currently quantizing and uploading, will post github/HF link when done.
but how will we be able to run those without the GGUF loader node? Is there a way to get this shit somewhere too?
>>
>shite
>>
>>101899838
>this general needs to just burn
That's how the feds want you to act: demoralized. It's a psyop tactic to stop people from trying to advance this field, don't fall for their trap anon
>>
File: 00087.png (2.08 MB, 1152x1536)
2.08 MB
2.08 MB PNG
>>101899848
hell yeah dude
>>
File: 00093-1095656954.png (1.26 MB, 1024x1024)
1.26 MB
1.26 MB PNG
>>101899838
>>
>>101899848
Now that's a Gunt with a double G
GGunt
Granny Gut Cunt
>>
File: 00010.png (1.9 MB, 1536x1152)
1.9 MB
1.9 MB PNG
>>101899838
I have plenty of bmwf if you'd prefer to see those gens, they are from the late 1.5 days though so the quality leaves a bit to be desired.
>>
>>101899701
ZAMN
>>
>>101899884
Why can't stable diffusion do shadows??
>>
>>101899897
>bmwf
let's see them
>>
File: ComfyGGUF.png (1.3 MB, 1024x1024)
1.3 MB
1.3 MB PNG
>>101899068
It's up, quantization code in a sec
https://github.com/city96/ComfyUI-GGUF
https://huggingface.co/city96/FLUX.1-dev-gguf
>>
Which way?
>>
>>101899897
nope I don't wish to see any of your homo shite
>>
city blessing us once again
>>
>>101899925
You're a fucking legend dude, thanks a lot for your work, I'm downloading the Q8_0 right away
>>
are we being raided?
>>
>>101899927
go right miku
>>
>>101899773
The hair is a totally different length.
>>
>>101899940
As usual, every time something groundbreaking arrives in this field, the feds will do anything in their power to demoralize people and nuke the thread
>>
>>101899870
daaaamn
>>
>>101899939
Please report back kek, curious to see how well it'll work.
>>101899925
Almost forgot, you'll also need to update ComfyUI to a recent-ish commit.
>>
>>101899925
>To install the required gguf python package on the standalone ComfyUI, use the following:
>.\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt
in which folder should I do this cmd command though? you need to specify it on your github
>>
>>101899897
let'm rip
>>
schnell is described as the "fastest" model, but it's still 12B, same as dev.
So what is the actual difference between schnell and dev if the model is practically the same?
>>
>>101899975
anon...
>>
>>101899983
optimized for fewer steps duh
>>
>>101899953
depends, the hair is the same compared to comfyUi fp16, and like I said, Q8_0 is more accurate than fp8, that's already a win, and we could go further than that with Q8_K >>101899835
>>
>>101899975
:/
>>
>>101899983
Schnell converges in like 4 steps, though the quality is lower.
>>
>>101899988
>>101900001
so it went through a second round of rectification?
Is that like training it for a second epoch?
>>
File: Capture.jpg (67 KB, 3204x167)
67 KB
67 KB JPG
>>101899985
>>101899993
>>101899975
;_;
>>
>>101899975
>/g/ - Technology
>>
>>101900008
It's distilled from the pro model to cover the same ground it does at 20 steps in 4 steps or something like that.
>>
File: file.png (313 KB, 3840x2088)
313 KB
313 KB PNG
>>101900026
lol just like, install gguf
>>
File: temp_xldey.png (1.66 MB, 960x1600)
1.66 MB
1.66 MB PNG
>>
>>101899870
dem knockers is heavy
>>
File: 00174-3401648671.png (1.1 MB, 1024x1024)
1.1 MB
1.1 MB PNG
>>
>>101899975
>>101900026
Changed the git clone command in the readme because it was one folder off (cloned to outside ComfyUI)
I also don't know if the required update is on stable yet, because I think those standalone installs use release tags to fetch updates now.
>>
>>101899949
>>
>>101900043
all right I got it:
>git clone https://github.com/city96/ComfyUI-GGUF custom_nodes/ComfyUI-GGUF
dunno why you want us to create a "custom_nodes" folder, we already have one; what it did was create a custom_nodes folder inside the existing custom_nodes folder
>>
File: 00165-1302741454.png (1.18 MB, 1024x1024)
1.18 MB
1.18 MB PNG
>>
>>101900073
Because I copied that line from my other shit repo and ctrl+f replaced the name kek
>>
File: 000011111.png (1.71 MB, 1152x1536)
1.71 MB
1.71 MB PNG
>>101899955
>grounbreaking
>feds
>demoralize
time for your meds
>>
>>101900088
You should also say in your readme that both of the cmd commands must be run from the ComfyUI_windows_portable folder imo
>>
>>101900099
Don't you have anything more important to do Sergent Johnson?
>>
Miku's journey through the endless City continues.
>>
>>
>>101899838
>>101899955
>likening tranny images to actual cp
mental illness
>>
>>101900126
is there a specific artist you're referencing for these, cuz now I wanna see more of this shit
>>
>>101900099
this flux?
>>
>>
>>101900140
anon shared a catbox the other day
>>
>>101900105
is something bothering you Mr Richard Smoker?
>>
>>101899925
Isn't it funny that forge made his nf4 code closed-licence so he could have some moat over ComfyUI, and then 2 days later the SOTA quants (GGUF) are just ready for everyone to use
>>
>>101899797
right has better anatomy though
>>
>>101899701
Never understood the appeal of Simpsons nsfw.
>>
File: 00177-213607287.png (1.12 MB, 1024x1024)
1.12 MB
1.12 MB PNG
>>
>>101899925
Would cuda kernels doing the math with the quantized weights directly be significantly faster?
>>
>>101900164
im pretty sure it's supposed to be funny, not sexy...
>>
>>101899846
Nice to see you have finally made the jump into the real thread.
>>
File: 19232.png (1012 KB, 1024x1024)
1012 KB
1012 KB PNG
>>101900164
I don't find any of the lewd ai gens to be legitimately arousing personally, I find it all to be hilarious. The more degenerate the better but obviously that stuff can't be posted here.
>>
>>101900164
I think it's generally not used for anything beyond shock value. Nobody goes out of their way to fap to simpsons stuff; the tools to make it simply exist, so people do.
>>
>>101900139
it's made by the same type of people
>>
File: 1698648763646075.webm (430 KB, 400x640)
430 KB
430 KB WEBM
>>101899194
I for one want better AI so my 1girls can do things other than the most basic actions imaginable
>>
>>101899925
I attempted to just replace the diffusion loader with the bootleg loader into the default FLUX workflow that you get from the comfyui examples and got this error:
!!! Exception during processing!!! module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
Traceback (most recent call last):
File "D:\Desktop\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Desktop\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Desktop\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Desktop\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 54, in load_unet
model = comfy.sd.load_diffusion_model_state_dict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
>>
>>101900200
this reminds me of that one specific scene during the beach landing in saving private ryan.
>>
>>101898925
anon if you're still around, can you check if the lora training works using the gguf models or if its totally incompatible? its not working for me but I couldn't get it working in the first place so not sure what level of retard I am
>>
>>101900174
The dequantization happens on the GPU so it's already as fast as native pytorch can go unless you split it into rows and apply the dequant in parallel.
Actual cuda kernels would definitely be faster except moving between pytorch and whatever you hack together would add a lot more delay.
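The pattern being described, in toy numpy form (the real thing is pytorch tensors on GPU; the helper name is made up): keep only the int8 payload + scales resident, and expand to float right before each matmul.

```python
import numpy as np

def linear_q8_0(x, q, scales):
    # dequantize on the fly: broadcasted multiply, then a normal matmul;
    # the expanded fp32 weight is temporary, only q + scales stay in memory
    w = q.astype(np.float32) * scales.astype(np.float32)
    return x @ w.T

q = np.random.randint(-127, 128, size=(8, 32), dtype=np.int8)  # quantized rows
scales = (np.random.rand(8, 1) * 0.01).astype(np.float16)      # one scale per row
x = np.random.randn(4, 32).astype(np.float32)
y = linear_q8_0(x, q, scales)  # shape (4, 8)
```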
>>101900203
Update your comfy, that function was added 2 days ago
https://github.com/comfyanonymous/ComfyUI/commit/a562c17e8ac52c6a3cb14902af43dee5a6f1adf4
>>
>>101900140
Kind of. The clip prompt for that last one was
>Drawn in pencil by Tsutomu Nihei in an extremely detailed gritty style.
The model doesn't really know Tsutomu Nihei's style. However, in my testing, it's somewhat associated with dark monstrosities, so that's why I use it (pic related didn't even have "monster" or anything in the prompt). The "Drawn in pencil" part of the prompt is doing most of the heavy lifting here, as well as the img2img base image I'm using, which is black and white right now. These are cherry picked, also. Some images come out with a more 3D and not really great/intended art style.
>>
>>101900213
>not sure what level of retard I am
These GGUF files are only for inference and even then they need custom code to work in ComfyUI (or anything for that matter).
There's another way to quantize the model that lets you train on it, I think it's in simpletuner (didn't work for me) and ostris's thing, not sure about sd scripts.
>>
>>101900215
ok yeah it works now
so much faster on Q8_0
no idea what the quality drop is like though since I've never been much of a prooompter
>>
File: 1723472329471.jpg (437 KB, 2048x1024)
437 KB
437 KB JPG
>>
Do loras work with these ggufs?
>>
>>101900248
>Q8_0
I just finished work is there already a new Flux compress? Where do I get it?
>>
>>
File: 00183-752341871.png (1.02 MB, 1024x1024)
1.02 MB
1.02 MB PNG
>>
>>
>>101900230
if you want to make a Nihei LORA there's another artist you can use
https://mangadex.org/title/3938ba22-eb15-480c-9063-f9be37cb8179/grand-metal-organs
>>
File: 00186-2085165758.png (1.08 MB, 1024x1024)
1.08 MB
1.08 MB PNG
>>
>>101900267
Is this Flux on forge?
>>
Interesting, so the model was trained on it yet doesn't do it? More hints of censorship I guess. "Drawn in pencil" is interesting. While testing Flux I found I was able to get a rougher manga feeling, similar to Hunyuan, from "1980s manga style", though I didn't look into it much.
>>
>>101900260
>>101899925
>>
>>101900230
>>101900310
Forgot to quote.
>>
File: 1718538005070740.webm (220 KB, 400x640)
220 KB
220 KB WEBM
>>101900310
>More hints of censorship I guess.
It's blatant
>>
>>101900312
Thanks. What the hell man, community is breaking down the shit out of Flux. It's over for SD
>>
File: fixeditforyou.png (2.45 MB, 1536x1536)
2.45 MB
2.45 MB PNG
>>101900230
I ran your gen through img2img and added
>fully clothed simpsons futa, family guy dinosaur monster truck rally, inflation pregnant ahego"
to the prompt. Let me know if you need help fixing any more of your gens.
>>
>>101900310
badass hunyuan gen
>>
>>101899925
Will the gguf work with LoRAs, controlnet, etc...?
>>
>>101900242
rip, good to know though, thanks anon. hoping some more support for optimized models gets added for lora training, I don't mind a bit of quality loss if I can get training under 12 hours on 12gb
>>
>>101899710
>>101899729
>>101900045
these are fuckin sick would you be willing to share prompts
>>
File: Comparison_fp8_Q8_0.png (3.79 MB, 2974x2897)
3.79 MB
3.79 MB PNG
>>101899925
It's working, here are the results
>>
>>101900366
Q8_0 looks like the winner to me
>>
File: fs_0260.jpg (145 KB, 1280x728)
145 KB
145 KB JPG
>>101900230
Kentaro Miura and Tsutomu Nihei are usually my favorites to add, but yeah flux doesn't understand them as deeply as other models
>>
>>101900331
jesus christ, that bart is a thing of nightmares. that head "hair" texture is just foul, utterly foul

nicely done
>>
File: ComfyUI_HunyuanDiT_00177_.png (1.31 MB, 1024x1024)
1.31 MB
1.31 MB PNG
>>101900333
thanks
>>
>>101900343
Not atm and I can't think of a super easy way to do it without requantizing on the fly but I'll try to think of something.
>>101900366
Could you do FP16 as well in the comparison? The goal is to see which one is closer, Q8_0 should be 1:1 from my test.
>>
>>101900310
>>101900392
Is this /the/ (or one of?) the finetunes?
>>
>>101900395
>Could you do FP16 as well in the comparison?
Ok wait a sec
>>
>>101900139
sigh another clown pretending to be on /ldg/ for an hour

there is a retard posting cp all day
there is also another retard posting dick bulge

where did I say these were related? nowhere? oh hey! looks like you are a faggot retard! couldn't have guessed.
>>
>>101900230
ahhhh, I did feel a bit of BLAME in there
>>
>>101900331
oh wow.. that is art
>>
File: 00187-2996472719.png (1 MB, 1024x1024)
1 MB
1 MB PNG
>>
>>101900398
It's the base model
>>
The night is late but the bread is fresh...
>>101900433
>>101900433
>>101900433
>>
>>101900279
Eh, I don't have a 4090 so I guess it'd take pretty long to train, plus I'm not really too keen on spending time making a dataset. I know people made Nihei loras for SD so I think I'll just let those guys or someone else handle it perhaps.

>>101900310
>1980s manga style
Cool, I'll give that a try later. For now I'm off to bed.

>>101900428
:)

>>101900331
(:
>>
>>101900460
still got like 22 images left in this one
>>
>>101900475
I do have a 4090 but I am also too lazy to make a dataset lel
>>
File: 00194-2077144937.png (1.11 MB, 1024x1024)
1.11 MB
1.11 MB PNG
>>
File: 00192-2166810055.png (1.1 MB, 1024x1024)
1.1 MB
1.1 MB PNG
>>
File: temp_xldey.png (1.67 MB, 960x1600)
1.67 MB
1.67 MB PNG
>>101900354
its not a very good catbox unfortunately
https://files.catbox.moe/bu1fcg.png
>>
File: 00201-264167477.png (1.04 MB, 1024x1024)
1.04 MB
1.04 MB PNG
>>
File: 00206-3880559271.png (1.21 MB, 1024x1024)
1.21 MB
1.21 MB PNG
>>
File: 00207-1282229249.png (1.17 MB, 1024x1024)
1.17 MB
1.17 MB PNG
>>
File: 00211-3379901149.png (984 KB, 1024x1024)
984 KB
984 KB PNG
>>
File: 00214-1002762218.png (1.04 MB, 1024x1024)
1.04 MB
1.04 MB PNG
>>
File: 00215-2077796287.png (1.05 MB, 1024x1024)
1.05 MB
1.05 MB PNG
>>
>>101900751
do we need to fill since we hit the post limit?
>>
File: 00216-1486409932.png (1.06 MB, 1024x1024)
1.06 MB
1.06 MB PNG
>>
File: 00218-1750454770.png (985 KB, 1024x1024)
985 KB
985 KB PNG
>>
File: 00219-355043560.png (993 KB, 1024x1024)
993 KB
993 KB PNG
>>
File: 00221-1931247959.png (1.1 MB, 1024x1024)
1.1 MB
1.1 MB PNG
>>
File: 00222-4178767101.png (1 MB, 1024x1024)
1 MB
1 MB PNG
>>
>>101900759
yes i do
>>
File: 00224-1029258037.png (1.04 MB, 1024x1024)
1.04 MB
1.04 MB PNG
>>
keke
>>
File: 00225-2143756112.png (1.3 MB, 1024x1024)
1.3 MB
1.3 MB PNG
>>
File: 00227-4078534773.png (1.36 MB, 1024x1024)
1.36 MB
1.36 MB PNG
>>
File: 00229-3376893394.png (1.2 MB, 1024x1024)
1.2 MB
1.2 MB PNG
>>
File: 00232-887056933.png (1.25 MB, 1024x1024)
1.25 MB
1.25 MB PNG
>>
File: 00234-375636596.png (1.03 MB, 1024x1024)
1.03 MB
1.03 MB PNG
>>
File: 00235-99764510.png (1.18 MB, 1024x1024)
1.18 MB
1.18 MB PNG
>>
File: 00237-1764620084.png (1.19 MB, 1024x1024)
1.19 MB
1.19 MB PNG
>>
>>101901082
This one is interesting


