/g/ - Technology


Thread archived.
You cannot reply anymore.




File: tmp.jpg (1.23 MB, 3264x3264)
1.23 MB
1.23 MB JPG
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101777029

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Kolors
https://gokaygokay-kolors.hf.space
Nodes: https://github.com/kijai/ComfyUI-KwaiKolorsWrapper

>AuraFlow
https://fal.ai/models/fal-ai/aura-flow
https://huggingface.co/fal/AuraFlows

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
File: delux_flebo_00057_.png (1.32 MB, 1216x832)
1.32 MB
1.32 MB PNG
>mfw
>>
Thank you for the proper bake
I reported the debo spam thread
>>
>101779868
>starting or participating in a flame war
>>
>>101779876
Just use the report function and don't engage. Give him a taste of his own medicine.
>>
Is anon's bigma still alive
>>
File: file.png (58 KB, 892x360)
58 KB
58 KB PNG
here is how you can have your prompts automagically in your file name in ComfyUI
helps me remember prompts i've done and what works and what doesn't
>>
>>101779926
If you want to forego S&R you can put the prompt in a text box and pipe that into both the encoder and prefix input
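For anyone reproducing this without the screenshot: in ComfyUI's API-format JSON, any node input can reference another node's output as `["node_id", slot]`, which is exactly the "one string feeding both the encoder and the prefix" idea. A minimal sketch; the node ids and the string node's class_type here are assumptions (they vary by install), so export your own workflow via "Save (API Format)" to see the real names.

```python
# Hypothetical sketch of the "one string, two consumers" wiring in
# ComfyUI's API-format JSON. Node ids and the string node's class_type
# are placeholders, not verified stock names.
import json

prompt_text = "masterpiece, hatsune miku, watercolor"
workflow = {
    # a string primitive; the exact class_type differs between installs
    "1": {"class_type": "String", "inputs": {"value": prompt_text}},
    # text input taken from node "1", output slot 0
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": ["1", 0], "clip": ["loader", 1]}},
    # the same string reused as the save prefix -> prompt in the filename
    "3": {"class_type": "SaveImage",
          "inputs": {"filename_prefix": ["1", 0],
                     "images": ["sampler", 0]}},
}
payload = json.dumps({"prompt": workflow})
```

POSTing a payload like that to the server's /prompt endpoint queues it, same as hitting Queue Prompt in the UI.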
>>
File: FD_00218_.png (308 KB, 640x480)
308 KB
308 KB PNG
>>
File: Capture.jpg (428 KB, 3308x1636)
428 KB
428 KB JPG
it's fucking impossible to make miku old, even with CFG :(
>>
File: FLUX_00082_.png (898 KB, 896x1152)
898 KB
898 KB PNG
>>
File: ComfyUI_30949_.png (1.03 MB, 1024x1024)
1.03 MB
1.03 MB PNG
>>
File: ComfyUI_01094_.png (910 KB, 1280x824)
910 KB
910 KB PNG
>>
>>101779942
where is a text box other than note? i swear i looked through every node on the list
>>
File: ComfyUI_01095_.png (956 KB, 1280x824)
956 KB
956 KB PNG
>>
File: ComfyUI_00480_.png (1.35 MB, 752x1800)
1.35 MB
1.35 MB PNG
>>101779991
I just wanted to say that I appreciate your yuri post, yuri enjoyer anon
>>
File: ComfyUI_01097_.png (273 KB, 768x512)
273 KB
273 KB PNG
>>
>>101780006
Search the manager.
>>101780038
>Obay
>>
File: file.png (108 KB, 916x484)
108 KB
108 KB PNG
>>101779942
i mean i can use a primitive but it will only let me wire the string to the text encode prompt and not to filename_prefix too
>>
>>101780038
is this the initial res or did you down scale?
>>
File: file.png (89 KB, 826x420)
89 KB
89 KB PNG
>>101780044
like one or the other
>>
File: ComfyUI_01111_.png (312 KB, 512x512)
312 KB
312 KB PNG
>>101780069
Initial res, why would I need 1024x1024 to generate an error message?
>>
File: 1721074120518135.jpg (123 KB, 800x1170)
123 KB
123 KB JPG
>>
>>
File: ComfyUI_30957_.png (1.2 MB, 1024x1024)
1.2 MB
1.2 MB PNG
>>101780031
:3
>>
File: ComfyUI_01611_.png (1.68 MB, 1024x1024)
1.68 MB
1.68 MB PNG
>>
why the fuck did you make it red
>>
File: 1712256313131519.jpg (87 KB, 800x1198)
87 KB
87 KB JPG
>you can make lora and controlnets of flux
Sdcells was wrong again kek
>>
File: ComfyUI_30961_.png (1.16 MB, 1024x1024)
1.16 MB
1.16 MB PNG
>>
>>101780409
in a few days we managed to bring every single SD feature to flux:
>CFG
>Negative Prompt
>Loras
>Finetuning
>Controlnet
feelsgoodman
>>
File: 1723103027.png (1.39 MB, 1024x1024)
1.39 MB
1.39 MB PNG
>>
File: Capture.jpg (121 KB, 2078x964)
121 KB
121 KB JPG
https://www.reddit.com/r/StableDiffusion/comments/1emrprx/comment/lh21sxd/?utm_source=share&utm_medium=web2x&context=3
Oh shit, didn't know about that node, let's try it out
>>
File: 1723103591.png (1.45 MB, 1024x1024)
1.45 MB
1.45 MB PNG
>>
File: FD_00102_.png (1.46 MB, 1024x1024)
1.46 MB
1.46 MB PNG
>>101780501
>>101780516
These are sick.
>>
File: 1723103871.png (1.47 MB, 1024x1024)
1.47 MB
1.47 MB PNG
>>
File: file.png (1.81 MB, 1289x699)
1.81 MB
1.81 MB PNG
Which of these looks better?
>>
File: 2024-08-06_00073_.png (1.22 MB, 1280x1024)
1.22 MB
1.22 MB PNG
>>101779967
>>
>>101780596
both have their good sides, left looks more like a glam shot, right looks more natural
>>
>>101780596
Somewhere in the middle imo
>>
Who gives a shit about whatever you think is important right now this is the most important thing
https://github.com/ataylorm/FluxAIGridComparisons/blob/main/LingeriePrompts/FullGrid.jpg
Flux has an extremely comprehensive understanding of women's lingerie
>>
>>101780596
I feel like the difference is just 10kg kek
>>
File: 1723103985.png (1.28 MB, 1024x1024)
1.28 MB
1.28 MB PNG
>>101780567
thanks
>>
>>101780596
right, has less of the "obvious AI" look
>>
>>101780665
Where is the fuck hole?
>>
File: 2024-08-08_00151_.png (1.53 MB, 720x1680)
1.53 MB
1.53 MB PNG
>>101779967
also anime style if you prefer
>>
>>101780638
>>101780640
>>101780653
>>101780667
Left is default Flux, right is using that new realism Lora.
>>
File: 1717077451651299.png (307 KB, 1838x226)
307 KB
307 KB PNG
>>101780596
They both still have the same butt chin that Flux likes to give all women.
>>
>>101780690
https://huggingface.co/XLabs-AI/flux-RealismLora
>>
>>101780690
>new realism Lora
source?
>>
>>
File: 1723104698.png (1.65 MB, 1024x1024)
1.65 MB
1.65 MB PNG
>>101780669
There's one on her belly.
>>
>>101780713
22MB .. what did they do? flip three tensors?
>>
>>101780720
>source?
-> >>101780713
>>
>>101780736
That one is too small and I doubt it'd stretch
>>
For all you AutoNiggers while you wait.
https://x.com/cocktailpeanut/status/1821219466769777134
>>
>>101780720
>>101780713
Also make sure you update comfy, the update for it was only pushed earlier today.
>>
File: 1723104780.png (1.55 MB, 1024x1024)
1.55 MB
1.55 MB PNG
>>
Flux does THE most realistic screenshots, news bulletins, zoom calls, powerpoint presentations... Did we really need a fucking """realism""" lora?
>>
can you stay in your containment thread please
>>
>>101780768
Yes, for photos
>>
>>101780764
>>101780753
>>101780751
>>101780739
>>101780738
>>101780736
>>101780732
>nogen
I see you debo.
>>
File: FLUX_00096_.png (1.35 MB, 896x1152)
1.35 MB
1.35 MB PNG
>>
>>101780768
>Did we really need a fucking """realism""" lora?
the difference seems huge though, pictures 1 and 3 are with realism lora
https://reddit.com/r/StableDiffusion/comments/1emrprx/feel_the_difference_between_using_flux_with/
>>
>>101780764
>>101780775
>>101780784
>saas trash
rope
>>
>>101780781
Half of the posts you linked have gens attached though?
>>
I'm not that impressed by the fabled prompt adherence of dev. In reality once you've played for it a while you notice that it actually regularly just ignores all kinds of stuff. It's a supremely "overconfident" model.
>>
>>101780795
The fuck are you talking about? This is a lora that you can actually ONLY run locally. Are you retarded?
>>
>>101780797
kys debo apologist
>>
>>101780798
>I'm not that impressed by the fabled prompt adherence of dev. In reality once you've played for it a while you notice that it actually regularly just ignores all kinds of stuff.
use CFG, it improves the prompt adherence even more
https://reddit.com/r/StableDiffusion/comments/1emow5p/finding_the_sweet_spot_between_guidance_and_cfg/
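For context on why a separate node is needed at all: Flux-dev's built-in "guidance" is a distilled conditioning input handled in a single forward pass, while real CFG blends two predictions per step. A minimal sketch of the textbook CFG formula, nothing Flux-specific:

```python
# Standard classifier-free guidance mix (the thing the linked trick
# re-enables on top of Flux's distilled guidance input).
import numpy as np

def cfg_mix(cond: np.ndarray, uncond: np.ndarray, scale: float) -> np.ndarray:
    """Push the conditional prediction further away from the unconditional one."""
    return uncond + scale * (cond - uncond)

cond = np.array([1.0, 2.0])
uncond = np.array([0.0, 0.5])
out = cfg_mix(cond, uncond, 1.5)  # -> [1.5, 2.75]
```

At scale=1 this reduces to the plain conditional prediction, which is why CFG 1 only costs one forward pass per step in practice.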
>>
>>101780768
yes, otherwise all realism gens have the glossy ai slop look
>>
>>101780785
Refrain from cross posting your reddit. None of those images look half as realistic as some of the ones anons posted since its release.
>>
>>101780798
And what are you comparing it to?
>>
It's always fun when the schizo shows up
>>
>>101780824
https://www.youtube.com/watch?v=WrURMW3DEps
>>
>>101780824
This is a thread for discussion of models, not an image gallery. Posts without attached generations are fine and on-topic, and you are a faggot.
>>
>>101780830
>>101780831
Cope nogens
>>
File: 2024-08-08_00160_.png (1.62 MB, 720x1680)
1.62 MB
1.62 MB PNG
>>101780828
ya popcorn time

>>101780713
>>101780753
it doesnt work even after updating comfy, I get
>lora key not loaded: double_blocks.9.processor.proj_lora2.up.weight
>lora key not loaded: double_blocks.9.processor.qkv_lora1.down.weight
>lora key not loaded: double_blocks.9.processor.qkv_lora1.up.weight
>lora key not loaded: double_blocks.9.processor.qkv_lora2.down.weight
>...

can you share your workflow, or know how to fix that?
>>
>>101780895
https://huggingface.co/comfyanonymous/flux_RealismLora_converted_comfyui
>>
File: 1723105886.png (1.55 MB, 1024x1024)
1.55 MB
1.55 MB PNG
>>101780744
Here, stick it in her mouth.
>>
>>101780933
Make melted metal
>>
>>101780895
>>101780933
>saas trash
>>
It's so goddam hard to force a style onto flux:
>A 1700s painting of Hatsune Miku holding an iPhone and wearing sunglasses. She is walking over a giant multicolored glass ball
The X scale is the guidance and the Y scale is the CFG
>>
>>101780948
Yeah, they messed up a bit with characters, they overwrite the styles too much. Most other models aren't that stubborn when trying to draw Miku in other styles.
>>
File: 2024-08-08_00161_.png (1.63 MB, 720x1680)
1.63 MB
1.63 MB PNG
>>101780906
thanks that worked, gave her 6 fingers tho
>>
Has anyone figured out bpw quants for the flux unet? Maybe that could allow 24gb vram chads to run Flux at a higher precision than the gimped fp8 without the need to offload or reload the model for every gen.
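llama.cpp-style "bits per weight" formats boil down to storing weights in a few bits plus stored scales. Nothing below is Flux- or ComfyUI-specific; it only illustrates the round-trip idea with the simplest possible scheme, symmetric per-tensor int8 (8 bpw):

```python
# Concept sketch of bpw quantization: int8 storage + one fp32 scale.
# Real formats (Q5, NF4, ...) use per-block scales and fancier
# codebooks, but the compress/decompress round trip is the same idea.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map fp32 weights onto int8 with one shared scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, s = quantize_int8(w)
# worst-case rounding error is half a quantization step
max_err = float(np.abs(w - dequantize(q, s)).max())
```

The memory win is the point: int8 storage is a quarter of fp32 (an eighth with 4-bit schemes), at the cost of that bounded rounding error per weight.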
>>
File: ComfyUI_00122_.png (675 KB, 768x1152)
675 KB
675 KB PNG
wtf i love flux now
>>
Hatsune miku dressed as a doctor. Her patient is lying in a hospital bed and above his head is a speech bubble with the text "what's the diagnosis?" Above Hatsune Miku's head is a speech bubble with the text "incurable advanced faggotry" and she looks sad
>>
>>101780948
Maybe be more specific about the painting
1700s painting can mean anything, and likely isn't something the VLM would know when tagging the images.
>>
>>101781031
NTA but tricks like using technical terms for the art style like "baroque painting" or "impasto" or "impressionism" etc. don't really work either. If the model really doesn't want to do a paint style for a particular subject, it just won't. That's part of what I meant above when I said it's overconfident. Reminds me of Mistral's language models a bit.
>>
>>101781089
What about the guy who was posting Miku Picasso?
>>
>>101781089
I wouldn't say it's overconfident, it's more like it's a great model when it knows the concept, but it doesn't know a lot of important things unfortunately
>>
Is this guy right?
>>
>>101781140
No, Vincent Van Gogh is a known person
>>
File: tmpw_1a9js8.png (1.06 MB, 1024x1024)
1.06 MB
1.06 MB PNG
>>
>>
>>
File: 2024-08-08_00029_.png (1.07 MB, 1024x1024)
1.07 MB
1.07 MB PNG
>>101781203
>>
File: 1721421870703503.png (954 KB, 1024x1024)
954 KB
954 KB PNG
>>101781012
More difficult than expected. This is the closest I got. Seems to get either the text, or Miku and the patient, but never both.
>>
File: 2024-08-06_00017_.png (2.37 MB, 1280x1024)
2.37 MB
2.37 MB PNG
>>101781089
skill issues
>>
tl;dr is it viable to run comfyUI flux on 1 VM and llama3 70B on another? is there any way to just... use one VM for this?
==================
im trying to run 3 VMs on a 24 core cpu
1 3060 Ti 12GB windows 10 backup image
2 a4000 16GB ubuntu 20 fresh install
1 4060Ti 16GB windows 10 fresh
==================
when running the 4060 or 3060 on ubuntu it slows down 90%
when running the 4060 on windows 1 with the 3060 it doesnt start
when running the a4000 on windows 10 it doesnt start
==================
i just got the 3rd VM running
Why doesnt the 4060 work with anything else?
why do the a4000s only work in ubuntu?
>>
>>101781231
yes if you only had one 24GB card in that mix it might work
>>
File: Capture.jpg (305 KB, 2699x1384)
305 KB
305 KB JPG
>>101781221
Did you try it out at higher CFG?
>>
>>101781175
nice old fantasy feel
what model?
>>
Local inpainting model that doesn't need you to draw masks when?
>>
>>101781244
>one 24GB card
does it not run on 2 gpus?
>>
>>101781140
T5 is case sensitive, CLIP isn't. So, yes.
>>
>>101781244
you are seriously telling me 60GB VRAM is not enough to run comfy UI?
capcha: RP GG
>>
File: 1723096800356720.jpg (229 KB, 1024x1024)
229 KB
229 KB JPG
>>101781012
>>101781221
>>
>>101781258
no it doesnt; with some custom nodes you can offload the text encoder to a second card, but that wont do much since the model itself is just big
>>101781268
see above
>>
File: Capture.jpg (459 KB, 3236x1633)
459 KB
459 KB JPG
>>101781221
That's as close as I could get
>>
>>101781245
No, I haven't really messed with CFG or guidance yet. I'll give that a try. Not used to using Comfy.
>>101781273
>>101781293
Well, I guess skill issue on my part.
>>
>>101781279
so what is the average resolution in this thread? 512?
SD only needs like 24 for 4k
if it requires 24GB for 512 does it need 100GB for 4k? does that mean its impossible to generate hi res images without tiling?
>>
>>101781293
ya boi can't prooompt for shit
"above his head" means nothing. you would have to specify in artistic terms the AI matrix actually understands as to where each bubble goes.
>>
>>101781311
>No, I haven't really messed with CFG or guidance yet. I'll give that a try. Not used to using Comfy.
to use CFG you need a new node, here's the tutorial:
https://reddit.com/r/StableDiffusion/comments/1ekgiw6/heres_a_hack_to_make_flux_better_at_prompt/
>>
File: 1697816976191808.jpg (171 KB, 1024x1024)
171 KB
171 KB JPG
>A small cat standing near a puddle of water on a street. In the water, the reflection shows a big, dangerous tiger.
Made by dalle , flux cant do it .
>>
>>101781342
>street
>>
File: 2024-08-06_00412_.jpg (1.23 MB, 2560x1440)
1.23 MB
1.23 MB JPG
>>101781314
I can do 1536x1536 native on a 4090 (not smart tho cause at 2MP+ artifacts appear on flux, just like with SD) .. with UltimateUpscale the sky is the limit .. 4k? 8k? name it, you can make it
>>
Guys, I think the realism lora makes flux better at nudity
>>
>>101781314
>if it requires 24GB for 512 does it need 100GB for 4k?
the diffusion image size aint that much of a trouble, but the model parks at a gentle 14-20 GB alone in your VRAM depending on fp16/fp8
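Back-of-the-envelope math behind those numbers, assuming the commonly cited ~12B parameter count for the Flux transformer (text encoders, VAE and activations come on top of this):

```python
# Weights-only VRAM arithmetic under the ~12B-parameter assumption.
params = 12e9
gib = 1024 ** 3
fp16_gib = params * 2 / gib  # ~22.4 GiB of weights at 2 bytes each
fp8_gib = params * 1 / gib   # ~11.2 GiB of weights at 1 byte each
```

Which is why fp16 only fits a 24GB card with offloading, while fp8 leaves headroom for the latents and encoders.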
>>
>>101781371
Catbox
>>
File: 2024-08-08_00165_.png (1.58 MB, 720x1680)
1.58 MB
1.58 MB PNG
>>101781371
I don't think so .. also it destroys anything that isnt a human face, pic related with realism lora and..
>>
>>101781373
>14-20 GB
im guessing this is from 512 to 2k?
so a normal, every day 720p controlnet should be no problem on a 16GB card?
Whats the gen time for 512/1k/2k on a 4090? iirc for 1k SD1.5 was about 15s on 4090 and A100 was 3s
>>
>>101781342
And Dalle is a SaaS so who gives a shit?
>>101756726
>>
File: 2024-08-08_00164_.png (1.56 MB, 720x1680)
1.56 MB
1.56 MB PNG
>>101781387
.. without realism lora
>>
File: 2024-08-08_00175_.png (2.62 MB, 1536x1536)
2.62 MB
2.62 MB PNG
>>101781393
>on a 4090
512 less than 10s
1024x1024 ~15-20s
1536x1536 ~ a minute
on FLUX.dev, also the VRAM gets filled to the brim at 23.4GB/24GB used regardless of resolution, no idea why and how, ask comfy-anon

pic related 1536x1536 gen, as you can see it aint great, for hires tiled upscale is a must, which ofc takes considerably longer
>>
>>101781447
>>101781371
>>101781293
why won't my model load? do i need a yaml?
A11111 doesn't need any yaml bs...
>>
>>101781473
no you don't need a yaml, what's the error on your console anon?
>>
>>101781447
gen seems ok, what if you do a fix pass rather than an upscale?
>>
>>101781255
Flux dev
>clip-l prompt: Oil painting by Ivan Kramskoi
>t5 prompt: An impasto painting of a dragon.
guidance 3 cfg 1, no thresholding
>>
File: image.jpg (89 KB, 1024x1024)
89 KB
89 KB JPG
>>101781209
straight to jail

anyways what are anons thoughts on Flux.1 merged? any use cases where it would be better than just dev or schnell? i tried it a bit and i wasnt too impressed, seems like the worst of both worlds
https://huggingface.co/sayakpaul/FLUX.1-merged
>>
File: ComfyUI_00142_.png (1.72 MB, 1024x2048)
1.72 MB
1.72 MB PNG
>>101781012
I'll take it
>>
>>101781505
based Caew Mrcu
>>
>>101781505
When will Flux gen loss?
>>
File: a.jpg (57 KB, 928x560)
57 KB
57 KB JPG
forgot pic
>>101781473
>>
File: ComfyUI_01657_.png (1.3 MB, 1024x1024)
1.3 MB
1.3 MB PNG
>>
File: Screenshot (5420).png (2 KB, 267x39)
2 KB
2 KB PNG
these quant experiments are going great
>>
>>101781484
>>101781533 (me)
>>101781473 (me)
derp
>>
>>101781533
Use this tutorial anon
https://www.youtube.com/watch?v=stOiAuyVnyQ
>>
File: 2024-08-08_00179_.png (2.96 MB, 1536x1536)
2.96 MB
2.96 MB PNG
>>101781487
>what if you do a fix pass rather than an upscale?
ya

>>101781533
wrong node, you need the "Load Diffusion Model" node not "Load Checkpoint"
>>
File: UGboDDb.png (60 KB, 393x326)
60 KB
60 KB PNG
>>101781547
>Load Diffusion Model
what is this? it wasnt mentioned in the half dozen tutorials i watched while eating breakfast
I downloaded flux manually. just like every model i used in SD since 2021
>>
File: 1710774048057409.png (1.41 MB, 1024x1024)
1.41 MB
1.41 MB PNG
Holy shit this is actually impressive
t. 12gb vram
>>
File: FD_00428_.png (1.29 MB, 1024x1024)
1.29 MB
1.29 MB PNG
>>101781523
>>
File: 2024-08-08_00181_.png (2.87 MB, 1536x1536)
2.87 MB
2.87 MB PNG
>>101781393
as addition to >>101781447
tested the fp8 model with t5 loaded on the CPU; VRAM usage maxed out at 16.5/24 GB at 1536x1536 on bloated windows, so if you use headless linux or a minimal window manager you can probably run a 16GB card on fp8 without vram swapping, with the t5 encoder on CPU (makes text encoding a bit slow tho, but bearable)
>>
File: ComfyUI_01660_.png (299 KB, 512x512)
299 KB
299 KB PNG
>>
File: 2024-08-08_00023_.png (1.04 MB, 1024x1024)
1.04 MB
1.04 MB PNG
>>101781560
dude, really don't post this here
>>
>>101781545
lol im not using someone elses installer.
https://civitai.com/models/618997
how do I use this?
>>
>>101781560
There's a few other weirdnesses like (in terms of bodies) it'll generate whatever the fuck age your image is or mid/late teens and nothing in between.
Also loli outside /b/ etc, at least use catbox or something jeez anon
>>
>>101781599
>Also loli outside /b/ etc
anon, it's literal CP, its not just loli
>>
/ldbt/ was a mistake
>>
File: ComfyUI_01662_.png (1.13 MB, 1024x1024)
1.13 MB
1.13 MB PNG
Flux doesn't seem to know how to make real low poly characters like the ones on n64 and ps1
>>
>>101781627
nah thats hatsune miku for you again, she overrides style hardcore.. scroll back a few threads and you will see low poly characters
>>
File: ComfyUI_00148_.png (1.82 MB, 1024x2048)
1.82 MB
1.82 MB PNG
>>101781523
>>
>>101781627
That is low poly though. But, I get what you mean. I wonder if most of the struggle is us not knowing how to talk to it. Who knows how the VLM thinks.
Try take a real N64 image into copilot or something and see how it describes it
>>
>>101781603
are you American?
if that was pizza, then people would get constantly arrested for taking photos on beaches
>>
>>101781655
>t. retard
>>
File: a.jpg (31 KB, 451x261)
31 KB
31 KB JPG
what is the text encoder for SD i need to copy to comfyUI to make the AI work?
>"Load Diffusion Model"
where can I find that? >>101781557
>>
>>101781599
>it'll generate whatever the fuck age your image is or mid/late teens and nothing in between

>>101781603
>literal CP
you should be slapped for being this american
>>101781680
*slaps you in the testicles*
>>
>>101781709
>you should be slapped for being this american
I'm not american, and it is illegal in most countries anyway anon, doesn't matter if it's a "fake" child.
>>
>>101781653
kek that's fun to do not gonna lie
>>
File: 2024-08-08_00006_.png (1.29 MB, 720x1280)
1.29 MB
1.29 MB PNG
>>101781695
under advanced -> loaders
or just use the default workflow:
>https://comfyanonymous.github.io/ComfyUI_examples/flux/
>>
>>101781695
>Load Diffusion Model"
>>101781547
ANYBODY?!
/////////////
I apologize for any confusion earlier. Let's go through the steps clearly:
1. Load Diffusion Model in ComfyUI
In ComfyUI, the "Load Diffusion Model" functionality is typically handled by nodes that load models for use in workflows. The specific nodes you might be looking for are "DiffusersLoader" or "Load Checkpoint." These nodes are used to load diffusion models into your workflow.
2. Add a Diffusers Loader
Here's how you can add a Diffusers Loader node in ComfyUI:
Step-by-Step Instructions

Open ComfyUI:
Launch ComfyUI on your computer. This usually opens in a web browser.
Access the Node Library:
Right-click on the canvas area (the main workspace) where you build your workflows. This should bring up a context menu.
Search for the DiffusersLoader Node:
In the context menu, look for an option like "Add Node" or "Search Node."
Type "DiffusersLoader" into the search bar that appears.
Add the Node to Your Workflow:
Once you find the "DiffusersLoader" node in the search results, click on it or drag it onto the canvas to add it to your workflow.
Configure the Node:
After adding the node, you'll need to configure it. Click on the node to open its settings.
Specify the model_path where your diffusion model is located. This is the path to the model you want to load.
Connect the Node:
Connect the DiffusersLoader node to other nodes in your workflow as needed to complete your setup.

If you don't see the DiffusersLoader node, make sure it is installed correctly. You might need to restart ComfyUI after installing new nodes to refresh the node library. If you installed it manually, ensure that it is placed in the correct directory for custom nodes.
//////////////
>>
File: 2024-08-08_00019_.png (1.09 MB, 720x1280)
1.09 MB
1.09 MB PNG
>>101781770
>>
>>101781770
It really is, anon. Why are you being so retarded and stubborn? No country differentiates "real" and AI-generated CP in law, at least not yet.
>>
the fucker is ban evading to
>>
>>101781786
He's deleting the posts himself because he does know that it's illegal in the US and not allowed on 4chan. It's teebs from aicg, see https://rentry.org/sweetbots to know what he does
>>
>>101781653
>That is low poly though.
It's not
That hair alone has enough polys to overwhelm a PS1's hardware
>>
>>101780647
kek
>>
>>101781803
at least it didn't render a gorrila for the niggeress kekekekek
>>
Is it possible to run flux dev on 12gb vram or i should stick to schnell?
>>
>>101781734
now what the fuck do i do? why do i have to specify 53489*10^32 different folders for my models instead of just 1 like A11111?
>>
>>101781802
I am aware it's not PS1 or N64 low, but the polygon count will still be low.
>>
File: ComfyUI_01668_.png (344 KB, 512x512)
344 KB
344 KB PNG
Doom render seems to be working fine though
>>
>>101781820
read the guide .. its just 3 folders, clip and t5xxl into "clip" folder, flux model into "unet" folder, and VAE called ae.sft into "vae" folder .. it can't be that hard to put a few files into 3 different folders
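The three folders above as a quick checklist script. The file names here are the commonly downloaded ones and are an assumption; adjust them to match whatever you actually grabbed:

```python
# Sanity-check the expected ComfyUI Flux file layout (names assumed).
from pathlib import Path

models = Path("ComfyUI/models")
expected = [
    models / "unet" / "flux1-dev.safetensors",
    models / "clip" / "t5xxl_fp8_e4m3fn.safetensors",
    models / "clip" / "clip_l.safetensors",
    models / "vae" / "ae.safetensors",
]
for p in expected:
    print(("ok " if p.exists() else "MISSING ") + str(p))
```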
>>
>>101781840
The gun is too high-poly, doom literally had sprites
>>
File: a.jpg (115 KB, 1118x575)
115 KB
115 KB JPG
>>101781820
>>101781842
forgot pic again due to simmering rage
>>
File: ComfyUI_01670_.png (339 KB, 512x512)
339 KB
339 KB PNG
KEK
>>
>>101781828
No it won't
You can't see a single polygon in the picture, and even the tie is modeled out for fuck's sake (low poly style would have it painted on)
>>
>>101781865
>you're not even informed anon, Colombia already has laws on this (all AI output is legal)
Okay, that doesn't mean that it's legal in other countries. Why would I know where AI-generated CP is legal or not? Are you in Colombia?
>>
>>101781854
you are just missing two things:
the ae.sft in the vae folder (thats for Load VAE) and the t5xxl text encoder for clip_name1 in the DualCLIPLoader (goes into the "clip" folder)
>>
File: 2024-08-08_00020_.png (967 KB, 720x1280)
967 KB
967 KB PNG
>>101781865
>>
File: 1717214266017783.png (701 KB, 1024x1024)
701 KB
701 KB PNG
>>101781333
Took a while to get the hang of, but I got a pretty impressive result eventually.
>>
>>101781873
>ae.sft
how many GB is this file?
where is the START button?
this UI is not comfy at all.
>>
>>101781902
nice man! but why is it so white? it's too desatured imo
>>
File: 2024-08-07_00378_.png (1.57 MB, 1024x1024)
1.57 MB
1.57 MB PNG
>>101781906
335MB
>https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors
right side "Queue Prompt"
>>
File: a.jpg (41 KB, 890x219)
41 KB
41 KB JPG
>>101781920
ill... try.... =<
>>
>>101781902
is this a flux gen? how'd you get that style?
>>
File: ComfyUI_temp_flux_00088.png (1.45 MB, 1024x1024)
1.45 MB
1.45 MB PNG
>>101781342
flux cant what?
>>
>>101781950
wtf that's insane
>>
>>101781560
test
[REDACTED]
>>
>>101781920
>black-forest-labs/FLUX.1-schnell/
why is this in a different github to my flux model?
>https://comfyanonymous.github.io/ComfyUI_examples/flux/
>Flux Dev
>You can find the Flux Dev diffusion model weights here. Put the flux1-dev.safetensors file in your: ComfyUI/models/unet/ folder.
>You can then load or drag the following image in ComfyUI to get the workflow:
i followed these instructions but i have no file called "ae.sft"
also my model is called flux_dev not flux1_dev since i got it from the link in >>>/g/lmg - does that matter? It's 23.8GB
>>
File: 2024-08-08_00206_.png (1.3 MB, 1024x1024)
1.3 MB
1.3 MB PNG
flux is great at making 2D games that never existed

>>101781933
you got em all there, for some reason the default workflow has em in a sub folder called "flux_text_encoders" now, either make that folder and move them there, or just reselect in the node and choose from the list, top one "t5xxl_fp8_e4m3fn.safetensors", bottom one "clip_l.safetensors"

you are nearly there just a few clicks and the ae.saftensors in the vae folder in models folder then you got it
>>
>>101781868
I model in blender man, I know what poly counts are.
The tie is 12 quads, or 24 polys. This character has only 2500 polys. Anything under 10k is considered low poly
The whole miku will be less than 10k easily.
I know this isn't the look you are going for but it is indeed a low poly model.
>>
>>101781963
Can you please make your image names something unique to you so I can filter you without filtering other phone posters. Thanks.
>>
File: 2024-08-08_00207_.png (1.11 MB, 1024x768)
1.11 MB
1.11 MB PNG
>>101781958
>why is this in a different github to my flux model?
doesnt matter, the ae.safetensors is the same for FLUX.dev and FLUX.schnell .. also .sft == .safetensors .. if you are paranoid get
>https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors
but I assure you its the same file

>>101781958
>It's 23.8GB
yes thats the one
>>
>>101781967
>Anything under 10k is considered low poly
Source: my ass
>>
File: 1703985271067230.png (926 KB, 1024x1024)
926 KB
926 KB PNG
>>101781916
Seems to be a side effect of using CFG with the model I'm using, it's a Dev Schnell mix. Turning it down helps, but then the text quality suffers. Might be worth it, though. This one is at 2.2 CFG.
>>101781949
Luck, I guess. I have no idea what I'm doing. lol. Definitely a nice style, though. Here's some catboxes:
https://files.catbox.moe/t5sln9.png
https://files.catbox.moe/jgyejy.png
https://files.catbox.moe/3qyc73.png
>>
>>101782009
>it's a Dev Schnell mix.
go for normal dev anon, schnell sucks and will make the merge worse, dev can easily go on higher CFG like 6-7
>>
File: 2024-08-08_00211_.png (1.49 MB, 1024x768)
1.49 MB
1.49 MB PNG
>>101782024
this
>>
>>101782024
I know, but then I'll have to wait like 4 minutes per gen or longer and I'm impatient enough as it is. Hopefully better solutions for low VRAM cards will be found eventually.
>>
File: Capture.jpg (433 KB, 2987x1426)
433 KB
433 KB JPG
ehh that's not bad
>>
>>101782009
>incunable
>>
has it been a week already
>>
>>101782009
What if you gen the image first and the text in a 2nd low noise pass so it only has to focus on that?
>>
File: a.jpg (137 KB, 1010x572)
137 KB
137 KB JPG
>>101781873
>t in the vae fold
>>101781988
uhhhhhhhhhh
nothing happened???????
did it forget to use GPU?
i just
source /venv/bin/activate
python main.py
>>
>>101782062
Not quite yet
>>
>>101782009
>Seems to be a side effect of using CFG with the model I'm using
the parameters on DynamicThresholdingFull were for dev, if you want more saturation, you can decrease the "interpolate_phi" value and see if it helps
>>
>>101782007
This guy has 2536 polygons and 1318 vertices. He is low poly.
Low poly is a technical term before it is an art style.
>>
File: ComfyUI_01682_.png (1.35 MB, 1024x1024)
1.35 MB
1.35 MB PNG
>>101782050
Flux is so good at prompt understanding, especially with high CFGs, feelsgoodman
>>
File: 2024-08-08_00213_.png (1.01 MB, 1024x768)
1.01 MB
1.01 MB PNG
>>101782066
oh my, you got 16gb? okay then you need to do two things:
1. in "Load Diffusion Model" click the small arrow on the right side of "weight_dtype" till you see "fp8_e4m3fn"
2. in "DualCLIPLoader" click the small arrow on the right side of clip_name1 till you see "t5xxl_fp8_e4m3fn.safetensors" (or .sft)
then try again, it could also work without step 2, but you gotta try .. also I hope you got 32GB of system ram
>>
File: a.jpg (73 KB, 550x537)
73 KB
73 KB JPG
>>101782066
>>101781873
>>101781988
wait nvm i did ..smi not `watch ..smi`
its fucking slow. i cant wait til 8 paychecks down the line when i can afford my first a6000
i-is it bad at porn? I think it might be bad at porn :<
>>
>>101782091
Changing the CFG causes it so go so slow though. I can't gen on anything but 1.
>>
File: 1710312213255828.png (1 MB, 1024x1024)
1 MB
1 MB PNG
>>101782083
Ah, good to know. I'll give that a try. Well, I've spent the whole day messing with Flux, now it's time to sleep.
>>
>>101782110
>I think it might be bad at porn :<
ya porn is not in the data set, so it just can't make it
>>
Usually, on SD models, using clip skip 2 helps on the prompt understanding, is it the case too for flux?
>>
>>101782137
I don't think t5xxl can do clip skip.. ?
>>
>>101782165
but we got clip_l, maybe putting clip skip on that one will help, idk
>>
>>101779850
flux schnell 5bit when?
>>
File: 1682940784664756.jpg (72 KB, 680x558)
72 KB
72 KB JPG
>>101782108
trying to figure out what all these letters mean
>weight_dtype
>fp8_e4m3fn
>DualCLIPLoader

im a nigger man. i need pictures or sumn shiiiit
>I hope you got 32GB of system ram
nah it's fine. i just unloaded Grok_q4_M and swapped to miku103B for the time being
>>
wow dude. /b/ exists for a reason
>>
>>101782172
yea, but you cannot differentiate the clip and the t5xxl output; even though the CLIPTextEncodeFLUX node has input fields for both t5 and clip, it has only one output stream .. well I'll just test it
>>
>>101782137
that is only true for models trained with clip skip, not for the base model
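for context, "clip skip N" just means taking the text encoder's hidden states N layers from the end instead of the final layer's output; the mechanics are just indexing. A toy sketch (the list stands in for what a real CLIP gives you with output_hidden_states=True in HF transformers):

```python
def clip_skip(hidden_states: list, skip: int = 1):
    # hidden_states: one entry per encoder layer, last entry = final layer
    # skip=1 -> final layer (the default); skip=2 -> penultimate layer,
    # which is what "clip skip 2" means in SD UIs
    if not 1 <= skip <= len(hidden_states):
        raise ValueError("clip skip out of range")
    return hidden_states[-skip]
```

t5xxl isn't a CLIP, so the setting has no defined meaning for that encoder.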
>>
File: Capture.jpg (30 KB, 657x630)
30 KB
30 KB JPG
>>101782205
>yea, but you can not differentiate the clip and the t5xxl output, even with the CLIPTextEncodeFLUX node that has input fields for both t5 and clip it has only one stream
yes you can
https://files.catbox.moe/h7bbhy.png
>>
>>101782228
anon, are you retarded? read what you replied to again
>>
what a no mark faggot
>>
File: ComfyUI_00160_.png (2.02 MB, 1024x2048)
2.02 MB
2.02 MB PNG
>>101781963
anon why would you ever think I'm jewish
>>101782089
NTA but they probably mean like Tomb Raider where the entire model is like 230 polys, which is something Flux absolutely refuses to do.
>>101782186
that baby is caked up
>>
god damn it anon stop posting little girls
im in australia and i hovered over a couple of pics and now i have to burn my computer
>>
could this guy just not? where the jannies when you need them
>>
File: ComfyUI_temp_flux_00110.png (1.76 MB, 1024x1024)
1.76 MB
1.76 MB PNG
>>101781950
>>101781952
workflow
https://files.catbox.moe/izoamo.png
>>
>>101782285
better to just burn down australia
>>
File: test.png (3.18 MB, 3455x1135)
3.18 MB
3.18 MB PNG
Does anyone know what the ModelSamplingFlux node does? it changed the image greatly but idk if it can be controlled or something, seems random to me
>>
File: a.jpg (112 KB, 691x752)
112 KB
112 KB JPG
>>101782135
>>101782159
where is the black man?
where is the giant neon text saying fuck 4chin?
i thought this was meant to be a superior instruct/text gen model?!
please say LoRAs exist for this!
at least teach me how to proompt for text! =<
>>
>>101782329
just read the code? https://github.com/comfyanonymous/ComfyUI/blob/master/comfy_extras/nodes_model_advanced.py#L174
>>
File: 1723096800356731.jpg (210 KB, 1024x1024)
210 KB
210 KB JPG
>>101782282
>probably mean like Tomb Raider where the entire model is like 230 polys
I completely understand what they mean and what the style they are going for is.
But what I am saying is Flux probably isn't considering that when it sees a low-poly model.
It was clearly captioned by a VLM, and the VLM might have tagged any simple 3D model as low poly
>>
>>101782364
the fuck do you think i am? a fucking data scientist?
>>
>>101782285
>australia
>>101782306
A skitz hippy girl at work the other day made a comment about "the people on the forum i browse are like 4chan" and i had to stop myself from asking what she meant by that.
Luckily she was skitz but is everywhere not like this?
>>
File: ComfyUI_01691_.png (1.17 MB, 1024x1024)
1.17 MB
1.17 MB PNG
>>101782349
>where is the black man?
>where is the giant neon text saying fuck 4chin?
here anon
>>
>>101782228
>>101782247
doesn't matter, the problem is the DualCLIPLoader, it's where you would put in CLIP skip .. and if you do, you get errors, and the single clip loader doesn't support FLUX, so you can't load t5 separate from CLIP, use clip skip just on CLIP, then mix the streams .. at least not with the nodes I see in my comfy, maybe one of you got a solution ..

pic related is the error msg

>>101782349
it's just a week old anon, the first LoRAs were released yesterday! give it time for your coomer needs, until then enjoy superior gens
>>
>>101782398
I see, thanks for trying it out anon, much appreciated
>>
when will you nerds make something free that's on the level of midjourney
>>
File: 1685636700598334.gif (971 KB, 300x225)
971 KB
971 KB GIF
>>101782387
HOWE
>>
File: Capture.jpg (244 KB, 1854x1040)
244 KB
244 KB JPG
>>101782364
So basically it's a node to make the model more creative and less overcooked?
>>
>>101782457
is that shitty AI generated documentation?
>>
>>101782462
it's claude 3.5 Sonnet interpreting the code
>>
>>101782282
>caked up
one of the biggest issues with flux is that it sticks to human proportions for breasts and ass, it's "too smart" in that regard

>>101782285
it's only illegal in australia if the little girls aren't smiling (seriously, there was an ad campaign from H&M that was pulled for being too "provocative" in australia because the girls were doing the deadpan model face expression)
>>
>>101782430
The way to prompt Flux is describe the image exactly. Go full boomer prompt.
Literally type
>With text that says "whatever you want"
>>
File: ComfyUI_00008_.png (1.15 MB, 1024x1024)
1.15 MB
1.15 MB PNG
>>101782398
>until then enjoy superior gens
tanx
now how do i change my existing request code to work with this? do i need to specify the workflow or what?!
import requests  # assumed imported at the top of the module

# list available checkpoints (A1111 API)
result = requests.get(url=self.URL + "/sd-models")

self.body = {
    "prompt": prompt,  # + " illustration, art by artgerm and greg rutkowski and alphonse mucha,"
    "negative_prompt": prompt_negative,
    "hr_negative_prompt": prompt_negative,
    "seed": -1,
    "batch_size": 1,  # was 4
    "steps": 25,
    "cfg_scale": 7,
    "width": 1024,  # also tried: 1920x1080, 1280x720
    "height": 960,
    "tiling": False,
    "do_not_save_grid": True,
    "sampler_index": "DPM++ 2M Karras",  # or "UniPC"
    "send_images": True,
    "save_images": False,
}

response = requests.post(url=self.URL + self.URL_txt2img, json=self.body)

niqqa i DID
cute anime girl with massive fluffy fennec ears and a big fluffy tail with a bubble saying "4 chIn blue board" blonde messy long hair blue eyes wearing a maid outfit  mouth open next to a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere

but i couldnt get this until i removed the bit about
sucking a giant black cock that is censored with a black box saying "4chan blue board"
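for reference, comfy doesn't speak the A1111 API at all: you export the graph with "Save (API Format)" and POST the whole thing to the /prompt endpoint. A minimal sketch, assuming the default port (8188):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default address, an assumption

def build_payload(workflow: dict) -> bytes:
    # ComfyUI expects the node graph wrapped under the "prompt" key
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# workflow = json.load(open("workflow_api.json"))  # the API-format export
# queue_prompt(workflow)
```

to change the prompt per gen, rewrite the text field of the CLIPTextEncode node inside the exported dict before posting.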
>>
>>101782457
>>101782474
read this instead
>https://www.runcomfy.com/comfyui-nodes/ComfyUI/ModelSamplingSD3
basically this shit just shifts the model by some float between 0.01 and 100 .. the higher, the more "different" from the intended converged output your output will be .. I don't see much value in this, unless you have a gen that just needs to be a bit different .. it's no magical make-my-gens-better tool
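concretely, the node boils down to two small formulas: pick a shift from the image size, then warp the sigma schedule with it. A rough paraphrase (constants recalled from the ComfyUI source, so treat them as assumptions):

```python
import math

def resolution_shift(width: int, height: int,
                     base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    # latent token count: /8 for the VAE, then Flux's 2x2 patches
    tokens = (width // 8 // 2) * (height // 8 // 2)
    # linear interpolation: base_shift at 256 tokens, max_shift at 4096
    slope = (max_shift - base_shift) / (4096 - 256)
    return tokens * slope + (base_shift - slope * 256)

def flux_time_shift(mu: float, sigma: float, t: float) -> float:
    # bends timestep t toward the high-noise end as mu (the shift) grows
    return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
```

at 1024x1024 the interpolation lands exactly on max_shift, so the default values are tuned for megapixel gens.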
>>
>>101782549
>basically this shit just shifts model by some float between 0.01 and 100.. the higher the more "different" from the intended converged output your output will be ..
So it has nothing to do with undercooking the output like this ledditor suggested? >>101780514
>>
>>101782549
lots of snake oil ideas with flux going on, and it's hard to tell what's worth it and what's not, all I know is that Fluxguidance works to get close to the style I want, but it's no replacement for real style recognition (IPAdapter, Lora or Finetune, whatever comes first).
>>
I noticed that at 512x512 the model follows the prompt way more easily, compared to 1024x1024, which is weird I guess?
>>
lets go and rate images
https://artificialanalysis.ai/text-to-image/arena
>>
File: ComfyUI_01698_.png (1.63 MB, 1024x1024)
1.63 MB
1.63 MB PNG
How much VRAM will the 5090 have in your opinion?
>>
File: ComfyUI_00009_.png (1.06 MB, 1024x1024)
1.06 MB
1.06 MB PNG
soooooo god daaaaaamn sloooooooow
i changed the euler and easy thing to karras and unipc like usual in SD and it came out looking like a smudge. not sure if it's because karras and cpu need like 50 steps instead of 16
>>
>>101782642
plenty for the most demanding of video games
>>
File: 2024-08-08_00232_.png (1.24 MB, 1024x1024)
1.24 MB
1.24 MB PNG
>>101782598
he starts his argument with
>I feel ..
arguing tech with feelings, forget him

what the node does is interpolate a shift between base_shift and max_shift (based on resolution) and apply it to the schedule .. it will change the output ever so slightly at low settings, but it's nothing you can predict on how it will change the output or push it in what direction, high settings just destroy the output .. I would only use this node as a very last resort when a gen has like maybe a finger wrong and you want nearly exactly the same, and you add a shift of let's say 0.2 base, 0.4 max and if you are lucky the hand will then be correct
>>
File: ComfyUI_00013_.png (1.27 MB, 1024x1024)
1.27 MB
1.27 MB PNG
yay a usable one
loli poster go away
come again a
NEVER
>>
File: ComfyUI_01699_.png (1.21 MB, 1024x1024)
1.21 MB
1.21 MB PNG
>Kasane Teto
No Flux, not like this...
>>
>>101782685
I see, thanks for the detailed answer anon, I can remove this node from my workflow now
>>
File: ComfyUI_00003_.png (1.12 MB, 1024x1024)
1.12 MB
1.12 MB PNG
>>101782675
catbox? I haven't been able to get that body type, it's either child or adult
You've still got the same problem I had where the face doesn't match the body though, and they're always long as hell.
I was trying to generate shortstacks and it was like 'nope, you get a long goblin'
>>
File: ComfyUI_00001_.png (1.21 MB, 1024x1024)
1.21 MB
1.21 MB PNG
>>101782696
back to the LoRAing board
>cute anime girl patchouli knowledge with 2 long demon horns and a demon tail with a love heart at the end with a speech bubble saying "4 chin blue board" ... victorian furniture touhou theme

the fact it cant do touhou or porn suggests it will be able to do both as soon as the chinese finetune it on their uni's supercomputer
>>
>>101782723
prompt good sir?
Is there porn of this?
capcha: XX4H
>>
>>101782743
https://files.catbox.moe/62alp0.png
Literally default settings
>>
>>101782696
Flux knows that's an anime character with red hair, but doesn't know the details, it has probably not seen enough pictures of Teto to accurately render her
>>
>>101782767
>it has probably not seen enough pictures of Teto to accurately render her
It probably has, they just weren't all properly tagged as such
>>
>>101782764
how do you inspect image prompts in comfy?
>>
File: 2024-08-08_00241_.png (2.48 MB, 1280x1280)
2.48 MB
2.48 MB PNG
>>
A young girl petting an eel. In the style of Cardcaptor Sakura.
>>
I saw the new SAM 2 model and I would like to try it out.
Is it compatible with some of the UIs we know? Comfy, or auto1111?
>>
/ldg/ when they see a child unprompted for 5 seconds
>>
File: ComfyUI_01113_.png (960 KB, 1288x824)
960 KB
960 KB PNG
>>
File: 8662.png (2.23 MB, 800x1280)
2.23 MB
2.23 MB PNG
>>101782904
>>
File: 2024-08-08_00255_.png (2.23 MB, 1280x1280)
2.23 MB
2.23 MB PNG
>>101782992
kek, I can honestly live without pepe, seen him way too much, but I guess it's just 2 weeks from now until the first pepe lora appears
>>
File: 1706356034832529.jpg (120 KB, 1024x1024)
120 KB
120 KB JPG
>>101782992
Dalle won
>>
File: 1706335062286252.jpg (253 KB, 1024x1024)
253 KB
253 KB JPG
>>101783026
And artstyle lora
>>
File: 2024-08-08_00258_.png (1.27 MB, 1280x768)
1.27 MB
1.27 MB PNG
>>
>>101782675
Sir plz srop posting cunny
>>
File: 2024-08-08_00263_.png (1.52 MB, 768x1280)
1.52 MB
1.52 MB PNG
>>
File: 0.jpg (214 KB, 1024x1024)
214 KB
214 KB JPG
>>
File: 1700265994915552.jpg (64 KB, 1292x738)
64 KB
64 KB JPG
>>101783066
>>
>>101782928
nvm, just saw it's possible to use it in ComfyUI.
Fuckin hell, I really dont like comfy but now I will have to learn it.
>>
File: ComfyUI_02442_.png (2 MB, 1280x1024)
2 MB
2 MB PNG
>>
File: buymoarvram01.png (1.34 MB, 1024x1024)
1.34 MB
1.34 MB PNG
seems legit
>>
File: 654314036.png (1.11 MB, 768x768)
1.11 MB
1.11 MB PNG
>>101783036
Local wins
>>
>>101783073
thank god you don't post your retard shit in sdg anymore
>>
File: 2024-08-08_00270_.png (1.33 MB, 1280x768)
1.33 MB
1.33 MB PNG
>>101783077
cool
>>
File: 2024-08-08_00273_.png (1.54 MB, 1280x768)
1.54 MB
1.54 MB PNG
the elite star trek crew we never had
>>
>>101782642
24GB, so they can keep selling their enterprise A100 cards for big bucks.
we will never have 100GB of VRAM on an affordable gaming card and they will keep this artificial bottleneck forever so poor people can't misuse these AIs.
>>
File: FLUX_00116_.png (1.18 MB, 896x1152)
1.18 MB
1.18 MB PNG
off to bed
>>
File: ComfyUI_temp_sprbe_00121_.png (1.29 MB, 1024x1144)
1.29 MB
1.29 MB PNG
>>
File: ComfyUI_00434_.png (1.05 MB, 768x1344)
1.05 MB
1.05 MB PNG
wow controlnet flux is so good
>>
File: 1716406427881645.png (135 KB, 1157x1049)
135 KB
135 KB PNG
LMAO this is why i hate those types of rankings, they don't work properly (this is based only on my own votes), they just pair schnell with wrong models and somehow it's "higher" than dev/pro, coz they never pair flux models with each other
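these arenas run Elo-style updates, so two models that are never paired head-to-head only relate through shared opponents, which is exactly why the pairing policy can inflate schnell. A minimal Elo update sketch (K=32 assumed):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # score_a: 1.0 if A wins the vote, 0.0 if B wins, 0.5 for a tie
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b
```

with few direct matchups the expected scores between sibling models never get corrected against each other, so their relative ranking stays noisy.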
>>
File: 0.jpg (178 KB, 1024x1024)
178 KB
178 KB JPG
>>
The one blessing of ai art, being able to gen a goth gf without needing to pay a commission (I'm a poorfag)
>>
I downloaded a bunch of random ass Pony checkpoints that were released in the past month and it seems most of them are generally better quality than AutismMix and original Pony Diffusion V6, which are both 6 months old at this point.

It's interesting to see model quality slowly improve by random users iterating.
>>
>>101783671
your presence stinks this place up
>>
>>101783787
fuck off faggot, everything that gatekeeps this place from the normalfag twatter leddit crowd is always good.
>>
File: fac004w.jpg (711 KB, 1600x1600)
711 KB
711 KB JPG
>>
>>101783856
I don't know what animal that is, but I wish it was real.
>>
File: ComfyUI_temp_sprbe_00131_.png (1.38 MB, 1024x1144)
1.38 MB
1.38 MB PNG
>>101782723
oh nice, this uses the prompt from the flux part of my autistic workflow
although I prefer how they come out in pony
>>
File: file.png (1.96 MB, 1024x1024)
1.96 MB
1.96 MB PNG
>>101783671
>>
>>
>>
>>101781655
>>101783819
How to spot a pedo: by the weird excuses they make
>>
>>
>>101783989
how to spot a retarded newfag - no fucking guide necessary.
>>
File: 1723096800350738.jpg (155 KB, 1024x1024)
155 KB
155 KB JPG
>>101784000
>>
File: 1709866140929371.png (1.48 MB, 1024x1024)
1.48 MB
1.48 MB PNG
>>
Oven fresh...
>>101784055
>>101784055
>>101784055
>>
File: 1692103194686543.png (929 KB, 720x1280)
929 KB
929 KB PNG
>>
File: 1715592539820125.png (902 KB, 1312x736)
902 KB
902 KB PNG
>>
File: 1699226570469838.png (1.18 MB, 1024x1024)
1.18 MB
1.18 MB PNG


