/g/ - Technology

File: tmp.jpg (1.12 MB, 3264x3264)
1.12 MB
1.12 MB JPG
Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101920553

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>PixArt Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>>
It's called I do a little bit of pulling myself up by my bootstraps and learning ComfyUI and letting go of the massive updooter retardation of Forge
>>
>>101923922
Cope
>>
>>101923925
What did he mean by this?
>>
also could someone post the workflow for this >>101920957
thoughbeitever i'm using XL.
>>
Every new /lmg/ thread I feel so happy knowing that none of the avatar fags have migrated here and just jerk each other off in the other thread. It's like they don't even realize everyone left.
>>
>>101923951
>/lmg/
>>
>>101923951
OK but 3 of my gens are in the collage, does that make me an avatarfag?
>>
File: file.png (2.62 MB, 1024x1024)
2.62 MB
2.62 MB PNG
>>101923801 reposting since new thread
THREAD THEME: edible looking cats
>>
File: FLUX_00032_.png (1.17 MB, 896x1152)
1.17 MB
1.17 MB PNG
the right honourable lady has the floor
>>
>>101923951
wth
>>
File: file.png (199 KB, 2221x884)
199 KB
199 KB PNG
>>101923771
More experienced anons may correct me if I'm wrong, but this is the minimal gguf setup (schnell).
>>
File: icon-van.jpg (37 KB, 600x600)
37 KB
37 KB JPG
>>101923951
Hey buddy you got into the wrong door, the /lmg/ door is 2 threads down
>>
I'm hungry. Give me your best bacon burger image, with fries and cola.
>>
File: ComfyUI_00013_.png (473 KB, 504x896)
473 KB
473 KB PNG
dev
>>
>>101924044
Sure. I will start generating it right away. It will take >9000 minutes.
>>
File: FD_00355_.png (1.39 MB, 1024x1024)
1.39 MB
1.39 MB PNG
>>101924044
You don't need the side dishes, Anon.
>>
File: ComfyUI_00014_.png (732 KB, 504x896)
732 KB
732 KB PNG
>>101924048
schnell 1:29, dev 1:30
>>
File: 0.jpg (111 KB, 1024x1024)
111 KB
111 KB JPG
>>
File: ComfyUI_Flux_23.png (1.42 MB, 1216x832)
1.42 MB
1.42 MB PNG
>>101923973
>>
File: FD_00046_.png (1.05 MB, 1024x1024)
1.05 MB
1.05 MB PNG
>>101923973
>>
>>101924044
>>
Blessed thread of frenship
>>
>vae decoding takes longer than gen itself
>>
File: 00002-4190916082.png (559 KB, 800x600)
559 KB
559 KB PNG
>>101924044
>>101923973
>>
>clip missing: ['text_projection.weight']
First time I've seen this. I'm loading clip_l and t5.
>>
>>101924176
yeah I got the same shit but it's still working so meh
>>
File: FD_00050_.png (1.32 MB, 1024x1024)
1.32 MB
1.32 MB PNG
>>101923974
Madame President Peaches! The Chinese! They have developed a counter-bimbo squad!
>>
File: file.png (1011 KB, 800x600)
1011 KB
1011 KB PNG
>>101924044
>>
Can someone give a technical description of the meme that "you can force the model into your CPU"?
Because that's some technical feat: putting a 23GB file inside a device that has a couple of megabytes of capacity.
Do they mean it is processed on the CPU while being fed by RAM, or VRAM? Because no one ever says, and I'd like it cleared up, because the phrasing implies something magical.
>>
File: file.png (620 KB, 2236x860)
620 KB
620 KB PNG
>>101924018
Improved with some CPU offloading.
>>
>>101924214
>Do they mean it is processed on the CPU being fed by RAM
Yes.
>>
>>101924214
>Can someone give a technical description of the meme that "you can force the model into your CPU"
>Because that's some technical feat: putting a 23GB file inside a device that has a couple of megabytes of capacity.
No, it's the text encoder that has to go to the CPU, and that one isn't that big: it's 9.3 GB for the fp16.
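If anyone wants to see what that offloading actually means in code, here's a rough sketch using the diffusers library rather than the ComfyUI setup everyone here is running (the prompt is just a placeholder): the weights get parked in system RAM, and each sub-model only visits the GPU while it's computing.
[code]
# sketch only - diffusers, not ComfyUI
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
# Park the weights in system RAM; each sub-model (T5, transformer, VAE) is
# shuttled onto the GPU only while it is computing, then moved back off.
pipe.enable_model_cpu_offload()

image = pipe("a bacon burger with fries and cola",
             guidance_scale=0.0, num_inference_steps=4).images[0]
image.save("burger.png")
[/code]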
>>
File: ifx61.png (1.3 MB, 1024x1024)
1.3 MB
1.3 MB PNG
>>101924202
>Sir, there's fries in my drink
>>
File: file.png (1.07 MB, 800x600)
1.07 MB
1.07 MB PNG
Somehow I managed to get 90-second gens on my RX 5700 XT (I'm the AMD VRAMlet from yesterday). But no matter how low I go I can't fit the whole model in the GPU. I think the model itself would fill it up completely, so no matter the size of the image, it needs to split memory usage.
>>
>>101924098
i don't like this edible cat
>>
File: file.png (199 KB, 320x240)
199 KB
199 KB PNG
This burger for ants rendered in 38 seconds. Not bad.
>>
File: Comparison_all_quants.jpg (3.84 MB, 7961x2897)
3.84 MB
3.84 MB JPG
>>101924281
the RX 5700 XT only has 8GB of VRAM, no quant will fit in it, you need at least 12GB
>>
>>101924254
>>101924262
Thank you anons. Now to find a node that lets me put the text encoder on a specific device, because once loras and controlnets come along we're all going to want to keep the quality we have while being more flexible in the final composition.
Would that node that lets you use 2 GPUs be the right thing to read about for this?
>>
>>101924333
>Now to find a node that lets you specify the text encoder to use that
it already exists
https://reddit.com/r/StableDiffusion/comments/1el79h3/flux_can_be_run_on_a_multigpu_configuration/
>>
File: image (3).jpg (175 KB, 1024x1024)
175 KB
175 KB JPG
>>101924044
>>
>>101924351
Thanks. I'll look at that after I patch my 13500 (if it needs it, I still can't figure out if it's affected)
>>
File: ComfyUI_02643_.png (3.61 MB, 1248x1824)
3.61 MB
3.61 MB PNG
>>101924351
anon you really need to set up a flux rentry we can put in the sticky so you can stop redirecting us to your various reddit posts
>>
File: file.png (2.43 MB, 1024x1024)
2.43 MB
2.43 MB PNG
177 seconds.
>>101924329
Yeah, I figured.
>>
File: 00019-1601796130.png (1005 KB, 896x1152)
1005 KB
1005 KB PNG
I wonder how many dudes online would fall for this.
>>
>>101924401
Create a tinder profile and find out.
The hard part is making her again without a LoRA
>>
>>101924396
You can make the rentry anon if you want, I'm a bit busy with other experiments right now
>>
File: ComfyUI_00750_.png (741 KB, 1024x1024)
741 KB
741 KB PNG
>>101923896
>>101923899
HOLY SHIT ANON THANK YOU SO MUCH
Turns out it really was taking more than 12GB of VRAM; after trying Q4_1 the time went down to 1:35
It bugs me that people are posting the timing of a different model under flux1-dev-fp8
>>
File: flux.jpg (418 KB, 1816x1698)
418 KB
418 KB JPG
what value do I mess with to make the output look more like the input image in img2img?
>>
File: ComfyUI_Flux_25.png (1.39 MB, 1216x832)
1.39 MB
1.39 MB PNG
>>101924202
>>101924222
>schnell is so poor it doesn't even have sesame in its burgers
>>
>>101924444
the denoising strength is the most important one - the lower it is, the closer the output stays to the input image
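Same knob outside the UI, sketched with the diffusers library since the anon said he's on XL (file names are placeholders): strength is the denoising strength.
[code]
# sketch only - img2img "strength" == denoising strength
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init = load_image("input.png")
# strength=0.3 keeps the result close to the input; 0.8 mostly re-imagines it
out = pipe("same scene, photorealistic", image=init, strength=0.3).images[0]
out.save("img2img.png")
[/code]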
>>
>>101924468
If steps take as long on dev as they do on schnell, I'd be looking at 10+ minute gens at this resolution. No thanks.
>>
is there a good way I can speed up tiled upscaling? I noticed it's only using about 6 gigs of my VRAM, there's at least another gig it could use.
damn, I realize I have so much to relearn here and learn anew..
>>
File: 0.jpg (207 KB, 1024x1024)
207 KB
207 KB JPG
>>
File: IMG_0706.png (237 KB, 1074x792)
237 KB
237 KB PNG
I have a question: how come nobody makes "VRAM" cards?
Like an extra card you plug into a PCIe slot and voila, you have more VRAM. 8GB of VRAM costs like 27 dollars, so 80GB would be like 300 bucks.
>>
>>101924537
bus is too slow?
>>
>>101924537
That's not how it works shahar.
Also, Jewvidia doesn't make 48gb consumer cards because they want to sell you a worse performing card with the extra 24gb of vram for 4 times the price.
>>
>>101924537
>I have a question, how come nobody makes "VRAM" cards?
VRAM is way faster than RAM but it has a price: the signalling is really unstable over any distance, so you have to solder that shit right onto the GPU board. You can't just pull it out with your hands like regular RAM, that would destroy it
>>
File: file.png (709 KB, 768x512)
709 KB
709 KB PNG
>>101924490
Actually, ~5 minutes on dev at 20 steps. Will try 6 steps on schnell next to see if it can compare.
>>
File: FD_u_00009_.jpg (520 KB, 1536x2432)
520 KB
520 KB JPG
>>101924512
How big are the tiles? Are you using math nodes to fix the tile sizes?
Here's my workflow with upscaling: https://files.catbox.moe/3zy8zd.png
>>
>>101924537
nvidia is a trillion dollar company
>>
>>101924329
I think this is bullshit, I can gen fp8 with 12GB.
>>
>>101924537
>he's asking questions
newfag doesn't know about (((The Technical Decrescence Initiative)))
>>
>>101924611
you don't, it's offloading the surplus into system RAM; the model itself is 12gb, you add the inference memory on top (+2gb for a 1024x1024 resolution) and it already overflows your card
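Back-of-the-envelope version of that claim (numbers taken from the posts above, not measured):
[code]
weights_gb = 12.0      # fp8 flux-dev transformer, roughly
activations_gb = 2.0   # inference overhead at 1024x1024, per the post above
card_gb = 12.0         # e.g. a 12GB card

need = weights_gb + activations_gb
spill = max(0.0, need - card_gb)
print(f"need ~{need:.0f} GB, have {card_gb:.0f} GB, ~{spill:.0f} GB spills to system RAM")
# -> need ~14 GB, have 12 GB, ~2 GB spills to system RAM
[/code]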
>>
>>101924611
You can gen, but it's using your RAM as well as your VRAM and it's slower than it should.
>>
File: ComfyUI_00025_.png (488 KB, 432x768)
488 KB
488 KB PNG
>>101924490
>>101924565
Took 1:10 for this on schnell 4 steps, not really worth it compared to 1.5, would have to add more stuff to it to make it lose the slop look it comes with naturally
>>
>>101924548
is it?
why not just have the GPU offload shit into the VRAM card?
>>101924554
why dont they put more VRAM into these A100s and increase the VRAM in consumer cards?
>>101924561
so you sayin VRAM cards are impossible to make?
>>101924573
why not 2 trillion?
>>101924619
elaborate
>>
>>101924667
>so you sayin VRAM cards are impossible to make?
that means you can't change how much VRAM a card has, so if you want more VRAM you have no choice but to buy a card that comes with more
>>
>>101924569
its this workflow https://comfyworkflows.com/workflows/4302a04e-072a-4196-8d1a-86dbf8be0a2d
and i was previously using this one
https://comfyworkflows.com/workflows/e6c1c436-f878-4cc3-be0a-43ee96864467

ill check yours in a few
>>
File: D0n in.jpg (303 KB, 1080x896)
303 KB
303 KB JPG
>>
>>101924667
>so you sayin VRAM cards are impossible to make?
NTA, but they were a thing for a while and didn't sell very well. Then again, I don't think VRAM was as big a commodity back then.
>>
what node am I supposed to be using to save images with metadata? this year-old one doesn't seem to work well (or I'm retarded and not using it right)
https://github.com/giriss/comfy-image-saver
>>
File: file.png (760 KB, 768x512)
760 KB
760 KB PNG
>>101924659
This took a minute. For me it's borderline worth it. On the one hand, the amount of detail you get is incredible, but the slop look...
Still, I think with skillful prompting it might be worth it.
>>
File: file.png (20 KB, 653x131)
20 KB
20 KB PNG
>>101924667
hopefully this is self-explanatory.
>>
>>101924637
>>101924651
I don't think so; I'm on an AMD GPU and it doesn't have that behavior by default. Using Q8_0, which uses a tiny bit more VRAM, doesn't work, for example. But with FP8 I notice it's unloading models between steps; it doesn't have enough VRAM to keep everything loaded.
>>
>>101924723
it's physically impossible anon, the model itself is 12gb, which means at a MINIMUM it's asking for 12gb of VRAM; now add the inference memory on top and you can see it simply can't work
>>
>>101924719
explain like I'm a golden retriever with profound mental retardation
>>
>>101924758
Alright, buddy, imagine you've got a really big pile of your favorite toys, but sometimes it gets so big and messy that you can't find your favorite ball anymore. The Technical Decrescence Initiative is like a plan to help clean up that big toy pile, so you can always find your favorite ball when you want it.

But instead of toys, we're talking about all the technology and gadgets that people use every day. Sometimes, there are just too many gadgets, and things get confusing and hard to use. So, this plan is about making things simpler and getting rid of the extra stuff we don't really need, just like how we'd put away the toys we don't play with as much. This way, life stays fun and easy, just like when you have your favorite ball ready to go!
>>
>>101924723
You don't think so? Okay lmao. Tell that to the laws of physics.
Your inference is happening mostly on RAM. I would know, I, too have an AMD card and 8 fucking GBs.
>>
File: file.png (1.64 MB, 1536x1024)
1.64 MB
1.64 MB PNG
>>101924712
Same prompt, took 5 minutes. Unlike in SD, larger resolution did not add detail? This looks like an upscale. I'm doing 4 steps with schnell.
>>
>>101924771
in english Doc
>>
File: FD_00089_.png (1.21 MB, 768x1216)
1.21 MB
1.21 MB PNG
>>101924667
It's purely because of enterprise, Anon. If people can buy a bunch of cheap XX90 cards with shit loads of VRAM then nobody would buy their enterprise cards.
>>
File: ComfyUI_00026_.png (573 KB, 512x768)
573 KB
573 KB PNG
>>101924712
Some loras could make it better, I use realism lora for this one

https://huggingface.co/XLabs-AI/flux-lora-collection/tree/main

>>101924723
Use this for amd
https://github.com/patientx/ComfyUI-Zluda
>>
>>101924827
The Technical Decrescence Initiative is a concept focused on simplifying technology by reducing unnecessary complexity. Imagine that over time, technology keeps advancing, and we keep adding more features, gadgets, and systems. While this can be exciting, it also leads to a lot of clutter, confusion, and things becoming harder to use or maintain.

This initiative is about stepping back and deliberately "downsizing" the technology we use—removing what's not essential and refining what's left. The goal is to make technology more intuitive, efficient, and accessible by stripping away the excess. It's like cleaning up your room: getting rid of things you don't need so you can easily find and use the stuff that really matters. By doing this, we make sure technology serves us better without overwhelming us.
>>
>>101924824
>Buttchin
>>
>>101924829
there's always been an enterprise line of cards, but in the past they've scaled with the consumer line
stands to reason that if consumer cards can achieve 48gb vram, then the enterprise cards have to do significantly better than that, not the other way around
>>
File: ComfyUI_00027_.png (597 KB, 512x768)
597 KB
597 KB PNG
>>101924824
The negative prompt really helped on 1.5; here you can't use one. idk how to remove the slop look
>>
>>101924840
I dont get it
>>
>>101924865
https://old.reddit.com/r/StableDiffusion/comments/1ekgiw6/heres_a_hack_to_make_flux_better_at_prompt/
>>
File: FD_00094_.png (1015 KB, 768x1216)
1015 KB
1015 KB PNG
>>101924856
Consumer cards can easily reach that, Anon, but they won't.
>>
File: file.png (760 KB, 768x512)
760 KB
760 KB PNG
>>101924833
>Zluda
What is the improvement that I'm supposed to see?
I replicated >>101924712 and it actually took 30 seconds longer. I have all the ROCm libs and all that.
>>
>>101924875
Even the tech itself gets technical debt. Most things are just layers and layers of crud glued and stapled on each other. Eventually you need to look at what you have and figure out what's actually used and important and start from square one with just those things (but done really well).
>>
File: ComfyUI_Flux_37.png (1.67 MB, 1344x768)
1.67 MB
1.67 MB PNG
>>101924444
>>
>>101924917
>what the fuck TOY that font
it would've taken you like 15 seconds tops to correct this in paint.net

toss mikuposters into active volcanos, slice mikuposters in half with a katana, deport mikuposters back to >>/lmg/
>>
why did i update forge
all my shit is broken now
i just want to use regional prompts

>ModuleNotFoundError: No module named 'ldm'

Anybody know how to fix this?
>>
File: file.png (999 KB, 768x512)
999 KB
999 KB PNG
>>101924771
>Alright, buddy, imagine you've got a really big pile of your favorite toys, but sometimes it gets so big and messy that you can't find your favorite ball anymore. The Technical Decrescence Initiative is like a plan to help clean up that big toy pile, so you can always find your favorite ball when you want it.
>But instead of toys, we're talking about all the technology and gadgets that people use every day. Sometimes, there are just too many gadgets, and things get confusing and hard to use. So, this plan is about making things simpler and getting rid of the extra stuff we don't really need, just like how we'd put away the toys we don't play with as much. This way, life stays fun and easy, just like when you have your favorite ball ready to go!
>>
>>101924896
None if you are on linux, probably even worse perf, zluda is just used by wintoddler because they don't have full rocm support.
>>
>>101924939
start using comfy >>101923922
it's 2 hours or so into this adventure and I'm already using comfy with better results and stability.
only roadblocks are trying to navigate this ADHD spaghetti mess of nodes and updates to figure out how I can get this shit to save metadata in the PNGs
>>
File: ComfyUI_temp_cnurn_00003_.png (1.06 MB, 1024x1024)
1.06 MB
1.06 MB PNG
>>101924620
wtf it worked. this is an fp16 with t5xxl fp16 all-on-GPU image. uses 22.7/25.0gb vram. I don't reckon I can use loras though. 4.39s/it compared to 1.46s/it on gguf8 and 1.20s/it on NF4.
>>
File: latina.png (1.33 MB, 1024x1024)
1.33 MB
1.33 MB PNG
how do i get realistic teeth
>>
File: file.png (415 KB, 768x512)
415 KB
415 KB PNG
>>101924840
>The Technical Decrescence Initiative is a concept focused on simplifying technology by reducing unnecessary complexity. Imagine that over time, technology keeps advancing, and we keep adding more features, gadgets, and systems. While this can be exciting, it also leads to a lot of clutter, confusion, and things becoming harder to use or maintain.
>This initiative is about stepping back and deliberately "downsizing" the technology we use—removing what's not essential and refining what's left. The goal is to make technology more intuitive, efficient, and accessible by stripping away the excess. It's like cleaning up your room: getting rid of things you don't need so you can easily find and use the stuff that really matters. By doing this, we make sure technology serves us better without overwhelming us.
>>
GUIDANCE TOO HIGH. Turn your guidance down by at least 40% RIGHT NOW.
>>
>>101924944
>>101924965

kek, thanks for indulging in my schizoposting
>>
This guide for ayymd still up to date?
https://rentry.org/sd-nativeisekaitoo or should I just use the docker?
>>
File: file.png (965 KB, 768x512)
965 KB
965 KB PNG
>>101924963
>mexican woman laughing. her teeth are clearly visible. she is in a park
Here's your slop, sir.
>>
>>101924995
>mfw amdlets in any given ai general
>>
File: file.png (894 KB, 768x512)
894 KB
894 KB PNG
>>101925006
Are you making fun of me? Here. Have another.
>>
File: ComfyUI_00813_.png (2.18 MB, 1024x1024)
2.18 MB
2.18 MB PNG
>>101925006
>mfw i make the logical and optimum choice to go for nvidia despite the price difference, so i never have to deal with ayyMD bullshit again

damn it feels good to be right
>>
File: ComfyUI_00032_.png (590 KB, 512x768)
590 KB
590 KB PNG
>>101924896
>>101924945
Works on windows
>>
File: ComfyUI_02666_.png (3.55 MB, 1248x1824)
3.55 MB
3.55 MB PNG
>>
>>101925029
ANOTHER ONE!

>>101925034
no shit man holy fuck. I was already getting extremely suspect when all the OPENCL drama was happening between AMD and the Blender foundation, and people called me crazy/stop complaining/etc. Look where we are now. AMD's barely relevant anymore.
like jumping off a sinking ship that was already 80% of the way deep.
>>
>>101925045
Ah, I'm on Linux.
>>
>>101924984
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
# ROCm build of PyTorch for AMD cards
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
pip install -r requirements.txt
python main.py
>>
File: file.png (1015 KB, 768x512)
1015 KB
1015 KB PNG
>>101925059
Here. Yeah, I actually find the argument some anon made yesterday quite compelling: AMD's main reason for existing might be so that NVIDIA doesn't have to deal with monopoly lawsuits.
>>
File: ComfyUI_00033_.png (625 KB, 512x768)
625 KB
625 KB PNG
>>101925057
This is good, what prompt did you use for the light?
>>
File: file.png (1.01 MB, 768x512)
1.01 MB
1.01 MB PNG
Welp, it looks like I'm going down this path now. How long can a flux prompt get?
>>
File: 1704695247791936.jpg (104 KB, 768x1024)
104 KB
104 KB JPG
>>101925078
thanks
>>
File: mechadark.png (1.35 MB, 1024x1024)
1.35 MB
1.35 MB PNG
>>101924963
>>101925126
looks like it's just the model; the other was base pony, and this is mechadarkmix that I inpainted. so I'll have to gently inpaint teeth with a different model that doesn't ruin the colors and lighting
>>
>>101925084
Probably just the 90s modelling photo bit and the rainbow dancing across the lens is the important part, but I'll post the full prompt.

>90s modelling photo from a japanese gravure magazine. She's in a japanese high-rise apartment, dressed in a long sleeve sweater and wearing no pants. There's a rainbow dancing across the lens of the camera. She is making a peace (V) sign.

>The girl has a realistic pair of cat ears. She is a catgirl.
>>
File: ComfyUI_00036_.png (612 KB, 512x768)
612 KB
612 KB PNG
>>101925126
>>
File: file.png (1.01 MB, 768x512)
1.01 MB
1.01 MB PNG
No matter what I do, no maggots appear on the picture. Why?>>101925181
Imma test this on schnell now.
>>
File: 1723718824956713.png (2.61 MB, 2772x3061)
2.61 MB
2.61 MB PNG
Can somebody spoonfeed me the method for getting rid of background blur with flux dev? I've tried replicating this >>101903393 but have a hard time following it and am getting bad results
>>
File: file.png (912 KB, 768x512)
912 KB
912 KB PNG
>>101925181
>>
>>101925229
use this tutorial
https://reddit.com/r/StableDiffusion/comments/1estj69/remove_the_blur_on_photos_with_tonemap_an/
>>
File: file.png (880 KB, 768x512)
880 KB
880 KB PNG
>>101925195
>>
File: FD_00103_.png (1.62 MB, 1024x1024)
1.62 MB
1.62 MB PNG
>>
File: ComfyUI_00038_.png (560 KB, 512x768)
560 KB
560 KB PNG
>>101925238
>>101925181
>>
>>101925270
Very nice.
>>
File: ComfyUI_02675_.png (1.94 MB, 832x1216)
1.94 MB
1.94 MB PNG
>>101925238
>>101925270

I wonder if this is just a schnell thing, or whether having a lot of the optimizations the black miku guy posted is helping with style adherence.

Here's a box if you wanted to play around with it, changed up the prompt a bit to reduce the unnecessary parts.

https://files.catbox.moe/4or69c.png
>>
File: file.png (711 KB, 768x512)
711 KB
711 KB PNG
>>101925259
I'm starting to see that anything that could remotely resemble gore is missing from this model.
>>101925305
Damn, that looks good.
>>
>>101925305
Also I'm running full dev and a custom CLIP so it might require a few adjustments to run.
>>
>>101925320
Highly recommend the CLIP btw, felt like a flat-out improvement to my gens.

https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-BEST-smooth-GmP-ft.safetensors
>>
>>101925320
>>101925305
I'll try the gravure magazine too
>>
File: FD_00110_.png (1.73 MB, 1024x1024)
1.73 MB
1.73 MB PNG
What the fuck happened here. To be honest I've missed genning monstrosities.
>>
>>101925305
Catbox? Which Lora is this?
>>
File: ComfyUI_Flux_40.png (1.67 MB, 1344x768)
1.67 MB
1.67 MB PNG
Hm, content-aware inpainting is breddy good in comfyui
>>101924938
>manually redrawing
lol
lmao
>>
>>101925341
>mfw i pop a boner to one of my failgens
>>
>>101925367
This is vanilla Flux.
>>
File: FLUX_00048_.png (1.31 MB, 896x1152)
1.31 MB
1.31 MB PNG
I know it doesn't make sense
>>
>>101925367
>>101925378
nvm, you posted it, I am a blind retard
>>
>>101924748
>the model itself is 12gb,
and the quanted model is smaller
>>
File: ComfyUI_02556_.png (1.68 MB, 832x1216)
1.68 MB
1.68 MB PNG
>>101925389
no problemo brother
>>
>>101925390
Q8_0 is 12.7gb big
https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q8_0.gguf
Just loading the model will overflow a 12gb vram card
>>
giga billion parameter model
>>
File: 00014-3713629951.png (1.96 MB, 1280x1024)
1.96 MB
1.96 MB PNG
I need to get my creative juices flowing more often, it's fun
>>
File: ComfyUI_00042_.png (481 KB, 512x768)
481 KB
481 KB PNG
>>101925335
How do you add the clip to comfyui?
>>
File: flxgn.jpg (193 KB, 832x1216)
193 KB
193 KB JPG
>>101925305
If you like vintage stuff, I recommend this Lora as well:

https://civitai.com/models/646686/japanese-photo-1980s-style-1980
>>
File: if only.jpg (433 KB, 1544x1552)
433 KB
433 KB JPG
>>101925341
>>101925369
>>
>>101925482
Put it in models/clip. Use the DualCLIPLoader like that anon to load this one (replacing clip_l) and t5
>>
File: ComfyUI_00043_.png (620 KB, 512x768)
620 KB
620 KB PNG
>>101925181
>>
>>101925335
Can confirm, it's excellent.
>>101925482
You need picrel node, it's a default one.
>>
File: FD_00129_.png (1.35 MB, 1024x1024)
1.35 MB
1.35 MB PNG
Unbothered. Moisturized. Happy. In My Lane. Focused. Flourishing.
>>
>>101925544
I notice you have the models reversed in order. Is there a difference? If I use cliptextencodeflux, should I use the bottom or the top textarea?
>>
>>101925335
>Highly recommend the CLIP btw, felt like a flat-out improvement to my gens.
>>101925544
>Can confirm, it's excellent.
Same, It helped me to remove the blur even better: https://imgsli.com/Mjg3OTU5
>>
>>101925577
was meant to reply to >>101925465
>>
File: FLUX_00051_.png (1.28 MB, 896x1152)
1.28 MB
1.28 MB PNG
>>
>>101925482
Clip goes in your \ComfyUI\models\clip folder.

If you're asking about how to actually load it, then you need to use the DualCLIPLoader node with the newly downloaded CLIP (i.e. >>101925544) on one input and the fp8 or fp16 t5 text encoder on the other

https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main (if you don't have them downloaded)
>>
File: file.png (968 KB, 512x768)
968 KB
968 KB PNG
What are the recommended sampler and scheduler for schnell? deis/ddim_uniform completely fucks up the image.
>>
have now tried
>https://github.com/giriss/comfy-image-saver
>save image extended
neither work properly, I can't get all the metadata i need. someone be my savior here for fuck sake.
>>
>>101925627
euler simple
>>
>>101925645
Sorry anon didn't see your post earlier, this is what I use.
https://github.com/receyuki/comfyui-prompt-reader-node
>>
File: file.png (1008 KB, 512x768)
1008 KB
1008 KB PNG
>>101925305
There's definitely some magic going on in your workflow, or schnell is just so shit compared to dev.
>>
File: FD_00153_.png (1.61 MB, 1024x1024)
1.61 MB
1.61 MB PNG
>>
File: file.png (1.01 MB, 512x768)
1.01 MB
1.01 MB PNG
>>101925655
Improvement.
>>
File: ComfyUI_00044_.png (583 KB, 512x768)
583 KB
583 KB PNG
>>101925666
>>101925601
>>101925544
Thanks, works a bit better
>>
File: flux_00166_.png (1.3 MB, 1024x768)
1.3 MB
1.3 MB PNG
>>101924421
>>101921977
So how do you achieve a consistent person?
>>
>>101925712

hm if this is using my workflow then maybe schnell is just limited for those kinds of styles
>>
>>101925341
That's Comfy
>>
>>101925657
this shit doesn't seem to work for reading data from an image, I upload it and the node doesn't change.
>>
>>101925738
Ah I've never used that feature, I just use it for saving metadata to images I view on infinite image browser. When I need to recreate an image I normally just grab the details from there and input it manually or just drag the image into comfy to copy the image verbatim.
>>
>>101925734
>>
Any good profiles for flux LoRa training on bmaltais kohya?
>>
>>101925763
>dragging an image into comfy literally creates the entire node setup from scratch for you
..oh.

well okay that's that mystery solved, now im gonna actually try the part from your suggestion i needed.
>>
>>101925772
MY EYES
>>
File: FLUX_00052_.png (1.25 MB, 1152x896)
1.25 MB
1.25 MB PNG
vgh
>>
>>101925772
Yeah I have a lot more going on. Mostly implemented a lot of these tweaks with my own numbers. https://old.reddit.com/user/Total-Resort-3120/submitted/

You can >>101925305 drag the image from the catbox into comfyui to recreate my workflow and try and adjust from there if you wanna
>>
File: ComfyUI_00570_.png (2.16 MB, 1536x1152)
2.16 MB
2.16 MB PNG
>>
File: FD_00164_.png (1.34 MB, 1024x1024)
1.34 MB
1.34 MB PNG
>>101925733
It was purely accidental. Likely a coincidence.
>>
File: ComfyUI_085354_0_00001.png (1.33 MB, 768x1152)
1.33 MB
1.33 MB PNG
>>101925763
Yeah, overall I have no fucking clue how this setup works or how it hooks up to the other nodes to simply save the metadata..
I just 100% flawlessly recreated picrel by dragging and dropping it into comfyui, so that was neat, but I still need to find a way to actually save metadata since it apparently needs two nodes to get the job done? That example setup is confusing as shit.
>>
File: file.png (1.1 MB, 600x800)
1.1 MB
1.1 MB PNG
>>101925734
Your workflow crashed :3 I'm trying to get closer by adding all the stuff you have one by one in order to find the magic ingredient. It seems the adaptive guidance and the sampler make a big difference. But yeah, schnell seems really limited here.
Moving things around seemed to help:
>A very old vintage 90s modelling photo that is very faded and has a lot of signs of wear and tear.
>The photo is from a japanese gravure magazine. A girl is in a vintage appartment building, wearing a t-shirt and nothing else. There's sunlight coming in through a draped window behind her. There's a rainbow dancing across the lens of the camera.

It seems to suffer from brain damage. I understand that it will ignore me saying she's wearing nothing and put some panties on her if her butt is gonna show, but it also seems oblivious to some concepts, like the wear and tear that yours so beautifully included. I had to put a lot more effort into the prompt to get anywhere near that.
I'm going to try a 5 minute dev version next to compare.

She still looks like a modern plastic surgery Korean model and it's disgusting.
>>
is ModelSamplingFlux supposed to be before or after the lora loaders?
>>
>>101924093
kawaii
>>
File: ComfyUI_00049_.png (634 KB, 512x768)
634 KB
634 KB PNG
>>101925830
I think it's the dev with more steps, but that would take too long to generate compared to schnell 4 steps. Tried heunpp2 karras

>>101925942
>>
File: ComfyUI_00456_.png (1.44 MB, 1368x768)
1.44 MB
1.44 MB PNG
>>101925246
Thanks, looks like it'll take some tweaking but it's getting there
>>
File: file.png (1.08 MB, 600x800)
1.08 MB
1.08 MB PNG
>>101925942
20 step dev comparison. schnell was euler/simple. This is deis/sgm_uniform.
So, the problem is not schnell. Something in your workflow makes it magical. Gotta find it. I'll try setting CFG to 5 and reproducing your negative with 2.5 and 8 guidance.

Damn dev looks so much more authentic.
>>
File: ComfyUI_00052_.png (477 KB, 512x768)
477 KB
477 KB PNG
>>101926035
schnell 4 steps deis sgm
>>
File: file.png (1.18 MB, 600x800)
1.18 MB
1.18 MB PNG
>>101926035
Back to schnell. The CFG/guidance trick destroys the image. I am defeated.
>>
File: Capture.jpg (64 KB, 753x732)
64 KB
64 KB JPG
>>101925246
For those using Tonemap, here's a list of recommended multiplier values to use for each CFG from 2 to 12 (took me way too much time to find these...)
>>
File: ComfyUI_00053_.png (545 KB, 512x768)
545 KB
545 KB PNG
>>101926071
>>
File: file.png (207 KB, 750x918)
207 KB
207 KB PNG
>>101925939
You need to convert everything to an input and convert the right things. Unfortunately I found it extremely clunky, yet still the best thing to work with. Ended up having to convert a few things to strings using utility nodes.

Looks like the new Prompt Merger Node & Type Converter Node will help with this.

Also, they have comfy workflows at the bottom of the page which should help.

https://github.com/receyuki/comfyui-prompt-reader-node?tab=readme-ov-file#example-workflow
>>
File: ComfyUI_00054_.png (546 KB, 512x768)
546 KB
546 KB PNG
>>101926099
>>101926072

same, with this lora

https://huggingface.co/XLabs-AI/flux-lora-collection#mjv6_lora

0.50
0.75
>>
>>101926072
Have you got adaptive guidance on? Try setting the threshold lower (0.940~) and it should turn back towards CFG 1.0 earlier and maintain the image without high cfg artifacts.

Might just be that dev can handle these higher numbers.
>>
File: file.png (171 KB, 240x320)
171 KB
171 KB PNG
>>101926035
I don't think I can come back from this. I'm going to have to cope with 5+ min gens. Someone make a Q2_0 quant.
>>101926133
Alright, let me try.
>>
>>101926161
Also have a play with the guidance negative as well, it is also quite high (although both higher cfg and guidance negative create greater adherence to the prompt, so it's a fine balance).
>>
>>101926022
You're welcome, try to tinker with GuidanceNeg if you have some cases where the blur removal isn't strong enough, there's always some values that do the trick better than the others somehow
>>
>>101926072
>Back to schnell. The CFG/guidance trick destroys the image. I am defeated.
Do you use Dynamic Thresholding? If yes then ditch that shit, there's something better for realistic shit, it's Tonemap >>101925246
>>
>>101926108
It's unnecessarily complicated; this should be a default part of the image output that comfyui ships with.
The example workflow doesn't work because I have no idea how to turn off the img2img part; it won't work without passing the initial gen nodes through it.
I really do not want to use comfyui without being able to replicate images later, this is such a bummer.
>>
File: ComfyUI_00058_.png (566 KB, 512x768)
566 KB
566 KB PNG
>>101926130
Never saw that reflection before generated
>>
File: file.png (1.15 MB, 600x800)
1.15 MB
1.15 MB PNG
>>101926072
Increasing steps helped. Still looks nothing like the 90s. Schnell is just too sloppified I guess.
>>
>A1111 won't find adetailer because I have a space in my pc's name and now the folder path fucks up
without doing some registry key fuckery do I just shoot myself?
>>
>>101923951
I agree posting images is overrated.
>>
ComfyUI gguf lora yet?
>>
WOOOO MADE THE OP COLLAGE
>>
File: RetroXL0005.jpg (193 KB, 1320x1320)
193 KB
193 KB JPG
>>101925369
I dont get boners anymore. Consider yourself lucky
>>
>>101926354
what's the criteria for making the collage?
>>
>>101926325
Are you using sgm_uniform?
>>
>>101926377
Rethink your post please
>>
>>101926389
no bitch, I wanna know
>>
File: ComfyUI_01127_.png (1.97 MB, 1008x1456)
1.97 MB
1.97 MB PNG
>>101926377
post good/unique/hot gens
>>
>>101926398
Like I said, rethink your post. You are asking an irrelevant question
>>
>>101926403
this a style lora?
>>
File: ComfyUI_00572_.png (2.43 MB, 1536x1152)
2.43 MB
2.43 MB PNG
>>
>>101926425
I can ask whatever the fuck I want bitch
>>
File: file.png (924 KB, 512x768)
924 KB
924 KB PNG
>>101926223
My issue here is prompt adherence, not realism. Thank you anyway.
>>101926382
I tried a bunch of things.

I will now stay with dev and spend many hours staring at slow progress bars while I wait for a ridiculously small gguf to come into my life.
>>
>>101926108
Mind sharing whatever workflow you use consistently to gen with? I'm just going to run off that and remove/add whatever I need. I'm at my wits' end trying to figure this out: I have at least a modicum of understanding now, but I keep getting filtered just by wanting metadata, other people's presets having weird unnecessary stuff, preset settings that don't make sense, things that are missing(?), or workflows that don't work at all for some reason
now I remember why I was filtered by comfyui initially 6 months ago.
>>
>>101926449
>My issue here is prompt adherence, not realism.
Tonemap allows you to go to high CFG and then have prompt adherence, and unlike DynamicThreshold it doesn't burn the picture at all
>>
can Forge pin T5 to RAM/CPU yet?
>>
>>101926428
nope just chatgpt boomer prompting

https://files.catbox.moe/bt4ab4.png
>>
>>101926474
I tried getting it running when I was making my last post but I've been away from genning for a few months and it looks like the saving part of the workflow has broken.

Plus it's tuned to Pony instead of Flux so it's pretty old news now outside of hentai.
>>
File: 729435.jpg (2.18 MB, 1792x2304)
2.18 MB
2.18 MB JPG
>>
>>101926516
How the fuck did you manage to make a lora of Andrea Botez for flux?
Or is it just a coincidence?
>>
>>101926531
it's on civitai
>>
These loras of celebrities are just wrong to me anons
>>
>>101926482
What are the advantages of using NF4 vs GGUF?
>>
File: img__00009_.png (994 KB, 832x1216)
994 KB
994 KB PNG
>>
>>101926555
if i made that gen with NF4 it was just because GGUF wasn't out yet. GGUF is better and adheres closer to FP16.
>>
>>101926509
I mean I use pony primarily; I'd imagine it can't be hard to switch things over to XL.
Doesn't the node just update to whatever's being sent to it anyway?
>>
File: FluxDev_01442_.jpg (200 KB, 1024x1024)
200 KB
200 KB JPG
her favorite day
>>
>>101926555
>What are the advantages of using NF4 vs GGUF?
Nothing, Q4_0 is better than nf4 and it's the same size >>101924329
>>
Can you use loras with the GGUF loader yet?
>>
File: ComfyUI_00097_.png (1.11 MB, 832x1216)
1.11 MB
1.11 MB PNG
1girl, walking
>>
Can you put T5 on the CPU in Forge yet?
>>
File: 2389565483.png (930 KB, 768x1344)
930 KB
930 KB PNG
>>101926620
she might tip over
>>
>>101926592
there's a pam lora already? lel
>>
>>101926592
lmao, poor Pam
>>
File: ComfyUI_00096_.png (2.6 MB, 1248x1848)
2.6 MB
2.6 MB PNG
>>101926641
the weight of her ass will help hold her back
>>
>>101926581
Well as I said, the saving part of the workflow is broken right now and there's a shit ton of custom nodes in here since I'm one of those autistic comfy users, but if you want to try and fix it - be my guest.

https://files.catbox.moe/30sn94.png

Also you'll need this custom node I designed to parse loras into the correct string format from the output of rgthree's Power Lora Loader. No guarantee that works now after a couple of months of updates though.

https://files.catbox.moe/vl02sx.py
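For anyone curious what a node like that involves, here's a bare-bones sketch of a ComfyUI custom node that turns a lora stack into a prompt-style string. It is not the actual node from the catbox link, and the input format (a list of (name, strength) pairs) is a guess; rgthree's real output may differ.
[code]
# minimal ComfyUI custom node sketch - not the actual vl02sx.py
class LoraStackToString:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"lora_stack": ("LORA_STACK",)}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "convert"
    CATEGORY = "utils"

    def convert(self, lora_stack):
        # e.g. [("myLora.safetensors", 0.75)] -> "<lora:myLora:0.75>"
        parts = [f"<lora:{name.rsplit('.', 1)[0]}:{weight}>"
                 for name, weight in lora_stack]
        return (", ".join(parts),)

NODE_CLASS_MAPPINGS = {"LoraStackToString": LoraStackToString}
[/code]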
>>
File: ComfyUI_00069_.png (490 KB, 512x768)
490 KB
490 KB PNG
>>101926449
Try restarting comfyui and giving schnell another go; out of 25 gens, 5 were ok
>>
>>101926373
based old man posing on the designated old people thread
>>
>>101926680
woops missed that crucial part of your message.
welp if you manage to figure this shit out today, let us know. Im taking a break to cool off before i try figuring out workflow again this morning.


>also my apologies if my rant sounded like i shot down autistic comfy users, autism is the name of the game i find nothing wrong with that
>>
File: RetroXL0002.jpg (187 KB, 1320x1320)
187 KB
187 KB JPG
>>101926699
I'm 32.
>>
File: ComfyUI_00083_.png (572 KB, 512x768)
572 KB
572 KB PNG
>>101926516
>>101926531
>>101926539

Link?
>>
>>101926701
kek, it's okay man im all about helping people make better gens. I'll end up setting up image saving at some point again so I'll post my workflow if you're still lurking at that point
>>
>>101926717
civitai has a search function
>>
>trump loras on civit
Yeah, cause Flux definitely needs loras to do Trump.
>>
File: file.png (14 KB, 528x432)
14 KB
14 KB PNG
OK, which one of these two fields do I use, and very important, WHY?
>>
File: file.png (2.34 MB, 1024x1024)
2.34 MB
2.34 MB PNG
>>101926695
I found some ideas I'll try next.
>if you're having trouble with it not following style-related parts of the prompt, try dialing down the guidance to 1.0-1.5. the default 4 works better with short/low-effort prompts; lower will listen better if you're actually putting in effort.
and
>out of focus, 1970s, 70s, blurry old photograph of [...] in the 70s, bleached scratched damaged torn smudged dirty wrinkled polaroid, kodak portra 400, cine film still, soft lighting, highly detailed, absurdres

Meanwhile, pic related took more than 20 minutes, so I'm not going to do that anymore.
>>
>>
>>101926730
It got fucking deleted
>>
>>101926819
no it didn't
fucking techlets can't even into search
>>
>>101926592
I wonder if it's possible to train a "Jim stare" lora. I don't care about the character himself. Or is it achievable via boomer prompting?
>>
https://civitai.com/models/652009/fluxrealisticv1?modelVersionId=729439
One of the first true finetunes? Doesn't seem to be a heavy train but the realism is a little less slopped.
>>
>>101926819
good, nobody should have to deal with a lora being made of themselves, it's extremely unethical in my eyes
>>
>>101926863
fucking shitheads putting up checkpoints with T5 in it so we have to download the same 5/10GB over and over
>>
>dev
>low guidance
>say she's wearing panties
>she's wearing panties

>schnell
>same settings
>say she's wearing panties
>she's wearing full leg length jeans

Thank you, I feel so much safer now.
>>
>>101926715
>Erectile disfuction at 32
kek
>>
File: 202316152.png (1.11 MB, 1344x768)
1.11 MB
1.11 MB PNG
>>
File: 1723829273412.jpg (179 KB, 761x1549)
179 KB
179 KB JPG
>>101927036
???
>>
File: RetroXL0001.jpg (170 KB, 1320x1320)
170 KB
170 KB JPG
>>101927044
I have cancer. Its not ED
>>
>>101926161
>gguf-py got updated to 0.10.0 and now supports dequant of all quants, like K and IQ quants, with numpy
>no one is using it, neither the custom node from city96 nor forge
I swear I am going to hack my own if no one has figured this out by the time I get back tonight.
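Rough idea of what that hack would look like with gguf-py >= 0.10.0 (untested sketch; the file name is made up and the helper names may differ slightly between releases):
[code]
from gguf import GGUFReader
from gguf.quants import dequantize

reader = GGUFReader("flux1-dev-Q4_K_S.gguf")  # hypothetical file name
for t in reader.tensors:
    # t.data is the raw quantized block data; dequantize() expands it back
    # to floats with numpy, whatever the quant type (Q4_K, Q8_0, IQ..., etc.)
    weights = dequantize(t.data, t.tensor_type)
    print(t.name, weights.shape, weights.dtype)
[/code]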
>>
>>101927098
I don't know man. I put "panties" in the tag clip prompt and she at least is wearing shorts now.
>>
File: file.png (2.46 MB, 768x1344)
2.46 MB
2.46 MB PNG
>>101927115
Please do. I would try to do it myself, but I have never worked with numpy before.
>>
File: ComfyUI_00094_.png (562 KB, 512x768)
562 KB
562 KB PNG
>>101926872
>>101926859
The google one got deleted, here is the new one
https://civitai.com/models/650574/andrea-botez-flux

>>101926795
>>
>>101926863
Damn the wrinkles in the sheets and clothes look so much better on that. I prefer it for that reason alone.
>>
>>101927148
what do you mean "the google one"?
The loras from this AIENGI guy barely work.
>>
>>101925589
it looks worse wtf
>>
File: file.png (530 KB, 512x512)
530 KB
530 KB PNG
Would you?
https://civitai.com/models/652143/bombshell-biceps-flex-flux
>>
File: 2796359731.png (1.09 MB, 1344x768)
1.09 MB
1.09 MB PNG
>>
>>101927190
ayeee thats a goirl that eats her spinach
>>
>>101927148
I will never understand the obsession with ecelebs for anyone who is not underage.
>>
File: file.png (2.07 MB, 768x1152)
2.07 MB
2.07 MB PNG
Effortprompting is the way.
>>
>>101927206
>obsession
>>
>>101926863
Can it do NSFW?
>>
What weapons does Flux know? I either get AKs or AR-15s for literally everything
>>
File: ComfyUI_00095_.png (521 KB, 512x768)
521 KB
521 KB PNG
>>101927165
There was some older one appearing on google

>>101927198
Is this dev?

>>101927206
It's good to compare
>>
>>101927220
AKs and AR-15s
>>
>>101926863
can't unsee the chins
>>
>>101924537
Basically data transfer speed, which would kinda negate the gains from traditional VRAM a bit. Intel tried to do that years back (Larrabee) but with extra buses and connectors it's just too slow, and you can't market that easily. It's easier to strap a lot of VRAM onto a low-performing GPU instead
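Quick ballpark numbers on why the bus kills it: a diffusion step has to touch every weight, so every step would wait on the PCIe transfer instead of on-board VRAM.
[code]
model_gb = 12.0
gddr6_bw = 448.0     # GB/s, on-board VRAM on a typical GDDR6 card (ballpark)
pcie4_x16_bw = 32.0  # GB/s, theoretical PCIe 4.0 x16, one direction

print(f"stream weights from on-board VRAM: {model_gb / gddr6_bw * 1000:.0f} ms")
print(f"stream weights over PCIe 4.0 x16 : {model_gb / pcie4_x16_bw * 1000:.0f} ms")
# ~27 ms vs ~375 ms per full pass over the weights
[/code]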
>>
File: 145101_00001_.png (1.13 MB, 1024x1024)
1.13 MB
1.13 MB PNG
>>101926965
Yeah, it's retarded of them to stop people from using any applicable future flux VAEs that get released, and from offloading T5 to another device, just because they only released an all-in-one checkpoint.
>>
>>101927215
Ecelebs are toxic individuals who live off simps and donos, I am sure you already know that.
>>
>>101927247
wow he's literally me
>>
Any flux models or loras that can do tiddies and vagoo yet?
>>
File: file.png (2.08 MB, 768x1152)
2.08 MB
2.08 MB PNG
>>101927247
Me right now.
>>
File: ComfyUI_00096_.png (523 KB, 512x768)
523 KB
523 KB PNG
>>101927225
>>
>>101927247
that's not how any of it works, anon
you know how the checkpoint loader in ComfyUI shows the diffusion model, CLIP and VAE as outputs? You're not forced to use that VAE when using a checkpoint, you can load just what you want
the problem is what I stated, making people download the same T5 over and over, the CLIP and VAE are small enough not to matter but T5 is huge
>>
File: ComfyUI_00097_.png (220 KB, 320x640)
220 KB
220 KB PNG
>>101927280
>>
>>101927251
you're obsessed with talking about ecelebs, anon
>>
>>101927309
Well several people on civit have been compelled to train models on them, and now one is shared here. If you're the eceleb buy an ad.
>>
>tfw 12GB Vramlet
It was never more over
>>
>>101927324
and you consider that an obsession?
>>
File: file.png (2.03 MB, 768x1152)
2.03 MB
2.03 MB PNG
This thing has attention for days. You just keep describing every single little thing in the scene and it keeps adding detail.
>>101927328
>never more over
I'm an 8GB vramlet.
>>
>>101927336
If you even care about the existence of the eceleb then yes.
>>
File: 1970670911.png (1.09 MB, 1344x768)
1.09 MB
1.09 MB PNG
>>101927225
ye
>>
>>101927368
weird definition of obsession
>>
>>101927283
>You're not forced to use that VAE when using a checkpoint, you can load just what you want
Are both VAEs loaded into VRAM if you choose a separate VAE with a model that has a baked-in VAE? Then that is bad: why use VRAM on something the user is not using? And for some users, that baked-in CLIP and VAE may mean the difference between being able to run at a given resolution/batch size/number of loras and OOMing.
>>
>>101927247
Now he's *really* me
https://files.catbox.moe/49e6ux.png
>>
>>101927377
Finding the eceleb on your X feed and checking out some images is one thing, downloading all of their images and training a model and then sharing with others is another.
>>
File: file.png (894 KB, 512x768)
894 KB
894 KB PNG
Let's see what this prompt looks like with dev and a proper resolution.
In case you want to follow along:
>An old, blurry, grainy vintage, modelling photograph from a Japanese gravure magazine. The photo has a lot of signs of wear and tear. It is very degraded.
>The photo features a girl in a classic 70s apartment building. She is smiling, and she has classic Japanese facial features. The girl's hair is black, disheveled, and short. It is a little bit dry, and the light illuminates it softly. Her breasts are large, and they hang generously underneath the thin fabric of her t-shirt. She is wearing a t-shirt with the word "fresh" written in light blue letters. She is only wearing white panties.
>Her legs are bare and crossed. Her hands are small and feminine. The girl's skin has imperfections and pores that can be appreciated in the film.
>There is a subtle rainbow across the lens of the camera, and the light from the sun shines softly through the curtains on a window.
>There are some scratches and grain on the photo. The paper texture of the photo is very evident and one of the corners is torn.

>bleached scratched damaged torn smudged dirty wrinkled polaroid, kodak portra 400, cine film still, soft lighting
>>
>>101927403
lol
I tried with Yotsuba Koiwai on screen but it just threw up some random western granny.
>>
>muh ecelebs
Who gives a shit nigger
>>
File: ComfyUI_00312_.png (1.51 MB, 1024x1024)
1.51 MB
1.51 MB PNG
>>101927112
This is what I got with a prompt "There is a light that never goes out". I wish you the best possible outcome or a quick and painless way out.
>>
File: img__00012_.png (1.55 MB, 832x1216)
1.55 MB
1.55 MB PNG
>>
>>101927371
and is it the shitty AIENGI lora?
>>101927388
>Are both VAEs loaded into VRAM if you choose a separate VAE with a model that has a baked-in VAE?
No, you can load just part of the safetensors.
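If anyone wants to see what "loading just part of it" looks like in practice, here's a sketch with the safetensors library (the file name and key prefix are made up for illustration): the format keeps an index, so you can pull individual tensors without reading the whole file.
[code]
from safetensors import safe_open

# hypothetical all-in-one checkpoint and key prefix
with safe_open("flux1-dev-all-in-one.safetensors", framework="pt", device="cpu") as f:
    wanted = [k for k in f.keys() if not k.startswith("text_encoders.")]
    state_dict = {k: f.get_tensor(k) for k in wanted}  # skips the baked-in T5
[/code]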
>>
>>101927462
Thank you so much anon.
>>
File: ComfyUI_00103_.png (308 KB, 512x512)
308 KB
308 KB PNG
>>101927431
>>
File: 1782563701.png (1.33 MB, 896x1152)
1.33 MB
1.33 MB PNG
>>101927471
ye
>>
File: ComfyUI_Flux_62.png (1.46 MB, 832x1216)
1.46 MB
1.46 MB PNG
>>101927431
>>
>>101927536
suspicious bulge
>>
4090enjoyer who wants to make things of my favorite streamers, how fix?
>>
>>101927536
>that bulge
anon...
>>
File: fs_0071.png (760 KB, 768x768)
760 KB
760 KB PNG
>a delicious torta

hmm
>>
>>101927536
That's a man
>>
>>101927559
>>101927570
T. Has never seen a hairy pussy
>>
>>101927589
cope
>>
Bread that is fresh...
>>101927580
>>101927580
>>101927580
>>
File: file.png (1.03 MB, 1024x1024)
1.03 MB
1.03 MB PNG
>>101927431
>>
File: ComfyUI_00643_.png (1.73 MB, 832x1216)
1.73 MB
1.73 MB PNG
>>101925305
How do you even make this by yourself? Did you watch a youtube tutorial? What is the best youtube tutorial to learn comfy?

Your workflow takes me 70s to gen. Normally I do it in 30, sometimes 20 (it was 20 this morning, I have no idea why or how). Any ideas why it takes longer?
>>
File: hqdefault.jpg (12 KB, 480x360)
12 KB
12 KB JPG
>>101926516
holy shit trans??
>>
File: ComfyUI_02553_.png (1.85 MB, 768x1344)
1.85 MB
1.85 MB PNG
>>101927667
I started with these basic workflows, just seeing how everything fit together.

https://github.com/pwillia7/Basic_ComfyUI_Workflows

Then I just started experimenting and using Comfy like a playground. Researching custom nodes and how they affect a workflow, what strategies work where etc. That's what I like about it. It's the "mad science".
>>
>>101927122
put nude hopen open pussy bagina
>>
>>101927667
Oh also my workflow probably takes longer to gen because it uses various strategies that increase the gen time but create better gens. For example, using a negative prompt doubles the gen time, and different sampler/scheduler combos will take varying amounts of time for different results.
>>
File: ComfyUI_00650_.png (1.74 MB, 832x1216)
1.74 MB
1.74 MB PNG
>>101927871
How long have you been doing this? First day using comfy btw, first week doing AI. I tried out Fooocus and Forge before, but only on comfy can I reliably avoid OOMing.

>>101927899
Does it really need a negative prompt for "Cartoon, Drawing, Anime"? I rarely get non-photos. Wouldn't specifying photograph at the beginning of the positive prompt serve the same purpose?
>>
File: file.png (72 KB, 1049x221)
72 KB
72 KB PNG
>>101927964
About 3 months ago. I took a break for a month because I got a bit bored of genning until Flux came out.

Also yeah, I don't think the negative prompts are necessary here, but I genned a catgirl >>101925057 earlier and it gave me a couple of anime-looking girls even with the positive prompt reinforcing it.

But also in general, having the negative prompt present (instead of genning with just the positive conditioning, pic related) will help the model adhere more closely to your positive prompt, even when it's blank - since it still has an influence on the generation.
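Mechanically that's classifier-free guidance: with CFG above 1.0 every step runs two forward passes (positive and negative/empty conditioning) and extrapolates away from the negative one, which is also why adding a negative prompt doubles the gen time. Schematic sketch, not ComfyUI's literal code:
[code]
def cfg_combine(noise_cond, noise_uncond, cfg_scale):
    # cfg_scale = 1.0 -> the negative/empty prompt has no effect (one pass is enough)
    # cfg_scale > 1.0 -> pushes the prediction away from the (possibly blank)
    #                    negative conditioning, at the cost of a second pass
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
[/code]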
>>
File: ComfyUI_00821_.png (1002 KB, 1280x720)
1002 KB
1002 KB PNG
>Q4_1
>still can't do hands


