/g/ - Technology






File: the longest dick general.jpg (3.06 MB, 2500x2238)
Discussion of free and open source text-to-image models

Previous /ldg/ bred : >>102723260

AI Video Games Edition

>Beginner UI
Fooocus: https://github.com/lllyasviel/fooocus
EasyDiffusion: https://easydiffusion.github.io
Metastable: https://metastable.studio

>Advanced UI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://aitracker.art
https://civitai.com
https://huggingface.co
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/kohya-ss/sd-scripts/tree/sd3

>Flux
https://replicate.com/black-forest-labs/flux-1.1-pro
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DIT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai
>>
Blessed thread of frenship
>>
>>102744297
That was deprecated back when SD2.0 was released, the safety checker didn't make it into any of the later versions of diffusers.
>>
File: ComfyUI_34275_.png (539 KB, 848x1024)
>>
pixartists status?
>>
Meiling on the front page!
>>
testing local live portrait.
How do you expand the face detection area, or can't it be done? It seems the hair/neck still have limited movement.
Also, does anyone know where you can get good source clips of tiktok whores' portraits? Most of those vids have head movement all over the place.
>inb4 film it yourself
I would, but I need sound too. Deepfaking my voice would be too gay.
>>
File: 0.jpg (227 KB, 1152x896)
>>
File: IMG_0551.png (1.09 MB, 1024x1024)
>>
File: 00092-141845483.png (849 KB, 960x1024)
Anyone actually using Illustrious? Lora support seems to be pretty rough for it, and the documentation doesn't help much either
>>
File: 0.jpg (234 KB, 1152x896)
>>
did anyone manage to make a quant of the de-distill yet?
i tried yesterday following the instructions in >>102741794
with https://huggingface.co/nyanko7/flux-dev-de-distill/blob/main/consolidated_s6700.safetensors
but even the first conversion step, from that file to a BF16 gguf, seems broken: when i try to run the result, it gives me RuntimeError: mat1 and mat2 shapes cannot be multiplied (4032x64 and 256x768)
if not a quant, is there at least a working BF16 somewhere that i could use for the conversion?
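That "mat1 and mat2" error is just the basic matmul shape rule failing: the inner dimensions have to match, and (4032x64) @ (256x768) can't, which usually points at a tensor loaded with the wrong layout or key mapping rather than at the quant itself. A plain-Python sketch of the check that's failing:

```python
def can_matmul(a_shape, b_shape):
    """mat1 @ mat2 is only defined when mat1's column count equals mat2's row count."""
    return a_shape[1] == b_shape[0]

# the shapes from the RuntimeError in the post above
print(can_matmul((4032, 64), (256, 768)))  # -> False
print(can_matmul((4032, 64), (64, 768)))   # -> True
```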
>>
File: 00013-995668237 copy.png (3.52 MB, 1152x1632)
>>102746615
it's only been out for a little over 2 weeks.
loras exist, you just more or less have to go to where people actually post them (read: the porn threads).
I think anyone bothering to upload to civit is doing it under SDXL 1.0 or something with an illustrious header. Otherwise, as mentioned, the loras are getting squirreled away into mega/mediafire/etc. directories.
>>
bigma....
>>
dev 1.1....
>>
>>102746807
Ugh, CivitAI needs to be better at actually creating filters for new models that exist
>>
File: VPX-115.jpg (3.03 MB, 2608x2080)
What's up fellas, how is everyone today?
>>
>>102746868
Civitai needs to be thrown into the trash and replaced, but here we all are, sitting with a tab open that's actively leaking memory as we scroll down a page that has zero excuse not to employ pagination
>>
>>102744592
>Tree Church in the bottom picture.
How do we make this happen in real life?
>>
File: 00028-3081995937.png (3.55 MB, 1632x1152)
>>102747426
the real solution is to not use civit for anything other than downloading full base models, and to learn to train the shit you need/want yourself.
besides, there are more concerning things to care about; like how there's nowhere to really post images that actually take a modicum of effort and aren't just straight slop.
>>
File: cforest4.png (1.67 MB, 768x1344)
>>102747449
We start with a tree
>>
>>102746765
>the first conversion step of that file to BF16 gguf seems broken
I don't know why you even need that, when I converted to Q8_0 I did it directly here
https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/quantization_and_gguf.md
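For reference, the direct route in that doc is stable-diffusion.cpp's convert mode, roughly like this (a command sketch: the binary name, input path, and flag spellings should be checked against the linked page, since they change between versions):

```
./sd -M convert -m consolidated_s6700.safetensors -o flux-dev-de-distill-q8_0.gguf -v --type q8_0
```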
>>
>>102747556
How did you gen this img?
>>
https://reddit.com/r/StableDiffusion/comments/1fzrjp8/fluxbooru_v01_a_boorucentric_flux_fullrank/
>Used SimpleTuner via 8x H100 to full-rank tune Flux
NOW WE'RE TALKING
>>
Hibernation Thread: Ultra
>>
File: file.png (1.63 MB, 2265x1317)
>>102747616
https://huggingface.co/spaces/bghira/FluxBooru-CFG3.5
YIKES
>>
>>102747616
>no example outputs
>>102747659
holy sovl
>>
>>102747605
model is noobaiXL (it's a light further finetune of illustrious)
lowres https://files.catbox.moe/oauhjd.png
low effort redraw for i2i upscaling https://files.catbox.moe/klu760.png
raw upscale https://files.catbox.moe/28kbfo.png
photoshop file with all further edits and filter shit applied after the fact if you're so inclined https://files.catbox.moe/yp4cms.psd
you can get one of the loras here https://www.mediafire.com/folder/f1uuqrzy5s83e/arako_o (the noob version is in the noob directory)
I don't have the other one uploaded anywhere, but I'm going to rerun it later to, hopefully, make it a bit better anyway.
>>
>>102747588
trying to make a Q4_K_S with the source the other anon provided
i couldn't get stable-diffusion.cpp to work on my pc but maybe i'll just have to try it again, because at this point i'd be fine with a Q4_0 too
>>
File: file.png (2.15 MB, 3308x1764)
>>102747659
kek
>>
>>102745321
Will you help me remember to take my meds?
>>
>>102747706
the loras are cool, what artists are they?
>>
>>102747750
the one that's uploaded is arako o
https://x.com/arako_o/media
the other that isn't uploaded is deal360acv(just look for them on *booru)
>>
>>102747393
I woke up after 3 hours so I’m taking the day off
Event horizon was okay
>>
>>102744592
Nice Moebius vibes.
>>
File: file.png (284 KB, 1405x602)
>>102747659
Excellent
>>
File: file.png (61 KB, 1854x415)
>>102747659
>>102747717
that's a bit rich to say something like that when your model produces monstrosities like that lmao
>>
>>102745480
>>102746822
Continually blue balled
>>
File: file.png (204 KB, 2125x1405)
>>102747992
Why do they all act like fucking bitches, there's such a disconnect between finetuners and users
>>
we reddit now
>>
File: file.png (614 KB, 960x540)
>>102748168
>now
>>
Total Mikufag Death
>>
>>102748189
how times have changed
>>
>>102748297
desu that's our fault, we shouldn't have to go to reddit to get the AI news, but nothing's happening there so...
>>
File: file.png (81 KB, 2819x409)
>>102747716
>trying to make a Q4_K_S with the source the other anon provided
Stable-diffusion.cpp seems to be able to do Q4_K but idk if that's S or M
>>
>>102748378
hopefully M
>>
>>102748378
oh cool, ill try it out
>>
File: file.png (279 KB, 2176x1284)
>>102748402
>hopefully M
Sounds about right, yeah
https://github.com/ggerganov/llama.cpp/discussions/2094#discussioncomment-6351796
>>
>two threads worth of anon not knowing how to quant an image model
ldg is full of retards
>>
>>102748471
you don't know how to do it either, or else we would've seen your un-distill quants on huggingface
>>
>>102748483
my quants are for me not you
>>
>>102748490
you have no quants, you don't know how to make them, you're a retard anon
>>
My quants go to a different school
>>
>>102747659
did you try masterpiece best quality?
>>
File: file.png (2.14 MB, 2405x1389)
>>102748749
that's better
>>
File: flux booru.jpg (151 KB, 1024x1024)
Desu lots of furry
>>
File: 00020-2212530929.png (2.14 MB, 1248x1824)
New model
https://civitai.com/models/838583?modelVersionId=938191
>>
File: file.png (517 KB, 3082x1018)
>>102746765
>but even the first conversion step of that file to BF16 gguf seems broken, as when i try to run it, it gives me a RuntimeError: mat1 and mat2 shapes cannot be multiplied (4032x64 and 256x768)
>if not a quant, is there at least a working BF16 somewhere that i could try using for the conversion?
I think you messed something up, I managed to get the BF16.gguf working
>>
>>102748888
you made that model anon?
>>
File: flux booru.jpg (121 KB, 1024x1024)
>>
>>102749023
yeah
>>
File: flux booru.jpg (254 KB, 1024x1024)
>>102747992
If he posted what a good output looks like then maybe I'd believe him
>>
>>102749135
it's hard to gaslight if you define the goalposts.

Who is this guy anyways?
>>
File: file.png (30 KB, 1939x170)
there seems to be a difference depending on whether you do the quant with stable-diffusion.cpp or with ComfyUI-GGUF; I think the latter is the better one, the speed is better and so is the quality
https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/quantization_and_gguf.md
https://github.com/city96/ComfyUI-GGUF/tree/main/tools

https://imgsli.com/MzA0OTIw
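Tooling differences aside, what both are doing for Q8_0 is per-block absmax int8 quantization: one float scale per block of weights plus int8 values. A toy numpy sketch (not the real GGUF block layout, just the idea and the size of the round-trip error):

```python
import numpy as np

def q8_0_roundtrip(w, block=32):
    """Toy per-block absmax int8 quant, loosely in the spirit of GGUF Q8_0.
    Not the real file format; just illustrates the error scale."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0                      # avoid div-by-zero on all-zero blocks
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return (q * scale).reshape(-1)               # dequantize back to float

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
err = np.abs(w - q8_0_roundtrip(w)).max()
print(err)  # small: at most ~(block absmax)/254 per block
```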
>>
File: 1704425439899803.png (1.42 MB, 1470x642)
>>
>>102745480
My best guess is it's going to be announced/launched with the 50 series of GPUs.
>>
>>102749170
kek
>>
>>102749135
It's a retarded standard anyway; if they're training booru tag prompting in, it's going to result in a Pony-esque reconfiguration of how the model understands images from prompts. It's going to produce weird results as the text encoder is realigned.
>>
File: file.png (25 KB, 1870x140)
>>102749165
here's another example
https://imgsli.com/MzA0OTI2
>>
File: file.png (99 KB, 2771x986)
>>102748902
K, now that I've got the hang of it, I'm gonna do all those quants for the un-distilled dev model and put them on huggingface. do you want me to add something else to that list?
>>
File: 00001_.png (1.44 MB, 904x1152)
Is there a comfy node that can call an API? Searching "comfyui API" is not going great.

I want to feed an image to a node and have an API call to an external program (upscaler).
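I'm not aware of a built-in generic HTTP node; the usual route is a small custom node. A sketch of the shape (assumptions: the endpoint URL, JSON field names, and response format are made up for illustration; the INPUT_TYPES / RETURN_TYPES layout follows ComfyUI's standard custom-node convention):

```python
import json
import urllib.request

class CallExternalUpscaler:
    """Sketch of a ComfyUI custom node that POSTs an image to an
    external HTTP upscaler and returns whatever comes back."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "url": ("STRING", {"default": "http://127.0.0.1:9000/upscale"}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "call"
    CATEGORY = "api"

    def call(self, image, url):
        # image arrives as a torch tensor [B, H, W, C] in 0..1; a real node
        # would PNG-encode it, this sketch just ships raw floats as JSON
        payload = json.dumps({"shape": list(image.shape),
                              "data": image.flatten().tolist()}).encode()
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            out = json.loads(resp.read())
        import torch  # deferred so the sketch imports without ComfyUI around
        return (torch.tensor(out["data"]).reshape(out["shape"]),)

NODE_CLASS_MAPPINGS = {"CallExternalUpscaler": CallExternalUpscaler}
```

Drop something like this in custom_nodes/ and ComfyUI picks up NODE_CLASS_MAPPINGS on startup; the external program just needs a matching little HTTP endpoint.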
>>
Flux Pony?
>>
>>102749165
could you make a Q4_K_M one and perhaps share it? i would be very grateful
i'm not well versed enough in this stuff to figure out what's going wrong on my system with the BF16 conversion
>>
>>102747556
who wants to do all that gay shit when you could be genning more cool slop
>>
>>102749630
>could you make a Q4_K_M one and perhaps share it? i would be very grateful
yeah it's on the list already, I'll be putting all of them on huggingface, I'll make a comment when it's ready o/
>>
>>102749649
>>102749291
i hadn't scrolled down this far
bless you anon
>>
File: file.gif (204 KB, 220x212)
>>102749697
>bless you anon
:3
>>
>>102749165
Interesting.
>tfw have the ability to do fp16 so never had to deal with this stuff
>>
>>102749784
I also have a 24gb vram card but I don't like running the fp16 on it; it barely fits, and you can't add other shit like PuLID, loras and stuff, because those increase VRAM usage and overflow that shit
>>
File: fs_0013.jpg (82 KB, 1240x984)
>>
>>102749807
I run dual GPU. Might not be using them to their full potential though I guess since I just use the simplest workflow and no loras.
>>
>>102749951
I also have 2 gpus but I use the 2nd one for the text encoder, didn't know you could split the unet onto two different gpus though
>>
>trade score_9, score_8_up, score_7_up for masterpiece best quality
It's like the 1.5 days again.
>>
>>102750279
we are so back
>>
>>102749183
i pray
>>
>>102749183
what are the chances of the model having tensorRT support right away? because iirc we still can't use loras with tensorRT on comfyui
>>
>>102750279
>masterpiece best quality
Never stopped using it.
>>
https://civitai.com/models/838784
>>
>>102750907
Who knows but I would predict they'll try to put some sort of proprietary tech/software stack on it so people will upgrade to the 50 series (kind of like they did with the upgraded DLSS for the 40 series). They'll probably announce something like TensorRT 2.0 support that only works for 50 series cards that will boost generations by 100% or something.
>>
Is there a way to copy paste my custom settings from forge to reforged?
>>
>>102750943
i wouldn't put it past them, but this seems more like a gaming-oriented lock; for AI the lock is VRAM, it's the reason why no card other than the 5090 has more VRAM
>>
>>102751001
ui-config.json?
>>
>>102751051
The tensor cores at the hardware/firmware level might be configured to better perform with a proprietary inference standard.
>>
>>102750937
finally
>>
>>102751001
>reforged
what is this and how is it different from forge?
>>
>>102751165
Forge is being helmed by an actual retard who is destroying the project. Reforged is someone trying not to destroy the project.
>>
>>102751066
i think it's more likely we'll get some sort of more efficient way of doing calculations locked behind the new generation of cards, like the fast fp8 mode for the 40xx series
>>
>>102748147
it's funny to pretend that most people don't use this for coom
>>
>>102749291
Thumbs up
>>
>>102750937
>image of a fit woman in a sailor moon cosplay
kekd
>>
>>102751606
finally a proper greta model
>>
I have a 3090 and it takes 4 seconds to generate 1024 x 1024
is that good or bad? do I need a new card for faster gens?
>>
>>102751648
>4 seconds is too long
bro calm down and turn your step count up to get better gens.
>>
File: blueb1.webm (805 KB, 480x854)
To whoever asked in the previous thread, it’s Runway
>>
File: fs_0108.png (1.59 MB, 1456x1040)
>>
File: fs_0112.png (1.7 MB, 1456x1040)
>>
>>102751817
I'm not complaining, but I saw some posts that claim they get 10 images in the span of a second and I got jealous, but maybe they were just talking shit
>>
File: fs_0114.png (2.08 MB, 1456x1040)
>>
>>102752091
10 pics at 1024? Nothing is going that fast. The non-gpu stuff alone takes more time than that.
>>
File: fs_0120.png (1.58 MB, 1456x1040)
>>
>>102752091
maybe at low res with really low steps that is possible. On a 4090 at 80% power limit I can do a batch of 12 at 50 steps in just over a minute. This is just a base install of forge; there are ways to optimize generation to get stuff faster but I don't care that much.
>>
>>102752227
I am still very annoyed we don't have a standardized benchmark similar to what they have for measuring LLMs... before everyone started cheating, anyway.
>>
>>102752227
I timed my 3090 and it gave me 4 images after a minute at 50 steps 1024px
so 4090 is 3x better maybe
>>
>>102752392
batch size matters anon, generating images in a batch of 8 or 12 or 16 is faster than generating separate images in batches of 1
>>
>>102752456
yeah, 12 images at 50 steps took a little over 3 minutes on my 3090
I don't generate one by one if that's what you're implying
>>
>>102752599
on my 3090, batch size 12 is 2:15. What are interface you running? If webui/forge make sure to launch with --xformers
>>
File: 5467842345.jpg (150 KB, 967x623)
I use forge, but same results on reforged

>>102752599
>over
*under
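Taking the timings in this exchange at face value, the gap works out to about 2x rather than 3x:

```python
# numbers quoted above, taken at face value
imgs_per_s_4090 = 12 / 65.0    # batch of 12, 50 steps, "just over a minute"
imgs_per_s_3090 = 12 / 135.0   # batch of 12 in 2:15 with --xformers
ratio = imgs_per_s_4090 / imgs_per_s_3090
print(round(ratio, 2))  # -> 2.08
```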
>>
man i need a better gpu
takes minutes per image on my 3070
at least it builds suspense to watch them generate
>>
>>102751917
>>102751953
>>102752041
>>102752096
>>102752217
boob and butto
>>
>>102753039
but no feet
>>
File: 645324567.jpg (42 KB, 360x364)
in onetrainer, what does x inpainting mean?
also has anyone made their own flux lora? What were your results?
>>
File: fs_0132.png (1.07 MB, 1376x1120)
>>102753141
pour vous
>>
>>102753278
sniff
>>
>>102749170
Is that made with sd?
>>
>>102753438
SDXL, with a lineart controlnet and a quick sketch
I used this model
https://civitai.com/models/832537/zuki-cute-mix
>>
File: 00003_.png (870 KB, 896x1152)
I am ready to build my own UI on top of other AIs at this point. The absolute cluster that is this spaghetti nightmare is killing me.

Does anyone have any docs on how pytorch caching of models works? If I run the same model in forge and comfy does it load twice?
>>
File: 1728485270843339.png (561 KB, 1331x1174)
fluxbros... our time is nigh
>>
>>102753756
so is there going to be a titan or TI?
>>
>>102753756
I bet the 5090 will be more than twice the price of the 5080
>>
>>102753767
I hope they do a 40 GB Titan AI
Although I'm pretty happy with them doing 32 GB for the 5090, that's a massive boost for local training.
>>
pixart sexuals will rise a-gain
>>
>>102753756
I got it early and it was able to give me a batch of 128 in 40 seconds
price is $5000 on launch
see you on the other side genners ;P
>>
>>102753874
For $5000 I'd just buy an A6000 ADA
>>
>>102753884
nice try, but they're all sold out
I bought them all
>>
>>102753874
post tits or gtfo
>>
>>102753919
like genned tits or?
>>
>>102747585
Prompt?
>>
>>102753756
Future 5090 masterrace
>>
File: paypig.png (14 KB, 356x238)
>some dude just paypiggied several ((early access)) checkpoints on civitai
>100k buzz EACH
can't tell if he's a fag or incredibly based. i appreciate it, but man, this just encouraged the fuck out of the very jewish practice.
>>
>>102754085
either
>>
>>102754547
https://civitai.com/user/leirtes/models
I don't see any models on his account? Also literal who
>>
>>102754896
he doesn't have the models, he paied the full early access price for some models so we won't have to wait to download them
>>
>>102754547
WAI-ANI V9 HERE I COME!!! >:D
>>
dry thread
>>
t: homosexual
>>
File: file.png (379 KB, 1256x726)
Thoughts on https://sdtools.org ? looks kinda neat
>>
>>102755308
very cool and helpful
>but it doesn't even link the tools it mentions
>>
>>102755308
no mention of Forge?
>>
>>102755308
seems to be a very, very high-level overview of the concepts. may be fine, but it doesn't cover schedulers, for instance
>>
File: 00000-995595884.jpg (189 KB, 896x1152)
prompt: dingus cringus
negative prompt: croinkle
>outputs only consist of off-putting creatures in muted color schemes, most of them monkey-like
interesting
>>
>>102755433
it's a good thing you left out the croinkle, that would've been a bannable offense
>>
How do you use masked loss in Kohya? There's a box to check but no way to set the directory. The guide says to specify the path to the masks using conditioning_data_dir. I set it in the additional parameters, but I'm not sure if it's even working. There's no output in the CLI that says the masks are being used.
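If the GUI box doesn't expose it, the underlying sd-scripts way (which the Kohya GUI wraps) is a dataset config file: `conditioning_data_dir` lives inside a dataset subset, paired with the `--masked_loss` flag on the training command. A sketch with placeholder paths — double-check the key names against the sd-scripts dataset config docs, since this is from memory:

```toml
# dataset config sketch for sd-scripts masked loss;
# pass via --dataset_config and enable --masked_loss on the command line
[[datasets]]

  [[datasets.subsets]]
  image_dir = "/path/to/train/images"
  caption_extension = ".txt"
  conditioning_data_dir = "/path/to/train/masks"  # grayscale masks, same filenames
```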
>>
File: 00016.png (1.21 MB, 896x1152)
>>102755308
it's like those "I built this AI in 20 minutes" videos (looking at you, codebullet). They're entertaining, but they don't progress you any further.
>>
>>102755433
literally me
>>
>>102755433
putting "dingus cringus" with negative "croinkle" genned gay porn in waiANINSFWPONYXL_v8Hyper12step.safetensors
god dammit anime coomers
>>
>>102755433
>>102755655
Thank you for posting your findings anon
>>
File: dingus cringus flux.png (1.24 MB, 1740x881)
>>102755788
and apparently this is a dingus cringus according to flux

>lost 50~ buzz for this finding
>>
>>102755810
could be the checkpoint/lora I didn't change, but I feel pony might have a different idea of what dingus cringus is. Everything was safe for work until I added that keyword.

https://litter.catbox.moe/y7wyzh.png

NSFW, although that should be painfully obvious.
>>
File: 00008-4184128538.jpg (181 KB, 896x1152)
>>102755655
>>102755810
>>102755919

intriguing differences
>>
>>102755997
interesting specimen
>>
what's the best nude lora for FLUX that doesn't fuck with my character loras' faces?
>>
>>102756185
just inpaint with sdxl and be done with it
>>
>>102756269
fluxfags how do we recover from this?
>>
File: 1722486845241032.png (807 KB, 905x737)
does anyone else get these vertical lines when they generate pictures over 1024px with flux? is there any remedy?
>>
A lot of people are waiting for the successor of Pixart Sigma; personally I'm waiting for an update on this lol
https://huggingface.co/Kwai-Kolors/Kolors
>>
>>102756516
it's a watermark, can't be removed
>>
>>102756533
All dual language models are shit.
>>
>>102756541
>its a watermark, can't be removed
it can if we finetune it further with normal pictures
>>
>>102756516
means you need to use a lower quant cause your gpu can't handle that resolution properly
>>
>>102756582
oh, it has to do with the tiled VAE shit or something?
>>
File: ComfyUI_temp_zhnmg_00020_.png (2.48 MB, 1072x1880)
>>
File: file.png (1.38 MB, 1024x1024)
Who wins?
https://imgsli.com/MzA1MDgx
https://github.com/MythicalChu/ComfyUI-APG_ImYourCFGNow
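For anyone wondering what APG actually changes versus CFG: it splits the (cond − uncond) guidance direction into components parallel and orthogonal to the conditional prediction and down-weights the parallel part, which is what drives oversaturation at high scales. A toy numpy sketch of that decomposition (simplified from the APG idea; the linked node surely differs in details like normalization and momentum):

```python
import numpy as np

def cfg(cond, uncond, scale):
    """Plain classifier-free guidance."""
    return uncond + scale * (cond - uncond)

def apg(cond, uncond, scale, eta=0.0):
    """Toy adaptive projected guidance: keep mostly the component of the
    guidance direction orthogonal to cond. eta=1.0 collapses back to CFG."""
    diff = (cond - uncond).ravel()
    c = cond.ravel()
    parallel = (diff @ c) / (c @ c) * c   # component along cond
    ortho = diff - parallel               # component orthogonal to cond
    return (c + (scale - 1.0) * (ortho + eta * parallel)).reshape(cond.shape)
```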
>>
Are there any loras or finetunes made from the undistilled flux and uploaded to civitai yet?
>>
File: 0.jpg (274 KB, 832x1216)
>>
File: 0.jpg (211 KB, 1152x896)
>>
File: Image.jpg (2.56 MB, 2144x1880)
>>
I was gone for a while. What's the latest on workflows for flux with negative prompt?
>>
>>102756850
APG but i think you knew that already. post prompt
>>102757287
turn CFG up to 4 and weep
>>
>>102756850

Hmm.... APG is weirdly greebly. The staff are ribbed for her pleasures. Also the folds on the fabric are much more numerous.
>>
File: head_final3.png (3.85 MB, 2000x1456)
>>102756533
Imagine trying to desloppify this. And I thought Flux was bad.
>>
>>102756850
I choose the fastest gen
>>
File: fs_0245.jpg (108 KB, 1024x1024)
>>
>>
>>102758589
local?
>>
>>102758611
Pixar Flux LoRa and Hailuo AI. you can use the Pixar Flux LoRa locally but the AI video model is on a website. Shit, I wish an AI video generator model as good as Hailuo AI was local
>>
File: fs_0262.jpg (128 KB, 1440x960)
>>
>>102757904
nothing wrong with FLUX
>>
>>
>>
>>
>>
>>
>>102744592
Remember to ask your Qwen model if Jews and the Chinese are working to destroy the West.
>>
>>
>>102758649
cool
didn't know you could do i2v
>>
>>102757302
>post prompt

>The 35mm analog photograph features two females with contrasting styles, standing back-to-back, each holding a unique staff. The two women are standing on a pirate ship with a dynamic camera angle and cinematic mood.
>On the left is a young woman with green eyes, thick eyebrows, and long, white hair parted in the middle and tied into two high pigtails. She has large, pointed ears. She wears a striped black and white shirt, along with a white jacket tucked into a skirt with a black belt. The sleeves of her jacket end with large, gold cuffs. Both her jacket and skirt have gold trims along the edges. Over her jacket, she wears a short cape that matches the white and gold theme of her jacket and skirt, and the cape includes decorative, gold accents with red jewels on each shoulder and a high collar that is fastened with a red jewel. She also wears black tights, brown boots, and a pair of gold earrings with red, teardrop-shaped jewels hanging from each earring. The staff she holds is a long, ornate piece with a large red orb at the top, surrounded by a golden crescent shape, and a red ribbon tied just below the orb, fluttering slightly.
>On the right is a taller woman with purple eyes and long, waist-length purple hair with a straight cut and bangs. She wears her hair down with two additional chest-length strands framing her face. She wears a long, buttoned white dress with a Victorian top, including a frilled collar and puffy white sleeves, along with black boots. Over the dress, she also dons a long black coat with a hood, which has a gray inside layer. She wields a long, wooden staff wrapped with purple ribbons in battle.
>>
>>102758204
APG and CFG have the same speed
>>
>>102758744
they just added it. Unfortunately, this means they are going to add a subscription service soon, too. People will have to rely on and be limited to free credits.
>>
>>102757287
>What's the latest on workflows for flux with negative prompt?
flux un-distill + a normal workflow (no distilled guidance + CFG > 3)
https://huggingface.co/nyanko7/flux-dev-de-distill
>>
there's no Minimax thread anymore on /pol/? Oh man, what a shame, they were funny as fuck. I wanted to see how they would've handled the new image2video feature
>>
>>102756992
That's not a good sign. It might be the case that undistilled flux doesn't respond well to training.
>>
File: file.png (3.05 MB, 3230x1433)
babe wake up, new local vision model
https://rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model
>>
File: file.png (3.66 MB, 2839x1426)
>undistill dev
>can't have Miku with dreadlocks anymore
It's ova...
>>
File: file.png (290 KB, 3791x922)
>>102759140
Idk man, I've seen some testimonies here and there where they claim that they got better results with undistill
https://huggingface.co/nyanko7/flux-dev-de-distill/discussions/3#6705765f2214de561f5499d4
>>
>>102759161
It might be placebo. Until loras or finetunes are released for dedistilled flux, I'll remain skeptical.
>>
>>102759154
I mean have you tried playing with the cfg. It says 1.0 right there.
>>
File: file.png (2.95 MB, 2959x1430)
>>102759201
>I mean have you tried playing with the cfg. It says 1.0 right there.
I was using APG, which is supposed to be CFG but better, but even with regular CFG I can't get what I usually get out of distilled flux, unfortunately
>>
File: q234q2.png (252 KB, 1069x956)
>>102754246
>Prompt?
>>
>>102757904
>And I thought Flux was bad.
i don't care much for kolors in its current state but you're crazy if you think that's as bad as flux
>>
File: 1725978871149050.png (1.53 MB, 1152x896)
>>
>>102755463
https://rentry.org/d2ckzxmq
>>
>>102759382
if Flux is worse than Kolors then why the fuck is everyone running Flux atm?
>>
File: 1726121555224583.png (509 KB, 2208x594)
Alpha Two sucks
>>
>>102759438
because reddit worships it
>>
>>102759474
4chan also worshipped it as fuck lol
>>
>>102759488
/ldg/ is just the 4chan branch of r/stablediffusion
>>
File: file.png (2.46 MB, 1745x1635)
>>102759154
I think we slept too much on Flux2Pro, that one is undistilled too, maybe it's better than de-distill
https://huggingface.co/Kijai/flux-dev2pro-fp8
https://huggingface.co/ashen0209/Flux-Dev2Pro
>>
>>102759497
blacksune miku...
>>
>>102759497
What's with the comments about the license?
>>
>>102759520
some nerds saying it shouldn't be apache 2.0 because it's still flux dev, which is true, but I mean, why do they care? Are they getting paid to police the license on behalf of BFL?
>>
File: file.png (310 KB, 628x628)
>>102759526
>Are they getting paid to police the licence on the behalf of BFL?
nope, those people are ruining the fun for free
>>
>>102759462
Did you try the version at
https://huggingface.co/spaces/John6666/joy-caption-pre-alpha-mod
It has options like choosing an uncensored model, maybe one of them is good.
>>
HOLY SHIT
https://pyramid-flow.github.io/
https://huggingface.co/rain1011/pyramid-flow-sd3
a local video model and it's not shit!! let's fucking goooooooo
>>
https://huggingface.co/bluepen5805/FLUX.1-dev-minus
>>
File: tokyo.webm (2.69 MB, 1280x768)
Kling-tier open-source video model based on SD3, only 2B parameters.
https://huggingface.co/rain1011/pyramid-flow-sd3
https://pyramid-flow.github.io/
What do WE think?
>>
File: tokyo.webm (2.81 MB, 1280x768)
>>102759649
It's only an 8gb model, are we back?
>>
>>102759658
what is this? flux dev transformed into schnell?
>>
File: file.png (191 KB, 2748x941)
>>102759688
>>102759649
The chinks will definitely save us from the cucked commiefornia, feelsgoodman
>>
File: 1721187686031408.png (972 KB, 1152x896)
>>
File: tokyo (1).webm (3.87 MB, 1280x768)
>>102759649
never expected this kind of quality for a 2b model, and they went for fucking SD3M kek, imagine if they did this on a 5b model instead, Instant Kling at home
>>
File: file.png (801 KB, 1183x766)
>>102759778
>>102759649
>COMING SOON Training code and new model checkpoints trained from scratch.
We'll get something even better in a few days, I don't think we realize how back we are
>>
File: file.png (71 KB, 2226x450)
>>102759800
https://github.com/jy0205/Pyramid-Flow
unprecedented levels of back
>>
>>102751606
The "Down syndrome" Lora was the highest intellectual feat ever produced in Stable Diffusion's history, no slop from github comes anywhere close.
>>102759536
oh fuck i'd forgotten about BFL, let's see if they've slopped out their now outdated video model....nope! lawl
>>102759649
China save us! (3 second videos of default azn people, woo!) not getting fooled again like with COG
>>
File: file.png (3.62 MB, 3096x1355)
>>102759898
>3 second videos of default azn people, woo!
that can go for 10 sec + 24 fps, we're so back!
>>
File: file.png (121 KB, 299x169)
>>102759649
When ComfyUi?
>>
>>102747556
>besides, there's more concerning things to care about; like how there's nowhere to really post images that actually take a modicum amount of effort and isn't just straight slop.
Just post on twitter and pixiv? No one's gonna bother telling extremely sloppy aislop from effort aislop. Just keep posting what you like and don't get too concerned about lack of views or shit like that. Your pics are very tame and appeal to personal tastes, so they probably won't get wildly popular, even if they're clean and above most of the aishit.
>>
File: fireworks.webm (793 KB, 1280x768)
It looks good
And new models are coming, they mention flux on their github page
>SD3 Medium and Flux 1.0: State-of-the-art image generation models based on flow matching.
So the next models may be flux-based
>>
File: trailer.webm (2.57 MB, 1280x768)
>>102759992
>>
>>102759992
>It looks good
and we'll get even better results in a few days, they've gotten rid of that stinky SD3M and decided to train their model from scratch >>102759846
>>
File: 1720234355861989.png (1.18 MB, 1344x768)
>>
>>102760049
you could go for something even more authentic by using tucker's face and PuLID kek
>>
>>102759649
Kek, looks bad but if they're making a new model it'll get better so I'm not worried
>>
File: BRUH.webm (246 KB, 640x384)
246 KB
246 KB WEBM
>>102760063
wtf are you doing Kobe?
>>
>>102759912
Is it img2vid driven? because if not, then it's another Chinese D.O.A. vid sloppa.
Img2Vid is the MINIMUM entry standard in late 2024, anything else is just a side-project pooped out for clout in the AI sphere.
>>
>>102760143
>Is it img2vid driven? because if not, then it's another Chinese D.O.A. vid sloppa.
it is, for example this is from a real picture >>102760063
>>
>>102760148
Thank you, i've not had time to read the paper/page due to a neverending barrage of calls :/
>>
>>102759649
The chinks are really the kings of video models, Kling, Minimax, now this...
>>
https://github.com/jy0205/Pyramid-Flow/issues/5#issuecomment-2404503890
>Thanks! and tile_sample_min_size=256 is great for A6000 (it went to near 40gb).
wtf, it's asking for too much vram for a fucking 2b model
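for scale, some quick napkin math (assuming a 2B-parameter model and ~8.5 bits/weight for Q8_0, which is an approximation since the block scales add overhead): the weights alone are tiny, so the bulk of that ~40gb has to be activations, not the model.

```python
# Back-of-envelope, assuming a 2B-parameter model: the weights alone are small,
# so most of the reported ~40 GB must be activations/caches, not the model itself.
def weights_gib(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

print(round(weights_gib(2, 16), 1))   # bf16/fp16 weights: ~3.7 GiB
print(round(weights_gib(2, 8.5), 1))  # Q8_0, ~8.5 bits/weight with scales: ~2.0 GiB
```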
>>
File: file.png (17 KB, 506x97)
17 KB
17 KB PNG
>>
>>102760464
who cares? he's training the fucked up Auraflow model, this finetuner is deprecated
>>
>>102760480
with 10m captioned images and i think 15m images total it'll fix the model
>>
>>102760488
we don't need this cuck anymore, he's removing the artist tags, there's new finetuners that are training SDXL with the full booru dataset without any cucking involved
>>
>>102760523
i don't care about artist tags, i just need some form of style control, which pony v7 will still have
>training SDXL with the full booru dataset
don't care if it's not captioned, tag-only isn't enough
>>
>>102760534
>i don't care about artist tags
a lot of people care, I don't give a fuck about a "style control", I want to reproduce my favorite artists
>>
>>102760545
then use a lora? it's still a local model
>>
File: file.png (35 KB, 220x220)
35 KB
35 KB PNG
>>102760550
>then use a lora?
why are you waiting for his finetune then? just use some loras
>>
>>102760558
loras for artists are trivially easy to create and it'll most likely be the only lora you need
by comparison, good fucking luck generating multiple characters doing different things in a tag-only model
>>
File: file.png (802 KB, 750x1000)
802 KB
802 KB PNG
>>102760564
>by comparison, good fucking luck generating multiple characters doing different things in a tag-only model
>>
File: file.png (71 KB, 600x800)
71 KB
71 KB PNG
>>102759649
https://github.com/jy0205/Pyramid-Flow/issues/12#issuecomment-2404752801
>The 384p version requires around 26GB memory, and the 768p version requires around 40GB memory (we do not have the exact number because the cache mechanism on 80GB GPU)
>>
>>102760652
Chinks do it again. Bravo, ramp up the frenzy for clout, then drop the bombshell that it's not usable by 99.9999999% of people interested in the technology.
D. O. A.
>>
>>102760676
it's still usable if we go for Q8_0 though? it'll go under 24gb of vram usage
>>
>>102760687
You're right, i retract my statement, it's now 99.9999998%
>>
>>102760652
What retarded approach do current devs use that they HAVE to load the entire model in VRAM instead of chunking it from RAM as needed?
Why are they like this?
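the "chunking from RAM" idea does exist, it's what diffusers calls sequential CPU offload (`enable_sequential_cpu_offload()`). a toy simulation of the scheduling logic (not the real implementation, just the idea):

```python
# Toy simulation of sequential offload: only one layer's weights live in
# (simulated) VRAM at a time; everything else stays parked in system RAM.
def run_offloaded(layer_sizes_gb, x):
    vram_peak = 0.0
    for size in layer_sizes_gb:
        vram_peak = max(vram_peak, size)  # load this layer, evicting the previous one
        x = x + 1                         # stand-in for the layer's forward pass
    return x, vram_peak

out, peak = run_offloaded([1.5, 2.0, 1.5, 2.0], 0)
print(out, peak)  # peak VRAM tracks the largest single layer, not the sum
```

the catch is that every layer has to cross the PCIe bus every step, which is where the speed complaint comes from.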
>>
>>102760185
they don't have issues with copyright and can img2video everything on the internet. They are the kings of something, but it isn't models.

>>102760676
this is a general problem. So many fake papers out there with matching githubs that don't run.

>>102760734
it kills speed horribly
>>
>>102760831
>speed
I'd rather wait 20 minutes (ram chunking) than wait an eternity (doesn't run at all).
People literally do other things while video models generate in the background.
I suspect a lot of devs are at the babby mental stage of using IT where they're literally staring at the % bar instead of getting on with something else, and this translates into a mindset where the quicker the % bar moves the better (line-goes-up mentality). They still think everyone interested in these projects must think the same way, and damn their eyes if they can drag themselves away from watching the % bar!
AI Devs need the equivalent of a tard wrangler, seriously.
>>
Are we at the awkward inflection point?
>>
File: 1711927575399811.png (1.27 MB, 1152x896)
1.27 MB
1.27 MB PNG
>>
I open this thread every 3 months. Is there a flux fine tune for porn already (a good one)?
>>
File: 4227872585.png (911 KB, 896x1152)
911 KB
911 KB PNG
>>
>>102761492
Not really
>>
>>102760894
Speed is preferred for testing purposes.

If you test for 100 results in a minute, you have a good idea of what the model can do.

If you test for 100 results and get 1 result every 20 minutes, it's gonna be tedious to repeat and make new tests.
>>
>>102760010
>some chinese trained a full 3B video model from scratch faster than SAI could either a) fix their shit model or b) retrain a new one from scratch.
Actually amazing.
>>
>>102761675
What a throwaway platitude and derogatory statement.
Just use 20 virtual machines to test the 20-minute version that'll work on everyone's machine rather than the version that only works on 1% of people's. There, that wasn't so hard.
>>
>>102761675
>I promise my tests actually do something even though 100 tests really means nothing given how random AI actually is so most of the time I base results on what was actually random chance
>IMMMMA GUNNNNNA TEEEEEEEST
>>
>>102761814
>derogatory
Now you're just being a drama queen
>>
File: 00028-427894480.jpg (168 KB, 896x1152)
168 KB
168 KB JPG
where are the ggufs i was promised
>>
>>102761937
still working on it, i'll let you know when it's done
>>
>>102759969
Just about everyone filters AI on pixiv, and if it's SFW you may as well not even post it there. And twitter has zero discoverability these days (though I'll also admit I don't actually try to force discoverability on twitter).
>No one's gonna bother telling extremely sloppy aislop apart from effort aislop
that's kind of the point, and a "more curated" place intended for people who actually try and treat AI as a tool for 'art' would be beneficial to pretty much everyone involved. Beneficial to creators due to having a place that doesn't actively shun them for posting, beneficial to the viewers since they don't have to sift through thousands of pages of the exact same 1girl standing AOM-slop, and beneficial to the field since it would act as a condensed location of people trying to legitimize the process.
As it stands, about the only way to actually get any traction is to go full retard and intentionally invade the "normal" spaces, presenting your shit as if it's not AI, because people go out of their way to AVOID AI shit because of all of the slop. Even I filter out AI shit because of all that garbage.
Also doesn't help that most "AI spaces :^)" more or less expect entirely AI workflows with repeatable results and zero user input.
>Just keep posting what you like and don't get too concerned about lack of views or shit like that
I mean, I do, but that's pretty much a given in that there's just nothing else to expect.
Should also note that the vast majority of what I do is actually porn and even that has fuck all for reach since, lol, if people look for AI porn they're absolutely not looking for quality.
>>
>>102762019
Yeah I agree pretty much. It's still possible to get audience on twitter if you get lucky or if you grind and grift enough, but it's much harder to get big numbers than it used to be. It's funny seeing how some early AI adopters effortlessly got 30k+ followers with absolute fucking slop, but now even the highest effort guys are barely getting 10k if they started later.
I decided to just stop caring much and just dump semi-effort coomslop to pixiv. Nobody cares about 1girl pinups unless you already have an audience.
>>
>>102762179
Damn, really? The twitter audience dried up? Why is that, it became easier to make stuff that doesn't entirely look like slop?
>some early AI adopters effortlessly got 30k+ followers with absolute fucking slop
Examples?
>>
>>102761267
>awkward inflection point
?
>>102761316
nice
>>
>>102762202
>The twitter audience dried up? Why is that, it became easier to make stuff that doesn't entirely look like slop?
I don't know, likely just the algorithm fuckery.
>Examples?
https://x.com/eyeai_
https://x.com/Rakosz1
not actually bad slop, just the ones I remembered from my browser history. it's mediocre but I've seen better aifags struggling to break 5-10k+ followers
there are much worse ones with similar audience
and even very known aifags who started late like https://twitter.com/flooxyfloox barely get 30k followers while literally everyone in ai coomer space knows about them
>>
>>102762241
>barely get 30k followers while literally everyone in ai coomer space knows about them
and also not saying they're good btw, but certainly well-known
>>
>>102761871
>Attack the person not the argument if you've been beaten by reality.
>>
>>102762289
I mean Virtual Machines don't run on air, so you're just taxing resources more.
>>
>>102762202
a big issue with twitter is that discoverability of NSFW got tanked hard between the time when AI first hit and now. And when shit was new and shiny(literally, even) people weren't adjusted to or aware enough to actively avoid the shit.
If you're "just starting" now and aren't actively trying to crash in on spaces where people aren't expecting there to be AI, the only interaction/views you'll get are from automated bots.
Something I post to pixiv that can hit 1k views/hundred or so likes/favorites with an AI tag would probably get 5-10x the amount if I submit it without the AI tag(or possibly more).
The whole scene is just fucked in terms of discoverability and interaction. Unironically, posting to 4chinz is going to ensure that you get more unique, actually tangible views than posting to twitter if you don't already have a follower base.
>>
>>102759492
>/ldg/ is just the 4chan branch of r/stablediffusion
Where did this cope come from? I'm out of the loop.
>>
what should I use for things like logos and stylized letters
>>
>>102762357
learn how to use adobe illustrator.
>>
>>102761316
fun
>>
>>102762346
when is the last time you saw anyone talk about something that wasn't the newest model, a random github repo or a reddit link? I haven't seen a discussion about workflows, colors, or *gasp* coding in this general in a very long time.
>>
>>102762432
Are you saying there's another more in-depth /g/ ai thread or are you just upset in general? Is model quantization not tech enough for you?
>workflows
Where else did you find flux workflows the day it released? They all came from here.
>colors
Elaborate.
>>
>>102762432
cope & seethe
>>
>>102762357
if you only care about typography SD3 might work kek
>>
CogVideoX finetuning.
https://github.com/a-r-r-o-w/cogvideox-factory

requirements.txt requires diffusers>=0.30.4
latest version on pypi is 0.30.3
How is this resolved?
>>
>>102762919
>How is this resolved?
you tell me
>>
>>102762919
it's likely the dev branch
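the usual fix when a requirements pin is ahead of PyPI is installing straight from git (`pip install git+https://github.com/huggingface/diffusers.git`). one gotcha: dev builds report versions like "0.31.0.dev0", which break naive string compares. a stdlib-only sketch for checking:

```python
# After installing from the main branch the version string looks like
# "0.31.0.dev0"; a plain string compare would mis-order it, so compare
# the numeric components as a tuple instead.
import re

def at_least(ver: str, minimum: tuple) -> bool:
    nums = tuple(int(x) for x in re.findall(r"\d+", ver)[:len(minimum)])
    return nums >= minimum

print(at_least("0.31.0.dev0", (0, 30, 4)))  # dev branch satisfies the pin
print(at_least("0.30.3", (0, 30, 4)))       # latest PyPI release does not
```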
>>
>>102763018
Thanks.
>>
is there a flamethrower lora?
>>
>>102763234
nvm there isn't
>>
>>102760687
>>102760702
There's probably a lot of optimizations that can bring the VRAM usage down. For example, a quick scan of their github code seems to indicate that flash attention is off (fucking can't link it, 4chan thinks the post is spam...)

Vanilla attention is quadratic compute and memory usage, flash attn makes the memory usage linear. For a video model, there's a time dimension, so the attention operations scale like (width * height * time)^2. So flash attention reducing that from quadratic to linear (for the memory usage) should be massive savings.
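for a sense of scale, a rough sketch with made-up numbers (24 heads, fp16 scores, ~3840 latent tokens per frame; these are illustrative assumptions, not Pyramid-Flow's actual config) of how the naive score matrix grows with frame count:

```python
# Rough estimate of memory for the naive attention score matrix of a video model.
# Vanilla attention materializes an (N x N) matrix per head, where
# N = tokens_per_frame * frames; flash attention never materializes it.
def naive_attn_scores_gib(tokens_per_frame: int, frames: int,
                          heads: int = 24, bytes_per_el: int = 2) -> float:
    n = tokens_per_frame * frames
    return heads * n * n * bytes_per_el / 1024**3

for frames in (8, 16, 32):
    print(frames, round(naive_attn_scores_gib(3840, frames), 1), "GiB")
```

doubling the frame count quadruples the score-matrix memory, which is why flash attention's linear memory matters so much more for video than for stills.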
>>
>>102763431
So what you're saying is that these devs are fucking useless at holding off from waving their dicks and posting on redi1t as soon as their code runs once without errors?
>>
>>102762346
>Where did this cope come from?
what cope? post flux r/ldg is 80% reddit, 20% lmg and 100% retarded autistic furries. flux for /ldg/ is what biden was for america
>>
cope & seethe
>>
>>102763585
you're wrong about the furries
>>
>>102763618
>>102763623
post your fursona
>>
File: 0.jpg (29 KB, 320x320)
29 KB
29 KB JPG
>>102763633
>>
nice images retards
>>
Anybody have recommendations to an (porn) image interrogator besides deepdanbooru?
>>
>he actually posts images
>>
File: 00051-1187801572.jpg (192 KB, 896x1152)
192 KB
192 KB JPG
>>102763633
>>
nice gens
>>
File: screenshot.png (278 KB, 1686x840)
278 KB
278 KB PNG
>>102759141
Doesn't see text but you can give it videos which I think is neat.
>>
File: screenshot.png (504 KB, 1686x1130)
504 KB
504 KB PNG
https://files.catbox.moe/ad8nhv.webm
>>
what would be the 512x512 equivalent resolution of a picture in portrait?
>>
>>102759141
pixart bigma will release before we see llama.cpp support for this
>>
>>102763793
512x512
>>
File: file.png (716 KB, 512x512)
716 KB
716 KB PNG
nerds
>>
>>102763585
>continues to cope
>>
>>102763889
turn up your rep pen, buddy. your llm is repeating itself.
>>
>>102763952
>>102758839
>>
>>102763965
>>102746385 (image of me)
>>
>>102759649
https://github.com/AIFSH/PyramidFlow-ComfyUI?tab=readme-ov-file

comfy node
>>
File: fs_0057.png (3.03 MB, 1200x1600)
3.03 MB
3.03 MB PNG
>>
https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main
for the anon who wanted Q4_K, here you go, I'm trying to upload the rest but huggingface can't stop giving me errors, this sucks!
>>
>>102762919
>CogVideoX finetuning.
CogVideo is deprecated now that this exists lol >>102759649
>>
Fresh

>>102764387
>>102764387
>>102764387
>>
Will PyramidFlow finally lead to a local porn video model? Quality looks pretty good. Only 2B parameters, so with optimizations it should be usable for inference in 24GB, and for training you might need to rent A100s, but it should be pretty computationally efficient (meaning not that expensive to finetune). Their paper says they only used 10M videos, and many times more images, so it might only take a couple tens of thousands of images and maybe like 1000 videos to get a decent NSFW finetune.
>>
>>102764328
hmm im still getting the same error with this as i got when i tried to quant it myself, running it on forge
are you able to run it on comfy?
>>
File: file.png (671 KB, 3071x1257)
671 KB
671 KB PNG
>>102764456
>are you able to run it on comfy?
yep, works fine on comfy, I think Forge is fucked that's all lol
>>
>>102764328
>yuri
based
>>
File: file.png (1.16 MB, 1600x800)
1.16 MB
1.16 MB PNG
>>102764538
>based
thank you fellow man of culture
>>
>>102764456
https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1285#issuecomment-2345381995
lmao, that's because Forge still doesn't support the K-quants, what a deprecated software
>>
>>102764456
please don't tell me you have an amd card...
https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1269#issuecomment-2308971752
>>
>>102764514
>>102764599
damn, its gonna be a pain to get the workflow ported over to comfy
i much prefer the simplicity of forge, shame the development sucks
>>102764622
no i have a 3070
>>
>>102764653 (me)
i just checked though and the normal flux dev Q4_K_S by city96 works fine
>>
>>102764653
>its gonna be a pain to get the workflow ported over to comfy
you just load the workflow of someone else? here, take my workflow: https://files.catbox.moe/bxn9tj.png
>>
>>102764678
maybe it has to do with the fact it's an undistilled model and you have to deactivate distilled guidance on forge? what if you put cfg > 1 + distilled guidance at 1 or 0?
>>
>>102764725
doesnt work
>>
>>102764782
make an issue on Forge I guess, it's most likely an issue on his side
>>
File: file.png (15 KB, 500x478)
15 KB
15 KB PNG
>>102764696
here comes the fun
>>
File: file.png (93 KB, 3100x442)
93 KB
93 KB PNG
>>102765004
you just click on "Install Missing Custom Nodes"?
>>
>>102765004
you don't need any of them, you can delete them

- Override -> For multiple GPU
- Playsound -> it makes a sound when the generation is over
- Xyz plot -> To make xy plot
- Select Inputs -> Same thing
>>
>>102765029
wow i didnt know that node existed but its super useful, thanks anon
>>102765047
good to know because the manager couldnt find the override node which i dont need anyways
>>
>>102764696
is there a reason to use APG instead of CFG with the de-distilled model? also, do i set the value for APG in the node itself or in the CFG parameter in KSampler, or both?
>>
>>102765309
>do i set the value for APG in the node itself or in the CFG parameter in KSampler, or both?
if you want to use APG you go for APG > 1 and CFG = 1 on the KSampler, if you want to go for CFG you remove the APG node (or you bypass it by right clicking on it -> Bypass) and you go for CFG > 1

>is there a reason to use APG instead of CFG with the de-distilled model?
You get less burns at higher values on APG than CFG >>102756850
>>
>>102765384
>You get less burns at higher values on APG than CFG
i thought the point of de-distill was to remove those burns?
>>
>>102765460
it does, but like every undistilled model, you can't go too high on the CFG, so that's up to you; it works fine up to cfg 4. if you want even better prompt understanding you go for APG
>>
*taps mic* this thing still on?


