Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101879426

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Maintain thread quality
https://rentry.org/debo

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>mfw
I bet you kiss girls faggots
>>101882089
new breadmaker?
>>101882175
Testing that owl prompt from the previous thread
N
>>101882231
>Nendoroid owner above ^
>>101882208
Nice, here's mine.
https://huggingface.co/spaces/fancyfeast/joy-caption-pre-alpha
It's pretty good indeed, still not at the level of GPT4V though
>>101882247
gpt4v can't do porn and is not free, joycaption is based on llama 3.1 8B
>>101882273
for SFW you could use GPT4 and for NSFW you could use joy caption, that way the model will see multiple prose styles instead of seeing only one kind of slop
>>101882283
if booru then feed the captioner the tags and it performs at or better than gpt4 most of the time.
I
G
>>101882283
>if booru then feed the captioner the tags and it performs at or better than gpt4 most of the time.
gpt4 can also eat the booru tags though, so it's probably gpt4 + booru tags > joy caption + booru tags for the SFW department imo
This man blocks your path what do you do?
E
>>101882307N
>>101882320I
A
>>101882329C
>damn, flux can do that?!
https://github.com/kohya-ss/sd-scripts/pull/1374#issuecomment-2287134623
24gb fp16 finetune? Sounds too good to be true, what's the catch?
>>101882305
suck him off of course
>>101882348
add a catbox anon, so that we can see the boobies in full and beautiful display
>>101882261
Not bad.
>>101882348
>>101882369
https://files.catbox.moe/obzx64.png
(boobs, obviously)
ignore the rest of the autism in the workflow
>>101882247
>>101882374
From the looks of it, it's better than florence and it can do NSFW, maybe the new local SOTA, where can I download it?
>>101882395
nice anon, nice looking boobs
All I want is a caption model that stops calling asses bottoms and buttocks.
>>101882351
just change them later
>>101882374
>headband
>its a blindfold
>>101882416
give in and start prompting for buttocks too
>>101882305
>>101882428
It's talking about her actual headband, seems like it didn't notice her blindfold.
>>101882416
Those terms are far more common as labels than "asses". I know it sounds cheesy though
>>101882416
>All I want is a caption model that stops calling asses bottoms and buttocks.
make a python script that changes the "bottoms" and "buttocks" words to "ass", you could even include a % of replacement so that you get diversity, post processing is a thing and I love that shit
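A minimal sketch of what that post-processing script could look like. Assumptions here: captions live in one .txt sidecar file per image, and the 60% replacement chance is an arbitrary made-up value:

```python
import random
import re
from pathlib import Path

# Words to sometimes swap for "ass"; assumed layout is one caption
# .txt file per training image in the dataset folder.
REPLACEMENTS = {"bottoms": "ass", "buttocks": "ass"}
SWAP_CHANCE = 0.6  # only replace 60% of occurrences, for diversity

PATTERN = re.compile(r"\b(bottoms|buttocks)\b", re.IGNORECASE)

def rewrite(text: str, rng: random.Random) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        # roll per occurrence so the dataset keeps a mix of terms
        return REPLACEMENTS[word.lower()] if rng.random() < SWAP_CHANCE else word
    return PATTERN.sub(swap, text)

def process_folder(folder: str, seed: int = 0) -> None:
    rng = random.Random(seed)  # seeded so reruns are reproducible
    for path in Path(folder).glob("*.txt"):
        path.write_text(rewrite(path.read_text(), rng))
```

The per-occurrence roll is what gives the "% of replacement" diversity the anon describes, instead of a blanket find-and-replace.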
>>101882374
I get A LOT of art style consistency (the thing people say Flux doesn't have) when using a prompt from this (or chatgpt)
>>101882455
kek
what's the artist tag?
>>101882455
YUGE MISTAKE PAL
>>101882455
is this flux?
>>101882443
not really, no one in a relaxed casual conversation would call it a buttocks or a bottom
>>101882488
>american detected
this is a euro board
>>101882465
I know people criticize language models for sounding "generic" but they can all understand each other really well.
>>101882351
>Sounds too good to be true, what's the catch?
Nobody but the person claiming to have written the code has seen the code.
>>101882488
>no one in a relaxed casual conversation
but classifiers that label the images do not
>>101882488
"Ass" can mean "donkey," so language models prefer to use the less ambiguous terms.
>>101882466
just 'a comic drawing'
>>101882483
yeh
>>101882475
GIT
>>101882500
I think he added more details on that reddit comment, I think he's too knowledgeable to be full of shit, time will tell
https://www.reddit.com/r/StableDiffusion/comments/1erj8a1/comment/li0hwmt/?utm_source=share&utm_medium=web2x&context=3
>>101882522
Samefagging to add an illustrative example.
LLMs will conform to MY WAY of prompting I WILL NOT conform to THEIR way
>>101882397
https://huggingface.co/spaces/fancyfeast/joy-caption-pre-alpha/tree/main
git clone, create a new venv for it, activate the venv, pip install the requirements then run the file and it will launch the gradio. For batching you will need to edit the file.
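The batching edit can be as simple as looping the demo's single-image function over a folder and writing one caption sidecar per image. `caption_image` below is a stand-in stub for whatever the gradio script's inference function is actually called, so treat this as a sketch rather than the real API:

```python
from pathlib import Path

# Stand-in for the demo's single-image captioning call; in the real
# script you would import and call its inference function instead.
def caption_image(image_path: Path) -> str:
    return f"placeholder caption for {image_path.name}"

# Common dataset convention: photo.png -> photo.txt next to it.
def caption_folder(folder: str, exts=(".png", ".jpg", ".jpeg", ".webp")) -> int:
    written = 0
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in exts:
            continue
        out = path.with_suffix(".txt")
        if out.exists():
            continue  # skip already-captioned images so reruns are cheap
        out.write_text(caption_image(path))
        written += 1
    return written
```

Writing .txt files next to the images matches the caption format most LoRA trainers expect, so the output can be fed straight into training.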
>>101882560
Did you download it? How big is it? How much VRAM does it ask compared to Florence?
>>101882536
This would be awesome because I hate fucking around with LoRAs.
>>101882374
Feed it the tags, like the character name, and it will do even better. You can change the prompt in the .py file
>>101882580
Don't you need way more images for a full finetune? If you just want a character doing a thing I think a LoRA is a much more time and effort saving task.
>>101882592
it's llama 3.1 8B + adapter so whatever that takes. For sure less than 24GB
>>101882583
>Feed the tags with it
you can't on this demo unfortunately
>>101882595
>local diffusion general
>>101882560
>>101882595
just run it local >>101882560
>>101882592
I'm surprised they went for L3.1, why not Gemma2-9b, that one has better benchmarks
>>101882605
prob the license or the architecture
Does anyone have a script for training a LoRA on 16gb VRAM?
goodnight /ldg/flux development is moving too fast it consumes my entire day
>>101882603
Using a demo of a local model is in the scope of "local diffusion general" bucko
>>101882615
sounds like cloud diffusion to me, host it locally and do what this anon said >>101882560
>>101882604
where do you download it for local?
>>101882634
Has no one used huggingface before? Google it.
>>101882560
>>101882615
you are correct
I'm gonna go ahead and say it. I don't think we can train flux on 24gb of vram and this is all a hoax.
>>101882591
>Don't you need way more images for a full finetune. If you just want a character doing a thing I think a LoRA is a much more time and effort saving task.
I think a finetune of flux (I'm talking about a real finetune, with a shit ton of pictures in it) is a must have for 3 reasons:
- NSFW, duh
- Flux is severely undertrained, it doesn't know many concepts and for a 12b model it can probably eat several billion more pictures before saturating (we'll never reach that limit though kek)
- Flux-dev is a distilled model, finetuning it will transform it into a more natural model, dunno if that will improve anything but heh, let's see
question, does LoRA training for flux work now?
is there a guide or something for it on how to do it?
>>101882654
https://github.com/ostris/ai-toolkit
This is the most straightforward
>>101882650
>finetune
can someone explain to me what's the difference between a finetune and a pretraining? a finetune will use fewer layers of the model right?
>>101882654
>question, does LoRa training for flux work now?
>is there a guide or something for it on how to do it?
https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md
>>101882650
No I totally get why a finetune is necessary, but for personal use to get something very specific to your needs a LoRA seems way more appropriate than a full finetune.
>>101882712
they both work, and a finetune can also add more concepts into it, so you'll need to download fewer loras for our use case, which is always good. I like when my model can output everything I have in mind without having to worry about looking for a lora on civitai... if that one ever exists that is
>>101882686
Pretraining = initial training run to get the thing working
Finetune = training for a specific task or set of tasks for a model that has already been trained
Continued training = fully updating the model across the board
I think. Someone correct if wrong lol.
>>101882670cute
>>101882779
Will this create a 3D image if I cross my eyes?
>>101882764
Pretty close Marlboro logo plus Shell, based
>>101882785Try it sir
>>101882764
impressive, it has the feel of the 90's anime, prompt?
>>101882797Mmm, kinda?
>>101882645
It works, just wait :)
>>101882785
i was just trying that myself.. didn't work for me.. dunno if its because it's too big or if its just not shifted
>>101882842
Looks like you're the legit guy who claimed that shit, your ":)" was a dead giveaway >>101882536
So... when will you upload the code?
>>101882800
I used, "a retro style, anime OVA, VHS cover art from 1988" then the rest
>>101882853
can you give me the full prompt, I wanna see if that can be improved with CFG 6 + GuidanceNeg 10
Does Flux know what an adult looks like?? It always reverts back to chibi style when a lot of complex prompt happens
>>101882842
When it's ready. I promise it's real though. :)
>larping
>>101882817
That's stereo 3D friend
>>101882872
We'll see anon, we'll see, whether you end up as a hero or a clown is the question
Test
>>101882856
are you the guy from reddit?
>>101882897
Congratulations on your ban expiring.
>>101882901
yeah lol
Guys I just tossed out my second 3090. Looks like vram means nothing anymore.
>>101882906
I fucking hate redditors
new grok model just dropped on x. apparently it's better than midjourney
>le women on the grass test
>>101882982
it literally didn't gen what he asked for
>>101882982
>not local
>>101882996
I see cum on her belly
>>101883010
shh... let them think that's SOTA
>>101882982
can you provide link or something?
>>101882982
It's literally using Flux nigger.
Also
>giving Flux access to every tech illiterate normie
I hate Elon.
>>101883037
>>giving Flux access to every tech illiterate normie
>I hate Elon.
what do you mean? Elon made a flux API or something? Can you give me a source or something?
>>101883059
>>101883076
neat
>>101883074
>it uses flux
the voices told him this
>>101883080
>the voices told him this
>>101883074
It's a new subscription based service on X, shows up when you open up the app.
>>101882779
Took me a bit, but this actually makes a pretty good 3D effect when you get your eyes crossed right. How'd you make it?
bruh
>>101883089
>>101883080
https://xcancel.com/nima_owji/status/1823388838279922166#m
>>101883116
thanks anon
>It'll also generate images using the FLUX.1 model!
It's impressive how much FLUX has changed the landscape forever, and it's only been 10 days, people are focusing on quants (nf4), on more training optimisations, fucking Musk decided to use it as an API, that's how you know a model is really good. When SD3M got released, the only thing it managed to add to the community was "le funny women lying on grass meme" kek
New Quant: flux1-dev-bnb-nf4-v2.safetensors
https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1079
>>101882663
>>101882701
thanks bros, gonna read up on it a bit.
Bigma next month
>>101883150
I'm not sure if them working directly with X is good news for open source. I guess time will tell, but those guys want a competitor to MJ and Elon replies to the CEO all the time, so it could be bad.
>>101883165
>I promise this time it's better than fp8!1!1!1!
Fool me once, shame on you. Fool me twice...
How can we get rid of **** shitting up /sdg/
>>101882884
kek :(
>>101883201
not my problem, this is /ldg/, go back to fixing your shit in your own home
>>101883101
more like this or like Simon Stalenhag art
>>101883165
will this still work on comfy's nf4 node? I wanna try it out
>>101883199
NF4 is pretty fast, i'm just waiting for lora support
>>101883257
>NF4 is pretty fast
it's almost the same speed as fp8 if you have enough vram to run it, nf4 isn't fast because of optimized math calculation, but simply because nf4 is less likely to end up in your ram
pc rebuild today
probably, if I can be bothered
>>101883268
well it just werks
>>101883116
Honestly this perfectly explains why Flux can't do artists. There would be an outrage.
>>101883378
>Honestly this perfectly explains why Flux can't do artists. There would be an outrage.
what do you mean? SD1.5 and SDXL survived and they could do artists and celebrities well
>>101882613
Does anyone have any kohya training script for flux at all? I can adapt it but I am very stupid and keep getting errors
Does ComfyUI just let everything sit in RAM until it's like 90% full before freeing up unnecessary stuff (like the unet fully loaded in VRAM)?
>>101883165
Good news, nf4-v2 still works on the ComfyUI nf4 node
https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
>Anon, you're so funny...can't believe you're my coworker
>>101883523
When did it stop?
>>101883565
Flux does frutiger aero??? Could you please share your prompt? Thank you!!!!!
>>101883677
he changed the architecture a bit on v2, so I assumed there would be an error or something
Looks like the ays/ays+ schedulers manage to remove the white effect on Dynamic Thresholding
https://imgsli.com/Mjg3NDM0/0/1
>>101883689
sure fren, I just put a frutiger aero image into Joy Caption and fixed its mistakes for the prompt, it's long:
>This is a digitally created artwork depicting a cityscape on an island floating in the ocean. The scene is vibrant and detailed, featuring a blend of natural and man-made elements. The background is a clear, bright blue sky dotted with fluffy white clouds. In the foreground, the water is crystal clear, allowing for a view of the ocean floor. The water surface is rippled, and there are several small fish, including a yellow one with black and white markings, swimming near the bottom.
>To the left of the image, a large, blue, spherical globe is partially submerged, with water flowing out of it. The globe appears to be made of glass or a transparent material, and it is partially cracked, with water cascading out in a dynamic, realistic manner.
>On the right side, a futuristic cityscape rises from the water. The buildings are modern skyscrapers with sleek, glass exteriors in various shades of blue and white, reflecting the sky and water. The cityscape is detailed, with intricate architectural designs, suggesting a high-tech metropolis.
>The overall style is a mix of realism and fantasy. The artwork combines vibrant colors and detailed textures to create a visually striking and imaginative scene. The image evokes a feeling of hope for a better future like that depicted in the image, along with a refreshing feeling thanks to the bright colors and vibrant, healthy coexistence of nature and man.
>>101883715
Thank you again! This is great news. Have not played with Flux yet so can't wait to use it to create inspiration gens to create in Photoshop. (Or shop gens)
>>101882351
>https://github.com/kohya-ss/sd-scripts/pull/1374#issuecomment-2287134623
Will this mean anything for potentially smaller vram requirements for loras too if it works for finetunes?
>>101883715>>101883689
>>101883755kek
I should make the prompt longer
>big anime titty LoRA has more downloads than the base model
>>101882536
Wait is quant lora training out now? What's vram reqs?
>>101883793
>almost 60k downloads
Flux won.
how do you upscale with flux? Is it just like upscaling with other models?
>nf4 on comfy breaks LoRAs
What's the fucking point then?
I wish there was a slightly smaller param flux that was more vramlet friendly. It doesn't really feel like flux truly needed as many params as it has. And I know vramlets can still run it with quants, but loras being limited to a 3090 minimum sucks ass
>>101883880
How about instead of cucking to vramlets, we all hit Nvidia with an antitrust lawsuit and force them to make better cards at a reasonable price?
>>101883880
Nvm just read this https://github.com/kohya-ss/sd-scripts/pull/1374/files
Fuck you guys for making fun of me when I asked if 12gb lora training had any chance of being possible in the future
My salty poor fag ass is back baby
>>101883902
Meds. NOW.
Gm Eurobros
>>101883165
>>101883845
the weights are too different, it's like being mad Pony loras don't work on XL
you're throwing a tard tantrum
>>101883964
GOOD MORNING SAR
>>101883960
The fuck are you on about fag
>>101883973
>You're being unreasonable for seeing an issue with the fact that half the community won't be able to use LoRAs with the only viable way to use Flux for them.
I'm not even a vramlet and can see this is an issue.
>>101884003
Nta but isn't this fixed by just training the Loras on the nf4 version...?
>>101883996
>Fuck you guys for making fun of me when I asked if 12gb lora training had any chance of being possible in the future
You're literally making up fake scenarios in your head to be upset with us. That's what crazy people do, anon.
>>101883393
SD and SD 1.5 were never in the hands of normies.
>>101884014
And now they don't work on the fp16 and fp8 version.
>>101884039
This literally happened a few threads ago you retard, rope
>>101884058
Well, yeah? You either train for both or the community ends up favoring majority support for one (most likely nf4 because the vast majority of users will be vramlets)
>>101884055
>SD and SD 1.5 were never at the hands of normies.
you're joking? SD1.5 was a 0.75B model, everyone could run it on their potato PC, you can't make a more normie and accessible model than this one
>>101884076
I know and that's a problem. I don't want to lower my standards just to please some poo in the loo who wants to make celebrity porn.
>>101884092
Well don't train for nf4 or use nf4 Loras then, I don't see how this is an actual issue for you
>>101883972
So... nf4-v2 is actually slower than fp8? damn...
>>101884087
Right, they were in the hands of anyone who did their due diligence and knew how to download a file on their computer and run some commands. Not literal retards who are dumb enough to browse X.
>>101884103
It's an issue for me that even vramlets make good LoRAs.
>>101884111
maybe in some cases with a 3090 that doesn't need it anyways, for me it's way faster
>>101884114
you think that flux will be uncensored on X's api? LOOOOOOL
>>101884120
>makes good Loras
>poo in the loo who wants celeb porn
Pick one
Or buy me a 3090 and I'll make you every lora you've ever desired
- t. vramlet in question
Wouldn't be a problem if we could actually pool multi GPU memory for lora training though reee
>>101884141
Not fully, but take a look at https://xcancel.com/SpaceX69_420/status/1823389265427939753#m
Of course they will censor it more after all this, but if Elon gets a stake in Flux then say goodbye to 2.0 being able to reproduce images like this once enough normies produce "harm".
>>101884175
>Of course they will censor it more after all this, but if Elon gets a stake in Flux then say goodbye to 2.0 being able to reproduce images like this once enough normies produce "harm".
fuck... you got a point there...
>>101877628
Thanks. You linked wrong but "posts" was the correct one.
>>101883972
https://imgsli.com/Mjg3NDUy
>>101884175
Also don't forget that SD 1.5/XL had seething Twitter trannies.
The entire fiasco is what caused the CEO to cave in (because he clearly wasn't acting in good faith) and pic rel.
sooo what are we using to caption datasets for flux loras? running them through chatgpt?
>>101884209
If you give them an inch they'll take a mile, if you kneel for pardon, they'll remove their pants for a succ. SAI just don't understand that the cucking they are doing to their models will never be enough for those twitter crazies, all they want is to kill AI, not just have a cucked AI.
>>101884175
But that was always going to be the case, no? They get acquired by someone, and if they try to push the lid on a genie bottle like stability, the devs are just going to leave and do it all over.
>>101884242
>sooo what are we using to caption datasets for flux loras?
>>101882247
>>101882261
>>101884257
It still isn't clear they "defected", for all we know they just formed a rival company and are feeding off the open hype before going fully closed like Mistral and "Open"AI. For all we know Flux, while brilliantly trained, didn't include that many different artists in its training data after all.
Flux paper when?
>>101884312
>before going fully closed like Mistral
Isn't Mistral's best model right now free to download?
>>101884261
thanks anon, wasn't sure if that was the general go to or just NSFW
Heun makes these even better, too bad it's so slow
>>101884387
well, you could probably get even better captions for SFW using a cloud model but then it's not local and you have to pay
>>101884400
Go for deis, this shit is good and as fast as euler
would clip_l + t5 not be capable of understanding booru tags if a lora was trained on them instead of natural language? I'm under the impression it would but if no one knows I'll try a test run tomorrow
>>101882247
Tried it with a recent gen and got this about one of the subjects
>The man on the right is facing away from the camera, but his blond hair and attire suggest he is likely the same individual as the one in the previous photograph.
It is obviously Donald Trump to any human that has seen him even just once but the mention of a "previous photograph" really lowers my trust in this caption model.
>>101884445
can you instruct it to caption each image individually without reference to any other image etc?
Is there anything in local which can mimic NAI's style? I haven't done any image gen since sd 1.5 tunes so i'm going to have to relearn everything.
>>101884483
probably, I'd have to clone the space and run it locally to try
>>101884485
animagine sdxl is probably your best bet unless you want NSFW, then you'll want to try some pony mix like autism confetti and abuse loras. Still not quite on the same level as NAI, but depending on what styles you're after it may be satisfactory. you're unironically better off looking at /hdg/ or /e/ most likely, as everyone here is currently balls deep into flux which is kind of shit with styles (finetunes pending and loras in progress)
>>101884485
T-ponynai3 is probably the closest, it's a pony mix designed specifically to look like NAI
>>101884433
damn you were right
>>101884385
I still remember when Medium wasn't released by them and Large was API only, but it seems competition has made them release one of their best models.
>>101884569
>>101884185
np, just assumed the file named 'tags' was the right one for the tags you wanted
>>101884385
>Isn't Mistral's best model right now free to download?
they were forced to open source it due to competition, mistral's new 123b model was their answer to llama 3.1 405b. flux doesn't really have much competition right now so it's kind of like the sd era where only one company dominates.
>>101884647
Doesn't SAI have an 8B model that is supposedly pretty good?
>>101884672
Oh, I like this. Might make it my background.
>>101884647
Hopefully Hunyuan, Bigma and Lumina change that.
>>101884678
>Doesn't SAI have an 8B model that is supposedly pretty good?
I tried SD3-8B on their api and this shit isn't even close to Flux, what an embarrassment
>>101884678
SAI is in such a bad shape i doubt they could afford to release it without collapsing
>>101884694
yeah, image gen really really needs more players competing in the field
https://github.com/comfyanonymous/ComfyUI/issues/4343#issuecomment-2287947547
>Can you try running it with: --disable-cuda-malloc to see if it improves things?
Comfy, it worked fine before with malloc, you shouldn't ask us to remove that feature, but to make it work again with malloc instead
man trying to use the adaptive cfg workflow whatever with a lora and this shit is so fucking slow on a 3060
>>101884690
Glad you like it anon, I think I finally found some settings to really get the aesthetic right.
>>101884771
do you also have adaptive guidance? it makes shit faster
>>101884699
Their 8B is 100% not as good as Flux. That's like saying theirs is like Dalle. It was never that good at prompt following, it was more on the level of Sigma at prompt following (a literal 600m model), that's how big their embarrassment was...
>>101884784
>I think I finally found some settings to really get the aesthetic right.
Can you share it with us? I might learn a thing or two
it's really annoying that it's adequate with vapes but terrible with cigarettes
I get it, people don't usually hold things between their index and middle finger so it gets confused, but fuck
>>101884785Yea, it actually seemed like it was A LOT faster without the adaptive guidance when i was using a different workflow
>>101884806there's no way it's faster without adaptive guidance, maybe you should update ComfyUi and try again
did the guy with the 4090 that took 3 minutes to gen one image ever figure out the issue?
>>101884710
SAI just says whatever to keep the scam going. I bet the idiots who reinvested into SAI right before BFL released their model feel like idiots right now.
>>101884799
Of course! Most of the prompting is just from putting frutiger aero images into Joy Caption.
https://files.catbox.moe/pncyo6.png
>>101884847
>Of course! Most of the prompting is just from putting frutiger aero images into Joy Caption.
I think you'll have more success if you put it on gpt4v, you can do it, it's SFW kek
>>101884821
im getting like 145s/it. I've updated multiple times... it was way faster on an old version but i was getting the white grid.
>>101884766
Comfy is a talentless hack who seethes and rages constantly while denying any fault, what more can you really expect? Mentally ill homo who has had multiple public melties and actively points people towards them because he lacks the depth of mind to recognize it's shameful and makes him look like more of a retard
>>101884766
Even without malloc I still have OOM on a lora load, the fuck did he do?
>>101884678
SAI is unironically going to go bankrupt. They don't have anything to offer. The most baffling part is that, while for users Flux came from nowhere, SAI was well aware of it, and they genuinely thought it was going to flop, so they didn't feel pressured to release the 8B version as SD3. Even after Flux came out and people were asking SAI to reconsider, they were still convinced Flux was going to flop. Now Flux runs on 8GB of vram, can be trained on 12gb, and it's overall much better even than SD 8B, so even if SAI released it to the public no one will bother with it, not to mention the horrible licenses they pushed for.
>>101884766
I used to think Comfy was so cool, bro, but he's kind of a hack, bro. What if that Forge guy made his own node based UI, bros? Wouldn't that be great?
>>101884923
they got what they fucking deserved, it's been more than a year that we asked them to stop acting like fucking cucks, I'm not gonna cry on their grave, fuck them, it's flux era now
>>101884923
Even in the official Stable Diffusion discord people are demanding to turn it into a Flux server because SD is DOA. KEK
I hate mondays
>>101884923
You don't have to convince all the users that SAI is garbage. The problem is they have a pretty deep set of roots that lets them farm capital to keep afloat whenever they need it based on their previous so-so track record. It's going to take a few more rounds before people realize it's just an empty shell company full of resource sucking safety trannies and HR with no actual talent on board.
>>101884935
>What if that Forge guy made his own node based UI, bros? Wouldn't that be great?
Holy kek the amount of seething and dilating from comfy if that happened. Please illya, for the hilarity alone
>>101884864
Lol you're right, I usually just default to whatever is free and open. ChatGPT gives better captions tho, helped me with this one
Are LoRAs in ComfyUI used in parallel at the same time or in sequence? Using a node like Power Loras that allows loading multiple loras.
The use case would be for example to first load a LoRA of a person, and then the realism lora. So it "fixes" the lower quality of the person's lora?
>>101885095
that's not how loras work in anything really, you can't really fix bad quality on one lora with another, the bad quality lora will just add bad quality any time it's used
>>101884923
SAI is going to triumph in the end, because they are the only company that takes AI safety seriously. Have a good time prompting porn with your toy KEK. Meanwhile SD will be used by adults as a tool for our jobs.
>>101885095
lora load order doesn't change the resulting weights
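Conceptually that's because applying LoRAs just sums scaled weight deltas onto the base weights, and addition is order-independent. A toy numpy sketch (the matrix sizes, ranks and strengths here are made up for illustration; real LoRA deltas are low-rank B @ A products per layer):

```python
import numpy as np

rng = np.random.default_rng(42)
base = rng.normal(size=(8, 8))       # stand-in for one base weight matrix

# each LoRA contributes a low-rank delta: strength * (B @ A)
def lora_delta(rank=2, strength=0.8):
    A = rng.normal(size=(rank, 8))
    B = rng.normal(size=(8, rank))
    return strength * (B @ A)

d_person = lora_delta(strength=0.7)  # "person" lora
d_style = lora_delta(strength=0.5)   # "realism" lora

order1 = base + d_person + d_style   # person first, then style
order2 = base + d_style + d_person   # style first, then person

assert np.allclose(order1, order2)   # same merged weights either way
```

Which is also why one lora can't "clean up" another: the bad delta is added in full regardless of where it sits in the load chain.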
The latest updates to forge fucked something up. I'm suddenly getting out of memory errors and crashes when using schnell with the exact same settings I was using yesterday. Dev still works fine, strangely.
Can i run flux on a 3060 12gb?
>>101885195
you can even now train Loras on it, yes
>>101885195
mostly, hope you have a good cpu
>>101885195
Nigga, you can fine tune it with 12gb now.
>>101885213
Which model should i get for that gpu?
God I wish I could live here.
>>101884400
heun 10 steps = euler 20 steps
>>101885298
Any recommendation on what to try? For realistic generations
>>101885259
>the future we were promised
>>101885077
>helped me with this one
you're welcome anon
>>101885224
Iunno, I'm still using the day1 setup and workflows, the fp16 dev model in fp8 precision
>>101885195
I am but it takes like 20 minutes to get 1 pic on cfg 6 with a lora. cfg 1 aint bad tho
>>101885403
thanks
thread ded?
site ded
what are the FBI investigating now
>>101885591
posting was fubar'd from around 6am eastern until a few mins ago. check a few other generals and there will be big gaps there too
>>101885591
no hiroshimoot and the feds feeding off our data can't handle keeping the site up for long periods of time throughout the years anymore
>just by sheer coincidence blacked porn was left up on /v/ this whole time
>>101885591
4chan's captcha or cloudflare was dead
I blame the pedo.
In other news, can we train LoRAs on 16gb vram yet?
Also nf4 v1 vs nf4 v2 vs fp8 vs fp16
https://imgsli.com/Mjg3NDg5/1/3