Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101558992

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://www.modelscope.cn/home
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Pixart Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Kolors
https://gokaygokay-kolors.hf.space
Nodes: https://github.com/kijai/ComfyUI-KwaiKolorsWrapper

>AuraFlow
https://fal.ai/models/fal-ai/aura-flow
https://huggingface.co/fal/AuraFlows

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>View and submit GPU performance data
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
Blessed bred of frenship
The significance of the passage of time
Oooh this bread is searchable and blessed!
The new stability "4d video" is pretty neat. I don't have use for 3d assets, but people who buy them sure do.
Later
havent made the collage in a while
>>101566261 >>101566128 >>101566356 beautiful
>>101566407 Fret not about the taste of the baker and consider what you yourself would like to see. >>101566495 ty
>>101566407 its okay anon we believe in you
1girl 1bump
Steady as she goes
>>101568548 Your post as the prompt
>>101568096 Your post as the prompt
Ciao for now
>>101569012 >the voices >>101569379 Caio
>>101565034 TY for the blessings, anon
>>101565003 nice pyramid
>>101561519 i tested a lot of euler variants, i might test others eventually but i have reason to believe euler is the best here. euler a and the cfg++ variant are very close. i think cfg++ may handle complex backgrounds better, especially if you use RAUnet on the cross attention blocks. euler a regular may do shit like hands better, not obvious from a low sample size. euler negative from the koishi-star package is also interesting. schedulers: prefer ays_30 but gits is interesting. picrel is a 12 step gen, 14 seconds straight at 1792x2304: https://files.catbox.moe/kptpa3.png
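for anyone curious what a "scheduler" actually is under the hood: presets like ays_30 or gits boil down to a list of sigmas the sampler steps through. a rough pure-python sketch of the classic karras-style schedule (rho=7 is the paper default; the sigma_min/sigma_max values here are the usual SD defaults, not from any particular node pack):

```python
def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    # Karras et al. (2022) schedule: interpolate linearly in sigma^(1/rho)
    # space, which packs more steps near sigma_min (the fine-detail end).
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]

# a 12-step schedule like the rough-draft gens discussed above;
# real samplers usually append a final sigma of 0 after this list
sigmas = karras_sigmas(12)
```

presets like ays ("align your steps") just hand-tune where those sigmas land instead of using a formula.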
>>101570720 and just out of robo-curiosity, the same seed but with regular euler a (samplers have different CFG curves, 4.5 in pic related and 2.8 using cfg++ in the above gen)
>>101569412 Your entire post as the prompt >>101570720 12 steps is quite low. I tried gits, but maybe too many steps. Interesting findings anon, ty
>>101571197 i mean more steps than 12 means more quality, but ive clicked 12 step rough drafts and been perfectly satisfied many times now. a few months ago i wouldve thought it impossible even with lightning memes
>>101571221 Yep, 12 steps is too low. 24 steps dpm++2sa 4.2 CFG is my go-to for Sigma
>>101560949 >>101570720 >>101570801 I opened this thread only for C.C. Thank you and make more.
Sigma IPadapter when
Good evening
I got into the OP
>>101570720 What do you use for gits coefficient? >>101571873 Evening
>>101572136
>>101572136 i dont know what that is, i only tested gits as a scheduler preset provided by some custom node >>101571405
1girl 1bump >>101572212 No problem. It's an option if you choose the scheduler as a node
android Laura
So peaceful here compared to sdg :)
writing your own scheduler >>>
>>101572434 Your post as the prompt
>>101564986
>>101572577 imagined with ai watermark.. you made that on meta.ai
>>101572512 accurate portrayal
>>101572620 Yeah, can I run a local model on my machine? I have an RTX 3070 (8GB) and 16GB of RAM
>>101572662 >RTX 3070 (8GB) and 16GB of RAM? You can make so much coom with that
>>101572662 I have a GeForce GTX 1060 3GB, your pc can handle it for sure.
>>101572830 >>101572697 Thanks, I'm gonna try it.
Hunyuan packaged into safetensors for use with ComfyUI >https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui/tree/main
>>101573176 Nice.
>>101573176 Wooow
This would be a cool job
>>101573058 Comfy just merged native support for Hunyuan too
>>101573954
>>101573176 Still going back to this. Really cool
>studio anime screencap, landscape,
>>101574259 3/4 Ghibli
>>101573058 >>101573986 Seems like there are a few finetunes already
>>101574304 A shame I don't know Chinese
Make Sigma less difficult to use, we must
Are there any interesting non SD finetunes yet?
>>101574738 Yes, quite a few: https://civitai.com/search/models?baseModel=PixArt%20E&sortBy=models_v9
>>101574738
https://civitai.com/models/573014/900m-pixart-sigma
https://civitai.com/models/435669/bunline-2k1024512-pixart-sigma
https://civitai.com/models/575978/thelaboratory
https://civitai.com/models/593660/900m-pixart-sigma-animeonly
https://civitai.com/models/590464/anime-sigma-768px
https://civitai.com/models/490203/booru-madness
Could be more on other model sites desu
>>101574738 whats your definition of interesting?
>>101565003 Look guys, I JUST want to make porn of women in my life that I want to fuck but can't ever. Which AI is best for that?
>>101573317 >>101573912 thanks
>>101574941 Sigma still struggles with basic anatomy and Hunyuan looks like a render. Afraid you've coom to the wrong place
>>101574941 https://civitai.com/images/20254452
is it just me or is 1.5 better at photo realism than pony? im looking at sdxl checkpoints too and they look a bit off like pony does from the images people are making. but epicrealism for 1.5 is amazing
>>101575120 XL is better when it's better but 1.5 has more face variety
>>101571603 Can you catbox this? I've always liked cartoonish AI renders.
Good night anon
>>101575120 >epicrealism for 1.5 is amazing Not really
>>101575531 NTA but red pill me - I am using epic realism SD 1.5 with usually pretty solid results with some proompting
>>101575591 It'll give you marginal results that few will comment on, and everyone will know you're using that model.
>>101564986 probably should include >>101573058 since it makes it easier to use
can someone convert a model into a gguf for me?
>>101572866 how did it work out
>>101576249 Does 16ch vae for 1.5 work yet?
Someone please make Kolors text encoder 4bit
>>101577194 How much memory would it consume if it were 4bit?
>>101577264 5-6 GB
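back-of-the-envelope sanity check on that number, assuming Kolors' text encoder is the ~6B-parameter ChatGLM3 (that model choice is my assumption): raw quantized weight size is roughly params × bits/8, and the extra beyond that is scale factors and runtime activations, which is how you land in the 5-6 GB range quoted above:

```python
def quant_weight_gb(n_params, bits_per_weight):
    # Raw weight storage only; real 4-bit schemes add per-group scale
    # factors, and inference adds activation memory on top of this.
    return n_params * bits_per_weight / 8 / 1e9

fp16 = quant_weight_gb(6e9, 16)  # ~12 GB at half precision
q4 = quant_weight_gb(6e9, 4)     # ~3 GB of weights before overhead
```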
official pixart bigma and lumina 2 waiting room
>>101575213 >XL is better when it's better but 1.5 has more face variety That's probably because SD1.5 is the only good model that was trained on only real pictures: https://www.nature.com/articles/s41586-024-07566-y
>>101577554 synthetic data and its consequences have been a disaster for local image gen. worst part are the 10 star retards that train their model on SD1.5 and sdxl gens of all things.
>>101577626 Can't agree more with you on this one, it's a disaster and we should avoid synthetic data like the plague when training models, this crap is literally poison.
Kling is insane! https://reddit.com/r/StableDiffusion/comments/1ecfu3g/girl_turning_around_for_the_camera_showing_her/
>>101577684 will they be releasing the model weights?
>>101577725 we'll never get a local model at this level, no way
>>101577730 aw :(
>>101577684 https://reddit.com/r/StableDiffusion/comments/1ecg544/a_girl_emerging_from_the_hot_springs_adjusting/ bro, hentai is gonna be amazing in the next few years
hello /g/en friends. Is anyone running intel ARC for this hobby? How is it doing? Was considering getting one but from the results shared in the op in picrel they seem pretty bad, but I thought they were meant to be good at AI workloads?
>>101578365 If you haven't already got a GPU then buy an Nvidia one with as much VRAM on it as you can afford. A cheap option would be a 3060 12GB, around 300ish new depending on where you live, less if used. Try not to go below 8GB VRAM if you can't afford something with 12GB or more on it.
>>101578404 the thing is price performance for intel is better if taking gayming into account, and i plan to make good use of intel qsv. so I'm wondering why they're so shitty in those benchmarks, is it just poor support for a first gen product or what?
>>101577644 Anon remembers when we had a full thread (or two) about this very thing. LDG ahead of the curve as always.
>>101578691 synthetic data will keep on haunting us until training data requirements get a lot cheaper, it's why even megacorps like meta train their llms on 15t tokens of synthetic reddit. image gen is in a much luckier place though, good images aren't that hard to scrape and stuff like pixart brings the requirements down significantly. what we really need is a good image captioner. meta.. where my multimodal llm... meta...
>>101578815 >what we really need is a good image captioner I really think this would be the hardest thing to achieve, a great AI captioner, one that could guess what character or celebrity is in the picture. if it can't do that, then it means we'll always have to rely on human labeling, and that sucks
>>101576249 VAEry interesting.. odd the expected scaling ratio isn't working for Ostris.. Cool that it works at all!
>>101578437 >the thing is price performance for intel is better if taking gayming into account I put two charts together (from this review of a 7900xtx that includes an A770 amongst other gpus): https://files.catbox.moe/q040jh.png Look at where the A770 and the surrounding cards are in 1080p gaming on the left chart and SD performance on the right chart. If you've got an intel CPU then that will also have hardware support for qsv.
>>101578915 >we'll always have to rely on human labeling Have you not seen the VLM advancements? Local is better than GPT4o in some cases with InternVL
>>101578942 I mean, AI captioning has 2 issues right now:
- Doesn't seem to do NSFW or is really bad at it
- Doesn't add character/celebrity names onto the pictures that need one, and that one is a huge issue
>>101578815 The only issue with synthetic data is that we like our bias more than the transformer's. That paper is a "slippery slope" of recursively re-using only synthetic data generated without human oversight
>>101578959 anon pls
>>101578930 thanks for sharing. maybe this is a stupid question but "relative performance" is performance relative to price, right?
>>101578959 Human oversight will merely delay the inevitable. There is no need to use AI images in training.
>>101578992 Microsoft wouldn't release Phi if synthetic data was toxic. Humans control the content, style, and flow of information there. This paper can be boiled down to: "It's proven harmful to let Jesus take the wheel"
>>101579047 Every cellphone photo you use is an AI image. Every photoshopped image too
I don't understand why anon brings up LLM training when discussing txt2img models. I could care less if your ERP makes you ejaculate quicker if you train on AI sloppa.
>>101579051 nta but nobody likes phi, it sucks. slopped to hell and back. gemma2, mistral nemo and mistral large 2 aren't gpt-slopped at all and people love them.
>>101572251 Sick!
>>101579074 Can you elaborate on this? Because I think you mean to say many images already use AI enhancements, but we are not talking about that, we are talking about completely synthetic images. I want to refrain from labeling you a retard but you're making it very difficult.
>>101579051 >microsoft wouldn't do dumb thing kek good one
>>101579089 It doesn't do well outside of the domain it was trained on. Synthetic data does induce that, which is why you don't let Jesus take the wheel. >>101579121 >Can you elaborate on this Every pixel is synthetically rendered by your cellphone. The world is an approximation of what you see rendered, no matter how happy you are with the result. Cellphone sensors are tiny garbage that let AI do the heavy lifting. Things are not shaped how they appear in the photo and it changes drastically without you realizing it. You can move the goalpost and say retard all you want. I don't see anyone suggesting only using RAW photos fully white-balanced, which is as close to non-synthetic as you're going to get. Literally the paper's point is that if you regurgitate data ad nauseam, then you will isolate the major features that the model biases to.
>>101579206 No goalposts moved, it simply sounds like you're saying "it's okay to use dalle images in training because when you really think about it all images are AI". If that's not what you're saying please correct me. >RAW photos fully white-balanced That would be interesting.
And if that IS the case then YOU are the one moving:
>its okay to use AI images in training because real images also have flaws
>well okay real flaws are better than AI flaws, but every real image is actually AI so it doesn't matter anyway
Training on synthetic data is a necessary part of dreambooth and is why it works. >>101579242 That sounds sloppa af. I'm saying people speaking in ultimates about synthetic data being bad are missing the forest for the trees. Adding Dalle3 inputs won't ruin your model, but depending on some unknown resource will. You see the difference?
>>101579041 >"relative performance" is performance relative to price right? no, when sites like these review cpus or gpus, the relative figure is to the main product being reviewed. So for example in that chart the A770 performs in gaming at 39% of the main 7900xtx (which is why that one says 100%). If you want performance per dollar then look at this page from the same review: https://www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-magnetic-air/32.html
>>101579259 The best data is the data that fits our bias too. No matter the source, we're trying to get out what we expect.
>>101579282 >7900xtx Beware, there are cuda dragons here
>>101579308 sure, but I wasn't suggesting he get a 7900xtx. I just grabbed the newest GPU review TPU did so I could show him their SD performance chart that covers a bunch of other cards, because he's thinking about getting an Intel Arc 770
>>101579320 You did good anon. I'm telling the newfag the full story
>>101579272 >Training on synthetic data is a necessary part of dreambooth and is why it works. Can you speak more on this? >missing the forest for the trees That's a fair point. A very fair point. But I've never seen a model trained with AI images that "looked good". Every dalle clone looks like utter ass. Moreover, you can blatantly see when a model is trained with heavy use of AI images, like https://civitai.com/models/477673/extramode-pixart-sigma It seems like common sense to say "the more AI images used in training, the more it will look like AI", which at the end of the day is what we're all trying to avoid, no? >>101579290 I think the majority bias prefers real images over slop. If you prefer the latter by all means go for it, but let's not ruin everyone else's fun by including it in pretraining.
>>101579272 >That sounds sloppa af. more sloppa than training with le heckin meme ai images scraped from reddit and twitter?
>>101579352 >dreambooth The loss function in Dreambooth training typically consists of two main components:
a) Reconstruction Loss: measures how well the model can reconstruct the input images of the target subject. It's calculated as the difference between the original image and the model's reconstruction of that image after adding and then removing noise.
b) Prior Preservation Loss: helps maintain the model's ability to generate diverse images within the same class as the target subject.
The model aims to minimize a weighted sum of the reconstruction loss on the subject-specific images and the prior preservation loss on the class regularization images. This balance is crucial:
* Too much focus on subject images can lead to overfitting, where the model can only generate the specific subject.
* Too much focus on regularization images can result in underfitting, where the model fails to capture the unique characteristics of the subject.
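that weighted sum is simple enough to sketch in a few lines. pure-python stand-ins below for the noise-prediction MSE; `prior_weight` is the balance knob (typically around 1.0 in dreambooth scripts), and all the names here are illustrative, not from any particular trainer:

```python
def mse(pred, target):
    # Stand-in for the noise-prediction MSE a diffusion model trains on.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def dreambooth_loss(subj_pred, subj_noise, class_pred, class_noise, prior_weight=1.0):
    recon = mse(subj_pred, subj_noise)    # reconstruction loss on subject images
    prior = mse(class_pred, class_noise)  # prior preservation loss on class/regularization gens
    # prior_weight balances the two terms: too low -> overfit to the subject,
    # too high -> the subject's unique features never get learned.
    return recon + prior_weight * prior

loss = dreambooth_loss([0.1, 0.2], [0.0, 0.0], [0.5, 0.5], [0.4, 0.6], prior_weight=1.0)
```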
>>101579426 You'll have to be more explicit with >Training on synthetic data is a necessary part of dreambooth because I don't see in your reply where that's implied.
>>101579271 cool
>>101579352 What I'm saying is that synthetic data has its uses. This paper is actually a good case against AGI being able to take over. >>101579491 All of the class images are gens from the model you're training (with the same caption). Improves fidelity and reduces forgetting when training against what the model already has.
>>101564986 Any animatediff wizards here? Seems as though everything new for video doesn't involve animatediff, can't even use TensorRT with it. How do I speed things up? How do I use 2 controlnets without it taking hours to render 3 seconds? Any tips on dynamic movement? Seems even openpose struggles sometimes. I've used LCM, T2I, lowered frames, loading the smallest models, etc. still slow as all dicks
>>101579520 Do you have an idea as to why https://civitai.com/models/477673/extramode-pixart-sigma looks like slop and https://civitai.com/models/435669?modelVersionId=665122 looks so much better, other than the former using more AI images in training than the latter?
>>101579594 kek idk, captions/learning rate/taste/etc. play a huge role. Maybe extramode just looks like what the author wanted it to and bunline looks like they wanted it to. They both seem to be an improvement on the base model
>>101579594 bunline is a tastier sounding name so it adds more flavor to the model
>>101579282 >>101579320 Thanks for sharing all this. I was planning to pair with an amd cpu so I wasn't going to have intel qsv, but intel gpus seem to have their fair share of teething problems, normal in a first gen product. Terrible idle power, and the SD performance isn't bad but it's underwhelming... I'm not in a hurry to upgrade so I'll wait and see what nvidia 5000 and battlemage bring.
localjeets will soon be training on kling outputs as they claim 1000 niggabytes worth of youtube videos isnt enough data.just keep falling for the synthetic psyop!
>>101579780 kek https://www.404media.co/runway-ai-image-generator-training-data-youtube/
>>101579854 Non-paywall version: https://archive.is/RAXi4
Pizza theft.
>>101579509 thanks
>>101579890 Hoping Simo fixes AuraFlow
Good morning >>101579594 ty, bunline author here. anyMODE took the feedback on extramode and made realmode. I probably have 20x more images as well, as he is much more selective.
Come to ldg Quokka anon. We know you aren't using saas to gen
>>101578815 with image gen we get safety cucking instead, preventing its full potential in terms of the dataset
Bye for now
>>101580159 morning anon >>101580379 see you anon
hello anons, hows it goin?
>>101580455 nothing much, just waiting. wbu? nice gen btw.
>>101580455 >>101580476 been a while since I visited here, chill weekend, hopefully get a nice rest after this hectic week, gen related haha
>>101580512 that's a really nice gen. catbox, if you don't mind?
hello m u p p e t s. thread split, ahaha. well, whatever. good day and inspiration to everyone. also, I am back in business. >>101580455 I am easing back into SD. just redid various comfy workflows, and learned a thing or two. yay. nice bazunevkahs !
>>101580713 >thread split Uh, where?
>
>>101580455 Good, about to grab lunch, hbu?
Om nom
Comfy bred
>>101580851 Cool gen
>>101581956 (me) >>101581674 Also cool gen
>>101581956 Thanks
>>101581634 Talk about thin skinned
>>101581956 >>101580851 >Cool gen Agreed >>101580512 Also very good >>101581967 ty
>>101582002 Don't say that too loud. He'll crumple if he hears you
2k model printer goes BRRRRR
>>101582358 2 iq here, what are you doing?
>>101582375 Training the Sigma 2k model to make a new finetune of bunline
>>101582404 >>101582358 did you notice a significant difference in quality between the "normal" one and the 2k one?
>>101582404 Forgot pic
https://reddit.com/r/StableDiffusion/comments/1ecjpvw/combining_sd_animatediff_tooncrafter_viggle_and/ That's really impressive, probably one of the first short films that actually looks like it could be professionally produced.
>>101582404 >>101582423 looks delicious
>>101582421 Yes, it's way more "clear" but not 4 times longer to gen. better imo. I'm mostly making sure the 2k bunline doesn't get left behind
>>101582453 I think the combination of 2k and a 16ch VAE will make the hands way better. models just need more pixels to work with, I guess, that's why it works fine when something is zoomed in, but not so much when it's far away
>>101582431 It's not bad but they need to learn Film 101; they break the 180 rule multiple times. What I'm really excited about is when some of the traditional 2D animators learn that AI can speed them up 10x. AI is most powerful right now in the hands of someone that can do key frames and wants some inbetweens.
>>101582453 pixart is so good with water reflections, imagine what it could do with more parameters and a bigger dataset. bigma can't come soon enough.
bigma updates plz
>>101582575 nuthin at all :(
>>101582575 2girl, ballroom, formal dresses, fine_art
>>101582593 spooky ghost face! two of em! EEK!
>>101582614 It does lots of cursed images now. It does a lot of people like n64 / low poly, where you have a low fidelity silhouette with a face glued on.
>>101582593 >>101582633 More please and thank you
>>101582633 it looks like she has layers to her face >>101582593 this one is strangely beautiful, really interesting to see the model slowly start learning concepts. who is the mother?
>>101582646 1girl, drow, silver hair, woman, beautiful, sinister, holding a dagger
abstract posters are going to have a field day when this releases
>>101582668 >who is the mother? What do you mean?
>>101582719 the model is like a little baby, you can be the father. can i be the mother?
>>101582670 I love these so much
>>101582737 I made it with the force
>>101582516 The SDXL VAE really eats up low-res faces, and more pixels definitely helps. >>101582562 Yes please
>>101582593 >>101582633 >>101582670 HYPE
>>101582770 kek
>>101582794 It cost $200 in electricity last month to make those images.
>>101582836 we already have a generalist 16ch VAE ready to be used, it just needs to be pretrained further to fit into those new base models, I'm sure it'll happen for Sigma
>>101582827 drop a link and ill throw you some coin (not bitcoin kek)
>>101582845 I'll wait until I get more official, it'd be nice to pull a Pony and get the scratch for an A100/H100 computer.
>>101582827 Way less than renting. LET'S GO LOCAL, LET'S GO >>101582836 We shall see what Ostris can do with his VAE. If not him, pixart should support it in the next release.
>>101582889 Ignoring the cost to build the dataset computer of course. I'm like $4500 in. But chump change for a dumbass hobby project like this. Guys my age / profession are puttering around in their fishing boats.
>>101582915 Borrowed parts from old mining rigs have gotten me far. That plus electricity will still likely keep you under the costs of anything else being released
>>101582981 prompt?
>>101582997 kek literally this: >>101569012 (You) >the voices >>101569379 (You) Caio
>>101583010 KEKE
>>101582981 Yeah, the main price I'm paying is time and my computer. I'm still doing captioning and downloading more images, but the main dataset I want to train on is complete. A lot of this is prep for the next model and getting everything before the internet / archives get nuked. Only a matter of time until you can't download anything in bulk.
>>101582836 im sure there are people who know much more and i hope to be corrected, but from what i understand the problem is that a vae is two parts, an encoder and a decoder. when training, the images are encoded to the vae dimensions and trained with those encoded latents. then when you go generate with the trained model, the model generates latents, and those latents are then scaled with the decoder portion. so the damage is kinda already done, because the model is trained with a fixed number of latent dimensions, even if you modify the blocks/layers of the model or translate the outputs of the model to what the larger channel vae can understand
>>101583036 >prep for the next model and getting everything before the internet / archives get nuked Almost every day there's a new complaint on twitter about major AI companies scraping too hard
>>101583045 SD 1.5 and SDXL mapped over to 16ch nicely but Sigma gave issues. >model generates latents, and those latents are then scaled with the decoder portion They're scaled after encoding in Sigma (1). You should only need to train a few layers or an adapter. Ostris was experimenting with a different scale factor (2), which might be the magic we need to get cooking with 16 channels.
1. https://github.com/PixArt-alpha/PixArt-sigma/blob/master/train_scripts/train.py#L155
2. https://x.com/ostrisai/status/1816064676096086030
>>101583210 yeah, what i mean is that it's not that easy to convert already existing specific models like pony or animagine just by training the vae, since the weights of those models exist based on the originally encoded latents, so they would either need to be retrained completely or finetuned to get the benefits of the new vae
>>101579890 abomination kek
>>101583210 >>101583276 (me) oh, maybe i get what is going on now. but still, wouldn't it be kind of an uphill battle to force the vae to do a lot of the work of improving the output of the models, since it's much smaller and would probably be fighting against what is actually encoded there?
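for intuition on why the channel count is baked in: the diffusion model's first and last layers are shaped to the vae's latent channels, so a 4ch-trained model literally can't emit 16ch latents without new weights. a shape-level sketch in pure python (the 8x downscale and channel counts are the usual SD conventions, not pulled from any specific repo):

```python
def latent_shape(h, w, channels=4, downscale=8):
    # A VAE encoder maps an RGB image to a (channels, h/8, w/8) latent;
    # the diffusion model only ever sees tensors of exactly this shape.
    return (channels, h // downscale, w // downscale)

sd15_latent = latent_shape(512, 512, channels=4)    # what SD 1.5 was trained on
ch16_latent = latent_shape(512, 512, channels=16)   # what a 16ch VAE produces

# Swapping in a 16ch VAE changes the channel dim, so the model's input/output
# layers (4 -> 16) must be retrained or adapted, and the remaining weights
# were optimized against 4ch latent statistics, so they fight the new encoding.
```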
>>101582423 Why isn't the image 2k?
>>101582562 current sigma (finetunes) are so good, but always room for improvement indeed. bigma can't come soon enough
>>101583441 Can't say definitively what the 16ch problem is with Sigma or I'd have fixed it. >>101583506 The local anon curse: training and no VRAM left to gen with. Now's a good time to validate though
>>101583506 Needs more steps but here
Trees are still fuzzy. We have a while to go yet
And the .png's for 2k are always above the image size limit
>>101584232 could you make one of these pictures but prompt for it to be made out of nuts or something?
>>101584252
>>101584289 thats nuts
>>101584289 im going this lesbian couple
>>101584306 to eat
>>101584298 lul >>101584306 A detailed photo of two female souls joining as one, made of nuts. The background is the sky
>>101564986 Local or bust
>>101584605 a nut
I tried using 1.5 for the first time (I started on Pony), and it's so fucking shitty. I wanted to use it because there are so many celebrity loras on civitai, however after this experience I'm quite confident you should just delete them all off the website.
>>101583916 very cool colors
>>101576120 >>101576838 And this is also an awesome style
>>101584783 Sigma does better colors w/ the SDXL VAE than SDXL can. Doesn't make sense
what causes wild changes in generation speeds? same model and sampler for all
Getting near the end of the thread again, oven fresh bread ready to roll... >>101585073 >>101585073 >>101585073
>>101577684 >>101577754 any saves? it's gone
>>101585035 If that's Windows, shared VRAM?
>>101585127 We could have it all!
>>101585035 a1111 does this to me sometimes, using a controlnet basically guarantees the slowdown; after that my gens will take like 10 minutes and i need to restart the server
>>101585127 Rolling in the deep
>>101585234 yeah, it is on windows. is there a recommended way to force stable diffusion to stick to the dedicated VRAM or something?
>>101583916 So it's 2k native but can also gen 1024 without problems?
any starter video card suggestions? I have a 1050 ti which obviously isn't working out with 4gb vram. hopefully less than $500
>>101588327 I think it's still typically a 3060 12GB.
>>101588826 thanks, was looking at that one. The used price is just a hair below new though...