You bought a 24GB/32GB/48GB graphics processing unit to futureproof yourself, right anon? You didn't buy a peasant-tier 12GB or 16GB card from 10 years ago? I seriously hope you didn't fall for GamersNexus memeing you into thinking the 30/40 series flagships weren't worth the money, or that the 50 series shouldn't be bought under any circumstances.
>>107993761
Yep. Got myself a 3090.
>>107993817
this but ti
I have (2) RTX 3090s
I have (3) RTX 3090 Tis
I have 2 RTX 3060s
I have 1 RTX 3080 Ti
I have 1 RTX 4080S
I have 1 RTX 3060 Ti
I have zero AMD GPU shit, unless you count my two Ryzen 7 5700G APUs, which are actually quite decent paired with my fast DDR4-4000 CL14.
Honestly, Ampere has been more than enough for me.
>>107993761
I have a 4060 laptop but since I mostly play GameCube/PS2/3/360 and older games it's overkill
>>107995233
>I have a 4060 laptop but since I mostly play GameCube/PS2/3/360 and older games it's overkill
Based
>futureproofing
>VRAM
Running out of VRAM has never been the issue. You will still replace your flagship card in 10 years because despite having a decent chunk of VRAM, the actual processors themselves are incapable of handling the workloads of the future in an expedient manner.
>>107995367
The next gen of games will only run on servers and you will pay a subscription
>>107995367
Poor support for evolving standards too. I got rid of my 980 because it basically lost all support except for basic driver stuff.
bought a prebuilt with a 5070 12GB because I realized that I don't do anything with the PC anymore and my kids mostly play Hytale on it. I just lurk here because I can't leave
I have a 5080 and I've played one game it couldn't max out, and that was Cyberpunk, a bullshit tech demo game that's still benchmarked because it's trying to ray trace a city on CDPR's REDengine.
>local models
Image generation is slop, and no consumer GPU has enough VRAM for a text model that's any good.
>>107995187
kek, this was the funniest shit Nvidia pulled: a $1999 3090 Ti just to release the 4090 (massively better) for $1599 six months later.
>>107995447
>and no consumer GPU has enough vram for a text model that's any good
There are no GPUs, period, that have enough VRAM for large text models. If you have the money, you build a $200,000 cluster and use model sharding. If you don't have that kind of money, you just use memory mapping + tensor offloading. You can run a massive 140GB+ model with 4GB of VRAM so long as your GPU is actually compatible with the APIs involved.
>But it reduces performance
Oh no, it goes from 0.8 t/s to 0.3 t/s. The horror.
Although if we follow OP's logic, only a fool or a poor man wouldn't have a 16x cluster of A100s, making the whole point moot.
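For anyone wondering what that actually looks like, here's a minimal sketch using llama-cpp-python (just one possible route, not the only one); the model path, layer count, and context size below are placeholder values you'd tune for your own card and model:
[code]
# assumes: pip install llama-cpp-python (built with GPU support) and a GGUF model on disk
from llama_cpp import Llama

llm = Llama(
    model_path="/models/big-model-q4.gguf",  # placeholder path to any large GGUF model
    n_gpu_layers=8,    # offload only a handful of layers; the rest stay on disk/RAM
    use_mmap=True,     # memory-map the weights instead of loading them all into RAM
    n_ctx=4096,        # modest context to keep the KV cache small on a 4GB card
)

out = llm("Explain tensor offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
[/code]
Bump n_gpu_layers until you run out of VRAM; whatever doesn't fit gets streamed from the memory-mapped file, which is exactly why the speed lands in the fractions-of-a-token range.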
>>107993761
yea, 2x 3090
>>107993761
I built a brand new rig right before all the prices went retarded. 9950X3D, Nvidia 5090, and 64GB RAM. Should be set for a while.
got my 4080 in 2022 for cheap. looking back at it, with the 4080S and 5080 performance, I think it was not a bad buy, and since nothing will come out till end of 2027 it needs to last
>>107993761
I just went from a 580 to a 9060 XT 16GB, and only because my old PC exploded (PSU gave out, took every single component with it into the grave). Wouldn't even have bothered with a 16GB card if my local store hadn't offered me one on sale for €329, which was the same as the 8GB cards in stock.