You should buy this if you're serious about AI.
>>108790434Ok, you’ve convinced me. I’m buying one now
>>108790434Between a house or RTX 6000 Pro....... hmmmmmmmmmmmmmmmmm tough choice.......................
>>108790475A house that cost $8k isn't worth living in
>>108790434Nah. Honestly a 5060 Ti 16GB is fine, or get a 5070 or 5080 if you must. If you are just fucking about you can even run models like Mistral 7B on OpenVINO with no GPU at all if you have a CPU made in the last 4 years. You don't need to spend thousands on this stuff, even to train models
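If anyone wants to try that CPU-only route, here's a minimal sketch using the optimum-intel OpenVINO bindings for Python; the exact checkpoint id and package setup are my assumptions, not something the post specifies.
[code]
# minimal sketch, assuming `pip install optimum[openvino]`; the model id below is a
# hypothetical choice of Mistral 7B checkpoint, not one named in the thread
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the weights to OpenVINO IR on first load; inference then runs on the CPU
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("write a haiku about vram", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
[/code]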
>>108790496I don't live in the United States
i buderized and i've little interest in ai
>>108790434 Nah, i have a 5070 Ti and i can gen the degenerate shit i want at 720p, a 20-second generation in about 5 minutes.
>newer GPU, lower VRAM for generative use
>older GPU, higher VRAM for training
i won, prove me wrong
>>108790622*boughtendered
>>108790434you should buy 4 of them if you are serious about ai
>>108790434costs more than my toyota rav 4
>>108790434im poor so im stuck with my 4070 that can barely run qwen =(
>>108790434already have one and more
>>108790798why at 300w THO?
>>108790819power cap to prevent it from exploding. loses like 7% performance by halving the power.
>>108790826performance in what? because i happen to know thats not true for gaymes or image generation or local llms or..
>>108790843llms. it makes little difference in t/s and saves me like $25/month in electricity.
>>108790725Really sad we're getting to this point now
>>108790843did you spend $11,000 on it or get it some other way? Because if you spent $11,000 on it you are tarded
>>108790855have you tried some troubleshooting as to why? that most certainly should not be the case. overheating? underpowered psu causing severe clock stretching? or do you think it's normal? if that's the case i don't care what reason you think makes it normal, jw.
>>108790892i paid $8500 for mine in november
>>108790896i have a 1600w psu. it is in an open air frame with plenty of fans. pretty sure it's just normal. have you tried power limiting your blackwell?
>>108790843how much perf do you lose in games going to 300w?
>>108790434Why is it called blacked edition?
>>108790925black = strong, fast and violent
>>108790434You don't need an LLM. Talk to real people or be a loner.
A consumer GPU is more than enough to generate images.
A high-end GPU is more than enough to generate videos.
>>108790892it was $8697 including tax
>>108790905
>>108790906the only new game i've played with it is crimson desert and i didn't go that far down. i stepped down in 50w increments until the framerate became unbearable. i got down to 450w. for image generation, anima specifically with my normal settings, dropping to 500w increases gen time by 2 seconds, round about 9%. i haven't intentionally tested with llms, but i've been playing kenshi with sentient sands (local gemma4 31b for messing about) and accidentally left the limit at 450w. it was immediately noticeable that the responses were coming back way too slow for the near real-time conversations i normally get.
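For anyone wanting to replicate the power capping discussed above: it's just the driver's board power limit, which you can set and read through nvidia-smi. Below is a rough Python sketch that shells out to it; the 450 value is only the wattage from the post above, and changing the limit generally requires admin/root.
[code]
# rough sketch: set/read the board power cap via nvidia-smi (setting it needs admin/root)
import subprocess

def set_power_limit(watts: int, gpu_index: int = 0) -> None:
    # -pl / --power-limit sets the board power cap in watts
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

def current_power_limit(gpu_index: int = 0) -> float:
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.limit", "--format=csv,noheader,nounits"],
        check=True, capture_output=True, text=True,
    )
    return float(out.stdout.strip())

if __name__ == "__main__":
    set_power_limit(450)  # the 450w figure from the post above, purely illustrative
    print("power limit now:", current_power_limit(), "W")
[/code]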
>>108790971As for coding, you can run Qwen 3.6 on a used 3090.
>>108790434I'm only serious about my gender transitioning
>>108791270what has it gone up to now? 44%?
>>108790434I"m neither rich nor a business. I'll stick with the strix halo.
>>108790434if you're really serious about AI, that doesn't have enough vram.
>>108791292too bad AMD's software is a buggy mess.
>>108790434If you are actually serious about AI you would rent servers which are faster and have more capacity than the GPU.
>>108790434I'm not.
AI is just a fun little tool that gets boring after a while. Not really life-changing like home computers, smartphones, etc.
The bubble pop is going to be nasty.
>>108793592But how about porn
>>108790434I already have 8, no model worth running will get any usable amount of tph with a single one.
>>108790434How will the price of this change in next year or two? There are some GPUs that were $10k a few years ago selling for less than $500 now
>>108790646This. I've got a 5090, it's fun for image gen, voice cloning. Haven't touched video in a while and it can do it, but takes a while, plus I'm worried about starting a fire.
Can't pool the 5000 series since they don't have NVLink, like you can with the 3000's, so to my understanding, the card you have is your VRAM cap.
I'm honestly thinking about selling mine since you can get $3500 for them used pretty easily, and putting the money towards the best mac I can afford.
The macs are slower with AI, but the higher memory ceiling seems like it'll be more valuable going into the future.
Nothing's definite yet though, part of me wants to retain it so I can have multiple types of setups.
>>108790434what should i buy if i'm not at all interested in AI?
>>108790434how many instances of DSv4 can this beast run?
>>108794749>The macs are slower with AI, but the higher memory ceiling seems like it'll be more valuable going into the future.It's the opposite. Memory efficiency is advancing faster than anything else with LLMs. Besides SOTA massive base models everything is built around 16/24/32GB limits because the entire prosumer market is limited by these values. 128GB of shared RAM doesn't mean much when usably fast models use 24GB in offloaded layers and 8GB in context.
>>108790798same, i also have a mac cluster
>>108794774what are you interested in?
>>108790434looked at a prebuilt lenovo with one of these in it, $12k all up for a fucking 16 inch laptop for multimedia work.
>>108790434>being serious about AI
>>108790434If you're serious about running an abliterated model, maybe? vast.ai is better. Personal GPUs are for fucking 2023.
>>108790434Why? You should buy a Mac Studio with 256GB RAM.