the M4 Max MacBook Pro wipes the floor with the Nvidia RTX 3080 — and even comes close to matching the RTX 4080 Super.
https://www.digitaltrends.com/computing/m4-max-chip-beats-nvidia-gpu/#:~:text=A%20new%20series%20of%20tests,matching%20the%20RTX%204080%20Super.
>According to Apple marketing dept quoting this one weird microbenchmark, amirite?
Yeah, it's pretty crazy how Apple was able to pull this off. On the Blender benchmarks website, the M4 Max is the ONLY non-Nvidia GPU to make it into the top 25 GPUs benchmarked for GPU rendering. They've just disrupted a section Nvidia has pretty much had complete dominance over for years. It's beat out the current-gen RTX 4070, the RTX A6000 (Nvidia's highest-end workstation GPU last gen) and it's very close to beating the 3090. It's also beat out AMD's most powerful Radeon GPUs, the 7900 XTX and the workstation Radeon Pro W7900.
https://opendata.blender.org/benchmarks/query/?compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&group_by=device_name&blender_version=4.2.0
Imagine what the M4 Ultra will look like on the Mac Studio and the Mac Pro.
How long can it maintain that performance before needing to throttle though?
>>103209210
When you look at how much higher the 4080 Super is on the chart than the M4 Max, though, it makes the two renders in the video from OP look more like outliers.
>>103209210
The RTX scores are strangely low. Someone must have flooded the scores with OpenCL or something
>>103208944
>when intel/nvidia/amd releases new product
>gaming benchmarks!
>when apple releases new product
>blender!
Suddenly everyone here is a 3d artist.
>>103209302
This is from Blender, so the workload is probably going to reflect rendering rather than gaming. And given that Apple makes workstations, not gaming machines, it makes sense that their computers would show up more in lists of top hardware for work, rather than gaming.
I know this is alien to /g/ since /g/ is full of people who don't have real jobs, but professional artists, modellers, composers/producers, etc., all mainly use Apple computers.
>>103209373>Apple makes workstations
It's dead on half of a desktop 4090 at 10x less GPU power draw. The M4 Ultra will match the best consumer GPU available at 60-70W.
>>103209403
So it's half now? So it's at 60W? Why do you keep changing the story? What actual information do you have available?
>>103209402Behold: one of /g/'s unemployed LARPers.
>>103209419Yes because we all know you need a "workstation" for every job out there.
>>103209415
It always was half. The Blender benchmark website and YouTube tests with asitop and mx power widget show how much power the 40 GPU cores in the M4 Max draw at 100%.
>>103209415
Half in this task, namely. In others, which Puget reflects, the M4 Max smokes most desktop workstations.
>>103209483And many, many more who buy them to read their e-mails and watch gay porn. It's just a way of thinking.
>>103209373
No, it's not from Blender, it's just a score tracker from user submissions. The 4090 should score around 13k with OptiX but here it's 10.8k
Cool, but can I make full HD AI titties and cunny with a base mac mini?
>>103209520still ~11K
>>103209403It's half of a desktop 4090 at double the price. Why am I supposed to buy this again?
>>103209334Ironic considering Apple products are horrific for productivity.
>>103209334Hello there.
>>103209582
A MacBook with M4 Max and 48GB of V/RAM is $3200. A 4090 24GB by itself is $2000 and you still need a computer around it.
In the end it's cheaper, portable, can use larger models, draws far less power and won't dry pussies.
>>103209373
>>103209419
because you're employed as a shill? that's worse. yes we know you've successfully gaslit many professionals
And yet no games to be had so this power goes unused. Apple silicon can be as good as it wants in benchmarks but what good is it if it has no use
>>103209334
apple contributed specifically to blender not long ago. they probably optimized it favoring their shillbooks
>>103209616
>And yet no games to be had so this power goes unused.
GPUs are used for a lot more than just playing games now.
>>103209334
It's the same thing every time. This board shits on videogames all day, but only cares about gaming benchmarks. They shit on Apple all day, but spam the "real world" benchmarks all day as if they spent 24/7 rendering or unzipping shit.
>>103209373
these are literally raytraced blender benchmarks
steve jobs would have fired you
>>103209612
>A 4090 24GB by itself is $2000
Unless you were lucky enough to get it from Nvidia's official store at $1599 when it was in stock.
>>103209550>apple floods the submissions with spoofed 4090 mobile GPUs to make their shit look better
>>103209550
>what is a median
>what is sample size
lemme know when you fags can do an on stage live benchmark battle like apple used to do in powerpc days
>>103209664
there's no result higher than 11k on OptiX, regardless of your imagined spoofs.
>>103209662
just give a total for your build. i bet it's still close to $3k.
And here's the M4 Max GPU balls to the wall using whisper via mlx - 37W peak
https://x.com/ivanfioravanti/status/1856118273080754408
>>103208944Who will be dumb enough to use blender on a mac when they ask $1000 for 16gb of ram?
>>103209661Case in point: it's a rendering workload. It's not focused on shitting out swapchain images as fast as possible.
Lmao, it seems the faggot didn't even turn on the denoiser.
https://www.youtube.com/watch?v=0bZO1gbAc6Y
>>103209758the 16gb M4 base is $499.
>>103208944
tldw, he cheated. CUDA alone can't even fully utilize the RT and AI hardware, while Apple chips on Metal do. OptiX is anywhere from 1.3x to like 1.8x faster than CUDA. He also used a 10900K as the CPU, which will also bring the speed down a little
>>103209373
>This is from Blender
>on a mac
Literally not a single soul on the planet earth uses blender on a fucking mac.
It's an impressive machine and I have become a believer after getting an M4 Pro on release day, but it's one thing to have a benchmark and another thing entirely to actually be optimized for what Nvidia can do. tensor-metal and appleCOPEML among others can't do everything; I'm still running dnn/cnn and computer vision workloads mostly on CPU, so I need to ssh into the actual big boy PCs with A100s at work to get anything other than some quick off-the-cuff shit done.
>>103209402
It's true though, I don't know what to tell you.
>>103209912It's absolutely not true, girl laptops aren't workstations.
>>103209912
nvidia knows something that we don't, which is why they're making their own ARM desktop CPUs. With Apple PCP launched I'm assuming Apple is going after nvidia's GPU compute business, which they have the silicon for, but so far, lack the software ecosystem. They've launched GPTK and MLX too so they're digging in.
>>103209948
Is it even worth it for Apple to try that when their shit is already so locked down? That's going to bite them in the ass one day. They treat even the thought of opening up their shit to gaming like it is less than shit. Why are they surprised they are behind? If they simply stopped being retarded this wouldn't be an issue for them now. There's literally no good reason for them to have their shit so locked down
>>103209948
>nvidia knows something that we don't which is why they're making their own ARM desktop cpus.
No, they know what we know too. The Qualcomm-Windows on Arm exclusivity deal expires by the end of this year.
>>103209961if a paywall keeps out the pajeets it'll pay off longterm. you don't want a piece of software made open source by white and/or asian engineers, then made accessible to anyone by white and/or asian engineers only to have millions of jeets using it to create slop and flood the job market with "their" knowledge.the monetary caste system is based and should be enforced in computing too.
>>103208944In a fucking laptop too wtf
>>103209885Why not? Blender fully supports Metal.
>>103209642so, gooning?
>>103209961if steve jobs was still alive they'd have probably broken into the pc ecosystem by now, but instead we have these people who grew with and into the apple cult
>>103210041
Because these programs don't exist in a vacuum. You're using several other programs for converting files, managing files multiple people are accessing (specifically for a studio), using third party plugins or proprietary plugins in studio, and a half dozen other pieces of software that just don't run on macOS. You're using Windows or Linux.
Third party plugins are rarely made for macOS and necessary for most dev flows.
>>103209948
Nvidia already has some monstrous support for their hardware, it's hard to justify not using their Jetson equipment.
>>103209961
It didn't bite them in the ass, and their ARM processor line only strengthened their position. Gaming doesn't matter, not gonna bother getting into it in this thread, just go to /v/ if that's what you want to talk about.
>>103210004
pretty great take. Jeets destroyed windows as a work environment.
>>103210113
I'm sure this is a really great opinion but I'm not familiar at all with your use case, so stuff like this doesn't affect me personally.
>>103208944I'm sure it runs at an ice cold 200° Celsius too because Apple wants you to buy a new one next year.
>>103210142That's cope dude
>>103209961
>They treat even the thought of opening up their shit to gaming like it is less than shit.
Yeah, about that, they appear to be changing their minds, i.e. they managed to convince CD Projekt to write an entire Metal renderer for Cyberpunk 2077 and do a native macOS build.
>>103209210
>tied neck-and-neck with the 4080 laptop GPU
You want to know something interesting? The laptop 4080 only has 12GB VRAM, that's it. Once that runs out, the performance will tank like a rock. The M4 Max's GPU has access to virtually all of the MacBook Pro's unified memory. The MBP supports up to 128GB.
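To put rough numbers on the memory point above, here is a back-of-the-envelope sketch. All figures are illustrative assumptions (fp16 weights, a made-up 20% headroom factor for activations/KV cache), not measurements:

```python
def max_params(mem_gib, bytes_per_param=2, overhead=1.2):
    """Largest parameter count whose weights fit in mem_gib GiB of memory,
    reserving `overhead` headroom for activations, KV cache, etc.
    The 1.2 factor is an illustrative assumption, not a measurement."""
    budget_bytes = mem_gib * 1024**3
    return int(budget_bytes / (bytes_per_param * overhead))

# 12 GiB (laptop 4080 VRAM) vs 128 GiB (max MBP unified memory), fp16 weights
for label, gib in [("12 GiB VRAM", 12), ("128 GiB unified", 128)]:
    print(f"{label}: ~{max_params(gib) / 1e9:.1f}B params")
```

Under these assumptions that works out to roughly a 5B-parameter model fitting in 12 GiB versus roughly 57B in 128 GiB, which is the gap the post is pointing at.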
>>103210142M1 systems are still operational after tons of abuse, I'm not sure you know how this works. Also, these systems are easy to refurbish internally which is why you still see numbnuts buying outdated refurbished Mac systems for insane prices since they work just fine and cosmetically look great.
I hate Apple but I just hope they fucking crush NVidia because those asshats have become far too complacent in the last 15 years, fuck em.
>>103210150
They're a tad bit late to the party, is the point I'm trying to make. They want developers to do extra work to support their OS when they neglected the entire gaming industry for basically the entire existence of Mac OS; hell, their entire fucking company. Not for any practical reasons, mind you, but because the CEO and later his successors were so far up their own asses they thought they were above gaming for whatever the fuck reason. They made their bed and they have to lie in it by playing catch-up
>>103210149I couldn't care less. Nobody uses a Mac for anything but video editing, graphic design, and watching youtube anyways.
>>103210170
>tons of abuse
>Yeah bro I watched thousands of hours of Netflix on this bad boy
>>103210172
They won't, because Apple will fumble the bag like they always do. They're too far up their own asses to actually make a computer that is appealing to the workhorse power user or gamer. As I said above, they treat gaming like it doesn't exist. It's one of the few areas where they will utterly fail, and it'll be entirely their own fault because of their laziness. (Their failing in VR doesn't really count because everyone fails in VR. By failing I mean failing to make it mainstream, a task some alien creature known as "the Zuck" keeps trying and failing to do.)
>>103210194I'm glad it's not been ruined by gaming. You'd see too much third party jeetware and other bullshit otherwise.
>>103210194
>As I said above, they treat gaming like it doesn't exist.
https://www.apple.com/apple-arcade/
>>103208944Cool. Still not buying it, but impressive.
>>103210217
That's THEIR in-house shit. I'm talking about caring enough about game development that gaming on a MacBook is actually an attractive option. Haven't you wondered why most if not all AAA games just aren't a thing on MacBooks? It's not because the MacBooks are incapable of running them. Apple just doesn't care enough to provide support for game development. The most you can get is shit like Minecraft working, or if it's on Steam, some gooner VN slop at best. It is 100% their own fucking fault. MacBooks are capable machines. They just held them back from being even more widely adopted for whatever the fuck reason. There is no good reason for them doing that. The higher ups were on crack or some shit and that's why they decided not to give developers proper game development support. No AAA developer is going to port their shit to Apple Arcade. There are a lot more PCs and a lot more Xboxes and a lot more PlayStations than their shitty Apple TVs
>>103208944
Impressive. Apple spends $30B per year on R&D. But what about the RAM? The RAM chips run really hot on RTX 30 and 40 series, right? How do they handle that? And who owns the IP for this, Apple or ARM? Will be interesting to see analyses of the die once it releases.
>>103210172
>become far too complacent in the last 15 years
They've done the exact opposite, tard. They've been so hypercompetitive that no one on the planet can compete unless they deploy shills to cheat benchmarks like >>103208944
>>103210306
Dude, literally everyone can compete with them. Just because they're the most popular brand in a particular hemisphere does not mean they dominate the entire sphere. Drop a pin on any town in the US and you'll find on average just as many Androids (particularly Samsung, any other brand isn't even worth talking about) as iPhones. The only area they really fall behind in heavily is the compute field because they didn't take it seriously. The M series chips don't even count because they're only super exceptional when they are running software that is specifically rewritten to run on the M series chips. Not saying that M series chips are bad in any way, but once you have to run something that requires translation (i.e. software that is written to be run on an x86 chip), that's when those amazing and out of this world benchmarks start to not happen as much, if at all. Windows ARM laptops have the same issue. They do REALLY well with battery life as long as you're only using browsers or Microsoft Office apps that all have dedicated ARM versions. Run anything that requires x86 and is computationally intensive and suddenly you hear the fans kick up again
>>103210335>t. jeet having an ESL meltdown
>>103210347>no argumentConcession accepted.
>>103210371>jeet doubles down on his ESL meltdown and declares victory, remaining ever ignorantClassic jeetery
>>103210293the ram is directly cooled by the same heatsink as the cpu
>>103210335>Drop a pin on any town in the US and you'll find on average just as many Androidsdrop a pin on any town in India and you will see someone shitting on the street
>>103208944Why is gaming still so dog shit on mac?
>>103210679>No arguments
>>103210699because Apple didn't start caring until recently
>>103210513
Wait, you're telling me that they're connecting the heatsink to the actual parts they want to cool down now?
That's some great progress.
>>103210744yes, jonny ive is gone
>>103208983
Even better: the Apple gorilla marketing team.
See my fren? No lawsuits when it's just strangers on the internet. Apple marketing team plays it safe when these gorilla marketers flood the internet with lies
>>103209373
>professional modellers use mac
lol
>>103212141>gorilla
wake me when 50 series
>>103208944woaaah a 600mm+ sq N3E SoC holds its own against a 380mm sq N5 GPU
>>103208944What's the point if you can't play games on it?
>>103212177
>>103208944I have no doubts it can run facebook as good as 4080 but that's not really what those GPUs are for.
>>103208944
Meanwhile, a friend of mine had to sell his M2 MacBook and go Windows, because it had gotten so damn slow it was impacting his work. Slow for no apparent reason.
>>103209373
>Apple makes workstations
LOL. LMAO even.
How much does applel pay you to shill here?
>>103208944That's nice, how much is this shit again? $3000? I can buy a 4090 plus the rest of a high-end PC and still have money left, and I can run real shit instead of the 5 or 6 apps available for mac.
>>103209864
>Apple fanboy running a jewtube channel lied to get views
Shocking, that has never happened before!
>>103212472Now try putting it in your backpack and traveling with it
>>103212472Now try getting laid when a chick goes home with you to find your RGB gaymer shit taking up half the room
Didn't they say the M3 was 4090 tier? Lmao, what a load of shit that was
The M3 Max had 92 billion transistors. M4 Max has even more, presumably. AD103 has 46 billion transistors. I suppose that's why they're cheaper.
>>103208944>crapplejeet currynigger shits the streeta tale as old as time
>>103210217
>$7/month to play Super Fruit Ninja, Hello Kitty Island Adventure, Solitaire, Spider Solitaire, Snake.io
PC sisters... our response?
>>103209210
>reddit spacing
>"Yeah, it's pretty crazy how Apple was able to pull this off."
>>103208944For AI training or just web browsing?
>>103209817
The only M4 that is fast enough for Blender is the Max; even the 20-core Pro is quite bad, and when rendering, the more RAM the better.
>>103212623Have you seen their ads?
>>103208944What the fuck is that zero font
>>103212540>t. Projecting virgin
>>103208944I CAN'T GAME
>>103209601Based.
And when it comes to AI training, how fast is the Mac mini m4 pro? Is there a benchmark for this type of task?
>>103208944
Retard didn't activate OptiX for Nvidia GPUs and that's how he got "close" results. Also, checking opendata.blender.org, the benchmark is full of shit too.
Apple false advertising reposted by "popular" tech sites once again
>>103216251Slow, unless you get the max amount of ram and test stuff that uses more ram than what the rival GPUs have.
>>103216292
I was under the impression that AI training was only feasible if your machine had CUDA or whatever the AMD equivalent of it is. Literally no one is even capable of training anything useful on a CPU-only setup. You can use small models, yes, but you sure as fuck aren't training anything on them
>>103216276Apple would never lie to us.
>>103216276
Not a problem for mac fanboys, they'll start posting how the M4 is only $499.99 without realizing the only M4 that can perform as well as a laptop 4080 is the Max with 40 cores version. Price starts at $3699
>>103216292
So I'm keeping my M1. I want a machine for training and not just running llamas
>>103216322
With the unified memory they can train large models as long as you have the RAM for it; it'll be faster than trying to do it on a GPU that doesn't have enough RAM, just not great. But Nvidia professional hardware is quite expensive and often hard to get.
>>103208944Still not buying a notch
>>103216379
I don't think you quite understand what CUDA cores are. Maybe I didn't mention them in the previous post. CUDA cores are like having a better engine that performs a specific thing better and faster. Having a bunch of RAM means fuck all if the chip itself is poorly optimized for the task. GPUs are really good at doing matrix multiplication. Laptop CPUs aren't. Think of it like comparing a 10-year-old Prius to an F-150. Most cars are built with roughly the same gas tank sizes and most of them are built to travel 400 to 500 miles tops. Having a similar size gas tank does not mean both are going to be able to do the same kinds of workloads. Do you get the analogy I'm trying to make? If "just add more RAM sticks" were a feasible solution then GPUs would not even exist. There would be no need for them
>>103216379
>>103216420
Not that anything we're talking about even matters anyway, since Apple execs have a mental illness preventing them from allowing their machines to actually be useful for things beyond specific niches. For all you know you could be 100% right and they would still find an excuse to make local AI training impossible.
>>103216359
>>103216322
Be aware that it's gonna be slow even with 4 Mac Studios with the biggest Max they have (currently the M2 Max); the only pro is that you can do it locally. It's better to rent a server with a bazillion CUDA cores.
>>103216441
Apparently you have to have a lot of money to enter this market, even taking the cloud into account. Today we could say we're in the 200,000 dollar range?
>>103216420
Ok, will try the analogy route too. Let's say your 4090 is a Ford Mustang and the M1 with 128GB RAM is a shitty old truck that can't go above 50 km/h. Now let's say you have to move all your stuff to a new house 2000 miles away. What happens when there's not enough space in your car to put your stuff is the same thing that happens when your GPU doesn't have enough RAM to train something big. The shitty truck will still be faster, but it's still just a shitty truck. If you want to do it fast, just hire a moving company that can put your stuff in a plane.
>>103216441
So why settle for that in the first place instead of just using a Windows or Linux machine that can use an Nvidia (hell, maybe even AMD) GPU?
>>103209373
>Apple makes workstations
Delusion is strong in this one
>>103216578Then what are these?
>>103216538
Your analogy still does not work. The RAM means fuck all if the efficiency is bad. Think of it like comparing a minivan to a pickup truck that has a lot of torque. A Ford F-150 can haul an entire semi truck WITH the trailer attached. A minivan isn't doing that shit. Saying "more RAM" = better is like saying having a bigger gas tank makes your car engine more powerful. That's just not how they work. GPUs are tailor-made for parallel processing, ESPECIALLY Nvidia GPUs with CUDA cores. They completely shit on even their AMD counterparts. You can literally Google search a YouTube video to explain this better than I can, but a laptop simply isn't comparable to a GPU. As I said earlier, if simply having more RAM sticks solved the problem, we would never have any need or demand for GPUs in the first place.
>>103216511
Depends on what you want to train. I don't train llms but do smaller ml stuff so I don't need massive hardware; an A6000 (48GB) goes for ~$0.50/hour and an A100 (80GB) for ~$1.50/hour
>>103216596
Pet projects. No one actually needs these shits. People on YouTube who have bought them claim they kind of suck compared to Windows setups at comparable price points. Apple does not actually give a shit about making powerful shit that competes with their counterparts. They just want to grift rich fags
>>103216511
I think you're confusing training with fine-tuning. Training an entire model from scratch takes a lot of money. Fine-tuning can be done on a shitty consumer grade 8 GB GPU depending on what you're fine-tuning. Image models or object detection models are probably the easiest things to fine-tune on consumer grade GPUs (as far as I can see.)
>>103216251
training on what datasets?
i did a facial recognition model with 100 pictures of a lo...vely person in coreml in a few minutes with 95% accuracy on a 16gb m1. you'd have to be more specific
>>103216623What kind of model were you using? I've tried training object detection models a month or two ago using YOLO models. I didn't know you could actually train anything on the M1s.
>>103216613>rich fagsyour poverty must be unvearable
>>103216596A desktop?
>>103216571
cause having 128GB RAM in a Mac is still cheaper than a proper Nvidia GPU; have you seen how much a workstation with a couple A100s costs?
Tbh I recommend using the cloud cause training on Macs is still slow as hell, or go the 3090 route and put 4 with NVLink; anything above the 3090 will not have NVLink. I've also heard that there's a hack to enable P2P on 4090s but I haven't looked into it.
>>103216645
just the basic image recognition template in xcode-coreml (they had a bunch already). i gave it the pics, trained, checked against a random sample. in the end it spat out a 30kB-ish model that got it right 95% of the time for my girl. very easy process
>>103216605
>>103216621
>>103216623
Thanks Anons, now I get it. I'm more interested in creating models from scratch (even if they are crude), both as a hobby and for work, if such a thing as work exists in the future.
>>103208944Mods please remove these shill threads
>>103216711
you can train small models on any M series Mac. don't listen to the imbeciles whose only grief is "can it gayme?!" install xcode, install the coreml extensions and it's click click click from there
>>103216599
You don't get it: as long as you have enough space (RAM), sure, the Mustang will fucking crush that shitty truck, but if it doesn't have enough RAM you will end up having to travel back and forth 100 times to move your stuff and it'll take a lot of time.
>Ford F-150 can haul an entire semi truck
Yeah well I tried putting a couple 32GB RAM sticks on top of my GPU and it didn't work out, but thanks for the brilliant idea.
>>103216711
I don't think that's possible with our current tech. Think of fine-tuning as like a microwave and straight-up training as like a giant oven. If I want to reheat some mac and cheese or some barbecue ribs I can do that easy, but you can't really cook anything properly in a microwave or bake a cake in one, right? It can't generate the kind of heat a proper oven could. They don't even generate heat via the same methods. (Sorry for even more shitty analogies.) There's a reason these AI companies want to beg nations for billions of dollars. Making what are essentially minor tweaks to a model (that's what fine-tuning is) takes practically no resources compared to straight-up training. t. fine-tuning is my main hobby
https://civitai.com/user/AI_Art_Factory
The closest thing we've gotten to models that were straight up trained from scratch is whatever Mistral has, but I think they had access to millions of dollars. People aren't doing that shit in their garage unless there are breakthroughs in either chip or data processing efficiency, or someone figures out how to deal a blow to the entire training process in general.
>>103216763
The RAM doesn't matter if the chip sucks at doing the particular task. CUDA cores are tailor-made for matrix multiplication and other tasks like that; CPUs are not. Give it 200 GB of RAM and you still have inferior efficiency compared to a dedicated GPU. (I say all of this assuming Apple would even allow a fucking laptop of theirs to have 128 gigabytes of RAM. They can, but they won't, because that would actually benefit the consumer.)
>Yeah well I tried putting a couple 32gb ram sticks on top of my gpu
And you thought that would help, why? The bulk of the processing is done on the GPU, and last time I checked you can't just upgrade the RAM capacity of GPUs like you can with a motherboard (again, that benefits the consumer and that's bad for business :))))). You can literally test what I'm telling you right now (cont)
(Cont)
>>103216763
Take a 16 GB Nvidia GPU, for example, and then place it next to a MacBook, iMac, or a Windows machine with 32 gigs of RAM but no GPU. The Nvidia setup is going to curb stomp the other machines because, again, the GPU is TAILOR MADE for those specific kinds of tasks. You straight up cannot train SD models without a GPU unless you want to wait literal days or weeks, then wait several minutes or even hours just to generate a few images.
>>103216815
>>103216763
"Just add more RAM :)" is not the answer, or, as I said for the third or fourth time, GPUs would not exist. Why do you think they exist?
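The "GPUs exist for a reason" argument is about arithmetic throughput, not memory capacity. A tiny illustrative sketch of why one big matrix multiply favors massively parallel hardware; the throughput figures here are rough assumptions for the sake of the arithmetic, not benchmarks of any real chip:

```python
def matmul_flops(m, n, k):
    """An (m x k) @ (k x n) matmul does m*n*k multiply-adds = 2*m*n*k FLOPs."""
    return 2 * m * n * k

flops = matmul_flops(4096, 4096, 4096)   # one transformer-layer-sized matmul
cpu_tflops, gpu_tflops = 1.0, 80.0       # assumed sustained throughputs
print(f"{flops / 1e9:.0f} GFLOPs -> "
      f"CPU ~{flops / (cpu_tflops * 1e12) * 1e3:.1f} ms, "
      f"GPU ~{flops / (gpu_tflops * 1e12) * 1e3:.2f} ms")
```

Training repeats this kind of multiply billions of times, so an 80x per-matmul gap compounds into the days-vs-weeks difference argued about above.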
>>103216831
On an M4 with unified memory, let's say 48GB, against a 16GB 4080, you will probably wait more on the 4080 if your dataset is 30-40GB. RAM is extremely important for training; as soon as you don't have enough, everything slows down to a crawl.
>>103209210>laptop gpu
worth getting a m3 max on a costco clearance sale (when it happens) or should I just wait for m4 pro deals?
>>103216984
the m4 pro gpu matches the m3 max so in perf they're about the same. however the m3 max has more bandwidth, and for llms that matters. the m4 pro gpu will be faster though. and they added SVE to the M4. if your tasks are more cpu bound, wait for m4 pro; if they're more gpu/ram bound, m3 max. use the mlx version of models for a free ~20-25% uplift
>>103217025>the m4 pro Cpu* will be faster though
>>103216934
>if your data set is 30-40gb.
Most fine-tuning tasks don't require datasets anywhere near that large, especially if you're just fine-tuning a LoRA or an object detection model like YOLO. I don't know what the hell type of models or datasets you're using, but 40 GB sounds excessive.
>Ram is extremely important to train
So is the type of RAM. You can't just use any RAM stick. It needs to be good at actually doing the thing you want it to do. The built-in RAM in your M1 is NOT ideal for training, well, literally anything. Even if it can, it would make more sense to just use a regular GPU.
>>103216841
>Why do you think they exist?
Why do you think professional and AI cards have more RAM and Nvidia doesn't want to put more RAM on the "gaming" ones? Same for why they removed NVLink on all but professional cards. Also, why the fuck bother with Magnum IO to move data at more than 1TB/s? If it doesn't matter and everything just works on 16GB VRAM, then no problem, every company is just doing it wrong.
>>103209912>I am consooomer believerWow. Shilling is so easy to spot. How much did you get paid for this You? Why are you fagples marketing so hard on /g/?
>>103217058
>Why do you think professional and AI cards
You mean the GPUs? There are multiple kinds of models and configurations, from as low as a gigabyte to as large as 40. An 8 GB Nvidia GPU will curb stomp your 8 GB M series MacBook at literally any task unless said task or software is custom written for the M1 chips. What, did you think every single GPU came standard with like 80 GB of RAM? They don't HAVE to have a fuckload of RAM to do a task. Think of the regular RAM sticks in your laptop as C grade RAM and the GPU RAM as A+ or some shit like that. Compared to the GPU, the RAM in your laptop sucks at training, assuming it could even train at all. Where do you think the demand for GPUs is coming from? Yes, the AI craze is a bubble, but it's also because Nvidia GPUs are the only ones that can actually do the things the companies want to do in a cost-effective manner. It's more efficient. It's more power efficient if you opt for the datacenter grade GPUs that can run 16 GB of RAM at only 70 watts peak or less. (See the Nvidia Tesla T4 for an example.)
>and nvidia doesn't want to put more ram on the "gaming" ones?
Gamers don't need more RAM. Are you trying to tell me you need 24 gigs of VRAM to play Fortnite? I know people with 8 gigabyte GPU cards and they run the games fine. All the AAA ones. All at 60 FPS. All good graphics.
>>103217025
thanks anon, I'm interested in trying local LLMs so I'll probably go with the m3 max
not sure if my tasks are CPU bound enough + compiled to make use of the new SVE instructions
>>103217051
Have you tried to TRAIN a 1.5B model? It gets well over 32GB. You can't compare it with fine-tuning, and as soon as you go above what your GPU can hold, you are fucked.
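For context on the "well over 32GB" figure: a common rule of thumb for full training with Adam is weights + gradients + the optimizer's two moment buffers, all in fp32, with activations on top. A minimal sketch of that arithmetic (the breakdown is the usual textbook one, not a measurement of any specific framework):

```python
def training_weight_memory(params, dtype_bytes=4):
    """Bytes for weights + gradients + Adam's two moment buffers (all fp32),
    excluding activations, which add a large batch-dependent chunk on top."""
    return params * dtype_bytes * 4   # weights, grads, m, v

gb = training_weight_memory(1.5e9) / 1e9
print(f"1.5B params -> {gb:.0f} GB before activations")
```

That's 24 GB before a single activation is stored, which is why a 1.5B full-training run blows past a 16-24GB consumer card once batch size is factored in, while fine-tuning methods that freeze most weights stay far smaller.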
I remember my GTX490, my eyes watered when I paid a few hundred dollars for it. Now people are paying thousands for their gaymer cards as if it's normal. What a crazy world.
>>103217166
5090 will be starting at $2500, 36GB, 650W, 3.5 slots
>>103217132
In that case just use a 40 GB or 80 GB GPU card? Again, that will still curb stomp trying to use a CPU-only setup for reasons I'm sure are alien to you...
https://www.nvidia.com/en-us/data-center/l4/
Would you look at that, this site has a neat little graph showing you what I'm trying to explain. No one in their right mind would even dream of training an LLM on only CPUs. That just isn't possible or practical. You need entire server racks of CPUs, possibly even entire fucking rooms, running 24 hours a day for weeks on end. You seem to imply you think you can do that shit on a singular computer. I don't know WHY you think you can do that, but you just can't
>>103208944
>In our own testing of the M4 Pro in the Cinebench R24 benchmark, the M4 Pro landed in between the RTX 4060 and RTX 4070 mobile GPUs.
how could a gpu land next to a fucking cpu benchmark
are they retarded?
>>103212275lol
>>103209601stop rendering porn.
>>103217225
>>103208944
>>103209864
>>103216276
Newfag techlet here. They're implying the CPUs are comparable to GPUs, right? Even I know that's a ridiculous claim, but I think there is confusion on the part of whoever wrote the article. How are people being fooled into thinking a CPU is on the level of a GPU?
>>103217225
M4 Pro is an SoC with a gpu included. Not that hard, retard anon.
>>103208944now let it run for an hour and send me the temps if it didn't shut off from overheating
>>103216596
fake, 2019 was the last real workstation from them
I wish I owned the 2019 when it came out
>base m2 pro mbp
>qwen-2.5.1-mlx-7b-4bit
>40t/s
I literally don't give a fuck. This shit is magic.
forgot pic
>>103208944The hardware is really nice. But who are these machines for?
>>103218245
How much storage and VRAM does this use? I might test it out on my machine
>>103218245storage 4GB, VRAM between 4 and 6GB.
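Those numbers line up with simple arithmetic: a ~7.6B-parameter model (the approximate size of Qwen2.5-7B — an assumption here, not a quoted spec) at 4 bits per weight:

```python
# Weight footprint of a quantized model: params * bits / 8 bytes.
# The gap up to the observed 4-6 GB is KV cache plus runtime overhead.
def quantized_gb(params: float, bits: int) -> float:
    return params * bits / 8 / 1e9

print(f"{quantized_gb(7.6e9, 4):.1f} GB of weights")  # prints "3.8 GB of weights"
```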
>>103218261
A lot less than I expected :O. When you prompt it for something, how does it compare to something like ChatGPT? How long do responses take?
>>103218267
it's practically instant, ~1s until first token for a request like this, 35-40t/s afterwards.
>"Please write a haskell function that monitors the contents of a log file as it is being written into, evaluates each new line as a string and, in case a dictionary (saved as variable "b") is found in that string, tries to match it against a predefined dictionary (saved as variable "a"). If variable "a" is a subset of "b", print "success""
I notice that the more esoteric the language, the more VRAM it uses. But if I use python or c it stays in the 4-6GB VRAM region
>>103218319You're the dude with the M2 laptop right? How much RAM does your machine have?
>>103209601
>Windows 11
Yeah, nice Apple Arm machine you got there, buddy.
>>10321837116, it's a base m2pro macbook
>>103218218
anyone who doesn't prioritize games that aren't native to macos and who doesn't cpu-encode tranime.
video, graphics, compilation, llms are all top tier.
>>103217207Don't listen to this anon. Next gen is dropping soon so wait for gb200nvl72.
>>103218605so what will be the starting price for that? 500,000+?
>>103218636
It's per GPU rack (72 GPU minimum order), so I think 5 million. H100 boxes were 300k each for 8 now-inferior GPUs in a full system. Of course, one scalable SuperPod is 8 GPU racks (144 gpus) plus 12 other racks providing the supporting management, network and storage hardware. Plus a CDU per row for liquid cooling. Plus a rack per row for environmental monitoring. Obviously.
>>103217327
It's you who's retarded. Cinebench is a cpu benchmark, it can't measure GPU performance.
In a perfect world, you'd be able to run Mac OS on any x86 CPU and be able to run any OS on Apple silicon.
>>103218793in a perfect world your currynigger shit OS and your currynigger silicon wouldnt exist
>>103218100
>7b 4bit @ 40t/s
>35% as fast as a flagship desktop gpu using 20x less power
Bruh
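Taking that post's figures at face value (both are quoted claims, not measurements), the implied efficiency ratio works out as:

```python
# Perf-per-watt implied by the quoted numbers: 35% of the throughput
# at 1/20th of the power is 0.35 * 20 = 7x the tokens per joule.
speed_ratio = 0.35    # claimed Mac throughput relative to the desktop GPU
power_ratio = 1 / 20  # claimed Mac power draw relative to the desktop GPU
print(f"{speed_ratio / power_ratio:.0f}x perf per watt")  # prints "7x perf per watt"
```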
>>103218793no, the walled garden keeps jeets like >>103218826 out. and that's a good thing. jobs was right all along.
>>103218866The higher the entry barrier, the cleaner the environment.
is m4 good for gaymen?
>>103218826>currynigger shit OSSo, does this refer to Windows, Linux or Mac nowadays? I can't keep up with you retards.
>>103218826That silicon claps Incel and gAyMD lmao
>>103219046the base m4? no
>>103208944Still 25fps in gta5 kek.
>$3500 product is worse than $1000 product
Not really sure Apple should be boasting about this.
Now compare it to an EPYC or Threadripper, which is similar in price.
>>103219305
a 4080 build with a similar cpu is $3k, not mobile, sounds like a vacuum cleaner and uses 1kw of power
cope
>>103219329
>4080 build with a similar cpu is $3k
Post the build.
>not mobile
good
>sounds like a vacuum cleaner
the opposite, Macs are the ones terrible for noise and throttling
>and uses 1kw of power
also untrue
>>103218636
NTA. Unfortunately, we don't have the money to compete with the big corporations, and I don't think there's even an AI training setup that scales down to less insane machines. The starting price in this market is 200,000 dollars.
>>103208944
no proof
>blender
is a thing in the kitchen
If you want to prove it, run a game, let's see what it can do.
>>103208944
>even comes close to matching the RTX 4080 Super
A laptop one, yeah. A real 4070 Ti wipes the floor with it.
So, impressive laptop chip, but a non-starter for real desktop workstation performance.
>>103216596
Desktops with laptop SoCs. And that's fine for the Mac Mini. But the Mac Pro is way too underpowered (and overpriced) to be considered a real workstation. Apple silicon is pretty good, but not good enough for a workstation. If they had a multi-SoC setup, then maybe it could be worth it. But just one? Nope.
>>103220427m4 max is a laptop soc
>>103208944
>35fps at 1280p
Whoah
>>103208944
>The gap between the different chips is evident in the pricing as well. A MacBook Pro with the base M4 chip costs $1,599, while upgrading to an M4 Max configuration will set you back at least $3,199
LMAO
>>103219329Complete 4080S build with a 9950X is $1900USD you lying fuck.
>>103216276
>source was some rando with under 10K subs
>clearly trying to justify his many-thousand-dollar new mac purchase
>is actually a fucking retard that hasn't been using his Nvidia GPU properly
AHAHAHAHAHAHAHAHAHA
>>103222507Is it, pajeet?
>>103208944Macs overheat at the slightest workload and then the speakers break and can't be replaced
>>103222739cope
>>103208944round trip latency :^)#pcmasterrace
>>103209629
Why is it ok when benchmarks are optimized for x86 or mainstream GPUs, but not ok when they're optimized for Apple silicon? Ideally one should use a benchmark that doesn't handicap either side, to properly measure each chip's potential.
It's technically impressive for a laptop to be this powerful and efficient but for the price of an M4 Max MacBook Pro you can build a PC with a 4090. These comparisons are stupid anyway, both have different use cases.
>>103219289https://www.youtube.com/watch?v=KxPu3Pt4xEs&t=300
>>103223347
>but for the price of an M4 Max MacBook Pro you can build a PC with a 4090
false. also only 1kW of power, not mobile, screen not included, dries pussies.
>>103223409>$5000 25fps game console
>>103223469the M4 mini is $499 retard
>>103223482>$500 25fps game console
>>103223557
minimum settings, 200ms frametime, $500 25fps game console to be exact
crapple currynigger shit
not even once
>>103223557
>doubles as a cute computer to get your day to day done.
Seems pretty fucking based
>>103223562this
>>103223565>crapplejeet street shitters cant even afford a computer and have to play pretend on their fruit toy console
>>103223557You didn't even open the link did you? lol
>>103208944>$4000ummm.....
>>103223638$3300
>>103209612Holy cope u shilly slut
>>103209373>apple makes workstations
>>103209612>appo products get you pussy
>>103210513the same one that melts the keyboard? kek
>>103216657
It's not poverty, anon. Why would I spend 8000 dollars on a macbook that overheats and has a sloppy OS that gives me no chance to get work done or even play games, when for not 8000 but 4000 I can build an insane PC that will roll nuts all over that overpriced laptop? The fact that this mini PC from apple costs 600 usd means nothing to me, since the bare minimum of storage today is 2TB; let's be honest, you'll burn through even that fast, and apple charges a kidney for it. Also RAM: 16GB is just not enough, my phone has that much RAM. The real minimum for a computer today, I'd say, is 32GB, for which, again, apple charges insane amounts of money. It's just a bad deal. Sure, if you do nothing but basic web browsing on it, it's good for you.
>>103223941Rich people have all that shit + apple products anon
>>103209210
>>103208944
>laptop gpu
>>103224261
>8000
you're unemployed too
>>103224261
AyyMd lost
Incelaviv lost
lintroons lost
Torvalds lost
Stalmann lost
X86 lost
Windows lost
day of the linux desktop: never
Total Tim victory.
>>103220238
Blender without using OptiX, because using it would make the m4 max perform like a laptop 4080 at best, waaay below a standard 4080.
>>103224594
yeah
now open wallet, bish
oh, you thought you deserved better than a chromebook? how cute and molestable
>>103223320
Then why didn't they use OptiX on OP's benchmark for the nvidia card? It's also available in blender, but they decided not to use it.
>>103222379Yes and they have no desktop chip, so it has to do double duty there too, resulting in no high spec Macs existing.