/g/ - Technology






File: 1714013525704519.jpg (152 KB, 1102x1268)
the M4 Max MacBook Pro wipes the floor with the Nvidia RTX 3080 — and even comes close to matching the RTX 4080 Super.

https://www.digitaltrends.com/computing/m4-max-chip-beats-nvidia-gpu/#:~:text=A%20new%20series%20of%20tests,matching%20the%20RTX%204080%20Super.
>>
>According to Apple marketing dept quoting this one weird microbenchmark
amirite?
>>
Yeah, it's pretty crazy how Apple was able to pull this off.

On the Blender benchmarks website, the M4 Max is the ONLY non-Nvidia GPU to make it into the top 25 GPUs benchmarked for GPU rendering. They've just disrupted a segment Nvidia has had near-total dominance over for years. It's beaten out the current-gen RTX 4070 and the RTX A6000 (Nvidia's highest-end workstation GPU last gen), and it's very close to beating the 3090. It's also beaten out AMD's most powerful Radeon GPUs, the 7900 XTX and the workstation Radeon Pro W7900.

https://opendata.blender.org/benchmarks/query/?compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&group_by=device_name&blender_version=4.2.0

Imagine what the M4 Ultra will look like on the Mac Studio and the Mac Pro.
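If anyone wants to pull those numbers themselves instead of eyeballing the chart, the Open Data link above is just a handful of repeated query parameters. A minimal sketch that rebuilds the same query string with the standard library (endpoint and parameter names copied verbatim from the URL above):

```python
from urllib.parse import urlencode

BASE = "https://opendata.blender.org/benchmarks/query/"

# One repeated compute_type entry per render backend, exactly as in the
# link above, grouped by device name and pinned to Blender 4.2.0.
backends = ("OPTIX", "CUDA", "HIP", "METAL", "ONEAPI")
params = [("compute_type", b) for b in backends]
params += [("group_by", "device_name"), ("blender_version", "4.2.0")]

url = f"{BASE}?{urlencode(params)}"
print(url)
```

Tweak `backends` or the `blender_version` pin and open the resulting URL in a browser to compare devices yourself.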
>>
How long can it maintain that performance before needing to throttle though?
>>
>>103209210
When you look at how much higher the 4080 Super is on the chart than the M4 Max, though, it makes the two renders in the video from OP look more like outliers.
>>
>>103209210
The rtx scores are strangely low. Someone must have flooded the scores with openCL or something
>>
>>103208944
>when intel/nvidia/amd releases new product
>gaming benchmarks!
>when apple releases new product
>blender!
Suddenly everyone here is a 3d artist.
>>
>>103209302
This is from Blender, so the workload is probably going to reflect rendering rather than gaming.

And given that Apple makes workstations, not gaming machines, it makes sense that their computers would show up more in lists of top hardware for work, rather than gaming.

I know this is alien to /g/ since /g/ is full of people who don't have real jobs, but professional artists, modellers, composers/producers, etc., all mainly use Apple computers.
>>
File: 1717979874559763.gif (1.32 MB, 200x200)
>>103209373
>Apple makes workstations
>>
It's half of a desktop 4090 at 10x less gpu power draw.
The M4 ultra will match the best consumer gpu available at 60-70w.
>>
>>103209403
So it's half now? So it's at 60W? Why do you keep changing the story? What actual information do you have available?
>>
>>103209402
Behold: one of /g/'s unemployed LARPers.
>>
>>103209419
Yes because we all know you need a "workstation" for every job out there.
>>
>>103209415
it always was half.
the blender benchmark website and youtube tests with asitop and mx power widget show how much power the 40 gpu cores in the m4 max draw at 100%.
>>
>>103209415
half in this task, specifically.
in others, which the Puget benchmarks reflect, the m4 max smokes most desktop workstations.
>>
>>103209483
And many, many more who buy them to read their e-mails and watch gay porn. It's just a way of thinking.
>>
>>103209373
No, it's not from blender, it's just a score tracker from user submissions. The 4090 should score around 13k with optix but here it's 10.8k
>>
Cool, but can I make full HD AI titties and cunny with a base mac mini?
>>
>>103209520
still ~11K
>>
>>103209403
It's half of a desktop 4090 at double the price. Why am I supposed to buy this again?
>>
>>103209334
Ironic considering Apple products are horrific for productivity.
>>
File: blendies.png (81 KB, 831x338)
>>103209334
Hello there.
>>
>>103209582
A macbook with M4 max and 48GB of V/RAM is $3200.
A 4090 24GB by itself is $2000 and you still need a computer around it.

In the end it's cheaper, portable, can use larger models, draws far less power and won't dry pussies.
>>
>>103209373
>>103209419
because you're employed as a shill? that's worse
yes, we know you've successfully gaslit many professionals
>>
And yet no games to be had so this power goes unused. Apple silicon can be as good as it wants in benchmarks but what good is it if it has no use
>>
>>103209334
apple contributed specifically to blender not long ago
they probably optimized favoring their shillbooks
>>
>>103209616
>And yet no games to be had so this power goes unused.

GPUs are used for a lot more than just playing games now.
>>
>>103209334
It's the same thing every time. This board shits on videogames all day, but only cares about gaming benchmarks.
They shit on apple all day, but spam the "real world" benchmarks all day as if they spend 24/7 rendering or unzipping shit.
>>
>>103209373
these are literally raytraced blender benchmarks

steve jobs would have fired you
>>
>>103209612
>A 4090 24GB by itself is $2000

Unless you were lucky enough to get it from Nvidia's official store at $1599 when it was in stock.
>>
>>103209550
>apple floods the submissions with spoofed 4090 mobile GPUs to make their shit look better
>>
>>103209550
>what is a median

>what is sample size

lemme know when you fags can do an on stage live benchmark battle like apple used to do in powerpc days
>>
>>103209664
there's no result higher than 11k on optix, regardless of your imagined spoofs.
>>103209662
just give a total for your build. i bet it's still close to $3k.
>>
And here's the M4 Max GPU balls to the wall running Whisper via MLX - 37W peak

https://x.com/ivanfioravanti/status/1856118273080754408
>>
>>103208944
Who will be dumb enough to use blender on a mac when they ask $1000 for 16gb of ram?
>>
>>103209661
Case in point: it's a rendering workload. It's not focused on shitting out swapchain images as fast as possible.
>>
Lmao, it seems the faggot didn't even turn on the denoiser.

https://www.youtube.com/watch?v=0bZO1gbAc6Y
>>
>>103209758
the 16gb M4 base is $499.
>>
>>103208944
tldw, he cheated. CUDA alone can't even fully utilize the RT and AI hardware, while Apple chips on Metal do. Optix is anywhere from 1.3x to like 1.8x faster than CUDA. He also used a 10900k as the CPU, which will also bring the speed down a little
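For what it's worth, the backend and denoiser choices the anon is talking about are both scriptable, so the comparison is easy to reproduce either way. A configuration sketch using Blender's Python API (this runs inside Blender itself, not standalone, and the OptiX options only apply on a machine with an Nvidia GPU):

```python
# Run inside Blender (its bundled Python), not as a standalone script.
import bpy

cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "OPTIX"   # vs "CUDA" to reproduce the gap
cycles_prefs.get_devices()                   # refresh the device list
for dev in cycles_prefs.devices:
    dev.use = True                           # enable every detected device

scene = bpy.context.scene
scene.cycles.device = "GPU"
scene.cycles.use_denoising = True            # the denoiser the video left off
scene.cycles.denoiser = "OPTIX"              # OptiX denoiser on Nvidia hardware
```

Rendering the same scene once with `"CUDA"` and once with `"OPTIX"` makes the 1.3x-1.8x spread easy to check on your own hardware.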
>>
>>103209373
>This is from Blender
>on a mac
Literally not a single soul on the planet earth uses blender on a fucking mac.
>>
It's an impressive machine and I have become a believer after getting an M4 Pro on release day, but it's one thing to have a benchmark and another thing entirely to actually be optimized for what Nvidia can do. tensor-metal and appleCOPEML among others can't do everything; I'm still running dnn/cnn and computer vision workloads mostly on CPU, so I need to ssh into the actual big boy PCs with A100s at work to get anything other than some quick off-the-cuff shit done.

>>103209402
It's true though, I don't know what to tell you.
>>
>>103209912
It's absolutely not true, girl laptops aren't workstations.
>>
>>103209912
nvidia knows something that we don't which is why they're making their own ARM desktop cpus.
With Apple PCP launched I'm assuming Apple is going after nvidia's GPU compute business, which they have the silicon for, but so far, lack the software ecosystem.

They've launched GPTK and MLX too so they're digging in.
>>
>>103209948
Is it even worth it for Apple to try that when their shit is already so locked down? That's going to bite them in the ass one day. They treat even the thought of opening up their shit to gaming like it is less than shit. Why are they surprised they are behind? If they simply stop being retarded this wouldn't be an issue for them now. There's literally no good reason for them to have their shit so locked down
>>
>>103209948
>nvidia knows something that we don't which is why they're making their own ARM desktop cpus.
No, they know what we know too. The Qualcomm-Windows for Arm exclusivity deal expires by the end of this year.
>>
>>103209961
if a paywall keeps out the pajeets it'll pay off longterm. you don't want a piece of software made open source by white and/or asian engineers, then made accessible to anyone by white and/or asian engineers only to have millions of jeets using it to create slop and flood the job market with "their" knowledge.
the monetary caste system is based and should be enforced in computing too.
>>
>>103208944
In a fucking laptop too wtf
>>
>>103209885
Why not? Blender fully supports Metal.
>>
>>103209642
so, gooning?
>>
>>103209961
if steve jobs was still alive they'd have probably broken into the pc ecosystem by now, but instead we have these people who grew with and into the apple cult
>>
>>103210041
Because these programs don't exist in a vacuum. You're using several other programs for converting files, managing files multiple people are accessing, (specifically for a studio), using third party plugins or proprietary plugins in studio, and half dozen other pieces of software that just don't run on macOS. You're using windows or linux.

Third party plugins are rarely made for macOS and necessary for most dev flows.
>>
>>103209948
Nvidia already has some monstrous support for their hardware, it's hard to justify not using their Jetson equipment.

>>103209961
It didn't bite them in the ass, and their ARM processor line only strengthened their position. Gaming doesn't matter, not gonna bother getting into it in this thread just go to /v/ if that's what you want to talk about.

>>103210004
pretty great take. Jeets destroyed windows as a work environment.

>>103210113
I'm sure this is a really great opinion but I'm not familiar at all with your use case, so stuff like this doesn't affect me personally.
>>
>>103208944
I'm sure it runs at an ice cold 200° Celsius too because Apple wants you to buy a new one next year.
>>
>>103210142
That's cope dude
>>
>>103209961
>They treat even the thought of opening up their shit to gaming like it is less than shit.
Yeah about that, they appear to be changing their minds, i.e. they managed to convince CD Projekt to write an entire Metal renderer for Cyberpunk 2077 and do a native MacOS build.
>>
>>103209210
>tied neck-and-neck with the 4080 laptop GPU

You want to know something interesting? The laptop 4080 only has 12GB VRAM, that's it. Once that runs out, the performance will tank like a rock. The M4 Max's GPU has access to virtually all of the MacBook Pro's unified memory. The MBP supports up to 128GB.
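The back-of-the-envelope math here is simple: holding weights takes bits/8 bytes per parameter, so whether a model fits in 12 GB of VRAM or 128 GB of unified memory is a one-liner. A sketch (the 70B/4-bit model is a hypothetical example, not a result from the thread):

```python
def weight_gb(params_billion: float, bits: int) -> float:
    """GB needed just to hold a model's weights at a given precision."""
    return params_billion * 1e9 * bits / 8 / 1024**3

# Hypothetical example: a 70B-parameter model quantized to 4 bits.
need = weight_gb(70, 4)
print(f"{need:.1f} GB of weights")            # ~32.6 GB
print("fits in 12 GB VRAM:", need <= 12)      # False
print("fits in 128 GB unified:", need <= 128) # True
```

Activations and KV caches add more on top, which is why the gap only widens once a workload actually exceeds the 12 GB.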
>>
>>103210142
M1 systems are still operational after tons of abuse, I'm not sure you know how this works. Also, these systems are easy to refurbish internally which is why you still see numbnuts buying outdated refurbished Mac systems for insane prices since they work just fine and cosmetically look great.
>>
I hate Apple but I just hope they fucking crush NVidia because those asshats have become far too complacent in the last 15 years, fuck em.
>>
>>103210150
They're a tad bit late to the party, is the point I'm trying to make. They want developers to do extra work to support their OS when they neglected the entire gaming industry for basically the entire existence of macOS, hell, the entire existence of their fucking company. Not for any practical reasons, mind you, but because the CEO and later his successors were so far up their own asses they thought they were above gaming for whatever the fuck reason. They made their bed and they have to lie in it by playing catch-up
>>
>>103210149
I couldn't care less. Nobody uses a Mac for anything but video editing, graphic design, and watching youtube anyways.
>>
>>103210170
>tons of abuse
>Yeah bro I watched thousands of hours of Netflix on this bad boy
>>
>>103210172
They won't, because Apple will fumble the bag like they always do. They're too far up their own ass in their confidence to actually make a computer that is appealing to the workhorse power user or gamer. As I said above, they treat gaming like it didn't exist. It's one of the few areas where they will utterly fail, and it'll be entirely their own fault because of their laziness.

(Their failing in VR doesn't really count because everyone fails in VR. By failing I mean failing to make it mainstream, a task some alien creature known as "the Zuck" keeps trying and failing to do)
>>
>>103210194
I'm glad it's not been ruined by gaming. You'd see too much third party jeetware and other bullshit otherwise.
>>
>>103210194
>As I said above, they treat gaming like it didn't exist.

https://www.apple.com/apple-arcade/
>>
>>103208944
Cool. Still not buying it, but impressive.
>>
>>103210217
That's THEIR in-house shit. I'm talking about caring enough about game development that gaming on a MacBook is actually an attractive option. Haven't you wondered why most if not all AAA games just aren't a thing on MacBooks? It's not because the MacBooks are incapable of running them. Apple just doesn't care enough to provide support for game development. The most you can get is shit like Minecraft working, or if it's on Steam, some gooner VN slop at best. It is 100% their own fucking fault. MacBooks are capable machines. Apple just held them back from being even more widely adopted for whatever the fuck reason. There is no good reason for doing that. The higher-ups were on crack or some shit when they decided not to give developers proper game development support. No AAA developer is going to port their shit to Apple Arcade. There are a lot more PCs and a lot more Xboxes and a lot more PlayStations than their shitty Apple TVs
>>
>>103208944
Impressive. Apple spends $30B per year on R&D. But what about the RAM? The RAM chips run really hot on the RTX 30 and 40 series, right? How do they handle that? And who owns the IP for this, Apple or ARM? Will be interesting to see analyses of the die once it releases.
>>
>>103210172
>become far too complacent in the last 15 years
They've done the exact opposite, tard. They've been so hypercompetitive that no one on the planet can compete unless they deploy shills to cheat benchmarks like >>103208944
>>
>>103210306
Dude, literally everyone can compete with them. Just because they're the most popular brand in a particular hemisphere does not mean they dominate the entire sphere. Drop a pin on any town in the US and you'll find on average just as many Androids (particularly Samsung; any other brand isn't even worth talking about) as iPhones. The only area they really fall behind in heavily is the compute field, because they didn't take it seriously. The M series chips don't even count, because they're only super exceptional when they are running software that is specifically rewritten to run on M series chips. Not saying that M series chips are bad in any way, but once you have to run something that requires emulation (i.e. software that is written to run on an x86 chip), that's when those amazing and out-of-this-world benchmarks stop happening as much, if at all. Windows ARM laptops have the same issue. They do REALLY well with battery life as long as you're only using browsers or Microsoft Office apps that all have dedicated ARM versions. Run anything that requires x86 and is computationally intensive and suddenly you hear the fans kick up again
>>
>>103210335
>t. jeet having an ESL meltdown
>>
>>103210347
>no argument
Concession accepted.
>>
>>103210371
>jeet doubles down on his ESL meltdown and declares victory, remaining ever ignorant
Classic jeetery
>>
>>103210293
the ram is directly cooled by the same heatsink as the cpu
>>
>>103210335
>Drop a pin on any town in the US and you'll find on average just as many Androids
drop a pin on any town in India and you will see someone shitting on the street
>>
>>103208944
Why is gaming still so dog shit on mac?
>>
>>103210679
>No arguments
>>
>>103210699
because Apple didn't start caring until recently
>>
>>103210513
Wait, you are telling me that they're connecting the heatsink to the actual parts they want to cool down now?
That's some great progress.
>>
>>103210744
yes, jonny ive is gone
>>
>>103208983
Even better: the Apple gorilla marketing team.
See my fren? No lawsuits when it's just strangers on the internet.
Apple's marketing team plays it safe while these gorilla marketers flood the internet with lies
>>
>>103209373
>professional modellers use mac
lol
>>
>>103212141
>gorilla
>>
wake me when 50 series
>>
>>103208944
woaaah a 600mm+ sq N3E SoC holds its own against a 380mm sq N5 GPU
>>
File: file.png (136 KB, 880x536)
>>103208944
What's the point if you can't play games on it?
>>
File: 1723265777494214.jpg (222 KB, 1500x1000)
>>103212177
>>
>>103208944
I have no doubts it can run facebook as good as 4080 but that's not really what those GPUs are for.
>>
>>103208944
Meanwhile, a friend of mine had to sell his M2 macbook and go Windows, because it had gotten so damn slow it was impacting his work.
Slow for no apparent reason.
>>
>>103209373
>Apple makes workstations
LOL. LMAO even.
How much does applel pay you to shill here?
>>
File: 433.gif (1.39 MB, 220x231)
>>103208944
That's nice, how much is this shit again? $3000? I can buy a 4090 plus the rest of a high-end PC and still have money left, and I can run real shit instead of the 5 or 6 apps available for mac.
>>
>>103209864
>Apple fanboy running a jewtube channel lied to get views
Shocking, that has never happened before!
>>
>>103212472
Now try putting it in your backpack and traveling with it
>>
>>103212472
Now try getting laid when a chick goes home with you to find your RGB gaymer shit taking up half the room
>>
Didn't they say the m3 was 4090 tier? Lmao, what a load of shit that was
>>
The M3 Max had 92 billion transistors. M4 Max has even more, presumably.
AD103 has 46 billion transistors.
I suppose that's why they're cheaper.
>>
File: 1708529320747010.png (551 KB, 1230x1200)
>>103208944
>crapplejeet currynigger shits the street
a tale as old as time
>>
File: 1672768801679582.jpg (222 KB, 720x720)
>>103210217
>$7/month to play Super Fruit Ninja, Hello Kitty Island Adventure, Solitaire, Spider Solitaire, Snake.io
PC sisters... our response?
>>
>>103209210
>reddit spacing
>"Yeah, it's pretty crazy how Apple was able to pull this off."
>>
>>103208944
For AI training or just web browsing?
>>
>>103209817
The only m4 that's fast enough for blender is the Max; even the 20-core Pro is quite bad, and when rendering, the more ram the better.
>>
>>103212623
Have you seen their ads?
>>
>>103208944
What the fuck is that zero font
>>
>>103212540
>t. Projecting virgin
>>
>>103208944
I CAN'T GAME
>>
>>103209601
Based.
>>
And when it comes to AI training, how fast is the Mac mini m4 pro? Is there a benchmark for this type of task?
>>
File: file.png (156 KB, 1448x962)
>>103208944
Retard didn't activate OptiX for Nvidia GPUs and that's how he got "close" results.
Also checking opendata.blender.org the benchmark is full of shit too
Apple false advertising reposted by "popular" tech sites once again
>>
>>103216251
Slow, unless you get the max amount of ram and test stuff that uses more ram than what the rival GPUs have.
>>
>>103216292
I was under the impression that AI training is only feasible if your machine has CUDA or whatever the AMD equivalent of it is. Literally no one is capable of training anything useful on a CPU-only setup. You can use small models, yes, but you sure as fuck aren't training anything on them
>>
File: 1705467615392580.png (303 KB, 785x704)
>>103216276
Apple would never lie to us.
>>
File: 1706597194036773.png (53 KB, 519x426)
>>103216276
Not a problem for mac fanboys; they'll start posting how the m4 is only $499.99 without realizing the only M4 that can perform as well as a laptop 4080 is the Max with the 40-core GPU.
Price starts at $3,699
>>
>>103216292
So I'm keeping my m1.
I want a machine for training and not just running llmas
>>
>>103216322
With the unified memory they can train large models as long as you have the RAM for it; it'll be faster than trying to do it on a gpu that doesn't have enough ram, just not great.
But nvidia professional hardware is quite expensive and often hard to get.
>>
>>103208944
Still not buying a notch
>>
>>103216379
I don't think you quite understand what CUDA cores are. Maybe I didn't mention them in the previous post. CUDA cores are like having a better engine that performs a specific thing better and faster. Having a bunch of RAM means fuck all if the chip itself is poorly optimized for the task. GPUs are really good at doing matrix multiplication. Laptop CPUs aren't. Think of it like comparing my shitty 10-year-old Prius to an F-150. Most cars are built with roughly the same gas tank sizes and most of them are built to travel 400 to 500 miles tops. Having a similar-size gas tank does not mean both are going to be able to do the same kinds of workloads. Do you get the analogy I'm trying to make? If "just add more RAM sticks" were a feasible solution then GPUs would not even exist. There would be no need for them
>>
>>103216379
>>103216420
Not that anything we're talking about even matters anyway, since Apple execs have a mental illness preventing them from allowing their machines to actually be useful for things beyond specific niches. For all you know you could be 100% right and they would still find an excuse to make local AI training impossible.
>>
>>103216359
>>103216322
Be aware that it's gonna be slow even with 4 Mac Studios with the biggest Max they have (currently the m2 max); the only pro is that you can do it locally.
It's better to rent a server with a bazillion cuda cores.
>>
>>103216441
Apparently you have to have a lot of money to enter this market, even taking the cloud into account.
Today we could say we're in the $200,000 range?
>>
>>103216420
Ok will try the analogy route too.
Let's say your 4090 is a ford mustang and the m1 with 128gb ram is a shitty old truck that can't run above 50 km/h.
Now lets say you have to move all your stuff to a new house 2000 miles away.
What happens when there's not enough space in your car to put your stuff is the same thing that happens when your gpu has not enough ram to train something big.
The shitty truck will still be faster but it's still just a shitty truck.
If you want to do it fast just hire a moving company that can put your stuff in a plane.
>>
>>103216441
So why settle for that in the first place instead of just using a Windows or Linux machine with an Nvidia (hell, maybe even AMD) GPU?
>>
>>103209373
>Apple makes workstations
Delusion is strong in this one
>>
>>103216578
Then what are these?
>>
>>103216538
Your analogy still does not work. The RAM means fuck all if the efficiency is bad. Think of it like comparing a minivan to a pickup truck that has a lot of torque. A Ford F-150 can haul an entire semi truck WITH the trailer attached. A minivan isn't doing that shit. Saying "more RAM = better" is like saying having a bigger gas tank makes your car engine more powerful. That's just not how they work. GPUs are tailor-made for parallel processing, ESPECIALLY Nvidia GPUs with CUDA cores. They completely shit on even their AMD counterparts. You can literally Google a YouTube video that explains this better than I can, but a laptop CPU simply isn't comparable to a GPU. As I said earlier, if simply having more RAM sticks solved the problem, we would never have any need or demand for GPUs in the first place.
>>
>>103216511
Depending on what you want to train. I don't train llms but do smaller ml stuff, so I don't need massive hardware; an A6000 (48gb) goes for ~$0.50/hour and an A100 (80gb) for ~$1.50/hour
>>
>>103216596
Pet projects. No one actually needs these shits. People who have bought them claim on YouTube that they kind of suck compared to Windows setups at comparable price points. Apple does not actually give a shit about making powerful shit that competes with their counterparts. They just want to grift rich fags
>>
>>103216511
I think you're confusing training with fine-tuning. Training an entire model from scratch takes a lot of money. Fine-tuning can be done on a shitty consumer-grade 8 GB GPU depending on what you're fine-tuning. Image models or object detection models are probably the easiest things to fine-tune on consumer-grade GPUs (as far as I can see).
>>
>>103216251
training on what datasets?
i did a facial recognition model with 100 pictures of a lo...vely person in coreml in a few minutes with 95% accuracy on a 16gb m1.

you'd have to be more specific
>>
>>103216623
What kind of model were you using? I've tried training object detection models a month or two ago using YOLO models. I didn't know you could actually train anything on the M1s.
>>
>>103216613
>rich fags
your poverty must be unbearable
>>
>>103216596
A desktop?
>>
>>103216571
cause having 128gb of ram in a mac is still cheaper than a proper nvidia gpu, have you seen how much a workstation with a couple of A100s costs?
Tbh I recommend using the cloud cause training on macs is still slow as hell, or go the 3090 route and put 4 together with NVLink; anything above the 3090 will not have NVLink.
I've also heard that there's a hack to enable p2p on 4090s but I haven't looked into it.
>>
>>103216645
just the basic image recognition template in xcode-coreml (they had a bunch already). i gave it the pics, trained, checked against a random sample.
in the end it spat out a 30kB-ish model that got it right 95% of the time for my girl.
very easy process
>>
>>103216605
>>103216621
>>103216623
Thanks Anons, now I get it.
I'm more interested in creating models from scratch (even if they are crude), both as a hobby and for work, if such a thing as work exists in the future.
>>
>>103208944
Mods please remove these shill threads
>>
>>103216711
you can train small models on any M series mac. don't listen to the imbeciles whose only grief is "can it gayme?!"

install xcode, install the coreml extensions and it's click click click from there
>>
>>103216599
You don't get it: as long as you have enough space (RAM), sure, the mustang will fucking crush that shitty truck, but if it doesn't have enough you will end up having to travel back and forth 100 times to move your stuff and it'll take a lot of time.
>Ford F-150 can haul an entire semi truck
Yeah well, I tried putting a couple of 32gb ram sticks on top of my gpu and it didn't work out, but thanks for the brilliant idea.
>>
>>103216711
I don't think that's possible with our current tech. Think of fine-tuning as a microwave and training from scratch as a giant oven. If I want to reheat some mac and cheese or some barbecue ribs, I can do that easy. You can't really cook anything properly in a microwave or bake a cake in one, right? It can't generate the kind of heat a proper oven can, and they don't even generate heat via the same methods. (Sorry for even more shitty analogies.) There's a reason these AI companies beg nations for billions of dollars. Making what are essentially minor tweaks to a model (that's what fine-tuning is) takes practically no resources compared to training from scratch.

t. Fine-tuning is my main hobby

https://civitai.com/user/AI_Art_Factory

The closest thing we've gotten to models trained from scratch outside big tech is whatever Mistral has, but I think they had access to millions of dollars. People aren't doing that shit in their garage unless there are breakthroughs in chip and data processing efficiency or someone figures out how to radically cut down the entire training process in general


>>103216763
The RAM doesn't matter if it sucks at doing the particular task. CUDA cores are tailor-made for matrix multiplication and other tasks like that. CPUs are not. Give one 200 GB of RAM and you still have inferior efficiency compared to a dedicated GPU.


(I say all of this assuming Apple would ever allow a fucking laptop to have 128 gigabytes of RAM. They can, but they won't, because that would actually benefit the consumer).

>Yeah well I tried putting a couple 32gb ram sticks on top of my gpu

And you thought that would help, why? The bulk of the processing is done on the GPU, and last time I checked you can't just upgrade the RAM capacity of a GPU like you can with a motherboard (again, that benefits the consumer and that's bad for business :))))).

You can literally test what I'm telling you right now (cont)
>>
>>103216815
(Cont)

>>103216763
Take a 16 GB Nvidia GPU, for example, and place it next to a MacBook, iMac, or a Windows machine with 32 gigs of RAM but no GPU. The Nvidia setup is going to curb stomp the other machines because, again, the GPU is TAILOR MADE for those specific kinds of tasks. You straight up cannot train SD models without a GPU unless you want to wait literal days or weeks, then wait several minutes or even hours just to generate a few images.
>>
>>103216815
>>103216763
"Just add more RAM :)" is not the answer; otherwise, as I said for the third or fourth time, GPUs would not exist. Why do you think they exist?
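For the curious, the reason GPUs exist is throughput on exactly this kind of parallel work, and the gap is easy to estimate. A sketch with illustrative, assumed throughput numbers (roughly 0.5 TFLOP/s sustained for a CPU, 80 TFLOP/s for a high-end GPU; real figures vary a lot by precision and hardware):

```python
def matmul_flops(n: int) -> float:
    # Multiplying two n-by-n matrices takes roughly 2*n^3 floating-point ops.
    return 2.0 * n ** 3

def seconds_at(flops: float, tflops: float) -> float:
    # Wall time at a sustained throughput given in TFLOP/s.
    return flops / (tflops * 1e12)

n = 8192
work = matmul_flops(n)              # ~1.1 TFLOPs of work for one matmul
cpu_s = seconds_at(work, 0.5)       # assumed ~0.5 TFLOP/s CPU
gpu_s = seconds_at(work, 80.0)      # assumed ~80 TFLOP/s GPU
print(f"CPU: {cpu_s:.2f} s, GPU: {gpu_s:.4f} s, ratio: {cpu_s / gpu_s:.0f}x")
```

The ratio is just the throughput ratio, and training repeats such matmuls billions of times, which is where the "racks of CPUs" comparison comes from.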
>>
>>103216831
On an m4 with unified memory, let's say 48gb, against a 16gb 4080, you will probably wait longer on the 4080 if your dataset is 30-40gb.
Ram is extremely important for training; as soon as you don't have enough, everything slows down to a crawl.
>>
>>103209210
>laptop gpu
>>
worth getting a m3 max on a costco clearance sale (when it happens) or should I just wait for m4 pro deals?
>>
>>103216984
the m4 pro gpu matches the m3 max so in perf they're about the same.
however the m3 max has more bandwidth for llms and that matters.
the m4 pro gpu will be faster though.
and they added SVE to the M4.

if your tasks are cpu bound more, wait for m4 pro; if they're more gpu/ram bound, m3 max.
use the mlx version of models for a free ~20-25% uplift
>>
>>103217025
>the m4 pro Cpu* will be faster though
>>
>>103216934
>if your data set is 30-40gb.
Most fine-tuning tasks don't require datasets anywhere near that large, especially if you're just fine-tuning a LoRA or an object detection model like YOLO. I don't know what the hell type of models or datasets you're using, but 40 GB sounds excessive.


>>103216934
>Ram is extremely important to train
So is the RAM's type. You can't just use any RAM stick; it needs to be good at actually doing the thing you want it to do. The built-in RAM in your m1 is NOT ideal for training, well, literally anything. Even if it can train, it would make more sense to just use a regular GPU.
>>
File: 1715299856035167.png (20 KB, 661x86)
>>103216841
>Why do you think they exist?
Why do you think professional and AI cards have more ram and nvidia doesn't want to put more ram on the "gaming" ones?
Same reason they removed NVLink on all but professional cards.
Also, why the fuck bother with Magnum IO to move data at more than 1TB/s if it doesn't matter? If everything just works on 16gb of vram then there's no problem and every company is just doing it wrong.
>>
>>103209912
>I am consooomer believer
Wow. Shilling is so easy to spot. How much did you get paid for this? Why are you fagples marketing so hard on /g/?
>>
>>103217058
>Why do you think professional and AI cards
You mean the GPUs? There are multiple kinds of models and configurations, from as small as a gigabyte to as large as 40. An 8 GB Nvidia GPU will curb stomp your 8 GB M series MacBook at literally any task unless said task or software is custom written for the M1 chips. Did you think every single GPU came standard with like 80 GB of RAM? They don't HAVE to have a fuckload of RAM to do a task. Think of the regular RAM sticks in your laptop as C-grade RAM and the GPU's RAM as A+ or some shit like that. Compared to the GPU's, the RAM in your laptop sucks at training, assuming it could even train at all. Where do you think the demand for GPUs is coming from? Yes, the AI craze is a bubble, but it's also because Nvidia GPUs are the only ones that can actually do the things these companies want to do in a cost-effective manner. It's more efficient. It's more power efficient if you opt for the data center grade GPUs that can run 16 GB of RAM at only 70 watts peak or less. (See the Nvidia Tesla T4 for an example)

>and nvidia doesn't want to put more ram on the "gaming" ones?
Gamers don't need more RAM. Are you trying to tell me you need 24 gigs of VRAM to play fortnite? I know people with GPU cards that only have a few gigabytes and they run games fine. All the AAA ones. All at 60 FPS. All good graphics.
>>
>>103217025
thanks anon, I'm interested in trying local LLMs so I'll probably go with the m3 max

not sure if my tasks are CPU bound enough + compiled to make use of the new SVE instructions
>>
>>103217051
Have you tried to TRAIN a 1.5B model? It eats well over 32gb. You can't compare it with fine tuning, and as soon as you go above what your GPU can hold you are fucked.
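Anon's "well over 32gb" claim checks out with a quick back-of-the-envelope estimate. This is a rough sketch assuming mixed-precision Adam training (the 16-bytes-per-parameter breakdown is a common rule of thumb, not an exact figure, and activations come on top):

```python
def training_vram_gb(params_billion: float) -> float:
    """Rough VRAM estimate for mixed-precision Adam training.

    Per parameter: 2 bytes fp16 weights + 2 bytes fp16 grads
    + 4 bytes fp32 master weights + 8 bytes Adam moments = 16 bytes.
    Activations and framework overhead are NOT included.
    """
    params = params_billion * 1e9
    return params * 16 / 1024**3

# A 1.5B model already needs ~22 GB before activations, so blowing
# past 32 GB once activations are counted is entirely plausible.
print(f"{training_vram_gb(1.5):.1f} GB")
```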
>>
I remember my GTX490, my eyes watered when I paid a few hundred dollars for it. Now people are paying thousands for their gaymer cards as if it's normal. What a crazy world.
>>
>>103217166
5090 will be starting at $2500, 36gb, 650w, 3.5slots
>>
File: 1709543544599789.png (2.85 MB, 944x1398)
2.85 MB
2.85 MB PNG
>>103217132
In that case just use a 40 GB or 80 GB GPU card? Again, that will still curb stomp trying to use a CPU-only setup, for reasons I'm sure are lost on you.....
https://www.nvidia.com/en-us/data-center/l4/

Would you look at that, this site has a neat little graph showing exactly what I'm trying to explain

No one in their right mind would even consider in their dreams training an LLM on only CPUs. That just isn't possible or practical. You would need entire server racks of CPUs, possibly even entire fucking rooms, running 24 hours a day for weeks on end. You seem to imply you think you can do that shit on a singular computer. I don't know WHY you think you can, but you just can't.
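The CPU-vs-GPU gap anon is describing can be sanity-checked with rough FLOPs math. This is purely illustrative: the 6*N*D training-FLOPs rule is a standard approximation, and the throughput and utilization numbers below are assumed, not measured:

```python
def training_days(params: float, tokens: float, tflops: float,
                  mfu: float = 0.3) -> float:
    """Rough training time using the ~6 * params * tokens FLOPs rule.

    mfu = assumed model FLOPs utilization (fraction of peak actually
    achieved); tflops = peak throughput of the device in teraFLOPs.
    """
    total_flops = 6 * params * tokens
    return total_flops / (tflops * 1e12 * mfu) / 86400

# 1.5B params on 30B tokens: assumed ~1 TFLOP for a beefy CPU vs
# ~100 TFLOPs for a single modern datacenter GPU.
cpu_days = training_days(1.5e9, 30e9, 1.0)
gpu_days = training_days(1.5e9, 30e9, 100.0)
print(round(cpu_days), round(gpu_days))
```

Under these assumptions a single CPU is looking at decades while one GPU finishes in months, which is why nobody trains on CPU-only setups.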
>>
>>103208944
>In our own testing of the M4 Pro in the Cinebench R24 benchmark, the M4 Pro landed in between the RTX 4060 and RTX 4070 mobile GPUs.
how could a gpu land next to a fucking cpu benchmark
are they retarded?
>>
>>103212275
lol
>>
>>103209601
stop rendering porn.
>>
File: 1700916667923062.jpg (54 KB, 554x740)
54 KB
54 KB JPG
>>103217225
>>103208944
>>103209864
>>103216276

Newfag techlet here. They're implying the CPUs are comparable to GPUs, right? Even I know that's a ridiculous claim, but I think there is confusion on the part of whoever wrote the article. How are people being fooled into thinking a CPU is on the level of a GPU?
>>
>>103217225
M4Pro is an SoC with a gpu included.
Not that hard, retard anon.
>>
>>103208944
now let it run for an hour and send me the temps if it didn't shut off from overheating
>>
>>103216596
fake, 2019 was the last real workstation from them
I wish I owned the 2019 when it came out
>>
>base m2 pro mbp
>qwen-2.5.1-mlx-7b-4bit
>40t/s
I literally don't give a fuck. This shit is magic.
>>
forgot pic
>>
>>103208944
The hardware is really nice. But who are these machines for?
>>
>>103218100
How much storage in VRAM does this use? I might test it out on my machine
>>
>>103218245
storage 4GB, VRAM between 4 and 6GB.
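Those numbers line up with simple arithmetic on a weight-only quantized model. A rough sketch; real GGUF/MLX files add per-block scales, metadata, and the KV cache on top, which is where the extra VRAM goes:

```python
def quantized_size_gb(params_billion: float, bits: int) -> float:
    """Approximate size of a model with weights quantized to `bits`."""
    return params_billion * 1e9 * bits / 8 / 1024**3

# 7B parameters at 4 bits/weight is ~3.3 GB of raw weights, which
# matches the reported ~4 GB on disk once scales/metadata are added.
print(f"{quantized_size_gb(7, 4):.1f} GB")
```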
>>
>>103218261
A lot less than I expected :O. When you prompt it for something, how does it compare to something like ChatGPT? How long do responses take?
>>
>>103218267
it's practically instant, ~1s until first token for a request like this, 35-40t/s afterwards.
>"Please write a haskell function that monitors the contents of a log file as it is being written into, evaluates each new line as a string and, in case a dictionary (saved as variable "b") is found in that string, tries to match it against a predefined dictionary (saved as variable "a").
If variable "a" is a subset of "b", print "success""

I notice that the more esoteric the language, the more VRAM it uses. But if I use python or c it stays in the 4-6GB VRAM region
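Anon's ~1s to first token and 35-40t/s figures are easy to measure yourself. A minimal sketch: the timing helper is generic over any token iterator, and the fake list-based stream below is just a stand-in for whatever MLX/llama.cpp streaming wrapper is actually being used:

```python
import time

def measure_throughput(stream):
    """Return (time-to-first-token, tokens/sec) for a token iterator."""
    start = time.perf_counter()
    first = None
    count = 0
    for _tok in stream:
        if first is None:
            first = time.perf_counter() - start  # latency to first token
        count += 1
    total = time.perf_counter() - start
    return first, count / total if total > 0 else float("inf")

# Stand-in stream; a real run would pass the model's token generator.
ttft, tps = measure_throughput(iter(["tok"] * 100))
```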
>>
>>103218319
You're the dude with the M2 laptop right? How much RAM does your machine have?
>>
>>103209601
>Windows 11
Yeah, nice Apple Arm machine you got there buddy.
>>
>>103218371
16, it's a base m2pro macbook
>>
>>103218218
anyone who doesn't prioritize games that are not native to macos and who doesn't cpuencode tranime.
video, graphics, compilation, llms are all top tier.
>>
>>103217207
Don't listen to this anon. Next gen is dropping soon so wait for gb200nvl72.
>>
>>103218605
so what will be the starting price for that? 500,000+?
>>
>>103218636
It's per GPU rack (72 GPU minimum order) so I think 5 million. H100 boxes were 300k each for 8 now-inferior GPUs in a full system. Of course one scalable superpod is 8 GPU racks (144 gpus) plus 12 other racks providing the supporting management, network and storage hardware. Plus a CDU per row for liquid cooling. Plus a rack per row for environmental monitoring.
Obviously.
>>
>>103217327
It's you who's retarded. Cinebench is a cpu benchmark
It can't measure GPU performance
>>
In a perfect world, you'd be able to run Mac OS on any x86 CPU and be able to run any OS on Apple silicon.
>>
>>103218793
in a perfect world your currynigger shit OS and your currynigger silicon wouldnt exist
>>
>>103218100
>7b 4bit @ 40t/s
>35% as a flagship desktop gpu using 20x less power

Bruh
>>
>>103218793
no, the walled garden keeps jeets like >>103218826 out. and that's a good thing. jobs was right all along.
>>
>>103218866
The higher the entry barrier, the cleaner the environment.
>>
is m4 good for gaymen?
>>
>>103218826
>currynigger shit OS
So, does this refer to Windows, Linux or Mac nowadays? I can't keep up with you retards.
>>
>>103218826
That silicon claps Incel and gAyMD lmao
>>
>>103219046
the base m4? no
>>
>>103208944
Still 25fps in gta5 kek.
>>
>$3500 product is worse than $1000 product
Not really sure Apple should really be boasting about this.
Now compare it to an EPYC or Threadripper which is similar in price.
>>
>>103219305
a 4080 build with a similar cpu is $3k, not mobile, sounds like a vacuum cleaner and uses 1kw of power

cope
>>
>>103219329
>>103219329
>4080 build with a similar cpu is $3k
Post the build.
>not mobile
good
>sounds like a vacuum cleaner
the opposite, Macs are terrible for this and throttling
>and uses 1kw of power
also untrue
>>
File: 1724554492662905.gif (3.79 MB, 336x189)
3.79 MB
3.79 MB GIF
>>103218636
NTA. Unfortunately, we don't have the money to compete with the big corporations, and I don't even think there's an AI algorithm for less insane machines.
The starting price in this market is 200,000 dollars
>>
>>103208944
no proof
>blender
is a thing in the kitchen

If you want to prove it, run a game let's see what it can do.
>>
>>103208944
>even comes close to matching the RTX 4080 Super
A laptop one yeah. A real 4070 Ti wipes the floor with it.
So, impressive laptop chip, but a non-starter for real desktop workstation performance.
>>
>>103216596
Desktops with laptop SoCs.
And that's fine for the Mac Mini.
But the Mac Pro is way too underpowered (and overpriced) to be considered a real workstation.
Apple silicon is pretty good, not good enough to be on a workstation.
If they had a multi SoC setup, then maybe it could be worth it. But just one? Nope.
>>
>>103220427
m4 max is a laptop soc
>>
>>103208944
>35fps at 1280p
Whoah
>>
>>103208944
>The gap between the different chips is evident in the pricing as well. A MacBook Pro with the base M4 chip costs $1,599, while upgrading to an M4 Max configuration will set you back at least $3,199
LMAO
>>
>>103219329
Complete 4080S build with a 9950X is $1900USD you lying fuck.
>>
>>103216276
>source was some rando with under 10K subs
>clearly trying to justify his many thousand dollar new mac purchase
>is actually a fucking retard that hasn't been using his Nvidia GPU properly
AHAHAHAHAHAHAHAHAHA
>>
>>103222507
Is it, pajeet?
>>
>>103208944
Macs overheat at the slightest workload and then the speakers break and can't be replaced
>>
>>103222739
cope
>>
>>103208944
round trip latency :^)

#pcmasterrace
>>
>>103209629
Why is it ok when benchmarks are optimized for x86 or mainstream GPUs but it's not ok when they are optimized for silicon chips? Ideally one should use a benchmark that doesn't give handicaps to either sides to properly measure the chips potential
>>
It's technically impressive for a laptop to be this powerful and efficient but for the price of an M4 Max MacBook Pro you can build a PC with a 4090. These comparisons are stupid anyway, both have different use cases.
>>
>>103219289
https://www.youtube.com/watch?v=KxPu3Pt4xEs&t=300
>>
>>103223347
>but for the price of an M4 Max MacBook Pro you can build a PC with a 4090
false. also only 1kW of power, not mobile, screen not included, dries pussies.
>>
File: 1718919678794207.png (273 KB, 368x447)
273 KB
273 KB PNG
>>103223409
>$5000 25fps game console
>>
>>103223469
the M4 mini is $499 retard
>>
File: 1716504089422198.jpg (82 KB, 500x707)
82 KB
82 KB JPG
>>103223482
>$500 25fps game console
>>
>>103223557
>>103223557
minimum settings 200ms frametime $500 25fps game console to be exact
crapple currynigger shit
not even once
>>
>>103223557
>doubles as a cute computer to get your day to day done.

Seems pretty fucking based
>>
>>103223562
this
>>
File: 1731130372894832.jpg (31 KB, 500x333)
31 KB
31 KB JPG
>>103223565
>crapplejeet street shitters cant even afford a computer and have to play pretend on their fruit toy console
>>
>>103223557
You didn't even open the link did you? lol
>>
>>103208944
>$4000
ummm.....
>>
>>103223638
$3300
>>
>>103209612
Holy cope u shilly slut
>>
File: cheetah-laugh.jpg (8 KB, 250x250)
8 KB
8 KB JPG
>>103209373
>apple makes workstations
>>
File: mac-styles.jpg (799 KB, 1000x2000)
799 KB
799 KB JPG
>>103209612
>appo products get you pussy
>>
>>103210513
the same one that melts the keyboard? kek
>>
>>103216657
It's not poverty, anon. Why would I spend 8000 dollars on a macbook that overheats and has a sloppy os that gives me no chance to get work done or even play games, when for not 8000 but 4000 I can build an insane PC that will roll nuts all over that overpriced laptop? The fact that this mini pc from apple costs 600 usd means nothing to me, as the bare minimum of storage today is 2tb. Let's be honest, even that you will go through fast, and apple charges a kidney for it. Also ram: 16gb is just not enough, my phone has that much ram. A real computer today needs a minimum of 32gb, for which, again, apple charges insane amounts of money. It's just a bad deal. Sure, if you don't do anything but basic web browsing on it, it's good for you.
>>
>>103223941
Rich people have all that shit + apple products anon
>>
>>103209210
>>103208944
>laptop gpu
>>
>>103224261
>8000
you're unemployed too
>>
>>103224261
AyyMd lost
Incelaviv lost
lintroons lost
Torvalds lost
Stalmann lost
X86 lost
Windows lost
day of the linux desktop: never

Total Tim victory.
>>
>>103220238
Blender without using optix, cause that would make the m4 max perform like a laptop 4080, waaay below a standard 4080.
>>
File: tim-cock.png (301 KB, 574x630)
301 KB
301 KB PNG
>>103224594
yeah
now open wallet, bishh
oh, you thought you deserved better than a chromebook? how cute and molestable
>>
>>103223320
Then why didn't they use optix on OP's benchmark for the nvidia card? It's also available in blender but they decided not to use it.
>>
>>103222379
Yes and they have no desktop chip, so it has to do double duty there too, resulting in no high spec Macs existing.


