>M4 Max, 128GB unified RAM
What is /g/'s verdict on running local LLMs on this?
Should I drop 5 grand to get my own coding assistant and local anime girl generation?
>>103295728
>What is /g/'s verdict on running local LLMs on this
I think you should hang yourself for paying $5000 for a laptop.
>>103295739
It's cheaper than four 4090s.
>>103295739
>$5k for a fucking laptop
Name another machine with this much GPU memory for a lower price.
>>103295728
>5 grand to have the "(v)ram" be slow as absolute shit
lol, either max out your RAM for like $300 and be able to run everything (but slowly), rent GPUs online, or pay for access to a top current model if you don't care about privacy.
>>103295728
Get the 14.
If you can afford it, why not? I got a 14 Pro/20 Core GPU/48GB/2TB Nano display for like $3300.
>>103295728
Are giant models that much better? 8B LLMs and SDXL have served me well, and those run on the base M4.
>>103295728
If you're a crypto bro and recently made a killing off the market, then hell yes.
>>103295728
>272GB/s
Trash.
>>103295739
Get a job, fucking poorfag.
>>103295728
macOS support for open source apps is trash.
>>103297114
That's fucking retarded; the majority of open source software is developed on Macs.
>>103297036
I could buy 9000 burgers with that money.
>>103296664
How is the display?
>>103297007
576GB/s on the Max.
>>103297114
Good.
>>103295755
GPU memory isn't everything when benchmarking GPU performance. Sure, it helps with some AI training, but if it's slower, why not get a proper GPU?
>>103295728
No. Get the new Mac Studio M4 Ultra next year.
>>103297760
It's everything for AI workloads: bandwidth + memory + fast cores.
>>103297760
>AI training
You literally can't run the model if you don't have enough memory.
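The memory-vs-bandwidth argument above can be put into numbers with a back-of-envelope sketch. Assumptions (not from the thread): a dense model whose full weight set is streamed once per generated token, so memory needed ≈ params × bits-per-weight, and an upper bound on generation speed is bandwidth ÷ model size. The model size and bandwidth figures below are illustrative.

```python
def model_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Weight memory in GB for a dense model (ignores KV cache and overhead)."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def tokens_per_sec(bandwidth_gbs: float, size_gb: float) -> float:
    """Rough upper bound: every weight is read once per generated token."""
    return bandwidth_gbs / size_gb

size = model_size_gb(70, 4)                   # 70B model at 4-bit quant
print(round(size, 1))                         # 35.0 GB -> won't fit in 24GB VRAM
print(round(tokens_per_sec(546, size), 1))    # ~15.6 tok/s at 546 GB/s
print(round(tokens_per_sec(1008, size), 1))   # ~28.8 tok/s at 1008 GB/s
```

This is why both posters are half right: a single 24GB card can't hold the 35GB of weights at all, while the big unified pool fits them but is bandwidth-bound to a lower token rate.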
>>103296915
I bought a quarter bitcoin the moment Trump was elected; it's money, but not maxed-out applefag money.