>drop 99.9999% software compatibility in the desperate chase for battery efficiency
>alienate all professional customers
>alienate all developers
>cheat with hardware accelerators in web browsing to make up for ARM being total shit in real workloads
>get utterly destroyed in battery life anyway 3 years later by intlel
https://youtu.be/YgnVgVYOqbo
OP totally forgot to include the link to his benchmark. The funniest thing about this whole 40-hour battery claim is that the Shintel laptop started lagging and basically crapped itself when he opened a simple website, in the same benchmark LMAO.
Timestamp 15:50.
>>108668684
>>108668718
what I'm gathering from this is that LP E-cores are a good idea but scheduling remains a problem
>>108668684
>alienate
Wrong. People that use Apple will keep using it no matter how much the company fucks them in the ass.
Even with the laggy VRR shit turned off, the battery life is still hella competitive. Any x86 laptop going past 15 hours of battery life is arguably dangerous for ARM.
>Notebookcheck's Wi-Fi battery test simulates realistic web browsing to measure laptop endurance.
>The test runs an automated script (HTML5, JavaScript) that cycles through a mix of websites every 30 seconds.
>Key parameters include setting display brightness to ~150 cd/m²,
>using a balanced power profile, and continuous active Wi-Fi browsing.
https://www.notebookcheck.net/Dell-XPS-14-2026-review-Fully-reborn-with-Intel-Panther-Lake-X7.1218670.0.html
>>108668684
>drop 99.9999% software compatibility
most normies only use their web browser so this isn't a problem for most users
>alienate all developers
a huge amount of devs are on arm macbooks, probably over half of all webdev code monkeys. just werks
What ultimately matters, IMHO, is the average power draw in watts during web browsing. Panther Lake was able to get it down to around 4 watts. Which means that x86 phones with chunky 10,000 mAh batteries could already achieve ~10 hours of web-browsing battery life. If x86 can deliver better-than-Celeron performance at, say, even 3 watts average for web browsing, then the return of x86 phones suddenly becomes a really exciting possibility.
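Rough math behind that ~10 hour figure (the ~3.85 V nominal cell voltage is an assumption for a typical phone Li-ion pack, not from the post):
10,000 mAh × 3.85 V ≈ 38.5 Wh
38.5 Wh ÷ 4 W ≈ 9.6 h of web browsing
38.5 Wh ÷ 3 W ≈ 12.8 h if the average draw drops to 3 W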
>>108668807
there is no equivalent to macbooks
soldered ssd, soldered ram, soldered keyboard, etc etc
>>108668684
ARM is a dead end that needs modules on top of modules on top of modules. x86 just werks.
>>108668684
I use the Ultra 9 (200 series) in real-time-critical robotics applications at work, and yes, the power efficiency is amazing IF YOU DO NOTHING (TM), because the low-power cores are so low power that a C++ program we have won't finish within its allotted time slot unless we specifically tell the OS (Ubuntu here) not to go into power-saving mode and not to use those cores for this program. So yes, it's cool if you browse websites, but if you do anything serious it's dogshit. I still have to fuck around more with the integrated NPU, so I won't comment on that.
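Not that anon's actual code, just a minimal sketch of the Linux workaround described above: restrict the process's CPU affinity to the P-cores so the scheduler can never park it on the LP E-cores. The core IDs 0-7 are an assumption; check /sys/devices/cpu_core/cpus or lscpu --extended for the real P-core IDs on your SKU, and pair it with a performance governor (e.g. cpupower frequency-set -g performance) so power saving doesn't clock the cores down mid-loop.
[code]
// Minimal sketch with an assumed core layout -- not the robotics code from the post.
// Keep this process off the LP E-cores by restricting its CPU affinity
// to the P-core hardware threads.
#define _GNU_SOURCE
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t pcores;
    CPU_ZERO(&pcores);
    for (int cpu = 0; cpu <= 7; ++cpu)   // assumption: CPUs 0-7 are the P-core threads
        CPU_SET(cpu, &pcores);

    // pid 0 = the calling thread; threads created afterwards inherit the mask
    if (sched_setaffinity(0, sizeof(pcores), &pcores) != 0) {
        std::perror("sched_setaffinity");
        return 1;
    }

    // time-critical control loop would run here
    return 0;
}
[/code]
The same thing can be done from outside without touching the code: taskset -c 0-7 ./your_program.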
>Chrome
>YouTube
non-free shilling
>>108669052
>ARM is fast and efficient because it's small
>let's make it faster by adding our own SIMD
>let's add hardware fp
>let's add hardware transcendental functions
>let's port niche x87 instructions
>for some reason we're no longer as power efficient as we used to be
>>108668684
iTODDLERS BTFO
it kinda feels like Intel and AMD stopped giving a fuck about laptop CPUs and now focus 100% on AI bullshit. Mobile used to be kind of a big deal, but Apple is eating the market.
>>108670074
AMD is completely incompetent at anything system-level.
Intel actually has the biggest autists you've ever seen designing complete system architectures to get an "inefficient" 14900K CPU and the platform around it to sip just a few watts. They still very much care.
>>108669539>ARM is fast efficient because it's smallEven if ARM remained simple, would it really be more power efficient in practice? sure the instruction decoding is simple, but x86 cisc achieves higher instruction efficiency in real-world applications
>>108670162
The decode step in x86 is quite costly. Complex instructions have to be broken down into simpler micro-ops, then an optimisation step is performed to figure out the minimal set of operations so it isn't always executing 8 RISC-style ops for every 1 CISC instruction, then it has to store all that in a micro-op cache, and if speculative execution finds a problem with the optimisation it has to bail, invalidate the cache, and redo it through the longer, full decode pipeline. Intel have optimised the ever-loving fuck out of this so it goes screaming fast despite all the insane work being done under the hood, but that's where your power budget is going.
The ARM decode step is not as complex, but it's getting more complex these days. The days of 1:1 instruction decoding in ARM are just as over as they were in x86 a long time ago; ARM just has a head start. Intel tends to win the "race to sleep" and that's how it achieves its low-power gains, but for long-running work ARM wins on power used per unit of work done. With Intel you'll be done faster but your battery is dead; with ARM you'll take a bit longer but the battery will still be good to go.
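Toy numbers to make the race-to-sleep trade-off concrete (all figures invented for illustration, not measurements): suppose a job arrives every 2 seconds. An x86 core finishes it in 1 s at 8 W and then idles for 1 s at 0.5 W, so it burns 8 + 0.5 = 8.5 J per job. An ARM core takes the full 2 s at 3.5 W and burns 3.5 × 2 = 7 J per job. ARM uses less energy per unit of work, but the x86 part delivered the result a second earlier; once idle power gets low enough, racing to sleep can still win on bursty interactive loads like web browsing.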
>>108670074
The two play into each other. AI's problem is the power budget. Datacenters have such an appetite for power that even if the grid could supply the megawatts they need, they couldn't cool it. So if AMD/Intel can shave off a watt, that translates into another chunk of compute.
>>108670074
Intel just recently released mega-efficient laptop CPUs though
Now imagine the battery life with the Intel botnet turned off
>>108669607
Baste
>>108668684
Forgot to add the last thing:
>still sell more laptops and make more money than absolutely everyone else