"M1 Ultra is another game-changer for Apple silicon that once again will shock the PC industry. By connecting two M1 Max die with our UltraFusion packaging architecture, we're able to scale Apple silicon to unprecedented new heights," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "With its powerful CPU, massive GPU, incredible Neural Engine, ProRes hardware acceleration, and huge amount of unified memory, M1 Ultra completes the M1 family as the world's most powerful and capable chip for a personal computer."

Now, Apple's M1 Ultra chip isn't a game-changer in the sense of changing games, at all, really. You can run games on an Apple machine, of course, but that's not what this GPU is in any way built to run.

The company is also once again being incredibly cagey about the exact benchmarks it's used to show its relative performance/Watt here. All we know is that it used "select industry-standard benchmarks" and that its "popular discrete GPU performance data tested from Core i9-12900K with DDR5 memory and GeForce RTX 3060 Ti. Highest-end discrete GPU performance data tested from Core i9-12900K with DDR5 memory and GeForce RTX 3090."

Still, Apple claims that this chip is able to surpass Nvidia's GeForce RTX 3090, technically Nvidia's top card as the RTX 3090 Ti is currently a no-show, under certain conditions and with far less energy consumption.

Now that's one heck of a claim, but as we saw with the M1 Max, which was purported to be roughly as good as Nvidia's GeForce RTX 3080, the reality is that there are caveats to everything. That's especially true if you're looking on as a PC gamer with expectations of gaming performance. While Apple's chip will be damn good at a lot, gaming is really not what it's designed for, whereas Nvidia's Ampere architecture more or less is.

Even with a more generalist gauge of performance, TFLOPs, the M1 Ultra is still a little off the RTX 3090's 35.58 TFLOPs of FP32 compute. The M1 Max was roughly rated at 10.4 TFLOPs, and if you were to exactly double that (as is the case with the M1 Ultra's two M1 Max dies connected together), you'd hit 20.8 TFLOPs. That's quite a bit lower, even if you account for TFLOPs not being a direct measure of actual performance.

That power efficiency is very impressive, though. Apple is once again rolling out TSMC's 5nm process here, which is another feather in the company's cap and undoubtedly propels it into new territory in energy efficiency. Intel, AMD, and Nvidia have all yet to use a comparable process node at scale.

And if Apple can get its dual-GPU SoC to be seen by a system as one singular chip, that's mighty impressive too. That's the real difficulty in creating a multi-die GPU: it has been exceptionally difficult to make these discrete chips appear as one to a system without requiring any bespoke programming. We don't want just another SLI/CrossFire situation here, where game developers or Nvidia/AMD are largely responsible for getting multiple GPUs working in tandem; multi-die GPUs need to be seen as one and work as one for all intents and purposes, at least for anything that isn't just doing raw compute tasks.

As for CPU performance, Intel and Apple have the company equivalent of a blood feud now, so you can imagine there's no love lost on either side.
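The raw-TFLOPs comparison above can be sanity-checked with a little arithmetic. Peak FP32 throughput is conventionally counted as two FLOPs (one fused multiply-add) per shader unit per clock; as a sketch, using Nvidia's published RTX 3090 figures of 10,496 CUDA cores and a ~1.695GHz boost clock:

```python
def fp32_tflops(shader_units: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPs: each shader unit
    retires one fused multiply-add (2 FLOPs) per clock cycle."""
    return shader_units * 2 * clock_ghz / 1000.0

# RTX 3090: 10,496 CUDA cores at ~1.695 GHz boost (Nvidia's published specs)
rtx_3090 = fp32_tflops(10496, 1.695)  # ~35.58 TFLOPs, matching the figure above

# M1 Ultra: two M1 Max dies, so simply double the M1 Max's ~10.4 TFLOPs rating
# (Apple doesn't publish its GPU clock speed, so we work from the rated figure)
m1_ultra = 2 * 10.4  # 20.8 TFLOPs

print(f"RTX 3090: {rtx_3090:.2f} TFLOPs, M1 Ultra: {m1_ultra:.1f} TFLOPs")
```

Again, these are theoretical peaks, not delivered performance, but the arithmetic shows why the M1 Ultra's doubling still lands well short of the RTX 3090 on paper.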