
More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com)

Tom's Guide ran the Geekbench 5.4 CPU benchmarks on a Mac Studio workstation equipped with an M1 Ultra "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful, RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.
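
(For context, the Handbrake test described above can be approximated with a short script. This is a minimal sketch assuming HandBrakeCLI is on the PATH, using a hypothetical input filename and HandBrake's built-in "Fast 1080p30" preset, since the article doesn't specify its exact source clip or settings:)

# Time a 4K -> 1080p HandBrake transcode, similar in spirit to the test above.
# "input_4k.mp4" is a placeholder; the article's clip and preset are not given.
import subprocess
import time

start = time.monotonic()
subprocess.run(
    ["HandBrakeCLI", "-i", "input_4k.mp4", "-o", "output_1080p.mp4",
     "--preset", "Fast 1080p30"],
    check=True,
)
print(f"Transcode finished in {time.monotonic() - start:.1f} seconds")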

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.
Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.


Comments Filter:
  • I always assumed the benchmarks apple used were cherry-picked, just like all the manufacturers.
    • Not in my orchard, you don't!

      If I assign you to pick from a tree, you pick it clean.

    • I think Apple made a chip for a handful of specific software programs. So as long as you're running those programs you get great performance. Stray from those 4 or 5 programs though and you've got an entry level PC you just paid $2k for ...
      • The breakthrough of the Apple system is the undies cpu, gpu, and tensor processors on the same high bandwidth bus and memory. If you compare this to benchmarks you can run on a gpu alone you miss the point. At present graphics and neural networks are optimized to run on the gpu or the cpu but not both. Thus there is heavy concentration on matrix multiplies and other SIMD type instructions. Branches and different instruction, same data type operations are avoided. But these mixed mode processors can rapidly send instructions and data to the processor best suited to the task and combine results.

        • by serviscope_minor ( 664417 ) on Sunday March 20, 2022 @03:21AM (#62373565) Journal

          The breakthrough of the Apple system is the undies cpu, gpu, and tensor processors on the same high bandwidth bus and memory.

          Apple fanbois, I swear man.

          Just because you, personally, heard of something first in an article about Apple, doesn't mean it's (a) new or (b) they invented it.

          Unified memory for graphics isn't new. At all. Neither is having CPUs use something other than regular socketed PC memory. The SGI O2 had this years and years and years ago. For more current ones, the Xbox 360 has unified memory, with GDDR3. The Xbox One X (the current shipping model?) has UMA with GDDR5.

          Fancy unified memory architectures aren't new either. AMD's HSA stuff from years ago moved the GPU to the same side of the MMU as the CPU, making data sharing basically pointer sharing.

          At present graphics and neural networks are optimized to run on the gpu or the cpu but not both. Thus there is heavy concentration on matrix multiplies and other SIMD type instructions. Branches and different instruction, same data type operations are avoided. But these mixed mode processors can rapidly send instructions and data to the processor best suited to the task and combine results.

          Apple get the neural network performance from dedicated silicon, not the GPU, so for anything that's not a perfect match (e.g., you know, training), it's actually somewhat worse than it looks, not better. As for branches, kii-ii-ii-iinda, but not really. To be trained, networks need to be differentiable and branches aren't differentiable. So, people avoid branches at almost all costs. This works well for GPUs.
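
          (As a concrete, minimal illustration of the branch-avoidance point, here is a NumPy sketch of the same piecewise function written with a data-dependent Python branch and as a branch-free masked select, which is the form SIMD/GPU-friendly and differentiable code prefers; the leaky-ReLU example is mine, not from the post above:)

          # Minimal NumPy sketch: a Python branch (scalar-only, data-dependent
          # control flow) versus a branch-free masked select over whole arrays.
          import numpy as np

          def leaky_relu_branchy(v, alpha=0.01):
              # works on one scalar at a time; the branch depends on the data
              return v if v > 0 else alpha * v

          def leaky_relu_masked(v, alpha=0.01):
              # both paths are computed; a mask picks per element, vectorizes well
              return np.where(v > 0, v, alpha * v)

          x = np.linspace(-3.0, 3.0, 7)
          print([leaky_relu_branchy(float(v)) for v in x])
          print(leaky_relu_masked(x))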

          Apple's unified memory means the GPU has access to 64GB or more of main memory. The problem is these benchmarks are not developed or in use

          Yep, but that's largely because those benchmarks don't exist. You'd need to develop one where all the data was needed all at the same time, with the algorithm in effect performing multiple very fast passes through the entire dataset one after the other.
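
          (A rough sketch of the kind of micro-benchmark being described: make several fast passes over one large working set and report effective bandwidth. The array size and pass count below are arbitrary choices of mine, not from any real test suite:)

          # Crude streaming-bandwidth sketch: repeatedly read one large array
          # and report effective read bandwidth over all passes.
          import time
          import numpy as np

          data = np.ones(256 * 1024 * 1024 // 8)   # ~256 MB of float64
          passes = 10

          start = time.perf_counter()
          checksum = 0.0
          for _ in range(passes):
              checksum += data.sum()                # one full pass over the array
          elapsed = time.perf_counter() - start

          print(f"~{data.nbytes * passes / elapsed / 1e9:.1f} GB/s effective (checksum={checksum})")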

            • You offer a lot of statements that I never said. I didn't say Apple invented unified memory, but it's the first major consumer desktop system that is intended for performance, not cost savings like Intel's integrated system. And they accomplish fast memory access to all memory, not just some faster small blocks.

            But mostly you didn't address the subject line of my post. The benchmarks don't test these innovations but are only apropos to discrete graphics.

            Finally you prove my point by saying that ML algorith

            • Heh lol tedious Apple Fanboi is tedious.

              You offer a lot of statements that I never said. I didn't say Apple invented unified memory, but it's the first major consumer desktop system that is intended for performance, not cost savings like Intel's integrated system. And they accomplish fast memory access to all memory, not just some faster small blocks.

              yeah boi, but you never said that. You said it was a breakthrough. Remember? Here's what you said.

              The breakthrough of the Apple system is the undies cpu, gpu, and

              • Well, you conceded that they didn't in fact invent it.

                To be honest, the "high bandwidth bus" part is. I can only imagine that AMD wanted to do something very similar with HSA but didn't have the money. So they had to do socketed things. They never made an APU with HBM.

                • Sorry, "the 'high bandwidth bus' part is *new*."
                • What, you mean HyperTransport back in 2001?
                  • HyperTransport is not a memory interface. Systems with HyperTransport still used bog-standard memory interfaces -- they just used HyperTransport to replace FSB in most cases. Only in NUMA systems was HyperTransport used to access local memory on other nodes. Please correct me if I got anything wrong.
                • To be honest, the "high bandwidth bus" part is.

                  Not even slightly. As an example, I listed the SGI O2 which used a proprietary high bandwidth bus to combine all of the high performance elements on the same bus. You had the CPU, GPU, TV capture card etc and could pass pointers. You could have the webcam rendering on the surface of a sphere in the 1990s for free by passing the capture buffer pointer to the GPU as a texture. This was a big deal then.

                  I can only imagine that AMD wanted to do something very simil

                  • Not even slightly. As an example, I listed the SGI O2 which used a proprietary high bandwidth bus to combine all of the high performance elements on the same bus. You had the CPU, GPU, TV capture card etc and could pass pointers. You could have the webcam rendering on the surface of a sphere in the 1990s for free by passing the capture buffer pointer to the GPU as a texture. This was a big deal then.

                    I remember distinctly that the SGI O2 had a high bandwidth *system* bus. If the SGI O2 also had a high bandwidth memory interface, then I must have forgotten it. It's been a long time, sorry. (And I only had an Indy myself.)

                    I also gave the XBox One X as an example. That was an AMD APU that uses the rather high bandwidth GDDR5 instead of socketed RAM.

                    Did XBox One X ever use HSA, though? If it did, then this is new to me. I thought the software on top of it (and the interface to the hardware) was proprietary to XBox, like with other consoles, not HSA-derived.

                     • Yeah, the O2 had a very fast memory bus at the time and the high speed blocks all talked directly to the memory controller. The RAM was some proprietary sort.

                      I have no idea if the Xbox used the HSA spec; I just meant the general architecture of the CPU and GPU all talking directly to a high bandwidth memory bus, not standard socketed memory.

                     • Well, when I said "AMD wanted to do something very similar with HSA", what I meant was that they did all the steps to put *all* the functional blocks (in case of their PC APUs, the CPU cores and the GPU cores, but HSA allows for other types of coprocessors) atop virtual memory -- with the obvious advantages -- but they only had socketed memory to work with in the only APU that fully implemented HSA, which was Carrizo if I'm not mistaken. So they didn't ultimately make the memory interface extremely fas
        • Undies CPU? Sounds a bit pants to me.
        • But what if they align the Heisenberg compensators, don't cross the streams, clear out the EPS manifolds, uncouple the plasma injectors, and recalibrate the matter/antimatter mix? Will it then be an accurate comparison???

  • by skogs ( 628589 )

    Duh....it is still a general purpose chip. Sure it can graphics...but they aren't just going to walk in and poof into first place with half the silicon and almost zero experience in graphics chip design. This is a NICE phone style SOC .... with the ability to plug in a couple peripherals. This is not a graphics card. They should be sued for misleading advertising.

    • by printman ( 54032 )

      Duh....it is still a general purpose chip. Sure it can graphics...but they aren't just going to walk in and poof into first place with half the silicon and almost zero experience in graphics chip design. This is a NICE phone style SOC .... with the ability to plug in a couple peripherals. This is not a graphics card. They should be sued for misleading advertising.

      Um, you *do* know they've been doing the graphics hardware in their iPad/iPhone products since about 2011, right?

      • by tomz16 ( 992375 )

        Um, you *do* know they've been doing the graphics hardware in their iPad/iPhone products since about 2011, right?

        Um, you *do* know op specifically said "phone style SOC," right?

    • This is not a graphics card.

      It's the only one you get. It's not like you can plug in a better graphics card.

    • Duh....it is still a general purpose chip.

      Errr no. It's a SoC with several special purpose chips on it, including a 4-core general purpose die known as a CPU and an 8-core GPU with 128 execution units and 1024 ALUs dedicated purely to 3D graphics processing.

      This is not a graphics card.

      No one ever claimed it was a graphics card. GPU doesn't have the word "card" anywhere in the acronym. And precisely zero people conflate the notion that to get some specific kind of performance you need to have a card that sits in a slot.

      Be less concerned about marketing materials and more concerned about not making stupid posts.

      • by skogs ( 628589 )

        Errr no. It's a SoC with several special purpose chips on it, including a 4-core general purpose die known as a CPU and an 8-core GPU with 128 execution units and 1024 ALUs dedicated purely to 3D graphics processing.

        Who is reading marketing materials?

        And precisely zero people conflate the notion that to get some specific kind of performance you need to have a card that sits in a slot.

        Exactly everyone accepts that decent graphics performance requires a card in a slot. AMD's APUs are also pretty good...but they aren't great. Same as M1. Functional but not great.

        Be less concerned about marketing materials and more concerned about not making stupid posts.

        The entire slashdot discussion and the article is about real tests...compared to stupidass misleading performance indicators in marketing materials. You aren't really going to stick up for the marketing people too are you?

    • What, no analysis of the silicon layout? Apple moved fast, register-like cache as close as possible to the CPU. Nearly all the progress in graphics cards - besides shrinking the fab size - is adding more vector pipeline processors and attached memory. And bus logic. That begets more heat and power and die size. Expect Apple to add dedicated logic and drivers to get better in benchmarks. As someone said, not bad for what it is.
      • This is likely because everyone else already does this. Why would you put your high speed cache halfway across the die? It would generate extra heat, take up extra space, and increase lag time for transfers. And companies like AMD (ATI) and Nvidia have been making great strides in graphics cards. Are you saying that Apple can do better in all factors than the combined might of companies whose lifeblood relies on super-duper performance? I don't think so.
  • Their original graphs weren't labelled right?
    Perhaps it was graphing performance per dollar, in which case they narrowly lost to GPUs at 2x MSRP.
    They'll get ahead shortly. :)

    • I wouldn't count on it; the GPU market is about to bottom out. It probably does a lot better on energy efficiency though.
    • During their presentation all of their graphs were labelled with gray text in the lower right corner. But in typical "Presentation" fashion they're short on details.

      Like the GPU graph clearly shows "RTX 3090" but doesn't say what test was done to produce the displayed result.

      If you want a great breakdown of the M1 Ultra check out the Max Tech benchmark video: https://youtu.be/pJ7WN3yome4 [youtu.be]

      One of the main takeaways is that Apple needs to further optimize macOS for the new chip as there are several tests w
  • by gweihir ( 88907 ) on Saturday March 19, 2022 @09:14PM (#62373179)

    CPU and GPU both. It is no surprise that it cannot beat a proper external GPU. The really impressive thing is the CPU, not the GPU, anyway. The only thing remarkable on the GPU is that they got this much power into an integrated one, but it still is an integrated GPU with the limits that implies. Eventually those limits may go away, but that will still take time. The only ones set up to do really high performance integrated GPUs are Apple and AMD though. Intel seems to have another underperformer on their hands with their current GPU and Nvidia cannot do high-performance CPUs.

    • When you're paying more than $6K for a machine, Threadripper CPUs get into the competition. A 3970X (2 year old CPU) coupled with a high end GPU would come in less than that and soundly beat this machine in both CPU and GPU benchmarks.
      • by gweihir ( 88907 )

        True. But the accomplishment is getting that performance with a non-AMD64 CPU, not the price they get it at.

        • Is the manufacturing heritage that important when shopping for a tool at a particular price point? Unless you are doing something with particular and special security requirements.
          • by gweihir ( 88907 )

            Is the manufacturing heritage that important when shopping for a tool at a particular price point? Unless you are doing something with particular and special security requirements.

            This is about the future. Having more high-performance CPU architectures is a good thing. Yes, I know, shoppers are always shortsighted.

            • by q_e_t ( 5104099 )

              This is about the future. Having more high-performance CPU architectures is a good thing.

              Do we know that this will deliver better performance in the future than competitors?

              Yes, I know, shoppers are always shortsighted.

              If I am spending money now I need the best value now. Supporting the development of a future ecosystem which might be better in the future isn't generally a big concern. Sure, it is in the long-term, but as Keynes said, in the long-term we're all dead. A powerful machine now could last you 5 or more years.

            • Having more high-performance CPU architectures is a good thing.

              Sure. But we don't have that. What we have is a high-performance proprietary machine. Apple doesn't sell their CPUs as CPUs. They don't contribute to the CPU ecosystem and play badly with their peers at the machine level.

              Perhaps this will change. That would be very nice. This CPU sounds like it might be awesome (assuming it has all the necessary glue logic for high bandwidth processor interconnect) in datacenter and supercomputer applications where power per performance is an extremely important measure. It

      • It actually beats a 3970X in single and multicore performance.
        • We're talking about the AMD Threadripper 3970X, not the decade old Intel i7-3970X.

          • Yup. We are indeed.
            At base clocks, the M1 Ultra is around 10% faster multicore, and 42% faster single core.

            3970X is a Zen2 core, man.
            All 32 cores of a 3970X barely keep up with the 8 performance cores in a 12900K (about 10% more performance).

            Get out of here with that old bullshit.
            • Sorry- should have been:
              All 32 cores of a 3970X barely beat out the 8 performance cores in a 12900K (about 10% more performance)
            • I can see 10% mattering in a server scenario, based on the same TCO (price and power), but not so much for a desktop. So it depends on target, and Apple has been out of the server market for a while. Yes, there are a few build farms on the cloud, but that's using desktop machines.
              • .. except when it comes to video rendering. Final Cut Pro / Compressor have both been recompiled to meet this new spec, and the software has built-in farming support. FCP is still used by many media production companies. Most small companies use FCP, while AVID remains the mainstay.
            • All 32 cores of a 3970X barely keep up with the 8 performance cores in a 12900K (about 10% more performance).

              Really? [cpubenchmark.net]

            • ? The 3970X which is 2 years older than the 12900K has about 50% more performance in multicore at only 20% more power.

              In addition, the Ultra is just a die with two M1 Max processors. The Ultra isn't showing up in benchmarks that Apple didn't cherry pick yet but we can forecast based on doubling the Max. The Max has a CPU Mark of 23468. Doubling that (generous because you don't usually get a linear increase when adding cores) puts it at 46936, just above the 12900K CPU Mark of 40824 and well below the 3970X
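
              (The doubling forecast spelled out, using the CPU Mark figures quoted above; the 2x factor is the optimistic linear-scaling assumption the post itself flags:)

              # Forecast from the post above: double the M1 Max CPU Mark and
              # compare against the quoted 12900K figure.
              m1_max_cpu_mark = 23468
              i9_12900k_cpu_mark = 40824

              ultra_forecast = 2 * m1_max_cpu_mark         # assumes linear scaling
              print(ultra_forecast)                        # 46936
              print(ultra_forecast > i9_12900k_cpu_mark)   # True: just above the 12900K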

              • ? The 3970X which is 2 years older than the 12900K has about 50% more performance in multicore at only 20% more power.

                Age isn't relevant here. 3970X is composed of Zen2 cores. They're not competitive with Alder Lake cores.
                50% more performance with 400% more CPU cores isn't exactly amazing.

                As for the power- we know damn well we can't compare Intel to anyone else in power consumption. They're the worst, hands down, in pretty much all cases.

                As for the multicore being better than I stated- you're correct.
                I was guilty of looking at a single benchmark.
                In Passmark and Cinebench, the 3970X's multicore lead is obvious, compa

      • by sxpert ( 139117 )

        are those high end GPUs available?
        My local reseller has them at 3k€ and unavailable
        https://www.ldlc.com/informati... [ldlc.com]

      • When you're paying more than $6K for a machine, Threadripper CPUs get into the competition. A 3970X (2 year old CPU) coupled with a high end GPU would come in less than that and soundly beat this machine in both CPU and GPU benchmarks.

        Unless you're buying a mobile computer, and energy consumption (directly translatable to battery weight/longevity) is also a factor.

    • It's an impressive part, hands down.

      I have an M1 (MacBook Air) and an M1 Max (MBP).
      Anyone who tries to put these things down is clearly simply trying to defend their sports team.

      That being said, Apple engages in flat-out manipulative, borderline false advertising to promote these things.
      My old-now RTX 2080 Ti is around 4x faster than my M1 Max 32-core.
      They had no business fucking with charts so that they could compare the twice-as-fast Ultra to cards that are 3 times as powerful as mine.
      Ev
      • +5 True

      • by gweihir ( 88907 )

        Anyone who tries to put these things down is clearly simply trying to defend their sports team.

        With the exception of price. Of course, depending what you do with them, price _can_ be a minor factor.

        Anyway, all that being said- I don't regret my purchases. They're a shot across the bow of the x86 world, and well worth the money I spent (to me, of course)

        And that is the important thing. AMD64 is, overall, not a very good architecture, but it has an incredible number of installations. What Apple did here is demonstrate that other architectures can be competitive and that is a very good thing, because it revitalizes R&D into other architectures.

        • With the exception of price. Of course, depending what you do with them, price _can_ be a minor factor.

          Ya, for sure.
          The value proposition for a 2021 MBP or a Mac Studio is a lot different than for the Macbook Air.

          For example, I think the MacBook Air is probably the best money I ever spent on a computer in terms of value.
          My MacBook Pro? Eh, not so much. It's great, but the fact is, unlike the MacBook Air, the Pro... does have legitimate competition in its price class. Competition that does some important things better than the Mac.

          The Air, though? Nothing competes with the Air. Nothing comes close to co

    • CPU and GPU both. It is no surprise that it cannot beat a proper external GPU. The really impressive thing is the CPU, not the GPU, anyway. The only thing remarkable on the GPU is that they got this much power into an integrated one, but it still is an integrated GPU with the limits that implies. Eventually those limits may go away, but that will still take time. The only ones set up to do really high performance integrated GPUs are Apple and AMD though. Intel seems to have another underperformer on their hands with their current GPU and Nvidia cannot do high-performance CPUs.

      Just think what Apple will do when they break out their GPUs into their own package, as hinted-to about three years ago.

      • by gweihir ( 88907 )

        Just think what Apple will do when they break out their GPUs into their own package, as hinted-to about three years ago.

        I have stopped doing that. All CPU/GPU makers lie. Some more (Intel, Apple), some less (AMD), some vary (Nvidia). I believe statements when I have several independent benchmarks based on real hardware, not before.

        • Just think what Apple will do when they break out their GPUs into their own package, as hinted-to about three years ago.

          I have stopped doing that. All CPU/GPU makers lie. Some more (Intel, Apple), some less (AMD), some vary (Nvidia). I believe statements when I have several independent benchmarks based on real hardware, not before.

          But real benchmarks do not include benchmarks or applications running under Rosetta2, unless specifically benchmarking Rosetta2 itself.

          So, e.g., using Shadow of the Tomb Raider to "benchmark" Apple Silicon is simply asinine at this point, almost two years after the introduction of Apple Silicon based Macs.

    • CPU and GPU both. It is no surprise that it cannot beat a proper external GPU.

      Of course not, that didn't stop Apple claiming it was on par with a 3090 at far less power draw though. They did the same thing when they lied about the performance of the M1 Max GPU being on par with a 3080 Ti mobile. Yes it's a great SoC but why lie and tell people it's better than it is?

  • by leonbev ( 111395 ) on Saturday March 19, 2022 @10:00PM (#62373245) Journal

    We finally have an integrated GPU that can play AAA titles like Shadow Of The Tomb Raider at 4K and 30fps. We've never had an Intel or AMD processor come close to those results before.

    I'm curious what Apple can pull off in their next-generation processors! Maybe they'll finally be able to give higher-end discrete GPUs a run for their money.

    • They've been dragging their feet but they're finally going to have RDNA2 integrated graphics. Nothing Earth shattering by discrete GPU standards but it should hang with a decent RX 570, maybe even outperform it a bit.

      I don't think tech is what's been holding back integrated graphics; I think it's that Intel isn't that interested in it, and AMD sells discrete GPUs and doesn't want to cannibalize that market. I suspect AMD is finally relenting and releasing some decent integrated GPUs because of how hard i
      • Comment removed based on user account deletion
        • Laptop graphics need a boost.

          Then the laws of physics need some revisions.

          Oh, wait. . .

          • Comment removed based on user account deletion
            • Yes, it's a shame we need the laws of physics changed so AMD can do what Apple has done in this article, produce an integrated graphics system that's significantly faster than the one they put out now without using up much more power or putting out much more heat.

              What specific parts are you talking about?

      • RDNA2 will help a bit, so will AM5 with DDR5.

        AMD APUs today are mostly not limited by graphics processing power. They are limited by memory bandwidth.

        And that is where the M1 really shines: The M1 Max at 400 GB/s with 16 channels, the M1 Ultra at 800 GB/s with 32 channels. Compare that to current AMD APUs on AM4. Even with fast DDR4-3200 RAM, the two memory channels get you just 50 GB/s.
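
        (Back-of-envelope arithmetic behind those figures; the bus widths here, 2 x 64-bit DDR4-3200 channels on AM4 and 512-/1024-bit LPDDR5-6400 on the M1 Max/Ultra, are my assumptions rather than anything stated in the post:)

        # Peak-bandwidth arithmetic: transfer rate (MT/s) times bus width (bytes).
        GB = 1e9

        am4_dual_ddr4 = 2 * 3200e6 * (64 // 8)       # ~51.2 GB/s
        m1_max_lpddr5 = 6400e6 * (512 // 8)          # ~409.6 GB/s
        m1_ultra_lpddr5 = 6400e6 * (1024 // 8)       # ~819.2 GB/s

        print(f"{am4_dual_ddr4 / GB:.1f} / {m1_max_lpddr5 / GB:.1f} / {m1_ultra_lpddr5 / GB:.1f} GB/s")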

        So, no, AMD APUs won't be able to compete with Apple. And that won't change anytime soon.

        On the other hand, the M1 is in

      • I don't think tech is what's been holding back integrated graphics; I think it's that Intel isn't that interested in it

        Seemingly but why? NVIDIA (the company) is now worth over 3 times as much as Intel, and Intel couldn't be bothered to put up a fight because crypto and gaming just aren't their bag?

    • Technically, that already existed. But it's hard to get your hands on them.
      I think at this point, the only thing you'll find them in are Xboxes, Playstations, and the Steam Deck.
    • We finally have an integrated GPU that can play AAA titles like Shadow Of The Tomb Raider at 4K and 30fps. We've never had an Intel or AMD processor come close to those results before.

      That's because we can put in higher-performing external GPUs for half the price.

    • We finally have an integrated GPU that can play AAA titles like Shadow Of The Tomb Raider at 4K and 30fps. We've never had an Intel or AMD processor come close to those results before.

      I'm curious what Apple can pull off in their next-generation processors! Maybe they'll finally be able to give higher-end discrete GPUs a run for their money.

      Is Shadow of the Tomb Raider even M1 and Metal Native?

      If not, why does everyone use it as some sort of de facto Benchmark for Apple Silicon Macs?

      • Is Shadow of the Tomb Raider even M1 and Metal Native?

        No to AS native, yes to Metal.

        If not, why does everyone use it as some sort of de facto Benchmark for Apple Silicon Macs?

        Presumably because it was one of the games Apple showcased when they first announced Apple Silicon.

        • Is Shadow of the Tomb Raider even M1 and Metal Native?

          No to AS native, yes to Metal.

          If not, why does everyone use it as some sort of de facto Benchmark for Apple Silicon Macs?

          Presumably because it was one of the games Apple showcased when they first announced Apple Silicon.

          So, it's a testament to Rosetta2, which is fine.

          But at this point, surely to Bob there has to be an actual Apple Silicon Optimized AAA game title to use as a benchmark!

    • We finally have an integrated GPU that can play AAA titles like Shadow Of The Tomb Raider at 4K and 30fps. We've never had an Intel or AMD processor come close to those results before.

      I'm curious what Apple can pull off in their next-generation processors! Maybe they'll finally be able to give higher-end discrete GPUs a run for their money.

      Actually, AFAICT, SotTR is running under Rosetta2; so who knows what is really possible with Apple Silicon Native (and Metal Optimized) AAA games?

  • Can't upgrade the GPU and memory without changing the whole system. On one hand, a dream for Apple and a nightmare for other manufacturers that can't provide their own graphics cards. On the other hand, I am more likely to upgrade the GPU after 3 years while keeping the same CPU; in the next cycle I would upgrade everything but the GPU. Also, can Apple compete with CPU/GPU/memory producers and innovate fast enough?
  • The whole chip design bit is on what, its second hardware revision? Nvidia and AMD have both had the benefit of learning from prior designs, augmenting their graphics architecture design along the way.

    The benchmarks make it look like Apple did a good job but nothing beats design experience. They may catch up quickly if they invest heavily in this field. Maybe next time?

  • Tom's is the worst PC site out there. Their (short) review of the Mac Studio goes out of its way to prove that. In a nutshell, the Mac Studio with the M1 Ultra is a small, quiet, efficient, and powerful workstation that can, if need be, play back up to 18 streams of 8K ProRes 422 video. It is not a gaming PC. If your workflow can take advantage of what it offers, and you value low power draw, low heat, and low noise, then I doubt there is a PC out there that can match it. For the majority of Mac Studio custome
  • by bb_matt ( 5705262 ) on Sunday March 20, 2022 @12:40AM (#62373409)

    It's not the best review.

    Despite knowing that anyone shelling out this kind of cash on a Mac Studio M1 Ultra isn't in it for gaming, Tom's Guide forges ahead by ... comparing gaming benchmarks. The entire conclusion of the article is based on this.

    Sure, Apple asked for it - with their usual sketchy benchmarks, they claimed it outperforms the highest-end discrete GPU - but look at the title on that slide: the comparison is based on power consumption. Yes, it's somewhat misleading, but this is Apple with their ... imaginative ... marketing expertise.

    The article doesn't touch on power consumption at all, but it does mention "whisper quiet" - and clearly, this is a result of low power consumption.

    It really is trying to compare two entirely different use cases - gaming vs. studio production.

    You could absolutely get a top of the line PC with a high-end monitor for the same price, but you'll be looking at a beast of a machine that draws 200W more and is going to be noisy by comparison.
    It will also be Windows based, and whilst Windows is absolutely used in studio environments, macOS still remains the dominant OS in these sectors.

    If you come at this as a gamer or a computer enthusiast, the deal looks terrible - it's a ridiculous looking price.
    If you come at this as a professional producer of video, audio or 3D rendering and prefer macOS, because it suits your workflow, the deal looks great.
    It's a small form factor, super quiet, incredibly capable package that will just slide right into your production environment and increase your output dramatically, which ultimately saves money.

    Could a studio professional use a powerful PC instead? - sure they could, but that requires a complete workflow shift. A hackintosh isn't going to cut it, because you want 100% reliability. You would have to shift over to Windows and possibly change the software you use.

    It's a bogus article - I would've been far more impressed if they compared like for like in the studio professional market - a deep dive into what a computer like this will actually be used for, because unless you have more money than sense, it isn't going to be for PC gaming...

    Am I a mac fanboi? - no, not really.
    I use mac for my job and for home use - macOS is my preferred OS.
    But I also have a Linux based gaming rig.
    I'm agnostic, except when it comes to Windows, which I now avoid at all costs.

    • "I'm not a Mac fanboi!!!"

      Proceeds to blather on endlessly about Apple's superiority and how Windows is the work of the devil.

      Sure, Jan. Sure.

      • LOL.

        Nice.

        A mac fanboi is someone who sees absolutely no wrong with Apple products and that is all they will use.
        I see plenty wrong with Apple and their products. I wouldn't touch an iPhone with a barge pole - I just hate the "walled garden" and don't like iOS. I use an android phone.

        I used Windows from 3.11 right through to Windows 7 as my primary OS. 17 years of it.
        I continued to use it for gaming until recently, but hated what it had become. To me, win 2k was the high point, a damn capable OS. Everything

    • > If you come at this as a professional producer of video, audio or 3D rendering and prefer macOS, because it suits your workflow, the deal looks great.

      You know where 3D rendering is at these days? GPU renderers like Redshift or professional ones like RenderMan XPU (which uses both CPU+GPU). GPU benchmarks absolutely matter in the professional studio space.
    • The M1 isn't a GPU. That should be the basis of the counter-argument to this article. Given that, the performance is impressive to say the least. Imagine what Apple would come up with if they decided to make a dedicated GPU. Full disclosure: I use all three major platforms and the Mac is my favorite. I also don't own an Apple silicon machine other than my iPhone.

  • the M1 Mac Studios are *available*; the same thing can't be said for most Nvidia and AMD GPUs...

  • As much as you hate Apple, if they can throw billions to the best hardware and software devs, something will eventually happen.

  • Sooo...

    Mediocre hardware for premium price. Peak Apple for sure.

    Even with 2 CPUs bolted together it still can't beat a single CPU Intel or AMD chip. Surprise, surprise, surprise.

    • by Pieroxy ( 222434 )

      Mediocre hardware for premium price. Peak Apple for sure.

      Even with 2 CPUs bolted together it still can't beat a single CPU Intel or AMD chip. Surprise, surprise, surprise.

      What? Producing comparable performance at 1/4 the power consumption is not "beating"?

      The only thing worse than fanbois are haters.

        What? Producing comparable performance at 1/4 the power consumption is not "beating"?

        The only thing worse than fanbois are haters.

        Correct. It is not. Now if Apple was crowing about how this frankenchip leisurely sipped power compared to the big boys, then *maybe* you'd have a point. But they don't. They falsely and blatantly claim it's a powerhouse chip that smokes the competition. It isn't. It isn't even close.

        The only thing worse than fanbois are apologists.

        • by Pieroxy ( 222434 )

          Sure. Nothing to do with what I was answering to. But don't let that stop you from posting off topic rants all over the place.

          • It was a direct response to your off-topic and tangential reply.

            • by Pieroxy ( 222434 )

              Context is everything. My comment was a response to another comment, not to be taken at face value without context. As such, my comment was not offtopic and certainly not tangential. And Apple's marketing claims were not involved in the comment I was replying to, just sheer performance. Hence, your comment being offtopic.

  • Because no one will ever buy this Mac Studio setup for gaming. I think they'll be used for all kinds of video, audio and photographic editing and production, with the occasional "Angry Birds" or solitaire. The Apple fanboys who buy these probably also have PlayStations in their lounge rooms.
  • The good thing is I don't use my laptop to run benchmarks.
  • Remember that Metal does not see the same kinds of optimisations as Direct3D and Vulkan. It delivers impressive performance on an iPad yet falls short of what the beefier hardware should be able to offer on paper. I suspect when Mx processors become the norm in iDevices that we will see a lot more work in the maximum throughput, rather than the resource conservation department
  • by kopecn ( 1962014 )
    System on chip. Lots of these generic tests rarely leverage the hardware acceleration that SoCs provide. For instance, can we confirm that these tests use things like SIMD? Or is just the bare CPU being tested? Keep in mind a lot of these programs were written for the x86 architecture. Something native coming out of Xcode may prove to be far better.
  • This would be an absolute non-story if it wasn't for Apple's history of ridiculous marketing claims.

    In this case, it's specifically this graph [macrumors.com].

    It's the complete lack of units on the Y axis that lets you know it's an Apple graph. Gotta have those clean designs on your graph, can't have any data getting in the way.

    "Relative performance" in this case is being defined any damn way Apple wants to define it, which means it's certainly not being defined as the rest of the world might define it. Notice there's noth
