First Benchmark Results Surface For M3 Chips In New Macs (macrumors.com)

Joe Rossignol reports via MacRumors: The first benchmark results for the standard M3 chip surfaced in the Geekbench 6 database today, providing a closer look at the chip's CPU performance improvements. Based on the results so far, the M3 chip has single-core and multi-core scores of around 3,000 and 11,700, respectively. The standard M2 chip has single-core and multi-core scores of around 2,600 and 9,700, respectively, so the M3 chip is up to 20% faster than the M2 chip, as Apple claimed during its "Scary Fast" event on Monday.

It's unclear if the results are for the new 14-inch MacBook Pro or iMac, both of which are available with the standard M3 chip, but performance should be similar for both machines. The results have a "Mac15,3" identifier, which Bloomberg's Mark Gurman previously reported was for a laptop with the same display resolution as a 14-inch MacBook Pro. We have yet to see any Geekbench results for the higher-end M3 Pro and M3 Max chips available in most new 14-inch and 16-inch MacBook Pro models.

  • Same site says the chip has lower memory bandwidth https://www.macrumors.com/2023... [macrumors.com]

    • by DamnOregonian ( 963763 ) on Wednesday November 01, 2023 @07:25PM (#63972830)
      I wouldn't worry about it.
      It had more than was reasonable.
      You'd be very hard pressed to hit it, ever, outside of a synthetic benchmark.

      I've got an M1 Max with 400GB/s, and it's ridonkulous. Even heavily GPU-intensive applications often didn't use more than 50GB/s.
      If I were Apple and I had to decide where to free up some die space for more processing elements, it'd be the silly wide memory bus.
      The "memory bandwidth" was always a fucking silly marketing point.

      An AMD 7950X3D has ~80GB/s.
      What's the point of infinite bus bandwidth when your CPU can only push so much data per SIMD instruction?
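The claim that real workloads rarely come near the bus limit is easy to sanity-check. A minimal sketch in Python, using a plain buffer copy as a stand-in for a STREAM-style benchmark (the 2x factor assumes each copy reads one buffer and writes the other; a serious benchmark would control for caches, threads, and NUMA far more carefully):

```python
import time

N = 64 * 1024 * 1024             # 64 MiB buffer, far larger than any CPU cache
src = bytes(N)
dst = bytearray(N)

reps = 4
start = time.perf_counter()
for _ in range(reps):
    dst[:] = src                 # a straight memcpy under the hood
elapsed = time.perf_counter() - start

gb_moved = 2 * N * reps / 1e9    # each copy reads N bytes and writes N bytes
print(f"~{gb_moved / elapsed:.1f} GB/s effective copy bandwidth")
```

A single-threaded copy like this typically lands far below the advertised peak, which is the point: hitting the full figure takes every core issuing wide vector loads at once.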
      • by drnb ( 2434720 ) on Thursday November 02, 2023 @12:16AM (#63973164)

        What's the point of infinite bus bandwidth when your CPU can only push so much data per SIMD instruction?

        Well if SIMD is not making good use of bandwidth the real takeaway is that it's time to write code to run on the GPU. :-)

        • Absolutely.

          Though I wouldn't characterize it as SIMD not making good use of the bandwidth.
          A logic-less vector memory mover can hit 200GB/s, which is approximately the throughput of 2.5 entire consumer PC CPUs.
          It simply can't do the full 400GB/s that the memory subsystem is capable of. The GPU can't even do it; it can only do about 300GB/s.
          Together, though, they can hit 400GB/s, but only when crunching ridiculously parallel workloads.

          For most people, that bandwidth is just wasted silicon.
      • by fred6666 ( 4718031 ) on Thursday November 02, 2023 @07:22AM (#63973626)

        Comparing to an AMD CPU is pointless. Apple's chip shares its memory bandwidth between the CPU and the GPU,
        so you need to consider CPU+GPU together to get a fair comparison. The Nvidia RTX 4080 GPU has 716 GB/s of memory bandwidth, and the Radeon RX 7900 XTX has 960 GB/s.

        • That logic is so broken that I'm worried I gave myself brain cancer trying to parse it.

          AMD chips share memory bandwidth between their CPU and their GPU as well, they just do it over a much slower bus.
          So to make it fair, we need to add a discrete into the equation?
          Wrong.

          Further, the comparison is apt, because the CPU block alone on "Apple's chip" is capable of 200GB/s.

          Shame on every dolt that upmodded you.
          • The reason why Apple's chip has a high memory bandwidth is because it has a built-in GPU.
            Comparing to a discrete GPU puts things in perspective. The 7950X3D is targeting gamers, and almost nobody uses it without a discrete GPU. And those who do are probably doing word/excel/coding/whatever and couldn't care less about GPU performance.

            • The reason why Apple's chip has a high memory bandwidth is because it has a built-in GPU.

              Wrong.

              Comparing to a discrete GPU puts things in perspective. The 7950X3D is targeting gamers, and almost nobody uses it without a discrete GPU. And those who do are probably doing word/excel/coding/whatever and couldn't care less about GPU performance.

              Wrong.
              You literally have no idea what the fuck you're talking about.
              Apple's chip, minus the GPU block, has 200GB/s (for the 8-perf-core block).
              Roughly 2.5 7950X3Ds.
              Using the internal bandwidth of a discrete GPU makes less than no sense. That GPU can only transfer at most 32GB/s across its PCIe bus between the CPU and the GPU.

              If we had been comparing GPU bandwidth, then it'd be fair to compare the discrete's bandwidth against the 300GB/s of the 32-core GPU in an M1 Max, and show that it comes up w

              • Apple's chip, minus the GPU block has 200GB/s (for the 8 perf-core block)

                Is it 200 GB/s dedicated to the CPU, or 400 GB/s shared between the GPU and CPU, with at most 200 GB/s that can be used by the CPU at any given time?

                Using the internal bandwidth of a discrete GPU makes less than no sense. That GPU can only transfer at most 32GB/s across its PCIe bus between the CPU and the GPU.

                Wrong. We were not talking about CPU-GPU bandwidth but memory bandwidth. Computers typically have dedicated RAM for the CPU and GPU, each with its own bandwidth. Apple combines both, and therefore needs a high shared bandwidth to be competitive, mainly because GPUs usually need higher memory bandwidth than CPUs.

                • Is it 200 GB/s dedicated to the CPU, or 400 GB/s shared between the GPU and CPU, with at most 200 GB/s that can be used by the CPU at any given time?

                  There is none dedicated to the CPU or the GPU.
                  The CPU and GPU are both independently wired into the root complex.
                  The root complex can serve a maximum of 400GB/s of requests to main memory.
                  The CPU, at full vectorized load, can issue 200GB/s of requests to main memory.
                  The GPU, at full load, can issue 300GB/s of requests to main memory.
                  The bandwidth of the CPU block does not exist to feed the GPU. The GPU does not get fed. It fetches its own memory from the root complex. It isn't ferried over a slow bus l
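The topology described here can be sketched as a toy arbitration model; the per-block ceilings are just the M1 Max figures quoted in this thread, taken as given rather than measured:

```python
# Toy model: the CPU and GPU blocks each have their own link ceiling into
# the root complex, and the root complex caps total traffic to main memory.
CPU_LINK_GBPS = 200   # max the CPU block can issue
GPU_LINK_GBPS = 300   # max the GPU block can issue
ROOT_CAP_GBPS = 400   # max the root complex serves to DRAM

def served_bandwidth(cpu_demand: float, gpu_demand: float) -> float:
    """Total GB/s actually served, given each block's demand."""
    cpu = min(cpu_demand, CPU_LINK_GBPS)
    gpu = min(gpu_demand, GPU_LINK_GBPS)
    return min(cpu + gpu, ROOT_CAP_GBPS)

print(served_bandwidth(1000, 0))     # 200.0 -- CPU alone tops out at its link
print(served_bandwidth(0, 1000))     # 300.0 -- GPU alone tops out at its link
print(served_bandwidth(1000, 1000))  # 400.0 -- both together hit the root cap
```

In this model neither block's ceiling "belongs" to the other; the 400GB/s headline number is only reachable when both are saturating their links at once.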

                  • The reason discretes need dedicated RAM is because the bus between the CPU and the GPU is very slow. So slow, that the high performance of the GPU cores would be pointless if the GPU had to fetch RAM directly and could not cache in its local fast RAM.

                    The bottleneck is 32GB/s for PCIe4x16.

                    It's actually the other way around. The 32 GB/s PCIe 4 x16 is more than fast enough given the high bandwidth between GPU and VRAM.
                    The GPU needs to transfer a lot more data to VRAM than to the CPU in real-world workloads such as games and CAD. Halving the PCIe link typically does not reduce performance much, unlike halving the VRAM clock speed.

                    That is because it's a consumer chip. HEDT chips, like Threadripper Pros, have bandwidth of ~150GB/s, much more comparable to the 200GB/s of the CPU block on an M1 (Pro/Max), or the 150GB/s of the CPU block on an M3 Pro.

                    That's not directly comparable either. The 7950X3D CPU can always achieve its 80 GB/s transfer rate to RAM, at least when paired with a discrete GPU and the iGPU is
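For reference, the ~32GB/s PCIe figure quoted above follows directly from the PCIe 4.0 link parameters (16 GT/s per lane, 128b/130b line coding, 16 lanes, one direction; protocol overhead beyond the line code is ignored in this sketch):

```python
# Back-of-the-envelope check of the ~32 GB/s figure for PCIe 4.0 x16.
GT_PER_SEC = 16          # PCIe 4.0 raw transfer rate per lane, in GT/s
ENCODING = 128 / 130     # 128b/130b line-code efficiency
LANES = 16

gbps_per_lane = GT_PER_SEC * ENCODING / 8  # GB/s of payload per lane
total = gbps_per_lane * LANES
print(f"~{total:.1f} GB/s per direction")  # prints "~31.5 GB/s per direction"
```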

                    • It's actually the other way around. The 32 GB/s PCIe 4 x16 is more than fast enough given the high bandwidth between GPU and VRAM.
                      The GPU needs to transfer a lot more data to VRAM than to the CPU in real-world workloads such as games and CAD. Halving the PCIe link typically does not reduce performance much, unlike halving the VRAM clock speed.

                      You're confused.
                      You literally agreed with what I said.
                      Discretes need VRAM because the bus to the root complex is slow.
                      Well, that's only mostly accurate. The other reason they need it is that L1 and L2 caches can't possibly be large enough to be helpful, given the number of processing elements a GPU has. So rather than adding 4GB of cache, they use very fast main RAM and small amounts of cache.

                      That's not directly comparable either.

                      Yes, it is.

                      The 7950X3D CPU can always achieve its 80 GB/s transfer rate to RAM, at least when paired with a discrete GPU and the iGPU is not used, because that bandwidth is not shared with a GPU.

                      Incorrect.
                      The root complex communicates with main memory, as well as the root PCIe switch

    • Same site says the chip has lower memory bandwidth https://www.macrumors.com/2023... [macrumors.com]

      So does this completely untrustworthy rumour mill: https://apple.slashdot.org/sto... [slashdot.org]

    • by Entrope ( 68843 ) on Wednesday November 01, 2023 @09:23PM (#63972974) Homepage

      No, the M3 Pro is what got a reduction in memory bandwidth. These benchmarks are about the base M3 chip, not the M3 Pro or M3 Max. The M3 has as much memory bandwidth as the M2. The M2 Pro, M2 Max and M3 Max all have the same memory bandwidth; the M3 Pro has 75% as much as those three, for reasons that Apple has not shared. Maybe the Pro doesn't have enough GPU cores to justify the extra bandwidth. Maybe Apple wanted more distinction between the Pro and the Max. We can only speculate.

  • Color (Score:4, Funny)

    by TwistedGreen ( 80055 ) on Wednesday November 01, 2023 @09:07PM (#63972954)

    Who buys a mac for the performance? All I want to know is what color it will come in!

    • Who buys a mac for the performance? All I want to know is what color it will come in!

      You are in luck: they are introducing a "space black", which does look better than the current "space gray" or silver. So it's a double win, color and performance.

    • by CAIMLAS ( 41445 )

      Generally, they know how to read, so that disqualifies a great number of PC owners.

  • I will be happy when I finally upgrade from my 2017 27" Intel iMac to something M3 or M4! Just hope it's a bigger 27" iMac... waiting patiently.
    • what do you need an all-in-one for, exactly?

      • Presumably, being neat and tidy. Some people don't like bits of computer spread everywhere, especially when it's sat in the corner of their dining room or something.

        • yeah, maybe. I just often think that most people with iMacs would be better with a standalone desktop + monitor. Especially if they complain about monitor size or lack of choice.

          • I have already been considering that option. I will still want at least a 27" monitor, because I can afford it, and a standalone Mac with added RAM.

              what do you need an all-in-one for, exactly?

            No extra wiring and a slim profile with the iMac means it doesn't take up as much room as a combo.
            • Get a standalone computer and a 32" monitor. You won't regret it.

            • I have already been considering that option. I will still want at least a 27" monitor cause I can afford it and a stand alone Mac with added RAM.

              what do you need an all-in-one for, exactly? No extra wiring and slim profile with iMac means it doesn't take up as much room vs. a combo.

              I can't imagine, given that space under/behind a monitor is wasted anyway, that a Mac mini or Studio (depending on your compute needs) plus a monitor takes up any more space than an iMac.

              And I have an old 2011 iMac I use as a security and media server. But I bought it off eBay with a small crack in the front glass for $150; so, that's kind of a different thing. But if it were something I was actually actively and personally using, the flexibility of being able to upgrade the computer separate from the monitor

    • by Equuleus42 ( 723 )

      Looks like Apple saw your post and unfortunately responded with, "don't hold your breath":
      https://apple.slashdot.org/sto... [slashdot.org]

      I know your pain though. I used a maxed-out 2009 27" iMac until last year when I finally upgraded to an M2-based Mac. It was definitely time.
