First Benchmark Results Surface For M3 Chips In New Macs (macrumors.com)
Joe Rossignol reports via MacRumors: The first benchmark results for the standard M3 chip surfaced in the Geekbench 6 database today, providing a closer look at the chip's CPU performance improvements. Based on the results so far, the M3 chip has single-core and multi-core scores of around 3,000 and 11,700, respectively. The standard M2 chip has single-core and multi-core scores of around 2,600 and 9,700, respectively, so the M3 chip is up to 20% faster than the M2 chip, as Apple claimed during its "Scary Fast" event on Monday.
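For anyone who wants to check the math, here's a quick sketch using the approximate scores quoted above (round numbers, not exact results):

```c
#include <stdio.h>

/* Back-of-envelope check of the uplift claims, using the approximate
 * Geekbench 6 scores quoted in the summary. */
int main(void) {
    double m2_single = 2600.0, m3_single = 3000.0;
    double m2_multi  = 9700.0, m3_multi  = 11700.0;

    printf("single-core uplift: %.1f%%\n",
           (m3_single / m2_single - 1.0) * 100.0);  /* ~15.4% */
    printf("multi-core uplift:  %.1f%%\n",
           (m3_multi / m2_multi - 1.0) * 100.0);    /* ~20.6% */
    return 0;
}
```

The multi-core figure is what lines up with Apple's "up to 20%" claim; the single-core gain is closer to 15%.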
It's unclear if the results are for the new 14-inch MacBook Pro or iMac, both of which are available with the standard M3 chip, but performance should be similar for both machines. The results have a "Mac15,3" identifier, which Bloomberg's Mark Gurman previously reported was for a laptop with the same display resolution as a 14-inch MacBook Pro. We have yet to see any Geekbench results for the higher-end M3 Pro and M3 Max chips available in most new 14-inch and 16-inch MacBook Pro models.
Lower Memory Bandwidth than prev gens (Score:2)
Same site says the chip has lower memory bandwidth https://www.macrumors.com/2023... [macrumors.com]
Re:Lower Memory Bandwidth than prev gens (Score:5, Insightful)
It had more than was reasonable.
You'd be very hard pressed to hit it, ever, outside of a synthetic benchmark.
I've got an M1 Max with 400GB/s, and it's ridonkulous. Heavily GPU-intensive applications often didn't use more than 50GB/s.
If I were Apple and I had to decide where to free up some die space for more processing elements, it'd be the silly wide memory bus.
The "memory bandwidth" was always a fucking silly marketing point.
An AMD 7950X3D has ~80GB/s.
What's the point of infinite bus bandwidth when your CPU can only push so much data per SIMD instruction?
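One way to put rough numbers on "only so much data per SIMD instruction": peak issue-side load bandwidth is vector width times loads issued per cycle times clock. A minimal sketch with assumed, illustrative figures (not measured Apple or AMD numbers):

```c
#include <stdio.h>

/* Rough model of one core's issue-side load bandwidth. All three
 * inputs are illustrative assumptions, not vendor specifications. */
int main(void) {
    double vector_bytes    = 16.0;  /* one 128-bit SIMD load */
    double loads_per_cycle = 2.0;   /* assumed issue rate    */
    double clock_ghz       = 3.2;   /* assumed clock         */

    /* bytes/load x loads/cycle x 1e9 cycles/s, expressed in GB/s */
    double per_core_gbps = vector_bytes * loads_per_cycle * clock_ghz;
    printf("peak issue-side load bandwidth: ~%.0f GB/s per core\n",
           per_core_gbps);  /* ~102 GB/s with these assumptions */
    return 0;
}
```

Issue-side peaks like this usually exceed what the memory subsystem will actually sustain, so the practical ceiling tends to be the fabric, not the SIMD width alone.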
Re:Lower Memory Bandwidth than prev gens (Score:4, Insightful)
What's the point of infinite bus bandwidth when your CPU can only push so much data per SIMD instruction?
Well if SIMD is not making good use of bandwidth the real takeaway is that it's time to write code to run on the GPU. :-)
Re: (Score:3)
Though I wouldn't characterize it as SIMD not making good use of the bandwidth.
A logic-less vector memory mover can hit 200GB/s, which is approximately the throughput of 2.5 entire consumer PC CPUs.
It simply can't do the full 400GB/s that the memory subsystem is capable of. The GPU can't even do it, it can only do about 300GB/s.
But together, they can hit 400GB/s, but only when crunching ridiculously parallel workloads.
For most people, that bandwidth is just wasted silicon.
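For a feel of how hard these numbers are to hit, here's a minimal single-threaded STREAM-style sketch (assumes a POSIX system; one thread will typically land well below a multi-core figure like the 200GB/s above):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Minimal STREAM-style copy benchmark: time a memcpy over buffers far
 * larger than any cache. A sketch only; a serious measurement would
 * use multiple threads, repeat many trials, and pin cores. */
int main(void) {
    size_t bytes = (size_t)1 << 30;  /* 1 GiB per buffer */
    char *src = malloc(bytes);
    char *dst = malloc(bytes);
    if (!src || !dst) return 1;
    memset(src, 1, bytes);           /* fault the pages in first */
    memset(dst, 0, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* A copy reads and writes every byte, so count 2x the buffer size. */
    printf("~%.1f GB/s\n", 2.0 * (double)bytes / secs / 1e9);
    free(src);
    free(dst);
    return 0;
}
```

Compilers usually turn this memcpy into wide vector loads and stores, so it's a fair stand-in for the "logic-less vector memory mover" described above.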
Re:Lower Memory Bandwidth than prev gens (Score:4, Insightful)
Comparing to an AMD CPU is pointless. Apple's chip shares its memory bandwidth between the CPU and the GPU.
So you need to compare the combined CPU+GPU bandwidth to get a fair comparison. The Nvidia RTX 4080 GPU has 716 GB/s of memory bandwidth, and the Radeon RX 7900 XTX has 960 GB/s.
Re: (Score:2)
AMD chips share memory bandwidth between their CPU and their GPU as well, they just do it over a much slower bus.
So to make it fair, we need to add a discrete into the equation?
Wrong.
Further, the comparison is apt, because the CPU block alone on "Apple's chip" is capable of 200GB/s.
Shame on every dolt that upmodded you.
Re: (Score:2)
The reason Apple's chip has high memory bandwidth is that it has a built-in GPU.
Comparing to a discrete GPU puts things in perspective. The 7950X3D is targeting gamers and almost nobody uses it without a discrete GPU. And those who do are probably doing word/excel/coding/whatever and couldn't care less about GPU performance.
Re: (Score:2)
The reason Apple's chip has high memory bandwidth is that it has a built-in GPU.
Wrong.
Comparing to a discrete GPU puts things in perspective. The 7950X3D is targeting gamers and almost nobody uses it without a discrete GPU. And those who do are probably doing word/excel/coding/whatever and couldn't care less about GPU performance.
Wrong.
You literally have no idea what the fuck you're talking about.
Apple's chip, minus the GPU block, has 200GB/s (for the 8 perf-core block).
Roughly 2.5 7950X3Ds.
Using the internal bandwidth of a discrete GPU makes less than no sense. That GPU can only transfer at most 32GB/s across its PCIe bus between the CPU and the GPU.
If we had been comparing GPU bandwidth, then it'd be fair to compare the discrete's bandwidth against the 300GB/s of the 32-core GPU in an M1 Max, and show that it comes up w
Re: (Score:2)
Apple's chip, minus the GPU block, has 200GB/s (for the 8 perf-core block).
Is it 200 GB/s dedicated to the CPU, or 400 GB/s shared between the GPU and CPU, with at most 200 GB/s that can be used by the CPU at any given time?
Using the internal bandwidth of a discrete GPU makes less than no sense. That GPU can only transfer at most 32GB/s across its PCIe bus between the CPU and the GPU.
Wrong. We were not talking about CPU-GPU bandwidth but memory bandwidth. Computers typically have dedicated RAM for the CPU and the GPU, each with its own bandwidth. Apple combines both, and therefore needs a high shared bandwidth to be competitive; the main reason is that GPUs usually need higher memory bandwidth than CPUs.
Re: (Score:2)
Is it 200 GB/s dedicated to the CPU, or 400 GB/s shared between the GPU and CPU, with at most 200 GB/s that can be used by the CPU at any given time?
There is none dedicated to the CPU or the GPU.
The CPU and GPU are both independently wired into the root complex.
The root complex can serve a maximum of 400GB/s of requests to main memory.
The CPU, at full vectorized load, can issue 200GB/s of requests to main memory.
The GPU, at full load, can issue 300GB/s of requests to main memory.
The bandwidth of the CPU block does not exist to feed the GPU. The GPU does not get fed. It fetches its own memory from the root complex. It isn't ferried over a slow bus l
Re: (Score:2)
The reason discretes need dedicated RAM is that the bus between the CPU and the GPU is very slow. So slow that the high performance of the GPU cores would be pointless if the GPU had to fetch from main RAM directly and could not cache data in its own fast local RAM.
The bottleneck is 32GB/s for PCIe4x16.
It's actually the other way around. The 32 GB/s of PCIe 4 x16 is more than fast enough given the high bandwidth between the GPU and VRAM.
The GPU needs to transfer a lot more data to VRAM than to the CPU in real-world workloads such as games and CAD. Halving the PCIe link typically does not reduce performance much, unlike halving the VRAM clock speed.
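To put a rough number on that imbalance, using the figures quoted elsewhere in this thread (PCIe 4.0 x16 at ~32GB/s, RTX 4080 VRAM at 716GB/s):

```c
#include <stdio.h>

/* Ratio of VRAM bandwidth to PCIe link bandwidth; both figures are
 * the ones quoted in this thread, not independent measurements. */
int main(void) {
    double pcie_gbps = 32.0;   /* PCIe 4.0 x16, one direction */
    double vram_gbps = 716.0;  /* RTX 4080 memory bandwidth   */
    printf("VRAM is ~%.0fx faster than the CPU-GPU link\n",
           vram_gbps / pcie_gbps);  /* ~22x */
    return 0;
}
```

That roughly 22x gap is why halving the link hurts far less than halving the VRAM clock.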
That is because it's a consumer chip. HEDT chips, like Threadripper Pros, have bandwidth of ~150GB/s, much more comparable to the 200GB/s of the CPU block on an M1 (Pro/Max), or the 150GB/s of the CPU block on an M3 Pro.
That's not directly comparable either. The 7950X3D CPU can always achieve its 80 GB/s transfer rate to RAM, at least when paired with a discrete GPU and the iGPU is
Re: (Score:2)
It's actually the other way around. The 32 GB/s of PCIe 4 x16 is more than fast enough given the high bandwidth between the GPU and VRAM.
The GPU needs to transfer a lot more data to VRAM than to the CPU in real-world workloads such as games and CAD. Halving the PCIe link typically does not reduce performance much, unlike halving the VRAM clock speed.
You're confused.
You literally agreed with what I said.
Discretes need VRAM because the bus to the root complex is slow.
Well, that's only mostly accurate. The other reason they need it is that L1 and L2 caches can't possibly be large enough to be helpful, given the number of processing elements a GPU has. So rather than adding 4GB of cache, they use very fast main RAM and small amounts of cache.
That's not directly comparable either.
Yes, it is.
The 7950X3D CPU can always achieve its 80 GB/s transfer rate to RAM, at least when paired with a discrete GPU and the iGPU is not used, because that bandwidth is not shared with a GPU.
Incorrect.
The root complex communicates with main memory, as well as the root PCIe switch
Re: (Score:2)
Same site says the chip has lower memory bandwidth https://www.macrumors.com/2023... [macrumors.com]
So does this completely untrustworthy rumour mill: https://apple.slashdot.org/sto... [slashdot.org]
Re:Lower Memory Bandwidth than prev gens (Score:5, Interesting)
No, the M3 Pro is what got a reduction in memory bandwidth. These benchmarks are about the base M3 chip, not the M3 Pro or M3 Max. The M3 has as much memory bandwidth as the M2 (100GB/s). The M3 Pro, though, has 150GB/s, 75% of the M2 Pro's 200GB/s, for reasons that Apple has not shared. Maybe the Pro doesn't have enough GPU cores to justify the extra bandwidth. Maybe Apple wanted more distinction between the Pro and the Max. We can only speculate.
Re: (Score:2)
How's that tinfoil hat feeling? A bit scratchy? Apple is the least of your concerns when it comes to privacy.
Re: (Score:1)
I know because I don't use them. Otherwise, wrong. As usual.
Re: (Score:2)
I know because I don't use them. Otherwise, wrong. As usual.
So WTF are you doing bloviating about the M3?
Begone, Troll!
Re: (Score:2)
Trying to inform the masses that Apple is a fuckstick company betraying the trust of their customers. Name calling and crying won't change any of those facts. Come visit me under the bridge.
Re: (Score:2)
Trying to inform the masses that Apple is a fuckstick company betraying the trust of their customers. Name calling and crying won't change any of those facts. Come visit me under the bridge.
Just like the "Biden Crime Family" allegations:
Prove it, or GTFO.
Re: (Score:2)
I guess you're a flat-earther too?
Re: (Score:2)
I guess you're a flat-earther too?
WTF?
Re: (Score:2)
out of curiosity- what type of smartphone are you using?
Re: (Score:2)
A piece of shit Motorola with Android. No one cares what the paupers do.
Re: (Score:2)
No one cares what the paupers do.
Heh, I'm glad you see the puddle you stepped in.
Re: (Score:2)
Born in the puddle. It's unavoidable. But if you are implying Android is somehow on the level of Apple, you, ma'am, are way off course and must re-adjust your oars and keel.
Re: Spec Missing (Score:2)
Heh. I don't know why you'd think apple is hoovering up more data than google.
Jetson (Score:3)
You just described the nVidia Jetson.
Re: (Score:2)
The Jetson line doesn't have upgradeable RAM. If you want 64 GB of RAM, your cheapest current-gen option is the Jetson AGX 64 GB, with 12 ARM cores and 64 GB of eMMC flash for $1800 (or the development kit version for $2000). Nvidia isn't great about long-term OS updates for the Jetson line, either. That combination is not a compelling value proposition for a lot of people.
Re: (Score:2)
While people do not like soldered RAM, for small form factor computers like the Jetson, it makes logistical and practical sense.
Oh, so now you find that soldered RAM is "logistical and practical"?
Got it!
Color (Score:4, Funny)
Who buys a mac for the performance? All I want to know is what color it will come in!
Win x2 - New color and higher performance. (Score:2)
Who buys a mac for the performance? All I want to know is what color it will come in!
You are in luck: they are introducing a "space black", which does look better than the current "space gray" or silver. So it's a double win: color and performance.
Re: (Score:2)
Generally, they know how to read, so that disqualifies a great number of PC owners.
I will be happy when I upgrade from Intel iMac (Score:2)
Re: (Score:2)
what do you need an all-in-one for, exactly?
Re: I will be happy when I upgrade from Intel iMac (Score:3)
Presumably, being neat and tidy. Some people don't like bits of computer spread everywhere. Especially when it's sat in the corner of their dining room or something.
Re: (Score:2)
yeah, maybe. I just often think that most people with iMacs would be better with a standalone desktop + monitor. Especially if they complain about monitor size or lack of choice.
Re: (Score:2)
Get a standalone computer and a 32" monitor. You won't regret it.
Re: (Score:2)
I have already been considering that option. I will still want at least a 27" monitor (because I can afford it) and a standalone Mac with added RAM.
what do you need an all-in-one for, exactly? No extra wiring, and the iMac's slim profile means it doesn't take up as much room as a combo.
I can't imagine, given that the space under/behind a monitor is wasted anyway, that a Mac mini or Studio (depending on your compute needs) plus a monitor takes up any more space than an iMac.
And I have an old 2011 iMac I use for a security and media server. But I bought it off eBay with a small crack in the front glass for $150; so that's kind of a different thing. But if it were something I was actively and personally using, the flexibility of being able to upgrade the computer separate from the monitor
Re: (Score:2)
Looks like Apple saw your post and unfortunately responded with, "don't hold your breath":
https://apple.slashdot.org/sto... [slashdot.org]
I know your pain though. I used a maxed-out 2009 27" iMac until last year when I finally upgraded to an M2-based Mac. It was definitely time.