Hardware Technology

Arm Announces the Cortex X4 For 2024, Plus a 14-Core M2-Fighter (arstechnica.com) 81

Arm unveiled its upcoming flagship CPUs for 2024, including the Arm Cortex X4, Cortex A720, and Cortex A520. These chips, built on the Armv9.2 architecture, promise higher performance and improved power efficiency. Arm also introduced a new 'QARMA3 algorithm' for memory security and showcased a potential 14-core mega-chip design for high-performance laptops. Ars Technica reports: Arm claims the big Cortex X4 will have 15 percent higher performance than this year's X3, and "40 percent better power efficiency." The company also promises a 20 percent efficiency boost for the A700 series and a 22 percent efficiency boost for the A500. The new chips are all built on the new 'Armv9.2' architecture, which adds a "new QARMA3 algorithm" for Arm's Pointer Authentication memory security feature. Pointer authentication assigns a cryptographic signature to memory pointers and is meant to shut down memory corruption vulnerabilities like buffer overflows by making it harder for attackers to forge valid memory pointers. This feature has been around for a while, but Arm's new algorithm reduces the CPU overhead of all this extra memory work to just 1 percent of the chip's power, which hopefully will get more manufacturers to enable it.
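
To make the Pointer Authentication mechanism concrete: compilers already expose it as a branch-protection mode, and the sketch below shows roughly what enabling it does to an ordinary C function. The -mbranch-protection=pac-ret flag is real clang/GCC AArch64 syntax; the annotated prologue/epilogue is the typical pac-ret code shape, given here as a sketch rather than the exact output of any particular compiler version.

    /* pac.c -- a minimal look at Arm Pointer Authentication from the
     * toolchain side. Assumes an AArch64 target and a clang or GCC
     * recent enough to accept -mbranch-protection=pac-ret.
     *
     * Build: clang --target=aarch64-linux-gnu -mbranch-protection=pac-ret -O2 -S pac.c
     */
    #include <stdio.h>

    /* With pac-ret enabled, the compiler brackets this function with:
     *     paciasp   ; sign the return address in LR (key A, SP as context)
     *     ...body...
     *     autiasp   ; authenticate LR before returning; a forged pointer faults
     * A stack-smashing overwrite of the saved return address no longer yields
     * a usable pointer unless the attacker can also forge the signature that
     * QARMA (now QARMA3) packs into the pointer's unused high bits. */
    int greet(const char *name) {
        return printf("hello, %s\n", name);
    }

    int main(void) {
        return greet("world") < 0;
    }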

Arm's SoC recommendations are usually a "1+3+4" design. That's one big X chip, three medium A700 chips, and four A500 chips. This year the company is floating a new layout, though, swapping out two small chips for two medium chips, which would put you at a "1+5+2" configuration. Arm's benchmarks -- which were run on Android 13 -- claim this will get you 27 percent more performance. That's assuming anything can cool and power that for a reasonable amount of time. Arm's blog post also mentions a 1+4+4 chip -- nine cores -- for a flagship smartphone. [...]
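
As a toy illustration of the layout arithmetic (the per-core weights below are invented placeholders; Arm's 27 percent figure comes from its own Android 13 benchmark runs, not from back-of-the-envelope math like this):

    /* layout.c -- toy comparison of the "1+3+4" and "1+5+2" core layouts.
     * The BIG/MID/LITTLE throughput weights are hypothetical. */
    #include <stdio.h>

    struct layout { const char *name; int big, mid, little; };

    static double score(struct layout l) {
        const double BIG = 3.0, MID = 2.0, LITTLE = 1.0; /* made-up weights */
        return l.big * BIG + l.mid * MID + l.little * LITTLE;
    }

    int main(void) {
        struct layout a = { "1+3+4", 1, 3, 4 };
        struct layout b = { "1+5+2", 1, 5, 2 };
        printf("%s scores %.1f, %s scores %.1f (%+.1f%%)\n",
               a.name, score(a), b.name, score(b),
               (score(b) / score(a) - 1.0) * 100.0);
        return 0;
    }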

Every year with these Arm flagship chip announcements, the company also includes a wild design for a giant mega-chip that usually never gets built. Last year the company's blueprint monster was a design with eight Cortex X3 cores and four A715 cores, which the company claimed would rival an Intel Core i7. The biggest X3-based chip on the market is the Qualcomm Snapdragon 8cx Gen 3, which landed in a few Windows laptops. That was only a four-X3/four-A715 chip, though. This year's mega-chip is a 14-core monster with 10 Cortex X4 cores and four A720 cores, which Arm says is meant for "high-performance laptops." Arm calls the design the company's "most powerful cluster ever built," but will it ever actually be built? Will it ever be more than words on a page?

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Faster performance in Android, on a laptop? Wow, how soon can I experience this colossal turd sandwich?

    • by OrangeTide ( 124937 ) on Tuesday May 30, 2023 @09:51PM (#63563033) Homepage Journal

      I feel like Intel's 11th and 12th generation notebook chips (n5095 and n95) perform very well in the 10-15 Watt range that is reasonable for a small laptop. At that power dissipation the architecture doesn't matter that much. Or at least Intel has hit that niche as well as any ARM has done. And frankly it comes down to software, and I'd rather take Ubuntu or Windows over Android or Chrome when I'm in a laptop form factor.

      • Ubuntu or Windows over Android or Chrome

        All four of which run on both Arm and Intel CPUs.

      • Re: (Score:2, Troll)

        Or at least Intel has hit that niche as well as any ARM has done.

        Not true at all.

        I'll grant you that Intel has made *massive* strides on this front, and I look forward to seeing them continue to improve, but currently performance-per-watt on pick-any-Intel-you-like is going to be about a quarter, or less, of an M1's.
        You can play with some comparisons here [notebookcheck.net] if you like.
        That site is about how the M2 is basically a fart noise compared to the M1 (Apple tried to pull an Intel and call pumping more power into the same architecture an improvement) but you can use the little

        • Don't know what that site says, but M2 is more power efficient, and depending on settings can either produce the same performance with less power or more performance with more power. Your choice. Plus there are some more improvements like slightly more low-power cores and 12 GB RAM chips.
          • Don't talk out of your ass, it's not a good look.

            The M2 is merely an M1 with higher clocks, which results in higher performance, at the cost of efficiency.
            • Don't talk about "talking out of your ass" when you are the only one doing it. M2 has higher power efficiency, so you have a choice between same performance with less power or more performance using more power. Plus substantial other changes.
              • Don't talk about "talking out of your ass" when you are the only one doing it. M2 has higher power efficiency, so you have a choice between same performance with less power or more performance using more power. Plus substantial other changes.

                Are you literally copy-pastaing some fucking marketing material? lol.

                Pathetic. I gave the numbers. It's easily provable that the M2 is less efficient than the M1, because it's just an M1 clocked higher, and higher clocks invariably result in lower efficiency, a physical effect caused by the unavoidable capacitance of the transistors.

                If you thought I was arguing the M2 was less efficient than a competing Intel or AMD part, I was not. As that's just laughable.
                But the M2 is *provably* less efficient than
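
                The physics being invoked here is the standard first-order CMOS dynamic-power model. As a sketch, assuming the supply voltage has to rise roughly in proportion to frequency (an approximation, not a measurement of either chip):

                    % First-order CMOS dynamic power (textbook model, not chip-specific):
                    %   \alpha = activity factor, C = switched capacitance,
                    %   V = supply voltage, f = clock frequency
                    P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f
                    % sustaining a higher f generally requires a higher V; with V \propto f:
                    P_{\mathrm{dyn}} \propto f^{3}
                    \quad\Longrightarrow\quad
                    \frac{\mathrm{perf}}{\mathrm{watt}} \propto \frac{f}{f^{3}} = \frac{1}{f^{2}}

                Under that model a pure clock bump always costs efficiency; whether the M2's other changes offset it is exactly what is in dispute in this subthread.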

                • Seriously mate, have you learned to read and write at school? You can choose the clock rate. It's not fixed. Same clock rate as M1, the M2 uses LESS power. Higher clock rate, it uses more, but not as much as you would get from a plain higher clock rate with no other changes.
                  • Same clock rate as M1, the M2 uses LESS power.

                    No, it does not.
                    This is a falsehood.
                    Uses about the same.

                    Higher clock rate, it uses more, but not as much as you would get from a plain higher clock rate with no other changes.

                    Incorrect. They are within 0.5% of expected.

                    It's cool that you can regurgitate Apple's marketing material, but here in powermetrics land, we can do the actual math ;)
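
                    powermetrics is the real macOS tool being referenced. A minimal C sketch of scripting it, assuming Apple Silicon, root privileges, and that recent macOS releases keep printing a "CPU Power" line (an observed output format, not a stable interface):

                        /* cpu_power.c -- sample CPU package power via macOS's powermetrics.
                         * Run as root. The "CPU Power" string match is an assumption about
                         * current output, not a documented API. */
                        #include <stdio.h>
                        #include <string.h>

                        int main(void) {
                            FILE *p = popen("powermetrics --samplers cpu_power -i 1000 -n 5"
                                            " 2>/dev/null", "r");
                            if (!p) { perror("popen"); return 1; }
                            char line[512];
                            while (fgets(line, sizeof line, p))
                                if (strstr(line, "CPU Power")) /* e.g. "CPU Power: 1234 mW" */
                                    fputs(line, stdout);
                            return pclose(p) != 0;
                        }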

        • Apple is disqualified from the comparison because they don't sell chips, they sell complete systems only. Apple is not in the same market as Intel, AMD, ARM, Qualcomm, Samsung, Marvell, etc.

            I felt it was relevant, since the article is about Arm attempting to make their generic cores as good as Apple's.

            Like literally.... that's what the article is about.
            Arm woke up one day and was like, "Our generic cores obviously aren't as good as they could be... Maybe we should step that up a notch."

            The comparison is therefore entirely appropriate.
        • I'm really curious which group of fanbois is trying to mod this into oblivion, lol.
          The PC crowd because they're pissed off that Intel and AMD are flatly non-competitive on a per-clock or per-watt basis, or is it the Apple crowd for me pointing out that the M2 is Apple's Skylake+++ moment?
      • by AmiMoJo ( 196126 )

        Intel's recent chips are very inefficient. They now have a mixture of efficiency and performance cores to try to get around that, but the fact is that their manufacturing process is crap and can't deliver power-efficient chips.

        Compare them to Ryzen 6000/7000 parts. No efficiency cores, all of them are performance. Battery life greatly exceeding Intel, on a par with Apple Mx but with much better performance available too. Machines with them run quieter and cooler than Intel equivalents.

        I don't think ARM is a

          Compare them to Ryzen 6000/7000 parts. No efficiency cores, all of them are performance. Battery life greatly exceeding Intel, on a par with Apple Mx but with much better performance available too. Machines with them run quieter and cooler than Intel equivalents.

          Ryzen 7k low-wattage parts are awesome. But they are not remotely comparable to Mx in terms of performance per watt.

          Well, that's not fair. They're "comparable", just... well, they look really bad when you do that comparison. So you shouldn't do it.

      • I feel like Intel's 11th and 12th generation notebook chips (n5095 and n95) perform very well in the 10-15 Watt range that is reasonable for a small laptop.

        That would depend on how you define "well". Compared to similar AMD CPUs [nanoreview.net] at the same power, it does not look good. And that's to say nothing of the M1 or M2 [notebookcheck.net], against which it loses badly on performance.

        • I'm only comparing them to ARM chips available on the market today. In terms of M1/M2 performance, great perf per watt. But the M1 is a 39-watt chip, while the AMD Ryzen 7 5800U is a 15 W chip. So yeah, little doubt that the M1 can crush it. If you have the overhead for 40 watts of thermals and aim for a low-power architecture, you have the option of race-to-sleep and modular clock gating. That thermal overhead gives so much flexibility in the design decisions, but it's not available with a smaller thermal rating
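
          The comparison being argued is score-per-watt. A trivial sketch with placeholder numbers (not real benchmark data) just to pin down the arithmetic:

              /* perf_per_watt.c -- toy score/W comparison; all numbers invented. */
              #include <stdio.h>

              int main(void) {
                  struct { const char *name; double score, watts; } chips[] = {
                      { "hypothetical 39 W chip", 7800.0, 39.0 },
                      { "hypothetical 15 W chip", 5400.0, 15.0 },
                  };
                  for (int i = 0; i < 2; i++)
                      printf("%-24s %6.0f pts / %4.1f W = %5.1f pts/W\n",
                             chips[i].name, chips[i].score, chips[i].watts,
                             chips[i].score / chips[i].watts);
                  return 0;
              }

          With numbers shaped like these, the lower-TDP part can lose on raw score and still win comfortably on points per watt, which is the shape of the argument in this subthread.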

          • But the M1 is a 39 Watt chip,

            No. The M1 has 10W and 39W variants. The 10W version is the laptop version and the 39W variant is a desktop version. The website I linked shows the general M1 performance, but you can click on the (+) to see how each Mac product variant performed. And in each case the M1 laptop crushed the n95 laptop.

            while AMD Ryzen 7 5800U is a 15 W chip.

            Why are you comparing the Intel n95 (Release date January 23, 2023) to a CPU released 2 years and 2 models ago? As of January 2023, the Ryzen 7000u series has been available and the 6000u series was released in J

            • No. The M1 has 10W and 39W variants.

              I don't believe that's accurate at all.
              I'm pretty sure they're all 10W parts.
              I've never seen a benchmark of an M1 Pro performing better than an M1 Air until the Air thermally saturates (being passively cooled, after all)

              Not to take away from your point that you CANNOT compare an N95 against an M1... unless you're trying to be ironic.

              • I don't believe that's accurate at all.

                From Apple [apple.com] themselves: the M1 Mac Mini CPU max is 39 W.

                I've never seen a benchmark of an M1 Pro performing better than an M1 Air until the Air thermally saturates (being passively cooled, after all)

                M1 vs M1 Pro vs M2, etc. [appleinsider.com]

                  From Apple [apple.com] themselves: the M1 Mac Mini CPU max is 39 W.

                  Ya, I've seen that. But I don't think we can take that number as "TDP".
                  First, TDP means different things for each manufacturer. It's never a useful cross-comparison metric.
                  But I have an M1, I've never been able to get the CPU to use more than 15W, or 20W for the CPU+GPU (and only in very intensive OpenCL tests)

                  M1 vs M1 Pro vs M2, etc. [appleinsider.com]

                  Heh- wow, I walked right into that one.
                  What I should have said was:
                  When I said "M1 Pro", what I really meant was the M1 in the MacBook Pro (not the separate chip called the M1 Pro). That ambiguity is e

                  • i.e., there is no evidence that there are variants of the M1 with different wattage.

                    All the M1 performance cores can run at one of ten different clock speeds, with each core running at an individual clock speed, and each clock speed using different power. Maximum is about 2.5 watts per core, much lower when the clock speed is lower. Limited, obviously, by CPU demand and cooling. But usually people never get into the area where passive cooling isn't enough.

                    • All the M1 performance cores can run at one of ten different clock speeds, with each core running at an individual clock speed, and each clock speed using different power.

                      Not that simple.
                      Power usage is based on the circuits activated, which is based on the instructions being executed. The speed at which those instructions execute (the clock) is a factor, nothing more.

                      i.e., my M1 can max out 4P/4E cores at ~15W
                      My M1 Max can do its 8P/2E at ~24W

                      Let's assume your 2.5W number is correct.
                      4 * 2.5 = 10, plus an additional 5W for the 4E.
                      8 * 2.5 = 20, plus an additional 4W for the 2E.

                      Doesn't add up.
                      There is no measurable per-core performance difference between an M1 and an M1 Max, s

                • (And the Pro and Max absolutely have different wattage, since they have wildly different core counts/type configurations)

                  i.e., the following statement:

                  The 10W version is the laptop version and the 39W variant is a desktop version.

                  Doesn't make sense.

                  The M1 exists in the following laptops: MacBook Air, MacBook Pro 13", and the following desktops: iMac 24", Mac Mini
                  The M1 Pro exists in the following laptops: MacBook Pro 14/16", and the following desktops: Mac Studio
                  The M1 Max exists in the following laptops: MacBook Pro 14/16", and the following desktops: Mac Studio
                  The M1 Ultra ex

                  • There's no benchmark anywhere that shows a performance difference in any of those 3 chips which exist in both laptop and desktop form factors, depending on form factor.

                    This benchmark [notebookcheck.net] shows some of the difference between the M1, M1 Pro, and M2. Click on the (+) to show individual Macs but not all models are shown.

                    • You're misunderstanding me.

                      M1 is a different chip from M1 Pro.
                      What that benchmark will not show is a difference between an M1 in a desktop, or a laptop. Nor will it show a difference between an M1 Pro in a desktop, or a laptop. Nor will it show a difference in an M1 Max in a desktop, or a laptop.

                      Apple does not make "laptop" and "desktop" variants of the M1, M1 Pro, or M1 Max. They just make those chips, and put them in various places.
                      The M1, M1 Pro, and M1 Max go in both laptops and desktops, and the
                    • Apple does not make "laptop" and "desktop" variants of the M1, M1 Pro, or M1 Max. They just make those chips, and put them in various places.

                      Yes they did. There are laptop and desktop variants of the M1. They do not make variants of the other M1 chips, but they certainly have made variants of the M1. Design-wise there are most likely not any major differences, but there are differences. For example, the M1 MacBook Air and iMac M1 chips only had a 7-core GPU whereas other models had 8 cores. If this was a case of binning for performance, normally the CPU cores would be turned off, not the GPU cores.

                      M1 Pro is definitely not a 15W part, because the 4P/4E M1 is a 15-ish W part. Technically, Apple doesn't publish "TDP", but powermetrics in macOS will give you the current CPU power draw for any test you're doing, so that you can eyeball what the part will actually pull.

                      I never said the M1 Pro was 15W. I said the M1, specifically the 2020 MacBook Air version, was a 10W chip.

                    • Yes they did. There are laptop and desktop variants of the M1. They do not make variants of the other M1 chips, but they certainly have made variants of the M1. Design-wise there are most likely not any major differences, but there are differences. For example, the M1 MacBook Air and iMac M1 chips only had a 7-core GPU whereas other models had 8 cores. If this was a case of binning for performance, normally the CPU cores would be turned off, not the GPU cores.

                      No, there are not.
                      The M1 MBA and iMac came with a 7- or 8-core GPU, your choice.
                      Are you saying my MacBook Air came with a desktop GPU, and that the iMac you refer to came with a laptop CPU?

                      That isn't to say there aren't variants of the M1 (there are, for the M1, M1 Pro, and M1 Max), but they're not laptop/desktop based. All variants were available on all platforms.

                      I never said the M1 Pro was 15W. I said the M1, specifically the 2020 MacBook Air version, was a 10W chip.

                      It's the same chip in every single platform it's ever been deployed in.
                      The benchmarks will back that up.
                      It's definitely not a "10W" chip, bu

                    • Here is a full list of variants. [wikipedia.org]
                      With the exception of the M1 Ultra, all variants were available in both desktop and laptop form.

                      I have the M1 8-core GPU, 16GB RAM in a MacBook Air, and the M1 Max 32-core GPU, 32GB RAM in a MacBook Pro 16".
                      These were the beefiest versions you could get of those CPUs, regardless of whether or not they were in the MacBook Air, MacBook Pro, iMac, Mac Mini, or Mac Studio.

                      There was no difference in those parts that was dependent upon their form factor. I.e., my M1 Max did
                    • Here's a list of CPU availability [wikipedia.org] per Mac model.
                      You'll see that it confirms my claims: that you can get any CPU variant (except for the Ultra, which has no variants) in laptop or desktop.
                      You can then use benchmarks to confirm that there's no difference in the CPU performance in each form factor, where they're using the same variant.

                      And if you take one look at that list, and go "wow, that thing is stupid", you'll get no argument from me.
                      I don't feel like they could have made that shit more confusing if
                    • Correction, 64GB RAM in my MBP.

                      Which is also probably one of the silliest wastes of money I've ever engaged in. I have yet to figure out how to fill up 20GB, much less 64GB.
                • Figured it out.
                  Your Apple numbers are not measuring CPU power, but system power.

                  From the bottom of your link:

                  Power consumption data (Watts) is measured from the wall power source and includes all power supply and system losses. Additional correction is not needed.

                  This makes more sense, as even my 10-core M1 Max 16" MBP doesn't hit 39W at full all-core CPU usage (hits 24W)

            • Why are you comparing the Intel n95 (Release date January 23, 2023) to a CPU released 2 years and 2 models ago?

              Because that's when I bought it?

              No. The M1 has 10W and 39W variants.

              I lifted the 39W number from Wikipedia. "The 2020 M1-equipped Mac Mini draws 7 watts when idle and 39 watts at maximum load"
              There was no indication on watts for M1, M1 Pro, M1 Max, or M1 Ultra variants. I would assume those numbers were for the original M1 found in a desktop system.

              • Because that's when I bought it?

                So you are comparing a two-year-old AMD CPU to the new Intel n95 CPU? Do you not see a major problem with that comparison?

                I lifted the 39W number from Wikipedia. "The 2020 M1-equipped MAC MINI draws 7 watts when idle and 39 watts at maximum load"

                I bolded the words you seemed to miss. The M1 Mac Mini draws 39W as it is a desktop. The M1 MacBook Air draws 10W and still crushed the Intel n95.

                There was no indication on watts for M1, M1 Pro, M1 Max, or M1 Ultra variants. I would assume those numbers were for the original M1 found in a desktop system.

                Does that matter? The base M1 still handily beat the n95. The M1 Pro, Max, and Ultra would only widen the gap but require more power.

              • "The 2020 M1-equipped Mac Mini draws 7 watts when idle and 39 watts at maximum load"

                That's the complete Mac Mini drawing from 7 to 39 watts, for CPU, GPU, RAM, video, WiFi, Ethernet, SSD, everything. Probably draws a lot more if you plug in two iPads for charging. The CPU draws a lot less.

          • But the M1 is a 39 Watt chip

            It may be possible that the CPU + GPU + ANE can pull 39W, but I'd have to see it to believe it.
            CPU pegs at 15W in every test I've ever done or seen, on both the passive and actively cooled M1s.
            I have seen CPU+GPU (OpenCL) hit 20W briefly.

  • Re: (Score:2, Troll)

    Comment removed based on user account deletion
    • Apple is low on PCIe lanes and really needs M.2 storage ports. Not Apple's 3x-4x markup on storage.

      • by Anonymous Coward

        But that would allow you to store files somewhere other than (R)iTunes(TM)(Pat.Pend.), thereby giving you the right to own and use files. Do you really think Apple will allow that?

    • Did anything in the announcement actually mention the M2, or was that completely a fabrication of the article writer?

      Why on earth would ARM care a wet snap about releasing an "M2 killer" when the M2 is using licensed ARM tech? That's kinda biting the hand that feeds you. There is nothing about their 14-core dream chip which is intended to be an M2 killer. It's nothing more than the normal pipeline dream they always stick in their presentations.

      • Apple owns a no-payment permanent right to ARM IP. They get zero from Cupertino.

        • Citation needed. It wouldn't make much sense, as ARM would have no incentive to create more IP if their business model were like that.
          Apple most likely has to pay yearly fees and/or a royalty per unit sold.

          • Historically, a few companies bought a fully paid perpetual ARM license, among them Apple, I think Samsung, and three or four others. So your "most likely" scenario is absolutely wrong. Apple also owned a large share of ARM at some time, around the time they built the Newton. These companies paid a lot of money when ARM needed it to grow.

            Of course not all deals are like that.
      • Except the poster is not claiming ARM receives royalties, only that the M2 is ARM-based. Apple is an ARM licensee and used ARM technology to design their chip. ARM themselves did not describe it as an M2 killer. That was the article's author.
      • Re:An M2-Fighter? (Score:5, Informative)

        by fred6666 ( 4718031 ) on Wednesday May 31, 2023 @08:10AM (#63563947)

        would ARM care a wet snap about releasing an "M2 killer" when the M2 is using licensed ARM tech?

        ARM licenses many things. Apple uses only the instruction set architecture (ISA). They design their own CPU core implementing that instruction set.

        ARM is designing a whole CPU (albeit only the IP; it still needs to be licensed and manufactured) to compete against the M2.
        So the X4 + A720 + A520 core combination may be faster (and/or more power efficient) than the Avalanche + Blizzard combination used in the Apple M2.

        • Yes, Arm could design a CPU that is faster in some benchmarks than Apple's, but I doubt they could design one that performs well for Apple's particular requirements. For example, the M1 and M2 must perform x86 instruction set translation reasonably well.
          • I don't think Apple CPUs are anything magical. They use a good architecture (ARM) and a top-notch manufacturer (the most advanced process from TSMC).
            TSMC is currently regarded as having the best process. A couple of years ago, Intel's foundry held that crown.

            I'd be surprised if Apple CPUs were any more optimized towards the translation of x86 compared to other ARM CPUs. They have large cache memory, but it's the same instruction set.

            • I don't think Apple CPUs are anything magical.

              My point is Apple designs CPUs for their own needs whereas ARM is designing for the general use case. As such ARM could never design a chip that would perform well enough for Apple's particular needs.

              I'd be suprised if Apple CPUs were any more optimized towards the translation of x86 compared to other ARM CPUs. They have large cache memory, but it's the same instruction set.

              I know of no other ARM-designed CPUs that translate x86 instructions as part of the hardware layer, which was my point. Qualcomm is working on custom CPUs for Microsoft (based on the Apple M1) that do this, but I did not read that x86 translation was part of the ARM specification. Previous examples of x86 translation were done entirely in software (Windows RT).

                • I know of no other ARM-designed CPUs that translate x86 instructions as part of the hardware layer, which was my point.

                According to Wikipedia, it's done in software:

                To enable x86-native software to run on new ARM-based Macs, Rosetta 2 dynamic binary translation software is transparently embedded in macOS Big Sur.

                https://en.wikipedia.org/wiki/... [wikipedia.org]

                Nothing on Apple's own web site seems to contradict this:
                https://developer.apple.com/do... [apple.com]

                • There are some very small changes in Apple's ARM instruction set to make multi-processor data access more compatible with the way Intel processors work, which would be awfully hard to do in software only. Other than that, it's just translation using LLVM. (So basically the same as a highly optimising C compiler.)
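
                  The ordering gap being described can be made concrete with the classic message-passing litmus test. A C11 sketch of the general TSO-versus-weakly-ordered difference Rosetta 2 must bridge (illustrative of the mechanism, not Apple's implementation):

                      /* mp_litmus.c -- message-passing litmus test.
                       * As machine code with plain loads/stores, x86's TSO keeps both
                       * stores and both loads in program order, so the consumer always
                       * sees data == 1 once flag == 1. The Arm architecture allows the
                       * accesses to reorder, so a naive translation of an x86 binary
                       * could print 0 unless barriers are inserted around essentially
                       * every access, or the core offers an x86-style TSO mode.
                       * memory_order_relaxed models "plain" translated accesses. */
                      #include <pthread.h>
                      #include <stdatomic.h>
                      #include <stdio.h>

                      static atomic_int data, flag;

                      static void *producer(void *arg) {
                          (void)arg;
                          atomic_store_explicit(&data, 1, memory_order_relaxed);
                          atomic_store_explicit(&flag, 1, memory_order_relaxed);
                          return NULL;
                      }

                      static void *consumer(void *arg) {
                          (void)arg;
                          while (!atomic_load_explicit(&flag, memory_order_relaxed))
                              ; /* spin until the flag is published */
                          printf("data = %d (0 would mean the accesses reordered)\n",
                                 atomic_load_explicit(&data, memory_order_relaxed));
                          return NULL;
                      }

                      int main(void) {
                          pthread_t a, b;
                          pthread_create(&a, NULL, producer, NULL);
                          pthread_create(&b, NULL, consumer, NULL);
                          pthread_join(a, NULL);
                          pthread_join(b, NULL);
                          return 0;
                      }
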
                • None of your links says anything about what Apple did on the hardware side to assist the x86 translation software. Specifically, this person claims [twitter.com] that Apple used Intel x86 memory ordering in their silicon instead of letting software handle it. While it is not a major change to the ARM design, it greatly increases the x86 emulation/translation performance that Rosetta needs.

                  Again, I am not aware of any other ARM CPUs that use this memory ordering to enhance x86. MS and Qualcomm have said they would i

                  • I don't give any importance to claims on twitter.
                    Also, some of his claims are dubious at best. Intel and ARM architectures may have converged internally, but what matters here is the instruction set. And the core x86 instruction set is still the same old legacy (x86-64 being an extension of it). So I don't really see how programs compiled to run on old x86 (dating back to the 32-bit Core Duo, the first CPU used by Intel Macs) benefit from any improvements in the architecture of the latest 11-13th gen Intel CPUs.

                    • I don't give any importance to claims on twitter.

                      And this code [github.com] to expose the same extensions seemingly to bypass Rosetta means nothing?

                      Also, some of his claims are dubious at best. Intel and ARM architectures may have converged internally, but what matters here is the instruction set. And the core x86 instruction set is still the same old legacy (x86-64 being an extension of it). So I don't really see how programs compiled to run on old x86 (dating back to the 32-bit Core Duo, the first CPU used by Intel Macs) benefit from any improvements in the architecture of the latest 11-13th gen Intel CPUs.

                      Presumably because he meant x86 and x86-64, not x86 32-bit only.

                      Exactly. Which could be because they didn't do anything specific. Large caches and high clock speeds don't count as "hardware side to assist in x86 translation". They just help with just about any software. Is Rosetta 2 supposed to crash when running on a non-Apple ARMv8 CPU?

                      Let's look at the evidence. Apple's first ARM laptop and desktop effort crushes all of MS ARM efforts in the last 10 years in performance. The performance of Macs running x86 code has been reported by many users to be pretty good. Reviews have noted that this emulation, while slower than native ARM code, performs well enough. But according to you, Apple has not d

                    • Code written for 32-bit CPUs doesn't run under Rosetta. It only runs 64-bit code. And Apple changing the memory access ordering on ARM to be Intel-compatible is really important: most of the time your code doesn't care, but Intel gives guarantees for memory ordering that an ordinary ARM processor doesn't give. And since you never know whether these guarantees are needed, without the hardware changes you'd have to run extra code for every single memory access, or once in a while multi-threaded code won't work prop
                    • I don't give any importance to claims on twitter.

                      And this code [github.com] to expose the same extensions seemingly to bypass Rosetta means nothing?

                      Not sure what this code does, but I don't think it bypasses Rosetta.

                      Presumably because he meant x86 and x86-64, not x86 32-bit only.

                      Still, the x86 and x86-64 ISAs didn't converge with the ARM ISA.

                      Let's look at the evidence. Apple's first ARM laptop and desktop effort crushes all of MS ARM efforts in the last 10 years in performance. The performance of Macs running x86 code has been reported by many users to be pretty good.

                      Let's look at the evidence (and not Twitter comments).
                      The Apple Mx are fast CPUs. They are faster than what is used in MS ARM laptops to execute ARM code. Therefore it shouldn't surprise anyone that they are also faster when translating x86 code to ARM.

                      From what I see on Wikipedia, the SQ3 CPU used by Microsoft uses Qualcomm Kryo 680 cores based on ARM Cortex X1 / A78 / A55. They are

    • Would love to see the M4-MURDERER announcement. Recalls that chestnut of an Onion article about the razor company skipping 4 blades and going straight to 5 [theonion.com].
  • M2 fighter? (Score:4, Funny)

    by backslashdot ( 95548 ) on Tuesday May 30, 2023 @11:41PM (#63563193)

    So they are releasing a CPU in 2024 that will be competing with a CPU from 2022? That's like if Biden had a debate with Trump in 2024 but trained by debating the 8-year-old version of Trump. Never mind, bad analogy.

    • by Ecuador ( 740021 )

      I doubt it will manage even that. The Snapdragon 8cx gen 3 mentioned in the summary came a year after the M1 and it only had about 60% of the performance of the latter.

  • > Will it ever be more than words on a page?

    Londo Mollari: They have been outlawed by every civilized planet!
    Lord Refa: These are uncivilized times.
    Londo Mollari: We have treaties.
    Lord Refa: Ink on a page.

  • If you're looking for "ARM monster chips", you should start with Neoverse [wikipedia.org] and Graviton [wikipedia.org]. Main downside is they're not available for purchase.
  • There are chips with 128 ARM cores, guys ... there is nothing mega about 14 cores.
    https://amperecomputing.com/br... [amperecomputing.com]

    • by hey! ( 33014 )

      I think they're talking about mobile device CPUs, not ones designed for data center use. 14 would be a huge number of cores for a handheld device.

    • there are chips with 128 ARM cores guys ... there is nothing mega about 14 cores.

      That's for server chips. Not really comparable. For a server you want tons of cores that each don't really need very high performance, so that might be the equivalent of 128 efficiency cores.

  • You can only count on that about half the time, but there's only a 50/50 chance of that.
