Arm Says Its Next-Gen Mobile GPU Will Be Its Most 'Performant and Efficient' (theverge.com) 29

IP core designer Arm has announced its next-generation CPU and GPU designs for flagship smartphones: the Cortex-X925 CPU and the Immortalis G925 GPU. Both are direct successors to the Cortex-X4 and Immortalis G720 that currently power MediaTek's Dimensity 9300 chip inside flagship smartphones like the Vivo X100, X100 Pro, and Oppo Find X7. From a report: Arm changed the naming convention for its Cortex-X CPU design to highlight what it says is a much faster CPU design. It claims the X925's single-core performance is 36 percent faster than the X4's (when measured in Geekbench). Arm also says AI workload performance, measured by time to token, improved by 41 percent, with up to 3MB of private L2 cache. The Cortex-X925 brings a new generation of Cortex-A microarchitectures with it, too: the Cortex-A725, which Arm says has 35 percent better performance efficiency than last-gen's A720, and the Cortex-A520, which is 15 percent more power efficient.

Arm's new Immortalis G925 GPU is its "most performant and efficient GPU" to date, it says. It's 37 percent faster in graphics applications than the last-gen G720, with ray-tracing performance on intricate objects improved by 52 percent and AI and ML workloads improved by 34 percent -- all while using 30 percent less power. For the first time, Arm will offer "optimized layouts" of its new CPU and GPU designs that it says will be easier for device makers to "drop," or implement, into their own system-on-chip (SoC) layouts. Arm says this new physical implementation solution will help other companies get their devices to market faster, which, if true, means we could see more devices with the Arm Cortex-X925 and/or Immortalis G925 than the few that shipped with its last-gen designs.



  • by mspohr ( 589790 ) on Wednesday May 29, 2024 @10:40AM (#64507829)

    Is this news?
    Dog bites man is not news.

    Would they ever say "Arm's new Immortalis G925 GPU is its LEAST performant and efficient GPU to date"?

    • Well it is better than Intel whose latest CPUs are "performant"** but not efficient compared to previous generations
      **Up to 7% increase in performance requires over 250W of power.
      • Well it is better than Intel whose latest CPUs are "performant"** but not efficient compared to previous generations

        Yeah, but people don't buy Intel performance for low-power applications. Quite the opposite of ARM. Also, technically you're wrong as well. Intel has not released a new CPU that was *less* efficient. It may not be more efficient, and it uses more power for the performance, but that doesn't make it "not efficient compared to previous generations".

        • by UnknowingFool ( 672806 ) on Wednesday May 29, 2024 @11:21AM (#64507905)

          Intel has not released a new CPU that was *less* efficient. It may not be more efficient, and it uses more power for the performance, but that doesn't make it "not efficient compared to previous generations".

          What kind of doublespeak is this? If we look at the current-generation Intel 14600K compared to the 13600K and 12600K, it requires more energy to do the same work [youtu.be]. That, by definition, is less efficient.

          • That's not doublespeak. Efficiency is a comparison between multiple stats; the question is *which* stats. You're reading the data incorrectly.

            You are talking about the efficiency in terms of power to complete a fixed unit of work for two distinct processors. The 14600K completed the render using more power in less time. The result showed a minor loss in the numbers when divided, which means that the 14600K, in the configuration it was operating in, was less efficient than the 13600K at performing the task.

            However

            • Steve of Gamer's Nexus: "The 14600K's 33.6 Watt hour result has it only barely more efficient than the 14900K and less efficient than the 7700x at 31.1." You didn't bother to listen to the video, did you?

              You are talking about the efficiency in terms of power to complete a fixed unit of work for two distinct processors. The 14600K completed the render using more power in less time.

              You do know energy is measured in Watt hours and not Watts, right? Gamer's Nexus was measuring energy efficiency not power.
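
              A minimal sketch of that distinction in Python; the wattages and render times below are invented for illustration (only the watt-hour framing comes from the video), so treat it as a back-of-envelope model rather than measured data:

                  # Energy for a fixed task = average power draw x time taken.
                  # A chip can pull more watts yet still burn fewer watt-hours if it
                  # finishes the job quickly enough, which is why the chart compares
                  # watt-hours rather than watts.
                  def watt_hours(avg_power_w, minutes):
                      return avg_power_w * (minutes / 60.0)

                  # Hypothetical render run -- not measured 13600K/14600K numbers:
                  chip_a = watt_hours(avg_power_w=140, minutes=15)   # 35.0 Wh
                  chip_b = watt_hours(avg_power_w=120, minutes=16)   # 32.0 Wh

                  # Chip A draws more power and finishes faster, but chip B still uses
                  # less total energy for the same work, so it is the more efficient
                  # part for this task.
                  print(chip_a, chip_b)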

              The result showed a minor loss in the numbers when divided, which means that the 14600K, in the configuration it was operating in, was less efficient than the 13600K at performing the task.

              So you admit the graph shows the 14600K was less efficient than the 13600K and the 12600K? And?

              The fact that the graph you're showing doesn't demonstrate efficiency between generations is evident from the fact that the 13900K and the 13600K have wildly different efficiencies, as do the 12900K and 12600K, or indeed the 14700K vs the 14600K. The results shown provide you *no* conclusions about generational changes; they only show a comparison between parts on a given workload.

              The graph shows that the same mid-tier class of Intel CPU, the 14600K, was less efficient than its predecessors. Comparing a processor to other tiers is like comparing two wrestlers in different weight classes; you can make a comparison, but being in different tiers puts a huge asterisk on it.

              That is it. Don't read into things what isn't there.

              Hardware Unboxed [youtu.be]: "it doesn't deserve to be called a 14th generation--sorry, I mean "generation" . . . for the most part this was a complete waste of time . .."
              JayzTwoCents [youtu.be]: "I see a lot of memes of people talking about going from 13th gen to 14th gen . . that's a really dumb move."
              Tom's Hardware [tomshardware.com]: "The Core i5-14600K only brings 100 extra MHz of E-core boost clocks to the table, and that isn't enough to make a substantial difference to its competitive positioning, either. The previous-gen 13600K has been on sale for as low as $285, or roughly $35 less, muddying the water for potential upgraders. "
              Anandtech [anandtech.com]: "Even with the Core i5-14600K priced at $319, there are no actual performance benefits compared to the previous generation Core i5-13600K; the only difference is a 100 MHz bump to E-core boost clock speeds. At the time of writing, the Core i5-13600K can be bought on Amazon for $285, a saving of $34, which is essentially the same processor; users could opt for that route and save a little bit of money."

              So all of these reviewers are reading into things that aren't there. Or they all arrive at the same conclusion: that the 14th gen is not really an upgrade over the 13th gen.

              • You do know energy is measured in Watt hours and not Watts, right? Gamer's Nexus was measuring energy efficiency not power.

                I didn't say they weren't. I said you were comparing three variables (speed, power, and generation), not two (power and generation).

                If you want to compare generations, you need to correct for speed; for that you'd need to overclock the 13600K (or underclock the 14600K). The 13600K is more efficient at the task than the 14600K. That is what is shown in the data. You can't draw any conclusions about generational differences because the parts don't operate at the same speed and the relationship between speed and

                • I didn't say they weren't. I said you were comparing three variables (speed, power, and generation), not two (power and generation).

                  No, I was not. You mistake what is represented in the graph. The graph is measured in watt-hours, which is a unit of energy, not power and not speed. Gamer's Nexus was looking at the energy required to perform a task as a measure of efficiency.

                  If you want to compare generations, you need to correct for speed; for that you'd need to overclock the 13600K (or underclock the 14600K).

                  Well, that is the dumbest thing I've heard. What is being compared is each CPU out of the box with factory settings, as overclocking introduces many additional variables into the comparison. That is like saying you cannot compare a car model from one year to the next wi

      • Kind of. It's a bit more complicated than that, really.

        It's not that they're less efficient; it's that they're basically the same, architecturally, but they allow you to push them higher (they have higher boost clocks).
        The higher the clock, the lower the efficiency.
        A 14600K is a 13600K with higher boost clocks. That's it. The lowered efficiency comes from using the extra performance.
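
        A rough sketch of why that is, using the usual first-order assumption that dynamic power scales with frequency times voltage squared; the clocks and voltages below are made up, not actual 13600K/14600K settings:

            # First-order CMOS model: dynamic power ~ f * V^2 (capacitance folded into
            # the constant). Higher boost clocks usually need more voltage, so power
            # rises faster than performance and energy per unit of work goes up.
            def relative_energy_per_task(freq_ghz, volts):
                power = freq_ghz * volts ** 2     # arbitrary units
                task_time = 1.0 / freq_ghz        # assume work scales perfectly with clock
                return power * task_time          # reduces to V^2 under these assumptions

            base = relative_energy_per_task(5.1, 1.25)      # hypothetical stock point
            boosted = relative_energy_per_task(5.3, 1.32)   # hypothetical higher-boost point

            print(boosted / base)   # > 1.0: the same work costs more energy at the higher clock
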
    • It's an ad.

    • If I'm being charitable they're trying to make the point that it will both perform better and use less power. Usually you have to take one or the other. My RX 580 draws more power than the Sun. And it's not even the top end of power guzzlers. There are video cards that draw 450 watts by themselves now; it's insane.
      • Re: (Score:2, Interesting)

        by Anonymous Coward
        Fun fact, the core of the sun generates about as much power on a per-cubic meter basis as a pile of compost. Over the size of the entire sun on average it's a fourth of a watt per cubic meter. The sun isn't a giant atom bomb going off in space; it's just so big and fairly warm that nuclear fusion occurs often enough via quantum effects to make a shiny object.
        • by DamnOregonian ( 963763 ) on Wednesday May 29, 2024 @02:38PM (#64508369)

          Fun fact, the core of the sun generates about as much power on a per-cubic meter basis as a pile of compost. Over the size of the entire sun on average it's a fourth of a watt per cubic meter.

          Correct.

          The sun isn't a giant atom bomb going off in space

          Incorrect.

          it's just so big and fairly warm that nuclear fusion occurs often enough via quantum effects to make a shiny object.

          The sun is so non-power-dense because it is large, and the vast majority of its mass is not undergoing fusion, but rather just buffering us (and thank fuck, too).
          Because the part of the sun undergoing fusion, about 20% of its radius, is sitting there at 15 million kelvin. Its energy density is a little more than a fourth of a watt per cubic meter. In case you want to do the math, it's got a density of 150 g/cm^3, and a temp as stated above.
          The other 80% of its radius, the vast bulk of its volume, is just gas standing between us and that open nuclear fucking reaction, absorbing it and radiating it away at a cool 5700 kelvin.
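
          For anyone who does want to do the math on the whole-sun average mentioned above, here is a quick check using the standard solar luminosity and radius (round figures assumed here, not taken from the post):

              import math

              L_SUN = 3.8e26    # total solar luminosity in watts (approximate)
              R_SUN = 6.96e8    # solar radius in metres (approximate)

              volume = (4.0 / 3.0) * math.pi * R_SUN ** 3   # ~1.4e27 cubic metres
              avg_power_density = L_SUN / volume            # averaged over the whole sun

              # Prints roughly 0.27 W/m^3 -- about a quarter of a watt per cubic metre,
              # matching the "fourth of a watt" figure above.
              print(f"{avg_power_density:.2f} W/m^3")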

      • If I'm being charitable they're trying to make the point that it will both perform better and use less power.

        They weren't that specific, though.

        It's 37 percent faster in graphics applications than the last-gen G720, with ray-tracing performance on intricate objects improved by 52 percent and AI and ML workloads improved by 34 percent -- all while using 30 percent less power.

        Is that literally 30 percent less power, or is it 30 percent less per whatever?
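
        The two readings give very different results. A toy comparison with made-up numbers (nothing here comes from Arm's announcement) shows why the distinction matters:

            # Hypothetical last-gen GPU on some fixed workload:
            old_power_w = 5.0    # watts drawn while rendering
            old_fps = 60.0       # frames per second achieved

            # Reading 1: literally 30 percent less power at the wall, full stop.
            new_power_w = old_power_w * 0.7

            # Reading 2: 30 percent less energy per unit of work (e.g. per frame),
            # in which case the actual draw also depends on how much faster it runs.
            new_fps = old_fps * 1.37                        # the claimed 37 percent speedup
            energy_per_frame = (old_power_w / old_fps) * 0.7
            reading2_power_w = energy_per_frame * new_fps   # ~4.8 W: barely below the old draw

            print(new_power_w, reading2_power_w)

        Under the second reading the chip is more efficient per frame but could still draw almost as much power overall.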

    • The summary does include the percentage increases, so you can decide whether they're significant enough for you to care about.
  • No shit. By how much though?

    • Given a Grunderson mod-wave of 0 degrees, and break furthen of 8th calibur, that's performant 12 platforminal at 340 efficienal quotiency.
      What do they teach kids in schools these days anyways?
  • by Press2ToContinue ( 2424598 ) on Wednesday May 29, 2024 @11:33AM (#64507945)
    *Cries in Goatse*
  • This is just like Microsoft extolling how the version of Windows you are installing is "the best version of Windows yet!" in the installer.

    I would hope so, even though it's likely not to be true. If it's not "the best version yet" then the one I already have is, and I don't need to install shit. See: Windows 8, Windows 11.

    Other companies do this too (Apple, Google, etc.) and it's always been fucking stupid marketingspeak.

  • I find it interesting the press release from ARM does not include specific numbers on core count. Instead they're talking about a "micro-architecture" for cores, which sounds like obfuscation.

    I am skeptical that ARM's R&D budget for chip design compares favorably to Apple's. Their ability to innovate beyond what Apple is designing on licensed ARM foundations is limited by a smaller budget. Here is Apple's flagship ARM chip:

    M3 Ultra - 24-core CPU, 60-core or 76-core GPU

    How does the new ARM Cortex
    • You are mistaken about how all of this works. Arm doesn't make chips, or even design chips. Arm designs cores and then other companies use those cores to design chips, and then those chips are (usually) made by a third company. TSMC, most of the time.

      Any number of the cores that Arm designs can be included on a chip. Also, just listing the number of cores doesn't have a lot of meaning if you don't know what each of those cores can do. Apple designs its own cores (though I imagine that they're based on Ar
    • by JBMcB ( 73720 )

      I find it interesting the press release from ARM does not include specific numbers on core count. Instead they're talking about a "micro-architecture" for cores, which sounds like obfuscation.

      That's because ARM doesn't make chips; they design architectures that OEMs license to build their own chips. The OEM can put as many cores in whatever configuration they want on their SoC.

      Here's the configuration for the latest Qualcomm Snapdragon:
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      Apple designs their own architectures, but still licenses the ARM ISA.

    • M3 Ultra - 24-core CPU, 60-core or 76-core GPU

      Completely irrelevant.
      We're talking architectural design here, so we're concerned about the performance and efficiency per core, not how many cores you can cram onto the fucking thing.

      That being said, it is still likely true that the performance-per-core of a current M3 core is significantly better than what Arm is going to pump out, since it's better than anything anyone is pumping out.

  • ...film at eleven...

    I mean, what, we expected it to be slower and less efficient?

  • If your next chip isn't the most "performant and efficient," then you're inept and just wasted a lot of money and time.

    • Lately Intel has been achieving pretty good overall performance, and typically good single thread performance, but their power consumption has been a bit unfortunate...

"Pull the trigger and you're garbage." -- Lady Blue

Working...