AI Businesses Hardware

ARM's New Processors Are Designed To Power the Machine-Learning Machines (theverge.com) 27

An anonymous reader shares an article: Official today, the ARM Cortex-A75 is the new flagship-tier mobile processor design, with a claimed 22 percent improvement in performance over the incumbent A73. It's joined by the new Cortex-A55, which has the highest power efficiency of any mid-range CPU ARM has ever designed, and the Mali-G72 graphics processor, which comes with a 25 percent improvement in efficiency relative to its predecessor, the G71. The efficiency improvements are evolutionary and predictable, but the revolutionary aspect of this new lineup relates to artificial intelligence: this is the first set of processing components designed specifically to tackle the challenges of onboard AI and machine learning. Plus, last year's updates to improve performance in the power-hungry tasks of augmented and virtual reality are being extended and elaborated. [...] ARM won't just be powering machine learning with its new chips, it'll benefit from ML too. The new designs include an improved branch predictor that uses neural network algorithms to improve data prefetching and overall performance.
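The write-up doesn't say how that predictor works internally; the best-known neural-network technique in this space is the perceptron predictor of Jiménez and Lin, so here is a minimal C sketch of that idea. The table size, history length, and training threshold below are invented for illustration and are not ARM's actual design.

    /* Toy perceptron branch predictor in the style of Jimenez & Lin.
       Parameters are illustrative; ARM has not disclosed its design. */
    #include <stdbool.h>
    #include <stdint.h>

    #define HISTORY_LEN     16    /* bits of global branch history   */
    #define NUM_PERCEPTRONS 1024  /* one weight vector per PC hash   */
    #define THRESHOLD       30    /* keep training while |sum| small */

    int8_t weights[NUM_PERCEPTRONS][HISTORY_LEN + 1];  /* [0] = bias */
    int8_t history[HISTORY_LEN];                       /* +1 / -1    */

    /* Predict: dot product of weights with recent branch outcomes. */
    bool predict(uint32_t pc, int *sum_out)
    {
        int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];
        int sum = w[0];
        for (int i = 0; i < HISTORY_LEN; i++)
            sum += w[i + 1] * history[i];
        *sum_out = sum;
        return sum >= 0;                /* predict taken if sum >= 0 */
    }

    /* Train on a mispredict or a low-confidence hit, then shift the
       actual outcome into the global history register. */
    void update(uint32_t pc, int sum, bool taken)
    {
        int8_t *w = weights[(pc >> 2) % NUM_PERCEPTRONS];
        int t = taken ? 1 : -1;
        if ((sum >= 0) != taken || (sum > -THRESHOLD && sum < THRESHOLD)) {
            if (w[0] + t >= -128 && w[0] + t <= 127)
                w[0] += t;
            for (int i = 0; i < HISTORY_LEN; i++) {
                int d = t * history[i];
                if (w[i + 1] + d >= -128 && w[i + 1] + d <= 127)
                    w[i + 1] += d;
            }
        }
        for (int i = HISTORY_LEN - 1; i > 0; i--)
            history[i] = history[i - 1];
        history[0] = t;
    }

The attraction over tables of saturating counters is that storage grows linearly with history length rather than exponentially, which is what makes long-history branch correlation affordable in hardware.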

Comments Filter:
  • by lobiusmoop ( 305328 ) on Monday May 29, 2017 @03:31PM (#54507125) Homepage

    IMHO what is mostly needed is faster memory. Modern ML often involves working with multi-gigabyte domain models stored in DRAM, where access latency hasn't changed appreciably in the last 10 years.
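    For concreteness, absolute CAS latency is cycle time multiplied by CL, and it has sat near 13-15 ns for a decade even while bandwidth kept doubling. A quick check in C (the parts below are typical retail DIMMs of my choosing, not from the parent):

        /* CAS latency in ns = CL * 2000 / transfer rate (MT/s),
           since the I/O clock runs at half the transfer rate.
           Example parts are illustrative, not exhaustive. */
        #include <stdio.h>

        int main(void)
        {
            struct { const char *part; int mts; int cl; } dimm[] = {
                { "DDR2-800  CL6 ",  800,  6 },
                { "DDR3-1600 CL11", 1600, 11 },
                { "DDR4-3200 CL22", 3200, 22 },
            };
            for (int i = 0; i < 3; i++)
                printf("%s: %.2f ns\n", dimm[i].part,
                       dimm[i].cl * 2000.0 / dimm[i].mts);
            return 0;  /* ~15.00, 13.75, 13.75: latency is flat */
        }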

    • latency thermal wall (Score:5, Informative)

      by epine ( 68316 ) on Monday May 29, 2017 @06:56PM (#54507893)

      IMHO what is mostly needed is faster memory. Modern ML often involves working with multi-gigabyte domain models stored in DRAM, where access latency hasn't changed appreciably in the last 10 years.

      You should write advertising copy.

      What is needed is faster relief. We've improved the package perforation. Now rips open 2x faster!

      Faster has many dimensions, yet you fixate on just one. It turns out, however, that slapping you down was a royal PITA: all of the vendors involved in HBM{1,2,3} pony up sweet-shit-all concerning latency (wanted: an edible, colour-coded benchmark).

      Finally I found this comment by one Tuna-Fish from 2010:

      Memory latency of many devices using GDDR5 (like GPUs) is a lot higher than on the typical device that uses DDR3, but this has nothing to do with the RAM, and everything to do with the controller.

      Basically, GPUs can expect to see a lot of accesses to addresses reasonably close to each other (like reading color values out of a texture) in a relatively short time, and the devices are typically good at finding other work to do while waiting on memory accesses. Because of this, and the fact that larger transfers are more efficient, GPUs tend to delay initiating transfers a bit to wait for opportunities to combine them.

      It's entirely possible to have a memory controller that does this to GPU-like transfers and doesn't do it to CPU-like transfers.
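      A toy model of that combining behaviour, with the burst size invented for illustration: anything landing in the same burst-sized window during the wait rides along in one transaction, a win for texture-like streams and a pure latency tax for pointer chasing.

          /* Toy request coalescing: requests in the same BURST-byte
             window are merged into a single memory transaction. */
          #include <stdint.h>
          #include <stdio.h>

          #define BURST   64   /* bytes fetched per transaction */
          #define MAX_REQ 8

          typedef struct { uint64_t addr; } req_t;

          /* Distinct bursts issued for a batch of requests that all
             arrived inside one combining window. */
          int issue_batch(const req_t *r, int n)
          {
              uint64_t seen[MAX_REQ];
              int bursts = 0;
              for (int i = 0; i < n; i++) {
                  uint64_t line = r[i].addr / BURST;
                  int dup = 0;
                  for (int j = 0; j < bursts; j++)
                      if (seen[j] == line) { dup = 1; break; }
                  if (!dup)
                      seen[bursts++] = line;  /* new transaction */
              }
              return bursts;
          }

          int main(void)
          {
              /* Neighbouring texel reads: one burst serves all four. */
              req_t texels[] = { {0x1000}, {0x1004}, {0x1010}, {0x103C} };
              /* Pointer chasing: four bursts, each paying full latency,
                 so delaying to combine only hurts the CPU-like case. */
              req_t chase[]  = { {0x1000}, {0x9000}, {0x5000}, {0xD000} };
              printf("texels: %d burst(s)\n", issue_batch(texels, 4));
              printf("chase:  %d burst(s)\n", issue_batch(chase, 4));
              return 0;
          }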

      I'm not the only frustrated person.

      * AMD's upcoming Fiji GPU will feature new memory interface [extremetech.com] — Joel Hruska, 30 April 2015

      Bandwidth, however, is just one characteristic of memory performance. Latency is equally important, but data on HBM latency compared with GDDR5 is much harder to come by. The implication, if I've read the various slide decks and data sheets correctly, is that HBM latency should be modestly better than GDDR5's — but possibly not by much. Certainly it won't improve by anything like the bandwidth jumps we're going to see.

      The gist of the fragments I managed to find is that HBM latency is roughly on par with the concurrent GDDR generation, and this is, in most controllers, actually worse than the concurrent DDR generation, hence the industry-wide tight-lip syndrome.

      Only that's not the whole story: HBM has more channels than GDDR and allows more pages to be open concurrently. For a sufficiently parallel workload, HBM latency as a function of bandwidth can be excellent compared to the alternatives.
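      "Sufficiently parallel" has a back-of-envelope form, Little's law: bytes in flight = bandwidth x latency. With round numbers of my own choosing (256 GB/s and 100 ns, nobody's datasheet):

          /* Little's law: sustaining bandwidth BW at latency L needs
             BW * L bytes outstanding. Figures are illustrative. */
          #include <stdio.h>

          int main(void)
          {
              double bw_gbs = 256.0;  /* assumed HBM-class bandwidth, GB/s */
              double lat_ns = 100.0;  /* assumed loaded latency, ns        */
              double line   = 64.0;   /* bytes per access                  */

              double in_flight = bw_gbs * lat_ns;  /* GB/s * ns = bytes */
              printf("%.0f bytes in flight (~%.0f accesses of %g bytes)\n",
                     in_flight, in_flight / line, line);
              return 0;  /* ~25600 bytes, ~400 outstanding accesses */
          }

      Four hundred outstanding 64-byte accesses is easy for a GPU or a neural-net engine and hopeless for a handful of CPU cores, which is why the same stack can look latency-immune or latency-bound depending on who is asking.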

      And certainly the thermal density is yards superior. Which is itself interesting, because you hardly ever see plots pitting latency against J/bit-ns. Awesome! A brand shiny new thermal wall. Physical distance, aka latency, actually functions as an implicit thermal spreader, and this goes away when the engineers get too pie-eyed over rail-gun-drone–accelerated rolling drive-thru nirvana (recommended: a Kevlar fish net on a titanium pole, and a Quick eye).

      A Study of Application Performance with Non-Volatile Main Memory [ucsd.edu] — Yiying Zhang (2015)

      The fastest of the prospective non-volatile technologies (which are thermally desirable due to lack of refresh) is NRAM.

      Fast NRAM to be released 2019-epsilon by Nantero/Fujitsu [nextbigfuture.com] — August 2016

      It actually has the endurance to be used as an on-chip SRAM replacement with eDRAM access times, but I don't know whether joint fabrication with CMOS is viable (in particular, at the high end). Note that ultimate durability is as yet unknown, because their 10^14-cycle test bench is taking a while to return 0/1.


  • No they aren't (Score:5, Informative)

    by locater16 ( 2326718 ) on Monday May 29, 2017 @05:37PM (#54507591)
    No, ARM's new processors are not "designed" to power AI. They added an INT8 instruction, something useful for deploying neural nets (not for training them). And that's it. Otherwise it's a standard evolution of both designs, taped out on "10nm" rather than 14nm. They just know AI is HOT HOT HOT and so hope to grab some of that PR magic quick.

    For those really interested Anandtech has the actual computer engineering of the whole thing: http://www.anandtech.com/show/... [anandtech.com]
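    Assuming the instruction in question is the ARMv8.2 dot-product extension (SDOT/UDOT), each 32-bit lane performs a 4-way int8 multiply with a 32-bit accumulate. A scalar C stand-in for one lane:

        /* What one 32-bit lane of an SDOT-style instruction computes:
           four int8 products summed into an int32 accumulator. 8-bit
           weights suit quantized inference; training still wants
           floating point, hence "not for training them". */
        #include <stdint.h>

        int32_t sdot_lane(int32_t acc, const int8_t a[4], const int8_t b[4])
        {
            for (int i = 0; i < 4; i++)
                acc += (int32_t)a[i] * (int32_t)b[i];
            return acc;
        }

    A 128-bit vector holds four such lanes, so a single instruction retires 16 multiply-accumulates.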
  • If the Machines are Learning Machines, who is Learning the Machine-Learning Machine Machines? ;)

  • by tietokone-olmi ( 26595 ) on Monday May 29, 2017 @08:24PM (#54508189)

    Nothing in there points in any way to machine learning. There's just a fancy branch predictor, the design of which may have been informed by something related to neural networks, but that's true of all CPUs of the current generation (kind of like integrated memory controllers were 10 years ago).

    But that's just as well, given how AI is a marketing scam anyway.
