AI Hardware Technology

Flex Logix Says It's Solved Deep Learning's DRAM Problem (ieee.org) 40

An anonymous reader quotes a report from IEEE Spectrum: Deep learning has a DRAM problem. Systems designed to do difficult things in real time, such as telling a cat from a kid in a car's backup camera video stream, are continuously shuttling the data that makes up the neural network's guts from memory to the processor. The problem, according to startup Flex Logix, isn't a lack of storage for that data; it's a lack of bandwidth between the processor and memory. Some systems need four or even eight DRAM chips to sling hundreds of gigabits per second to the processor, which adds a lot of space and consumes considerable power. Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth the power.
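As a rough back-of-envelope check on that bandwidth claim, consider streaming a network's weights from DRAM for every video frame; the model size, frame rate, and re-fetch factor below are illustrative assumptions, not Flex Logix's figures:

    # Back-of-envelope estimate of DRAM traffic for real-time inference.
    # Every number here is an illustrative assumption, not a vendor figure.

    weights = 60e6          # parameters in the model (assumed)
    bytes_per_weight = 2    # 16-bit weights (assumed)
    fps = 30                # video frames processed per second (assumed)
    refetches = 4           # times each weight is re-read per frame because
                            # on-chip SRAM can't hold a whole layer (assumed)

    gbits_per_s = weights * bytes_per_weight * 8 * fps * refetches / 1e9
    print(f"weight traffic alone: {gbits_per_s:.0f} Gb/s")   # ~115 Gb/s

    # A single x16 DDR4-3200 chip peaks at 3200e6 * 16 bits ~= 51 Gb/s,
    # so a load like this already needs several DRAM chips in parallel.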

Mountain View-based Flex Logix had started to commercialize a new architecture for embedded field-programmable gate arrays (eFPGAs). But after some exploration, one of the founders, Cheng C. Wang, realized the technology could speed up neural networks. A neural network is made up of connections and "weights" that denote how strong those connections are. A good AI chip needs two things, explains the other founder, Geoff Tate. One is a lot of circuits that do the critical "inferencing" computation, called multiply and accumulate. "But what's even harder is that you have to be very good at bringing in all these weights, so that the multipliers always have the data they need in order to do the math that's required. [Wang] realized that the technology that we have in the interconnect of our FPGA, he could adapt to make an architecture that was extremely good at loading weights rapidly and efficiently, giving high performance and low power."
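Tate's "multiply and accumulate" is easy to make concrete. Here is a minimal Python/NumPy sketch of one fully connected layer with made-up shapes; the point is that the weight matrix W is the bulk of what has to stream in from memory:

    import numpy as np

    # One fully connected layer: each output is a multiply-accumulate
    # (dot product) of the input vector with one row of the weight matrix.
    # Shapes here are tiny and hypothetical; in a real network W runs to
    # tens of megabytes per layer, and it is W that streams in from DRAM.

    rng = np.random.default_rng(0)
    x = rng.standard_normal(256).astype(np.float32)         # input activations
    W = rng.standard_normal((512, 256)).astype(np.float32)  # layer weights

    y = np.zeros(512, dtype=np.float32)
    for i in range(W.shape[0]):
        acc = np.float32(0.0)
        for j in range(W.shape[1]):
            acc += W[i, j] * x[j]        # multiply and accumulate
        y[i] = acc

    assert np.allclose(y, W @ x, atol=1e-3)   # same as the matrix product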


Comments Filter:
  • by SuperKendall ( 25149 ) on Wednesday October 31, 2018 @04:30PM (#57570915)

    designed to do difficult things in real time, such as telling a cat from a kid in a car's backup camera video stream

    So now I'm really curious which one they think it's OK to back over.

  • From what little I remember of when neural networks made their first buzzword splash back in the 1990s, I think all the buzzwords in the summary are basically saying that they need an architecture that is really fast at multiplying large matrices. Yes? If so, this really is not in any way a new problem: fast matrix math has been a staple of high-performance computing since day 1, and these guys are just saying (I think) that they want to build a processor designed just for that purpose. Or am I missing something?
    • by viperidaenz ( 2515578 ) on Wednesday October 31, 2018 @05:00PM (#57571071)

      It appears they need to constantly change the data going through the matrix computations. That requires significant memory bandwidth, especially if the data set is too large to fit in cache. On top of the bandwidth, there is latency too. It's pointless doing a MAC in one cycle in your 5 GHz (0.2 ns cycle) processor if it takes 40 cycles to address a new column in your DRAM (first word on DDR4-4800 is 8 ns).

      The latency hasn't gotten any faster since DDR2-1066 CL4, which is 7.5 ns. It gets much worse when you need to address a new row instead of just changing the column.
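      The arithmetic behind those figures, sketched in Python (the DDR4-4800 CAS latency is an assumed CL19):

          # First-word latency: CAS cycles divided by the memory clock,
          # which for DDR is half the transfer rate (two transfers/clock).

          def first_word_ns(transfer_mt_s: float, cas_cycles: float) -> float:
              clock_mhz = transfer_mt_s / 2
              return cas_cycles / clock_mhz * 1000   # cycles / MHz -> ns

          print(first_word_ns(1066, 4))    # DDR2-1066 CL4 -> ~7.5 ns
          print(first_word_ns(4800, 19))   # DDR4-4800 CL19 (assumed) -> ~7.9 ns
          # At 5 GHz (0.2 ns/cycle), ~8 ns of DRAM latency is ~40 CPU cycles.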

      • by Anonymous Coward

        This sounds like cross-bar memory tech. It should fit fairly nicely for this sort of thing, especially if you put processors on each xbar node.

      • It's pointless doing a MAC in one cycle in your 5 GHz (0.2 ns cycle) processor if it takes 40 cycles to address a new column in your DRAM (first word on DDR4-4800 is 8 ns).

        Good algorithms haven't been doing serialized demand load since the first CPU with sixteen whole lines of cache memory was attached to a split-transaction memory bus.

        The first documented use of a data cache was on the IBM System/360 Model 85, introduced in 1969.

        For the record, that was also one of the first microcoded CPUs.

        With proper data prefetching, most of that DRAM latency can be hidden.

    • Indeed.

      Considering they use the phrase ... critical "inferencing" computation, called multiply and accumulate ... instead of the terms fused MAC or FMAC [wikipedia.org], I'm wondering why they aren't just using nVidia's Tensor Cores [nvidia.com]?
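      For reference, the "fused" in FMAC means the multiply and the add share a single rounding step instead of rounding twice; a tiny Python illustration (math.fma requires Python 3.13+):

          import math

          # a*b + c rounds twice: once after the multiply, once after the add.
          # math.fma(a, b, c) evaluates the same expression with one rounding.
          a, b, c = 1e16, 1.0 + 2**-52, -1e16

          print(a * b + c)          # 2.0 (the product was rounded first)
          print(math.fma(a, b, c))  # 2.220446049250313 (single rounding)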

    • by imgod2u ( 812837 ) on Wednesday October 31, 2018 @05:07PM (#57571101) Homepage

      The issue is that data movement is now the bottleneck, not the actual math itself.

      The architecture they propose allows high-bandwidth reads from DRAM across the data set, using an FPGA tile to do flexible data routing while being tied to the pins of a single DRAM chip, rather than to a traditional centralized CPU read/write bus, which generally has higher latency and limited bandwidth.

      It's essentially a better memory-controller architecture, one that emphasizes embarrassingly parallel data access that needs both high bandwidth and low latency but little in the way of random access.

      CPUs traditionally optimize for latency but not bandwidth. GPUs optimize for bandwidth but not latency.
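      A crude way to see the access-pattern tradeoff on an ordinary machine, as a Python sketch (timings are illustrative and machine-dependent):

          import time
          import numpy as np

          # Stream a large array in order vs. gather it in random order.
          # Caches and DRAM row buffers reward the predictable sequential case.

          n = 20_000_000
          data = np.ones(n, dtype=np.float32)
          orders = {"sequential": np.arange(n), "random": np.random.permutation(n)}

          for name, idx in orders.items():
              t0 = time.perf_counter()
              total = data[idx].sum()        # gather, then reduce
              print(f"{name:10s}: {time.perf_counter() - t0:.3f} s")
          # The random gather typically runs several times slower.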

      • CPUs traditionally optimize for latency but not bandwidth.

        In my experience, memory latency hasn't really improved since the days of SDRAM. Each DDR generation doubled the throughput, but it also doubled the latency as measured in clock cycles (e.g. CL2.5 to CL5 to CL10), meaning the actual latency stayed the same.

        The CPUs themselves have gotten longer pipelines and wider SIMD units to improve throughput at the expense of latency (in clock cycles). While clock speeds have also increased since the days of SDRAM, the design as a whole doesn't exactly seem to be optimized for latency.
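        Worked out with those CAS numbers (the part pairings are typical for their era, not exhaustive):

            # Each DDR generation doubled the transfer rate and the CAS latency
            # in cycles, so the first-word latency in nanoseconds barely moved.

            for name, mt_s, cl in [("DDR-266", 266, 2.5),
                                   ("DDR2-533", 533, 5.0),
                                   ("DDR3-1066", 1066, 10.0)]:
                clock_mhz = mt_s / 2          # DDR: two transfers per clock
                print(f"{name:9} CL{cl:>4}: {cl / clock_mhz * 1000:.1f} ns")
            # All three come out to roughly 18.8 ns.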

    • Neural nets were "a thing" in the 1950s.

      They are in the business of blinding you with bullshit.

      It's perfectly true that there is a problem of bandwidth between memory and the CPU, and that the current popular solution is to have a nice wide path between the memory chips and the cache. One transfer accounts for a whole bunch of bytes, and DRAM usually does four or more data transfers for each address transfer.

      This is based on optimising for what today's hardware does well.

      As for "more memory chips means mor

  • It is not really any "bandwidth problem". It is that deep learning is actually pretty bad at solving classification problems. These are just some more people trying to get rich before the ugly truth becomes impossible to ignore.

    • Ha, how do you explain "hotdog" / "not hotdog"? Checkmate!
    • by Anonymous Coward

      It is that deep learning is actually pretty bad at solving classification problems.

      It is faster and cheaper than humans at QC on some of the production lines where I work. So as we expand, we don't have to hire more QC engineers, just better direct the ones we have at the fewer, harder tasks. Our R&D department now uses it for jump-starting some optimization problems, which previously took a lot of human tuning to stop from getting stuck at local minima. So considering that is just two examples of deep learning actually being used internally, having already outperformed and replaced human effort, I'd say it handles classification just fine.

  • I remember when I used to get IEEE Spectrum mailed to me.

    I don't know why they started doing that or how they got my address. I don't know why they stopped. All I remember is getting it and tossing it in the trash every single time.

  • by Gerald Butler ( 3528265 ) on Wednesday October 31, 2018 @07:13PM (#57571575)

    Is there a product that I can buy right now? If not, then they haven't "solved the problem"; they "think they have a possible way of solving the problem that has yet to be proven to work".

    • Flex Logix counts Boeing among its customers for its high-throughput embedded FPGA product.

      Apparently, you can buy it right now. Did you try contacting them?

    • by gtall ( 79522 )

      You have to understand what you are buying. Embedded FPGA means you have an SoC design in hand (say, your favorite FrankenARM SoC, for which you licensed all the IP and have a foundry lined up). You now get to license their FPGA IP and put it in your SoC as well, after a suitable redesign of your SoC, because what was previously off-board is now on-board. It would probably be a significant engineering effort to integrate the FPGA IP with your own designs.

  • cat from a kid

    Yeah, because killing cats instead of kids is a great goal to have. How about just avoiding killing ANYTHING?!
    Oh yeah, because to do that you need to go slowly, and rich people with self-driving cars want to go fast.

  • When your problem calls for an expensive fab that you don't have funding for, FPGA seems like the solution. Again and again.

  • The first phrase extracted from the article for the blurb on a real nerd site would have been "folded Benes network".

    Also, on a real nerd site, it would have rendered the S-with-caron properly, as well.

    Why can't we have nice things?
