Flex Logix Says It's Solved Deep Learning's DRAM Problem (ieee.org)
An anonymous reader quotes a report from IEEE Spectrum: Deep learning has a DRAM problem. Systems designed to do difficult things in real time, such as telling a cat from a kid in a car's backup camera video stream, are continuously shuttling the data that makes up the neural network's guts from memory to the processor. The problem, according to startup Flex Logix, isn't a lack of storage for that data; it's a lack of bandwidth between the processor and memory. Some systems need four or even eight DRAM chips to sling hundreds of gigabits per second to the processor, which adds a lot of space and consumes considerable power. Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth the power.
Mountain View-based Flex Logix had started to commercialize a new architecture for embedded field programmable gate arrays (eFPGAs). But after some exploration, one of the founders, Cheng C. Wang, realized the technology could speed neural networks. A neural network is made up of connections and "weights" that denote how strong those connections are. A good AI chip needs two things, explains the other founder Geoff Tate. One is a lot of circuits that do the critical "inferencing" computation, called multiply and accumulate. "But what's even harder is that you have to be very good at bringing in all these weights, so that the multipliers always have the data they need in order to do the math that's required. [Wang] realized that the technology that we have in the interconnect of our FPGA, he could adapt to make an architecture that was extremely good at loading weights rapidly and efficiently, giving high performance and low power."
Wait you don't car about what now??? (Score:4, Funny)
designed to do difficult things in real time, such as telling a cat from a kid in a car's backup camera video stream
So now I'm really curious which one they think it's OK to back over.
Re: (Score:2)
Can you please explain? You keep posting this shit. I want to know why.
Re: (Score:2)
Can you please explain? You keep posting this shit. I want to know why.
Just a mental disease. Ignore, unless you're his physician.
Re: (Score:2)
Cars are pretty easy to hit, much easier than hitting a cat.
Re: (Score:2)
Matrix Multiplication? (Score:2)
Re:Matrix Multiplication? (Score:5, Interesting)
It appears they need to constantly change the data going through the matrix computations. That requires significant memory bandwidth, especially if the data set is too large to fit in cache. On top of the bandwidth, there is latency too. It's pointless doing a MAC in one cycle on your 5GHz (0.2ns cycle) processor if it takes 40 cycles to address a new column in your DRAM (first word on DDR4-4800 is 8ns).
The latency hasn't got any faster since DDR2-1066 CL4, which is 7.5ns. It gets much worse when you need to address a new row instead of just changing the column.
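A quick back-of-the-envelope sketch (Python, purely illustrative, using the figures in the parent: a 5 GHz core and 8 ns first-word latency) of what that stall costs:

    # Rough cost of one uncached DRAM column access in core cycles.
    core_clock_ghz = 5.0                 # 5 GHz core from the parent comment
    cycle_ns = 1.0 / core_clock_ghz      # 0.2 ns per cycle
    first_word_latency_ns = 8.0          # DDR4-4800 first-word latency cited above

    stall_cycles = first_word_latency_ns / cycle_ns
    print(f"One new DRAM column costs ~{stall_cycles:.0f} core cycles")
    # ~40 cycles, i.e. ~40 single-cycle MACs forgone if the pipeline stalls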
Re: (Score:1)
This sounds like a cross-bar memory tech. Should fit fairly nicely for this sort of thing. Esp if you put processors on each xbar node.
anorexic matrix nervosa (Score:2)
Good algorithms haven't been doing serialized demand load since the first CPU with sixteen whole lines of cache memory was attached to a split-transaction memory bus.
For the record, that was also one of the first microcoded CPUs.
With proper dat
Re: (Score:3)
Indeed.
Considering they use the phrase ... critical "inferencing" computation, called multiply and accumulate ... instead of the terms fused MAC or FMAC [wikipedia.org], I'm wondering why they aren't just using nVidia's Tensor Cores [nvidia.com]?
Re:Matrix Multiplication? (Score:5, Interesting)
The issue is that data movement is now the bottleneck, not the actual math itself.
The architecture they propose allows high bandwidth reads from DRAM over the data set using an FPGA tile to do the flexible data routing while being tied to the pins of a single DRAM chip rather than traditional CPU read/write centralized busses that generally have higher latencies and limited bandwidth.
It's essentially a better memory controller architecture that emphasizes embarrassingly parallel data access that needs both high bandwidth and low latency but little in the way of random access.
CPUs traditionally optimize for latency but not bandwidth. GPUs optimize for bandwidth but not latency.
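A rough way to see the data-movement wall in numbers is to compare the MACs an accelerator can issue with the weights one DRAM chip can deliver. The figures below are illustrative assumptions (a 4 TOPS engine, one ~25.6 GB/s DDR4-3200 channel, 8-bit weights), not Flex Logix or vendor specs:

    # Can a single DRAM channel keep the multipliers fed? (illustrative numbers)
    peak_macs_per_s = 4e12            # assumed 4 TOPS inference engine
    dram_bandwidth_bytes_s = 25.6e9   # assumed one DDR4-3200 x64 channel
    bytes_per_weight = 1              # assumed 8-bit quantized weights

    weights_per_s = dram_bandwidth_bytes_s / bytes_per_weight
    reuse_needed = peak_macs_per_s / weights_per_s
    print(f"Each fetched weight must be reused ~{reuse_needed:.0f}x on-chip")
    # If the layer can't supply that reuse, the design is memory-bound,
    # no matter how many MAC units are on the die.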
Re: (Score:3)
CPUs traditionally optimize for latency but not bandwidth.
In my experience, memory latency essentially hasn't improved since the days of SDRAM. As each DDR generation doubled the throughput, it also doubled the latency as measured in clock cycles (e.g., CL2.5 to CL5 to CL10), meaning the actual latency in nanoseconds stayed the same.
The CPUs themselves have gotten longer pipelines and wider SIMD units to improve throughput at the expense of latency (in clock cycles). While the clock speeds have also increased from the days of SDRAM, the design as a whole doesn't exactly seem to
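The CL progression the parent cites (CL2.5, CL5, CL10) does work out to roughly the same wall-clock latency once you divide by the I/O clock; the data-rate pairings below are assumptions chosen to illustrate that point, not claims about specific modules:

    # CAS latency in ns = CL cycles / I/O clock, where I/O clock = data rate / 2.
    parts = [
        ("DDR-400   CL2.5", 2.5,  200),  # assumed pairings for illustration
        ("DDR2-800  CL5",   5.0,  400),
        ("DDR3-1600 CL10", 10.0,  800),
    ]
    for name, cl, io_clock_mhz in parts:
        latency_ns = cl * 1000.0 / io_clock_mhz
        print(f"{name}  ~{latency_ns:.1f} ns")   # each works out to ~12.5 ns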
Re: (Score:3)
They are in the business of blinding you with bullshit.
It's perfectly true that there is a problem of bandwidth between memory and the CPU, and that the current popular solution is to have a nice wide path between the memory chips and the cache. One transfer accounts for a whole bunch of bytes, and DRAM usually does four or more data transfers for each address transfer.
This is based on optimising for what today's hardware does well.
As for "more memory chips means mor
The actual reality here (Score:2)
Is that it is not a "bandwidth problem". It is that deep learning is actually pretty bad at solving classification problems. These are just some more people trying to get rich before the ugly truth becomes impossible to ignore.
Re: (Score:1)
Re: (Score:3)
These do not do the inferior "deep" learning. They do proper learning where the neural network is designed for the task. Of course they perform better.
Re: (Score:1)
Re: (Score:1)
It is that deep learning is actually pretty bad at solving classification problems.
It is faster and cheaper than humans at QC on some of the production lines where I work. So as we expand, we don't have to hire more QC engineers, just better direct the ones we have at the fewer, harder tasks. Our R&D department uses it now for jump-starting some optimization problems, which previously took a lot of human tuning to stop from getting stuck at local minima. So considering that is just two examples of deep learning actually being used internally, having already outperformed and replac
IEEE Spectrum (Score:1)
I remember when I used to get IEEE Spectrum mailed to me.
I don't know why they started doing that or how they got my address. I don't know why they stopped. All I remember is getting it and tossing it in the trash every single time.
Is there a product on the market I can buy? (Score:3, Insightful)
That I can buy right now? If not, then they haven't "solved the problem"; they "think they have a possible way of solving the problem that is yet to be proven to work".
Re: (Score:3)
Flex Logix counts Boeing among its customers for its high-throughput embedded FPGA product.
Apparently, you can buy it right now. Did you try contacting them ?
Re: (Score:3)
You have to understand what you are buying. Embedded FPGA means you have an SoC design in hand (say, your favorite FrankenARM SoC which you licensed all the IP for and have a foundry lined up). You now get to license their FPGA IP and put that in your SoC as well... after a suitable redesign of your SoC, because what was previously off-board is now on-board. It would probably be a significant engineering effort to integrate the FPGA IP with your own designs.
Cat Killer (Score:2)
cat from a kid
Yeah, because killing cats instead of kids is a great goal to have. How about just avoiding killing ANYTHING?!
Oh yeah, because to do that you need to go slowly, and rich people with self-driving cars want to go fast.
FPGA startups (Score:2)
When your problem calls for an expensive fab that you don't have funding for, FPGA seems like the solution. Again and again.
nerd threshold: double FAIL (Score:2)
The first phrase extracted from the article for the blurb on a real nerd site would have been "folded Beneš network".
Also, on a real nerd site, it would have rendered the S-with-caron properly, as well.
Why can't we have nice things?