FPGA Supercomputers 237
olafva writes: "You may be interested in this new breakthrough! See NASA Press Release and a couple of today's local stories for a remarkable paradigm shift in "Computing Faster without CPUs"." CmdrTaco said he'd believe it when he saw it. Well, they've got pictures. (Update: 03/29 5:02 PM by michael : At NASA's request, we've modified the links in the above story to reduce the load on their Public Affairs website. The same content is at the new links.)
FYI: Star Bridge=Fraudish Hype (Score:1)
If you're interested in FPGAs look for real research. Anyone can put together a board with a few hundred FPGAs. The real research is in designing a way of really using the chips.
SB's original benchmark that gave them the claim of "supercomputing performance" was "How many 1-bit adders can I stick in a couple of hundred FPGAs?" Bogus.
Fraud..
Re:FPGA? (Score:5)
An FPGA is a combination hardware/software device. If you passed that Digital Circuit Design class back in college, you remember that you can implement a 20-bit divider using - what - 84 NOR gates or something like that? There are orders of magnitude more gates in these devices, and orders of magnitude more complicated tasks can be accomplished.
You write a 'program' as a collection of declarative statements from the "Predicate Calculus" around the internal structure of input and output pins, and the FPGA compiler figures out which "gates" to "program" in the "field".
As the number of gates, intermediate terms, inputs, and outputs has grown, so has the complexity of the expressions, thus programs, that these puppies can handle.
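A toy sketch of that compile step (purely illustrative; a real FPGA toolchain also does technology mapping, placement, and routing): enumerate a declarative boolean expression over all its inputs to get the bits that would be "programmed" into the field.

```python
# Hypothetical sketch: how a declarative logic statement becomes a
# "programmed" truth table. A real FPGA toolchain does vastly more
# (mapping, placement, routing); the core idea is enumeration.

from itertools import product

def synthesize(expr, n_inputs):
    """Enumerate expr over all input combinations -> the bits that
    would be loaded into a lookup table."""
    return [expr(*bits) for bits in product([0, 1], repeat=n_inputs)]

# Declarative statement: output = (a AND b) OR (NOT c)
table = synthesize(lambda a, b, c: (a & b) | (1 - c), 3)
print(table)  # one output bit per input combination, 2^3 = 8 entries
```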
Other groups working on similar stuff (Score:5)
There are a lot of groups working on similar stuff:
There are several more groups - you can find a more complete list on the People section of ISI's web site.
Smoke and mirrors? (Score:5)
The important things to note:
1) Even though you can reprogram an FPGA in about a millisecond, the logistics of getting all the right programs to all the right FPGAs on a very dense board is left as an exercise to the reader (hint -- it is not a simple walk in the park).
2) Even though you can reprogram an FPGA in about a millisecond (yielding the claimed 1000-times-a-second machine re-configuration), it takes many minutes (sometimes hours) for the typical VHDL or similar program to produce the code that you will want to download to those FPGAs. And, of course, if you want dissimilar loads for various groupings of those chips, you will need to repeat the above with feeling, over and over, and over.
3) This particular company was crowing about their patented graphical programming language last year, and also didn't have anything real to show. In other words, no one had actually seen them push buttons, and have this magical language actually produce runnable code for all those FPGA's to do anything useful.
As near as I can tell, this whole thing is based on some guy's idea of raising money so he can drive fast cars, etc, etc. What really hurts is seeing NASA getting sucked into this black hole...
Re:This just in... (Score:1)
Or allow the *real* positives through...
Re:All I know will be useless! (Score:1)
Stability is obviously not an issue in those cases.
--
Re:Press release (Score:1)
Hint: strings nasapressrel.doc | less
-cbd.
How does this differ from my CPU? (Score:2)
It takes many minutes (sometimes hours) for my compiler to build a medium or large project. But I don't store the source code on my computer to run, I store the object code, so I don't care how long the compiler takes to produce it.
I've never used an FPGA; would it not be possible to do the same thing for them? Compile a program once into "FPGA code" which then gets stored as the executable file to be sent to the chip when invoked?
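The idea in the question maps naturally onto a compile-once cache: pay the slow synthesis cost the first time, store the resulting bitstream, and reuse it. A minimal sketch, with all names made up for illustration:

```python
# Hypothetical sketch of the "compile once, store the bitstream" idea:
# cache the slow synthesis result keyed by a hash of the source, so the
# multi-minute compile only happens the first time.

import hashlib

_bitstream_cache = {}

def slow_synthesize(hdl_source):
    # Stand-in for the minutes-to-hours VHDL -> bitstream step.
    return b"BITSTREAM:" + hdl_source.encode()

def get_bitstream(hdl_source):
    key = hashlib.sha256(hdl_source.encode()).hexdigest()
    if key not in _bitstream_cache:
        _bitstream_cache[key] = slow_synthesize(hdl_source)  # slow, once
    return _bitstream_cache[key]  # fast thereafter

bs1 = get_bitstream("entity adder ...")
bs2 = get_bitstream("entity adder ...")  # cache hit, no recompile
print(bs1 == bs2)
```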
The real question is... (Score:2)
Re:The emperor looks great in those new clothes (Score:1)
Kythe
(Remove "x"'s from
Re:So we learn a new skill (Score:1)
> Linux 10 years ago
Simple.
Linux Kernel 0.01 - Released 01 Aug 1991
Linux Kernel 0.02 - Released 05 Oct 1991
I was there too....
Stephen L. Palmer
---
Re:So we learn a new skill (Score:1)
> that it is time to start the 10 year
> celebration planning
Most definitely! Where's the party!?!?!
Stephen L. Palmer
---
The NASA Press Release (Score:5)
National Aeronautics and
Space Administration
Langley Research Center
Hampton, Virginia 23681-2199
Bill Uher
NASA Langley Research Center, Hampton, Va.
(757) 864-3189
For Release: March 26, 2001
For those of you who can't read the Word Document
RELEASE NO. 01-021
NASA Langley to test New Hyper Computer System
Computing Faster Than Engineers Can Think
NASA Langley engineers are exploring new tools and techniques that may move them and the projects they develop beyond the serial world into a parallel universe.
Via a Space Act Agreement, NASA Langley Research Center will receive a HAL (Hyper Algorithmic Logic)-15 Hypercomputer from Star Bridge Systems, Inc. of Midvale, Utah. The system is said to be faster and more versatile than any supercomputer on the market and will change the way we think about computational methods.
Taking up no more space than a standard desktop computer and using no more electrical current than a hair drier, the HAL-15 is the first of a new breed of high performance computer that replaces the traditional central processing units with faster Field Programmable Gate Arrays (FPGAs). These are specialty chips on a circuit board that can reconfigure themselves hundreds or thousands of times a second. This makes it possible for multiple applications to run at the same time on the same chips, making them 1000 times faster than traditional commercial CPUs. This maximizes the use of millions of transistors (gates) on each silicon array. Traditional processors, because of their general purpose design, are wasteful, since for most applications they use only a small fraction of their silicon at any time.
HAL is programmed graphically using the company's proprietary programming language, VIVA. This language facilitates rapid custom software development by the system's users. Besides NASA Langley, other users will include the San Diego Supercomputer Center, Department of Defense, Hollywood film industry and the telecommunications industry.
-more-
NASA Langley is among the first in the world to get "hands on" experience with the new system. It will be implemented to explore:
-Solutions for structural, electromagnetic and fluid analysis
-Radiation analysis for astronaut safety
-Atmospheric science analysis
-Digital signal processing
-Pattern recognition
-Acoustic analysis
Media Briefing: A media briefing will be held at 9 a.m., Tuesday, March 27, at the Pearl Young Theater Newsroom, Bldg. 1202, 5 North Dryden Street at NASA Langley Research Center. There will be a news briefing and short demonstration at 9 am followed by a demonstration and discussion for scientists and engineers. HAL developer Kent Gilson and Star Bridge Systems, Inc. CEO Brent Ward will conduct the demonstration. Two Langley researchers, Dr. Robert Singletarry and Dr. Olaf Storaasli, trained on the new system and will report on their first-hand experiences with the hypercomputer.
-end-
didn't we see this before (Score:2)
--
How? (Score:1)
I've talked about this stuff at a highly conceptual level for years and have a very strong CS background, but I keep getting lost in the marketing literature.
Thanks,
Joe
HAL 15? (Score:1)
Re:Nice usage scenario. (Score:1)
Finally I can actually make coffee at home; I've always wondered how they ran the coffee pot at 7-11 - where I buy all my coffee - but now I know: They use a supercomputer!
This also explains why Starbucks coffee is so expensive... they've been using these "hypercomputers" in a secret back room at each store.
Tks for the 2day old "news" (Score:1)
Geez, I coulda gone to see this in person.
Offtopic Msft bash seen on 3COM:
"The performance of the server connection depends heavily on the network operating system and underlying protocols. UNIX operating systems appear better adapted to handling Gigabit Ethernet speeds, while the TCP/IP protocol running under Microsoft NT 4.0 still has much room for improvement. TCP/IP is a connection-oriented and complex protocol that requires high CPU bandwidth to process packets at gigabit per second rates. "
Re:Other groups working on similar stuff (Score:1)
Bullshit. No more expensive and painstaking than it was to make a pentium processor and a Windows operating system. Christ but both of those architectures are nightmares of complexity, and yet they still got built.
No, the real problem is that it's a wholesale change in the way of thinking about solutions and applications, and we don't have enough engineers and programmers trained to think that way.
Yet.
Pic don't load and article has glaring errors. (Score:1)
Re:So we learn a new skill (Score:2)
Re:So we learn a new skill (Score:2)
Point well taken. I wasn't trying to disprove anything, merely cite by example that learning a new skill as one grows older isn't a problem at all. As for it being easier to learn when young, that is true of some things (languages) at particularly early years (before six or eight years of age being the typical ages cited), but is certainly unproven for anything beyond that. For example, AFAIK it is unproven that learning German at 21 is easier than learning German would be at 31 or 41.
To answer your question, it is quite likely that my screenplays will suck far worse than the Matrix.
So we learn a new skill (Score:3)
It is extremely cool to have this technology emerging. As for our years of skills translating, or not, it isn't really all that important. We will simply learn how to program this new equipment, from scratch if necessary.
It is a myth that the young learn better than the less-young. As an example, I learned German at 21 (and am now very fluent), Linux at 26, how to fly a plane at 33, and am now learning to write screenplays at 36. (As an amusing counterpoint I will almost certainly never learn to spell, even at 60. Not because I cannot, but because I have better things to do with my time, and a spell checker when absolutely necessary, but most of all, because I take perverse pleasure in yanking the grammar nazis' chains). While I doubt I'll be performing any airshows, or attending the Oscars, anytime soon, the point remains: we have already been taught how to think and learn. Learning how to use and program FPGAs won't be that big of a problem, with or without years of programming experience behind us.
Re:Umm (Score:1)
I know you're just trolling, but why is everything always money money money.
--
Delphis
Re:"I'm sorry NASA, I can't do that..." (Score:1)
Your reading comprehension skills must be amazing. Please tell me -- what is your IQ?
d
Re:So we learn a new skill (Score:1)
I too have learned stuff as I've gotten older, but that wasn't what I meant. The history of computer science since the 40s and 50s really hasn't been as shallow as people like to think. Somebody else's comment to the effect that "things of an essentially linguistic nature can all be learned in 21 days" is short-sighted. Go to any computer science library in any university. Look at the shelves and shelves of books, journals, and papers. Stuff on compiler design, cache performance and optimization, several hundred decades-long debates percolating under the general heading of language design, relational databases and object databases and which is better...
It's easy to forget that five decades of very smart people have dedicated their careers to advancing this whole "computer science" thing. In our current historical situation, the entire field has been flattened down to "what can I do with web browsers and servers?" in the popular mind. People start to believe that something like J2EE represents all of human thought regarding computer science, or at least, all of it that's worth preserving.
programming FPGAs is different (Score:5)
New amazing technology never comes around. (Score:2)
Wow, sounds like it could be useful for CERTAIN things, but still amazing nonetheless. I always hear of this amazing new technology coming out: FPGA supercomputers, solid state hard drives, REAL 3D monitors that cost $5 to make from existing LCD displays, emulated gills to breathe under water, etc.
I just wish some of these things could make it to my house. Is it because of the ridiculous marketing and business planning that these inventions depend on to succeed, or is it just because they don't want to market these ideas and sell them to dead end companies?
I'm not totally sure, but I'd like to know what's stopping some of these things from making it to the end user.
Probably only faster for simple operations (Score:4)
I couldn't read the press release (MS Word - bah), but judging from the websites, the FPGA is dynamically programmed to perform very specific tasks in hardware.
Since these specific tasks can run in hardware, they will run 1000 times faster than a Pentium. There is no way in the world this machine is going to run general purpose applications at this speed. Only very specific, small, algorithms. Sorry, no 6000 fps for Quake ;-)
This makes the machine useless for everyday use in your home. However, I agree this machine may be very useful for flight-control computers.
What does it look like? (Score:1)
But I looked at the pictures and that was simply not the case! The case being, it didn't look like a case. Uuuhh, should I be writing this in upper case?
Aargh, that damn coffee. How fast will it compile my kernel?
Re:Nice usage scenario. (Score:1)
Nice usage scenario. (Score:5)
Yeah, that's exactly what springs to my mind when I try to come up with uses for a supercomputer the size of a PC. To run my coffee pot.
Finally I can actually make coffee at home; I've always wondered how they ran the coffee pot at 7-11 - where I buy all my coffee - but now I know: They use a supercomputer!
Re:Viva means Life?? (Score:1)
Re:The emperor looks great in those new clothes (Score:1)
HAL said it too near the end of the same movie.
Re:Press release (Score:4)
Further, most of the 95% of the World that you believe use MS Word are not the people that will have any interest in reading about this. The people who are interested are mainly scientists and engineers, two groups who tend to be more likely than average to use a platform other than a PC running some version of Windows. These guys are more likely to write things in LaTeX than Word. But they will have an equal chance with everyone else of being able to read HTML.
I certainly don't have any software installed on my system that can read Word files. I know of several programs that could do an approximate conversion, but why should I install extra software, using my time and computing resources, to read this, when it's not even close to the format that any reasonable person would have expected it to be in anyway?
Re:WOW!! Joint NASA/Sony Announcement!!! (Score:2)
Bryan R.
WOW!! Joint NASA/Sony Announcement!!! (Score:3)
Bryan R.
FPGA? (Score:2)
Specs for the HAL-15... (Score:2)
HAL-15, desktop model (the one NASA is testing)
http://www.starbridgesystems.com/prod-hal1.html
HAL-300, the rack-mounted, 12.8 TeraOp version
http://www.starbridgesystems.com/prod-hal3.html
The Star Bridge website seems strangely non-Slashdotted, considering how much trouble I had getting the NASA sites to load.
When I saw this one, I was sure it had to be an early April Fool's joke, but it looks like they're for real. The company's hype still sounds pretty pie in the sky, but if they can deliver even 10% of what they're promising, a hell of a lot of computational power could be available in a few years.
They cite cost savings in chip design (simpler, lower power, etc.) and chipfab retooling as a point in their favor (a single type of chip, customized for different applications). They cite it for speed of implementation, rather than reduced cost, but presumably that would come later. The HAL-300 is priced somewhere around $26 million, so don't bother to check eBay for a few months yet.
Not that much of a big deal (Score:2)
I think the only real reason that NASA is going to be `one of the first', is simply the fact that nobody seems to buy these things. Which is a pity. What's really REALLY sad, is that their claim to have a $1000 version available by now (link to /. article [slashdot.org]) is still vaporware.
Re:The emperor looks great in those new clothes (Score:2)
75 GFLOPS for the GeForce 3 - kinda hard to beat.
Re:All I know will be useless! (Score:2)
What'd be neat would be if they sold this thing at a price reflecting its cost (an FPGA chip) rather than the customer's ability to pay... then we could all play with them.
Re:All I know will be useless! (Score:3)
"Star Bridge" sounds familiar... (Score:2)
It was used for calculating the gravitational interaction of thousands of bodies -- a very parallel and complex problem. The solution was many custom processors in parallel, and it was so successful (and cheap!) that it outperformed multi-million dollar supercomputers at a fraction of the cost.
The downside was that it was a single-use system -- it could only do the calculation it was hard-wired to do.
Since the site is slammed, I can't see what they're actually doing... but the name is sure close. The FPGA idea is neat, because it would relieve the single-use limitation.
I'm still not holding my breath waiting for one of these to appear under my desk, though...
Correction (Score:2)
Actually, there is [microsoft.com].
Re:FPGA? (Score:2)
Logic operations can be described with truth tables. FPGAs contain programmable truth tables (called lookup tables, or LUTs), so you can implement whatever logic operation you want. They also contain programmable interconnects that allow you to join your LUTs in any way you want.
Usually, they also contain some memory, because it takes a lot of LUTs and interconnects to build memory, and the resulting memory would be very slow and wasteful.
How is this faster than a CPU? Well, the win comes when you design a custom circuit to perform a certain task, rather than using a general-purpose CPU. For instance, if you could make a circuit to do something at 100MHz when it would take, say, 100 Pentium instructions, then your FPGA would outperform a 10GHz Pentium!
Used in this way, FPGAs are the ultimate parallel computer. They have many thousands of very small processing units (LUTs).
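The arithmetic behind that claim is easy to sanity-check (assuming, generously, one "circuit result" per clock and one instruction per CPU cycle):

```python
# Back-of-the-envelope check of the claim above: a custom circuit
# clocked at 100 MHz that finishes in one cycle what takes a CPU
# 100 instructions matches a 100 * 100 MHz = 10 GHz serial CPU.

circuit_clock_hz = 100e6        # custom FPGA circuit
instructions_replaced = 100     # CPU work done per circuit cycle
equivalent_cpu_hz = circuit_clock_hz * instructions_replaced
print(equivalent_cpu_hz / 1e9)  # equivalent clock rate in GHz
```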
--
Patrick Doyle
Re:The emperor looks great in those new clothes (Score:2)
--
Patrick Doyle
Re:All I know will be useless! (Score:2)
--
Patrick Doyle
HAL (Score:2)
HAL, yeah, right, "Open the goatsex link HAL" "I'm sorry Dave, you know I can't do that"
And we're still 2 days from 01-04-01.
Re:"Star Bridge" sounds familiar... (Score:2)
Re:FPGA? (Score:2)
The hard part is designing the circuit. Compilation down to silicon is a known hard job, with layout and drawing abstraction boundaries being two main stumbling blocks.
blue sky musing: On the horizon for mainstream acceptance are profiling feedback optimisers, which produce specialised versions of code that run very fast for a limited set of [common] inputs. These currently go from a higher level language to a lower level language (java JITs like HotSpot or transmeta's codemorphing) or from the lowlevel language to the same lowlevel language (HPs dynamo).
It would be really cool to see this technology applied to creating FPGAs, where the meta software notices that a certain basic block is taken often, and has mainly bit-twiddling operations. If it is taken often enough, and is long enough (this is where the specialisation of dynamo comes in -- it basically just creates optimised long basic blocks) it makes perfect sense to compile it to silicon.
Eventually, the GHz race WILL peter out, and we'll be forced into this sort of generalised specialisation to get the 90/10 any faster.
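The profiling half of that idea is simple to sketch: count basic-block executions and flag a block as a hardware-specialisation candidate once it crosses a hotness threshold. Everything here (names, threshold) is made up for illustration:

```python
# Toy sketch of the profiling idea above: count how often each basic
# block runs and flag candidates for hardware specialisation once
# they cross a (made-up) hotness threshold.

from collections import Counter

HOT_THRESHOLD = 1000  # arbitrary illustrative cutoff

block_counts = Counter()
hardware_candidates = set()

def profile(block_id):
    block_counts[block_id] += 1
    if block_counts[block_id] >= HOT_THRESHOLD:
        hardware_candidates.add(block_id)  # "compile this one to silicon"

for _ in range(1500):
    profile("bit_twiddle_loop")
profile("cold_startup_code")

print(hardware_candidates)  # only the hot block qualifies
```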
Fig leaves are in fashion again (Score:2)
I wonder; would it be more useful to market these as reprogrammable CPUs? I.e., don't make the poor hardware designer design the whole CPU; give them a few instruction slots where you take care of the decoding, in-order commit, and speculation, and they get to design the actual instruction.
In outline, they'd declare: this instruction reads registers x, y, z, writes a, b, c, and will require so many cycles to complete after its inputs arrive.
Has this been tried and failed, is this what they do, or are there other reasons why It Would Never Work?
Re:FPGA? (Score:2)
Also, the logic primitives in a programmable device (e.g. a slice in a Xilinx Virtex FPGA) can run extremely fast. They are not the limiting factor in getting speed from an FPGA. The much bigger issue is routing.
In addition to the actual logic and routing, there are configuration bits (the SRAM/FLASH/antifuse bits that are used to actually cause the device to implement the logic you want) and the support logic to program the configuration bits. There are millions of configuration bits on larger FPGAs. And don't forget the fact that the IO cells in most FPGAs support multiple I/O standards and usually contain a flip-flop and a small amount of miscellaneous stuff (e.g. a couple muxes for the output enable and clock select).
On the software side, generating logic equations is well known. The issue is in taking advantage of the specific architecture of the targeted device and all its special features. And the other issue is finding the optimal routing between the logic resources and memories you've used. Both of these issues have been and continue to be researched.
Automated Gate Development (Score:2)
I guess what I'm getting at is that yeah, a programmer could design & layout the chip according to his needs, but wouldn't it be better to describe the chip (ala C-Program), and run it through another system that would program your chip most efficiently?
Re:So we learn a new skill (Score:2)
Having gone completely through the process myself, it's as easy as skiing for me, so I can't objectively analyze it.
The biggest problem is in debugging; you have to trace through dozens, hundreds or thousands of "signals" on a simulator. Logging is also not always an option.
-Michael
Re:FPGA? (Score:2)
The details are exploited or emulated by the synthesizer stage (if memory serves). Thus you can abstractly program with VHDL or what-have-you and not worry too much about what's really happening. I'm curious to learn what 'VIVA' adds to the development environment. Maybe it's Visual VHDL (tm) with drag and drop widgets.
-Michael
I want one, and have for a long time! (Score:2)
I want to see a grid of 1000x1000 single-bit clocked cells that can be reprogrammed on the fly... I'll pay up to US$300 for one to play with, provided it does the clocking as I specified above. At a bare minimum I could do FFTs in real time on a 100 MHz 12-bit data stream with it.
--Mike--
Re:I want one, and have for a long time! (Score:2)
As far as "storage" (RAM) in the traditional sense, there is none... just the states of the individual bit computers. Taken in combination, you can program anything from a pipeline multiplier through string comparison, etc. Pipe the data in one corner, and out the other, using DMA to feed it from the main system bus.
I hope that all makes sense... I'm tired, and need my bandwidth fix, thanks to NorthPoint's demise.
--Mike--
Re: Press release contained a virus? (Score:2)
great quote from press release (Score:2)
Re:Press release (Score:3)
Abiword [abisource.com] runs on just about any platform you can use on a PC and reads MS Word files pretty well. It reads this press release just fine.
Steve
Re:Erm.... The Name.... (Score:2)
Rehash of Starbridge systems (Score:2)
What Happened To Starbridge's Supercomputer [slashdot.org]
Reconfigurable Supercomputers [slashdot.org]
Not NAND but LUT (Score:5)
'Gates' figures on FPGAs are thus rough estimates of how many NAND gates would be needed to provide similar functionality.
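One way to see why the NAND-equivalent accounting works: a LUT and a small NAND network can implement exactly the same function. Here both compute XOR, with the LUT holding 4 configuration bits:

```python
# A 2-input LUT and a small NAND network implement the same function;
# "gate count" figures just express LUT capacity in NAND-equivalents.
# Here both compute XOR.

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    # Classic 4-NAND XOR construction.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

XOR_LUT = [0, 1, 1, 0]  # 4 config bits, indexed by (a << 1) | b

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == XOR_LUT[(a << 1) | b]
print("LUT and NAND network agree")
```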
Savant
Re:Probably only faster for simple operations (Score:2)
Why not? The algorithms may work best for small-grain problems, but what is any graphics program but something that computes thousands of pixels at the same time? I'd imagine image-processing (in general) is highly parallelisable at the pixel level.
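A per-pixel operation is the textbook case: no pixel depends on any other, so hardware could in principle evaluate them all at once. A serial sketch of such a trivially parallel map:

```python
# Illustration of the point above: a per-pixel operation has no
# cross-pixel dependencies, so hardware could evaluate every pixel
# simultaneously. Serial Python, but the map is trivially parallel.

def brighten(pixel, gain=2, ceiling=255):
    return min(pixel * gain, ceiling)

image = [[10, 200], [128, 255]]  # tiny grayscale "image"
result = [[brighten(p) for p in row] for row in image]
print(result)  # -> [[20, 255], [255, 255]]
```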
Re:This just in... (Score:3)
All I know will be useless! (Score:2)
Suddenly Quicksort is not the best sort algorithm, and the traveling salesman becomes possible to solve!
Even though we touched on hypercomputing at university, some of the basic premises I have, and rule-of-thumb knowledge I have will be outdated.
I have to learn anew to program using logic, and logic blocks; at least I'll get back to my scientific (mathematical) roots!
Whee...
For once Computer Science may actually become more of a Science!
Re:Traveling salesman tractable? Not. (Score:2)
It does get easier... this machine's design gives it an order-of-magnitude improvement, not just N times faster. This is, sadly, still far below the exponential (factorial, for brute-force TSP) growth of most NP-complete problems.
Just making a point that what we have been taught may be nullified by advances in technology. Things like quantum computers may however approach that computing capacity, and I see this machine as a step in the right direction.
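A rough calculation shows how little a constant speedup buys against factorial growth (the operation budgets below are made-up round numbers):

```python
# Rough check of the point above: a constant 1000x speedup barely
# moves the ceiling for factorial-cost brute-force TSP.

import math

def largest_n(budget_ops):
    """Largest n with n! within the operation budget."""
    n = 1
    while math.factorial(n + 1) <= budget_ops:
        n += 1
    return n

base = 10**15                  # ops the slow machine can afford
print(largest_n(base))         # cities solvable before: 17
print(largest_n(base * 1000))  # cities solvable 1000x faster: 19
```

Two extra cities for three orders of magnitude of hardware.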
Re:programming FPGAs... It's not that hard (Score:3)
2001 (Score:2)
Re:Nice usage scenario. (Score:2)
"Make me a cup of coffee."
"I'm afraid I can't do that, Dave."
Another use? (Score:2)
I wonder if you could make a specialized machine, with a bunch of FPGAs, solely for the purpose of AI for massive scale online games. Most MMORPGs have famously stupid AI because making smart creature AI takes both lots of cycles and very good code. Could a specialized box designed for these computations be a salable device?
Traveling salesman tractable? Not. (Score:2)
Re:All I know will be useless! (Score:2)
Erm.... The Name.... (Score:2)
http://www.bootyproject.org [bootyproject.org]
Re:Erm.... The Name.... (Score:2)
Yeah, after I saw the headline my first thought was to check the date and make sure it wasn't April 1st!
http://www.bootyproject.org [bootyproject.org]
Re:Smoke and mirrors? (Score:3)
I'm assuming that what they're planning is to have a sort of standard library of FPGA loads for different functions, and programmers will write programs by picking the right loads for each device group. This, no doubt, is what that special language is for, so that programmers won't have to understand all the gory details in order to write code for it. Any custom loads that need to be created will be synthesized at compile time; compilation will be slow, but the run-time can be fast.
Admittedly, programming all those individual FPGAs on the fly is a complex and difficult task, but then, I doubt that most programs will be reconfiguring so often in the real world. Their 1000/s number is a maximum, and may not apply when you're trying to program multiple loads into multiple devices.
Re:FPGA? (Score:2)
There's no space or cost benefit of a NAND over a NOR - they're both 2N transistors for N inputs.
Re:Probably only faster for simple operations (Score:3)
Hmm... 1000 times faster, 6000 fps in Quake with this; do you really mean to imply that you only get 6 fps in Quake with current technology?
This just in... (Score:5)
Re: Press release contained a virus? (Score:2)
Um... that Word file tried to change my normal.dot template. Did anyone else encounter this? Is NASA spreading infected Word files?
For some reason, Word always does that to me whenever I try to open two or more documents at the same time. I don't know why and I wish it would stop, but it doesn't seem to be a virus. (I just scanned with NAV and the document came up clean.)
--
BACKNEXTFINISHCANCEL
Imagine... (Score:4)
Re:How does this differ from my CPU? (Score:3)
The emperor looks great in those new clothes (Score:2)
But what are its specs on the dreaded Q3 fps test?
"Dr. Chandra, will I dream?"
"No, but you will be sued to oblivion over your name."
These guys jumped the gun. April 1 is a couple of days off. [ridiculopathy.com]
Re:Coming soon to a bedroom near you? (Score:2)
I've been watching this company since 1999 or so. Back then they were claiming they would have a box on the market priced in PC-range within 18 months. Looks like that's going to remain vaporware for the foreseeable future. Now the only mention I can find on their website about it is this:
Personal computers. The company believes that some day PCs will come equipped with the same supercomputer technology found in the company's Hypercomputers.
Re:FPGA? (Score:2)
April Fools joke? (Score:2)
Check their claimed speeds (Score:2)
I emailed them about this at the time, but didn't receive a reply 8o)
See "The Economist" (Score:2)
Scroll down to the "Machines that Invent" heading for the really interesting part. David
http://www.economist.com/printedition/displayStory .cfm?Story_ID=539808 [economist.com]
Re:FPGA? (Score:2)
Re:Coming soon to a bedroom near you? (Score:2)
Besides that, I wonder how well their software really works. From what I've heard about conventional FPGA design software, you code in a C-like language (Verilog or VHDL), then run a simulation to verify the code, then you try to compile it to a physical layout -- and try, and try, and try. If fast operation is needed, you've got to intervene manually to arrange the layout so connections on critical paths are short. If you want to use even half the gates on the chip, you've got to intervene manually in the layout so it doesn't run out of connection paths in the densest areas. I don't think it likely that these people have found a magic way around that. More likely, their system will only work if you never try to use more than 1/4 of the possible gates or speed...
Re:Coming soon to a bedroom near you? (Score:3)
A) It's only faster on certain problems where the computations can be performed massively in parallel. And most CPUs already spend 99% of their time waiting for data to arrive from memory or the hard drive, or for the operator to click the mouse.
B) It's a s-o-b to program. You aren't writing software, you are designing a custom hardware circuit to solve the problem, which is then implemented by programming logic gates and connections in the chips. In other words, on a computing job where you could write a program in C in a week and it would run in 1 minute on a PC, on FPGA's it might take a year to design and run in a millisecond. So if reducing the run time is worth paying six figures for software development, go for it... Maybe the HAL people have found a way to ease the programming, but it's still going to be quite a lot harder than normal programming.
Just guessing this box might hold 100 FPGA's at $25 each. Plus it has to have a normal computer in there to hand the programs and data out to the FPGA's. So it costs more than a PC, but maybe not as much as a top-end workstation (depending on how big a profit margin they are taking). It's great for a rocket navigational system, but the only down to earth applications I can think of for a machine this big are professional video processing, weather prediction, and some really heavy engineering simulations.
On a smaller scale, cell phones and future modems are likely to include some FPGA-like circuits, probably as a small part of a custom chip rather than as a separate FPGA. When a new protocol comes out requiring revised circuit design, you do the changes in the FPGA program and distribute it to be downloaded.
No government could stop this; FPGA's are sold worldwide and used extensively for prototyping and occasionally for production. Maybe they'll try to restrict the HAL programming language.
alternative home heating? (Score:4)
Re:Erm.... The Name.... (Score:2)
Re:FPGA? (Score:2)
Claric
--
Things to Note!! (Score:3)
Re:Smoke and mirrors? (Score:2)
BTW, if anyone is really interested in FPGA's, Xilinx [xilinx.com] has a hellass pile of info here [xilinx.com].
Finally, I wanted to ask any current FPGA users if they find that they get different performance stats on the same design on different compiles. When I was doing work on Xilinx, I found that the compiler would produce designs of various speed, based on routing and the number of CLB's it used. On a couple of occasions, my longest path delay was decreased by about 25% just because I recompiled a couple of times.
FPGAs (Score:2)
The company who makes these computers has been around for a few years.
As to reconfiguring 1000s of times per second, that seems a bit unlikely. Typically programming time on a Xilinx FPGA is at least a second, in my experience.
Hamish
Disclaimer: I work with FPGAs for a living.
Modded Funny (Score:4)
Not Truly 1000 Faster (Score:2)
These devices are fantastic if you have a very specific application that you wish to design them for (e.g. image processing, voice analysis, SETI@Home). With the ability to be reconfigured at a moment's notice, they are also much more reusable than an ASIC. But don't be misled by the speeds given in the marketing info. Get a demo chip from Altera [altera.com] or Xilinx [xilinx.com] and play with it for a while. Then make your own judgements about speed.