End of The Von Neumann Computing Age?
olafo writes "Three recent Forbes articles:
Chipping Away, Flexible Flyers and Super-Cheap Supercomputers cite attractive alternatives to traditional Von Neumann computers and microprocessors. One even mentions we're approaching the end of the Von Neumann age and the beginning of a new Reconfigurable computing age. Are we ready?"
Jerry said it first ... (Score:3, Funny)
Pronunciation (Score:2, Informative)
I could be wrong; I don't speak freaky-deaky Dutch.
Re:Pronunciation (Score:2)
Re:Pronunciation (Score:3, Informative)
From a citation in another comment leading to http://ei.cs.vt.edu/~history/VonNeumann.html
Hungarian, not Dutch.
Re:Pronunciation (Score:2)
The lighter side... (Score:5, Funny)
Roger Kaputnik where art thou?
If The Usual Gang of Idiots is designing (Score:2, Funny)
From the word processor to the ATM, he's invented so many things in jest that appeared on the market 15 years later; it's a wonder we don't speak his name in the same breath as Edison's.
Re:The lighter side... (Score:4, Funny)
Re:The lighter side... (Score:2)
I've seen it. It's a picture of a computer with two keyboards labeled his and hers.
Von Neumann machines? (Score:2, Interesting)
Re:Von Neumann machines? (Score:5, Informative)
Compare this to the Harvard architecture used on some embedded processors: a processor hooked up to two separate memories, one containing the program, and the other containing the data. This is useful when you have your program in an EEPROM and your data in a little static RAM. Two types of memories naturally fit into a Harvard architecture, though it's simple enough to do the same thing with some memory mapping circuits.
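A minimal C sketch of the contrast (a toy model, not any real chip; the array names and sizes are invented for illustration):

    /* Toy model: the same fetch and load steps on a von Neumann memory
       (one shared array standing in for one shared bus) and on a Harvard
       pair (separate program and data memories). */
    #include <stdint.h>
    #include <stdio.h>

    uint8_t unified[256];          /* von Neumann: code and data share one memory */
    uint8_t code[128], data[128];  /* Harvard: EEPROM-like code, SRAM-like data */

    uint8_t vn_fetch(uint8_t pc)   { return unified[pc]; }   /* instruction fetch */
    uint8_t vn_load(uint8_t addr)  { return unified[addr]; } /* data load, same memory */

    uint8_t hv_fetch(uint8_t pc)   { return code[pc]; }      /* instruction memory */
    uint8_t hv_load(uint8_t addr)  { return data[addr]; }    /* separate data memory */

    int main(void) {
        unified[0] = 42; code[0] = 7; data[0] = 9;
        printf("%u %u %u\n", vn_fetch(0), hv_fetch(0), hv_load(0)); /* 42 7 9 */
        return 0;
    }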
Re:Von Neumann machines? (Score:2)
Rus
Re:Von Neumann machines? (Score:2)
Re:Von Neumann machines? (Score:3, Informative)
John von Neumann
A Hungarian-born mathematician who did pioneering work in quantum physics and computer science.
While serving on the BRL Scientific Advisory Committee, von Neumann joined the developers of {ENIAC} and made some critical contributions. In 1947, while working on the design for the successor machine, {EDVAC}, von Neumann realized that ENIAC's lack of
Re:Von Neumann machines? (Score:2)
Your definition of von Neumann architecture is wrong; a von Neumann machine has one data bus for connecting with memory. That means it has to share the bus between program and data memory.
As far as I know, a VLIW chip that uses one memory bus is still von Neumann architectur
Re:Von Neumann machines? (Score:4, Interesting)
The implication is that we are approaching a transition to some seriously wacked-out computer designs. I look forward to seeing what these people are coming up with. DNA computers, for example, have a different model of computation.
Re:Von Neumann machines? (Score:3, Interesting)
A von Neumann architecture treats memory as one big serially addressable hunk of unlabeled "stuff". There's no way to look at the memory and know what anything is (instruction or data? what type of data? what's the meaning of this data?) until you try and execute the memory an
Re:Von Neumann machines? (Score:2)
La la la
Re:Von Neumann machines? (Score:2)
icky tape heads wandering around on an infinite tape - or a finite tape if you knew in advance what algorithm you were about to run
An implementation of a Turing machine doesn't have to be tape, etc. It's a mathematical abstraction; the tape description is just a metaphor for visualization.
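To make that concrete, here's a minimal sketch in C (an invented one-rule machine; the "tape" is just an array, which is exactly the point):

    /* A tiny Turing machine: flips a string of bits and halts at the first
       blank. State + head + transition rules are all that matter; nothing
       here depends on physical tape. (Toy example for illustration.) */
    #include <stdio.h>

    int main(void) {
        char tape[16] = "1011_";   /* '_' is the blank symbol */
        int head = 0, state = 0;   /* state 0 = scanning; 1 = halted */
        while (state == 0) {
            if (tape[head] == '_') state = 1;       /* blank: halt */
            else { tape[head] ^= 1; head++; }       /* flip '0'<->'1', move right */
        }
        printf("%s\n", tape);       /* prints 0100_ */
        return 0;
    }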
Re:Von Neumann machines? (Score:5, Informative)
Von Neumann was smart enough that there is more than one thing named after him. A Von Neumann machine is a self-replicator. A Von Neumann architecture is a computer architecture where programs and data are stored in the same manner.
Sometimes the latter is also referred to as a Von Neumann machine.
Re:Von Neumann machines? (Score:2)
Same guy, two different ideas (Score:3, Informative)
You're confusing "Von Neumann device" with "Von Neumann {computer,architecture}", which is an easy mistake to make.
VN devices are what you said they are, and no, they don't exist yet.
A VN architecture (or "stored-program architecture") is one where the code for the program gets loaded into the same memory as the data for the program, i.e., essentially everything that you use today. This was in contrast to earlier architectures where the memory was used to store only runtime data, and the code was read
Re:Von Neumann machines? (Score:3, Informative)
The distinguishing characteristic of a Von Neumann machine is that code and data are treated the same. Both are stored in the same memory, which seems natural to a modern user, but was revolutionary back when it was introduced.
One might say th
Well, (Score:5, Insightful)
"no stranger to this candy?" [semi-OT] (Score:2)
(Only "semi" off-topic because technically it's a clarification request about someone's on-topic post, which is therefore itself on topic; "off-topic" because the clarification request is a thinly veiled slam based on one of the oddests turns of phrase I've seen in a while.)
Re:"no stranger to this candy?" [semi-semi-OT] (Score:3, Funny)
oddests turns of phrase? What on Earth does that mean?
Re:"no stranger to this candy?"[hemi-demi-semi-OT] (Score:2)
And for the stuff I put in the title of the post, look here [bartleby.com] and do a find for "semi-". "hemi-demi-semi" is a really fun way to say "one-eighth"...
Re:Well, (Score:2)
The Von Neumann architecture presented us with a model for the conventional computer, where instructions are stored as data, which helped us to think of computing and programming in an abstract manner. Even as researchers are trying to advance into new computing architectures, such as FPGAs or quantum computing, the idea of storing instructions as data is permanently plastered into our heads. Universal quantum computers
What do you mean by Von Neumann? (Score:2, Informative)
Re:What do you mean by Von Neumann? (Score:3, Informative)
Three articles (Score:4, Insightful)
Anyways. The FPGA machines sound intriguing, but really aren't as 'all powerful' as the non-techie Forbes piece makes them out to be. Not everything is parallelizable, not everything is conducive to dynamically altering the instruction set as you run it.
The traditional von Neumann architecture is the best solution for many processing tasks; lots of stuff is just conducive to a sequentially operating processor. It's probably the best for all-around general computing.
And 200 grand is probably better spent on a Beowulf cluster of something than one of these boxes, but I'm sure they have a niche of usefulness somewhere.
I don't expect to see the traditional computer go anywhere anytime soon.
Re:Three articles (Score:2)
Re:Three articles (Score:2, Informative)
An interesting quote regarding a FPGA web server application [forbes.com] (in case you didn't get your free login ID just like
Cell phones and embedded.. (Score:2, Informative)
A family member is working here [qstech.com], and the biggest markets they have lined up for their new design are the mobile-phone vendors, and image processing [olympus.co.jp]. They aren't at all interested in pitching it towards general-purpose computing.
Interestingly enough, though, the software-defined-radio teams have been eyeing the product with drool in their mouths ever since it was demonstrated [eedesign.com]. Said family member remembers trade conventions the company's been to, where the SDR teams sh
Re:Three articles (Score:2)
The only problem being that they sometimes come up with solutions which are irreproducible, because they've solved the problem using the unique imperfections of the FPGA itself.
Ahah! (Score:2)
I'll believe it when I see it. (Score:5, Funny)
But *I* say the REAL VNBN is that only 90% of all computer scientists are only 10% as smart as Von Neumann.
Re:I'll believe it when I see it. (Score:2)
90% of all "computer scientists" are really code monkeys with cagy managers.
The other 9% are merely not as smart as Von Neumann.
Re:I'll believe it when I see it. (Score:2)
Bilbo's Party Chart [theonering.net].
not a hoax... (Score:5, Informative)
Allan Snavely, a computer scientist at the University of California at San Diego Supercomputer Center, has been using a Star Bridge machine for about a year. He says he originally contacted Star Bridge because he suspected the company was pulling a hoax. "I thought I might expose some fraud," he says.
But after meeting with Gilson and seeing a machine run, he changed his mind. "They're not hoaxers," he says. "As I came to understand the technical side I thought it had a lot of potential. After talking to Kent Gilson I found he was very technically savvy."
Silicon Graphics has also asked Star Bridge to send along a copy of its hardware and software. The $1.3 billion (fiscal-year 2002 sales) supercomputer maker wants to explore ways to make a Star Bridge system work with a Silicon Graphics machine.
Over the past two years Star Bridge has sold about a dozen prototype machines based on an earlier design to the Air Force, the National Security Agency and the National Aeronautics and Space Administration, among others. It has also sold seven of the new models.
Olaf Storaasli, a senior research scientist at NASA's Langley Research Center in Hampton, Va., has been using Star Bridge machines for two years and says they are very fast but not yet ready to handle production work at NASA. "It's really a far-out research machine," he says. "It's more about what's coming in the future. I would not consider it a production machine."
One problem, Storaasli says, is that you can't take programs that run on NASA's Cray supercomputers and make them run on a Star Bridge machine. Still, he says, "This is a real breakthrough."
"A microprocessor can only do one thing at a time" (Score:4, Interesting)
...Well, that's what the article says. I guess they haven't heard about pipelining, multiple execution units, SIMD, etc.
Re:"A microprocessor can only do one thing at a ti (Score:5, Informative)
Even hyperthreading is only a minor improvement in parallelism, exchanging one instruction pointer for a small number (2? 4?). Hardly a different architecture.
Re:"A microprocessor can only do one thing at a ti (Score:2)
Re:"A microprocessor can only do one thing at a ti (Score:2)
But even on an out-of-order CPU, you CAN completely describe the state of all gates exactly at all times (at least, assuming the behaviour is that of an 'ideal' digital circuit). This is not true for a quantum circuit.
But you have a point, a p
This is really cool, but... (Score:2)
Another point the article makes is that it has been traditionally very difficult to build general purpose FPGA based machines. This got me thinking, anyone else remember a Slashdot article from a couple of years ago where a fellow used genetic programming to produce an FPGA instruction set that could differentiate between tw
Futureware (Score:4, Insightful)
Yeah, I have a computer doing 1 trillion gigaflops a second powered by my pet hamster. No test results can disprove me yet!
"I live in the future."
Clearly.
"'It's really a far-out research machine,' he says. 'It's more about what's coming in the future.'"
Yep. So the title is kind of misleading. This is all stuff in the future, like flying cars and such. We could make flying cars if we wanted to, but we really don't want to yet (economic and regulatory reasons). This technology still has the impediments of being under active exploration and of economic feasibility.
It'll rock when they're ready, but it's nothing to go nuts over yet.
F-bacher
Re:Futureware (Score:2)
Uh, no, we can't make flying cars. We can make small airplanes, but they can't stop at an intersection like a car can. We can make helicopters, but rotors have a much bigger footprint than a car. We can make vehicles with small rocket thrusters, but probably not with the range of a car.
Re:Futureware (Score:2)
No amount of air traffic will ever require an intersection. It's a three-dimensional world out there.
Still von Neumann based computing (Score:3, Insightful)
The hyped 'we are on the eve of the next generation of computing era' seems added by the startup companies' marketing departments and eagerly taken over by the reporters.
Not to say that the new generation of reconfigurable computers (FPGAs are what... 30 years old now?) aren't a cool thing to have.
Nit Picking (Score:2, Insightful)
You are correct in the general case BUT there are cases where this is not correct. Let's suppose that we've got a task which, using a von Neumann architecture, will take an amount of time that exceeds the expected lifetime of Earth. Now, a parallel computer will, at least in the theoretical sense, see this task take a reduced amount of time. Ignoring the possibility that the von Neumann based computer is shuttled to a safe environment before the destruction of Earth, the task will nev
Re:Nit Picking (Score:2)
while we are doing that... your argument doesn't hold. The sequential (I incorrectly used the word 'serial' earlier) computer runs on a clock speed, and all it needs to do EXACTLY what the parallel computer is doing is run at a higher clock speed. FPGAs are known for their slowness, so it is not trivial to claim that the parallel implementation of an algorithm on an FPGA can be done faster than with an equivalent ($$) amount of sequential processors.
Re:Still von Neumann based computing (Score:2)
I'd like to know what you mean exactly. I think there may be some limitations to that assertion.
I agree that an SMP computer or a Beowulf cluster can't do anything that a serial computer can't do. The main reason being that in some ways (e.g. memory access) these devices are somewhat serial. Any given bit of data can only be accessed (e.g. written to) by one process at a time.
On the
Re:Still von Neumann based computing (Score:2)
But this isn't the technical definition of a parallel computer (pick up a textbook on the theory of computation that describes Turing Machines, DFAs etc. Try the one by Sipser).
Sure, why not.
Any computation that can be performed by a parallel computer can be performed by a serial one.
I was hoping to get some
Yeah, and look what happened to BOPS (Score:3, Informative)
Also see this thread [google.com].
What I think might have merit... (Score:4, Insightful)
In general this "partitioning" process seems to be somewhat domain-specific and difficult. If you could do something like integrate into a JIT environment something that identified computationally intensive, repetitive, small-sized chunks that aren't I/O constrained, and be able to generate FPGA code on the fly, that would be tres cool.
Can anybody really explain why it's so hard to make a somewhat higher level language that can be compiled down to VHDL and combined with various chunks of library code into a specific FPGA configuration?
Re:What I think might have merit... (Score:4, Informative)
Usually when you are trying to compile something down to logic gates, you have to handle instruction scheduling. For example, in any conceivable situation, division always takes longer than addition. So, you have to make sure that while you're waiting for a division to complete all the rest of your data doesn't evaporate.
This isn't like a general-purpose processor -- there are no persistent registers here. Use it or lose it. So you have to stick in tons of shift registers everywhere, as pipeline delayers (sketched below).
So it's not as simple as just saying res = (a + b)
If you've done multithreading programming and understand those difficulties, then take that and multiply the difficulty by a couple times, and you just about have it.
All that said, though, you're right: it shouldn't be that hard. If all you want to do is use C to express a calculation, that is fairly easy to boil down to a Verilog or VHDL module.
The problem is that most of the 3GL-to-HDL vendors try and boil the whole ocean. They want you to use nothing but their tool, and never have to look at Verilog. That is where things really start to break down.
An example of this done mostly RIGHT is a company whose name I can't remember. (AccelChip?) They make a product that takes Matlab code and reduces it to hardware. That's easier in a lot of ways, because Matlab is really all about simply allowing you to easily express a mathematical system or problem. There aren't all these control flow, I/O, and other random effects. My understanding is that this Matlab-to-VHDL tool works quite well.
So, it all depends on what you want to do with the FPGA.
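To illustrate the delay-matching problem, here's a sketch in plain C standing in for the hardware (the latencies and values are made up): computing r = a/b + c, where the divider takes 4 cycles and the adder 1, so c must ride through a 4-stage shift register to arrive at the adder on the same cycle as the quotient.

    #include <stdio.h>
    #define DIV_LATENCY 4

    int main(void) {
        int a[] = {8, 18, 40}, b[] = {2, 3, 5}, c[] = {1, 1, 1};
        int div_pipe[DIV_LATENCY] = {0};  /* models the divider's latency  */
        int c_delay[DIV_LATENCY] = {0};   /* shift register delaying c     */
        for (int cycle = 0; cycle < 3 + DIV_LATENCY; cycle++) {
            /* results fall out of the last stage after DIV_LATENCY cycles */
            if (cycle >= DIV_LATENCY)
                printf("cycle %d: result %d\n", cycle,
                       div_pipe[DIV_LATENCY-1] + c_delay[DIV_LATENCY-1]);
            /* advance both pipelines one stage per clock */
            for (int s = DIV_LATENCY - 1; s > 0; s--) {
                div_pipe[s] = div_pipe[s-1];
                c_delay[s]  = c_delay[s-1];
            }
            int feeding = cycle < 3;
            div_pipe[0] = feeding ? a[cycle] / b[cycle] : 0;
            c_delay[0]  = feeding ? c[cycle] : 0;
        }
        return 0;   /* prints 5, 7, 9 -- each c met its quotient on time */
    }

Drop the c_delay shift register and the operands arrive at the adder on different cycles: the data "evaporates", exactly as described above.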
Re:What I think might have merit... (Score:4, Interesting)
I think you're right - handling arbitrary control flow, branching and so forth is a complex part of modern compilers, and of modern CPU hardware - and it is only possible because the CPU hardware handles all of the crazy stuff like ordering instructions, managing register contents (especially with all the voodoo that goes on behind the scenes in a modern CPU) and so forth. If you tried to do all of that in the compiler (which is effectively what you are talking about here), the compiler seems like it would have to do a lot more work than standard compilers generating machine code.
The instruction set of a modern CPU serves as the API, the contract between software land and hardware land, and that is what allows the CPU designers to go behind the scenes and do all sorts of optimization, only incrementally versioning the instruction set for large changes (like SIMD). When you eliminate that contract with the generalized computing hardware, and basically are compiling down to arbitrary HDL and gate configurations, it seems like too many degrees of freedom to manage the complexity, without additional constraints (like only trying to solve matrix or other mathematical problems, like the interesting product you point out).
Re:What I think might have merit... (Score:2)
I like your API analogy, I think I will remember that and use it in the future myself. It's a good way to think of it.
Re:What I think might have merit... (Score:2)
What about division by a power of two? For those non-CS people out there: multiplication and division by powers of two can be implemented by shifting bits, and shifts are commonly faster than addition/subtraction.
Anyway, interesting point.
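A quick C illustration of the point (unsigned values only -- signed division by powers of two rounds differently from an arithmetic shift):

    #include <stdio.h>
    #include <assert.h>

    int main(void) {
        unsigned x = 200;
        assert(x / 8 == x >> 3);   /* divide by 2^3: just a 3-bit shift */
        assert(x * 4 == x << 2);   /* multiply by 2^2: a 2-bit shift    */
        printf("%u %u\n", x >> 3, x << 2);   /* 25 800 */
        return 0;
    }

In an FPGA a shift by a constant is even better than fast: it's free, since it is nothing but rewiring.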
Re:What I think might have merit... (Score:2)
Von Neumann Machines Defined (Score:4, Informative)
Some implementations add a step between 1 and 2 that says "increment the program counter" and leave jumps up to specific instructions. Others associate program counter changes with every instruction (i.e. jumps go to somewhere specific, every other instruction also implies PC++.)
There's nothing more to Von Neumann machines. They are unrelated to finite state machines or Turing machines, except that every Von Neumann machine can be modelled as a Turing machine. The difference is that a Turing machine is a mathematical abstraction, whereas a Von Neumann machine is an architecture for implementing one.
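Here's the whole cycle as a minimal C sketch -- a toy machine with an invented three-instruction set, with program and data in the same array:

    #include <stdio.h>

    enum { LOAD = 1, ADD = 2, HALT = 3 };   /* invented opcodes */

    int main(void) {
        /* memory holds instructions AND data: opcode,operand pairs, then data */
        int mem[16] = { LOAD, 10, ADD, 11, HALT, 0, 0, 0, 0, 0, 40, 2 };
        int pc = 0, acc = 0, running = 1;
        while (running) {
            int op = mem[pc], arg = mem[pc + 1];  /* 1. fetch             */
            pc += 2;                              /* 2. increment PC      */
            switch (op) {                         /* 3. execute           */
                case LOAD: acc = mem[arg];  break;
                case ADD:  acc += mem[arg]; break;
                case HALT: running = 0;     break;
            }
        }
        printf("acc = %d\n", acc);   /* 42 */
        return 0;
    }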
Whoo hoo. And yes, I am a computer scientist. Or maybe a cogigrex.
Re:Von Neumann Machines Defined (Score:2, Interesting)
The program counter stuff and instruction cycle is just an implementation. It's not the important part.
Wilkes Machines Defined (Score:4, Informative)
You're talking about Maurice Wilkes [vcsu.edu], not Von Neumann.
Computer Defined (Score:2)
Von Katzen de Flingen (Score:2)
Re:Von Neumann Machines Defined (Score:2)
Sure, it's important to store your results, but it's also important to load your operands, and the Von N
The Von Neumann age will be here for decades (Score:2)
Oh god, here we go again with the hype... (Score:5, Informative)
First, you have to understand what they are: basically an FPGA is an SRAM core arranged in a grid, with a layer of logic cells (Configurable Logic Blocks, in Xilinx's parlance) layered on top. These logic cells are basically function generators that use the data in the underlying SRAM to configure their outputs. Typically they are used as look-up tables (LUTs) -- basically truth tables that can represent arbitrary logic functions -- or as shift registers, or as memories. On top of THAT layer is an interconnection layer used for connecting CLBs in useful ways. The FPGA is re-configured by loading the underlying SRAM with a fresh configuration bitstream, and rebuilding connections in the routing fabric layer.
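To make the LUT idea concrete, here's a toy C sketch (not Xilinx's actual bit layout; the tables are invented): a 4-input LUT is just 16 bits of SRAM, and the four input signals form the address into it.

    #include <stdio.h>
    #include <stdint.h>

    /* one 4-input LUT: the inputs select a bit of the 16-bit truth table */
    static int lut4(uint16_t truth_table, unsigned in /* 4 bits */) {
        return (truth_table >> (in & 0xF)) & 1;
    }

    int main(void) {
        uint16_t and4 = 0x8000;  /* only input 1111 -> 1: a 4-input AND     */
        uint16_t xor2 = 0x0006;  /* a XOR b on the low two inputs           */
        printf("%d %d\n", lut4(and4, 0xF), lut4(and4, 0x7));  /* 1 0 */
        printf("%d %d\n", lut4(xor2, 0x1), lut4(xor2, 0x3));  /* 1 0 */
        return 0;   /* "reconfiguring" = loading a different truth table */
    }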
You write for FPGAs the same way you build ASICs. You use the same languages (Verilog, VHDL) and sometimes the same toolchain. The point being: this is HARD. Trust me, I've been doing it. Verilog is damn cool, but remember that you're still building this stuff almost gate-by-gate.
There are a number of tools out there that do things like translate 3GL languages (such as Xilinx's Forge tool for Java, or Celoxica's DK1 suite for Handel-C) to an HDL like Verilog. Other tools like BYU's JHDL are essentially scripting frameworks for generating parameterized designs that can be dumped directly into netlist (roughly equivalent to a
My job for the past several months has been to obtain and evaluate these tools. I can tell you that these tools are not there yet.
So what do you use FPGAs for? Well, for the next 5 years, likely one of two things: either really cheap supercomputers (which is what we are working on) or as a "3D graphics card play." The supercomputing play is obvious, but the other one bears explanation.
Anything you can think of goes faster if you implement it in hardware. 3D graphics is a great example: most cards today consist of a bunch of matrix multipliers plus some memory for the framebuffer, and a bunch of convenience operations that you do in hardware as well (like textures and lighting and so on.) Because it's in hardware, it's way faster than anything you could do on a general purpose processor.
Now, the problem is that hardware means ASICs (until recently.) ASICs are only cheap in large volumes. Thus, for applications that are not mass-market (like graphics cards are) it is not practical to build out an industry building hardware accelerators for them.
That's where FPGAs come in. An FPGA costs more per chip than an ASIC does in volume, but with no huge up-front fabrication costs it comes out cheaper than an ASIC in small volumes. This suddenly makes it practical to make custom hardware accelerators for almost anything you can think of.
This is also true of supercomputing: supercomputers are still general-purpose, just not THAT general-purpose. Your algorithm still benefits when you can just reduce it to logic and load it onto a chip. You might only be running at 200MHz, but when you get a full answer every clock cycle, you suddenly do a lot better than when you get an answer every 2000 cycles on your 2GHz processor (200 million answers a second versus one million).
So to get back on topic, where will we see FPGAs? Well, you might expect to see an FPGA appear alongside the CPU on every desktop made in a few years; programs that have a routine that needs hardware acceleration can just make use of it. (Think PlayStation 4, here.)
You might also see things like PDAs come with FPGA chips: if your car's engine dies, you can just download (off your wireless net, which will be ubiquitous *cough*) the diagnostic routine for your car, load it into that FPGA, and have your car tell you what's wrong.
Aerospace companies will love them, too. Whoops, didn't catch that unit conversion bug in your satellite firmware before launch? Well, just reprogram the FPGA! No need to send up an astronaut to swap out an ASIC or a board.
What you're NOT going to see is every application ported to FPGAs willy-nilly, because like I said, this stuff is not easy. I'm coming a
Re:Oh god, here we go again with the hype... (Score:2)
How fast are modern FPGAs? Can you actually run data through and get the result back in a clock cycle? If not, can you pipeline?
Are these clocked as fast as modern CPUs?
Re:Oh god, here we go again with the hype... (Score:4, Informative)
The Virtex II (Xilinx's latest) clocks at up to 200MHz, though the more complicated your circuitry, the lower it gets. 200MHz is a theoretical max -- like Ethernet; you never quite reach it in practice.
It includes a number of on-chip resources, such as block memories (which are more like cache SRAM than DRAM DIMMs you are probably used to) and 18-bit-wide hardware multipliers. The Virtex II Pro line is a Virtex II plus an actual processor core -- PowerPC, ARM, or their own MicroBlaze I believe. (That alone is proof enough that von Neumann machines aren't dead -- Xilinx INCLUDES one in some of their FPGA parts!)
You can get them in various sizes, which basically means how many CLBs they have. Xilinx measures these in "logic gates" though that is really a somewhat sketchy metric (like bogomips, sort of.)
And yes, you can actually run data through and get results back one per cycle. To accomplish this, you usually HAVE to pipeline the design. Typically you end up with a scenario where you fill up the chip's logic with your design, and start feeding it data at some clock speed. Then a few hundred cycles later, you start getting results back. Once you do, they come at one per cycle.
We have an application where we are actually clocking the thing at 166MHz -- which is the speed of a memory bus, not coincidentally. Given this config, we are basically clocking the chip as fast as the memory can feed us data. The idea is that we read from one bank at 166MHz, and write to another at 166MHz.
One way to think of this is as a memory copy operation, with an "invisible" calculation wedged in between. When you consider what a Pentium 4 would have to do (fetch instructions from cache/memory, fetch data from cache/memory, populate registers, perform assembly operations, store data back, not to mention task switching on the OS, checking for pointer validity, and so on) you begin to see the advantage of FPGAs.
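A back-of-envelope model of that fill-then-stream behavior (the pipeline depth and workload here are assumed numbers, not our actual design):

    #include <stdio.h>

    int main(void) {
        double clock_hz = 166e6;               /* FPGA clocked at memory-bus speed */
        long latency = 300, results = 1000000; /* assumed pipeline depth, workload */
        /* first result after 'latency' cycles, then one result per cycle */
        double seconds = (latency + results - 1) / clock_hz;
        printf("%f s for 1M results (~%.0f results/s)\n",
               seconds, results / seconds);
        /* a CPU needing ~2000 cycles per result at 2 GHz manages 1e6 results/s */
        return 0;
    }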
Re:Oh god, here we go again with the hype... (Score:2, Interesting)
There is a least one PDA with one... (Score:2)
The IBM PDA reference design using a PowerPC chip also contained an FPGA. I haven't seen any reports on what it would be used for.
FPGA's for Sw engineers:so how hard is this stuff? (Score:3, Informative)
Hw vs. Sw - which is more difficult to "doodle" with?
Also having a software background allowed me to relate
Re:FPGA's for Sw engineers:so how hard is this stu (Score:3, Insightful)
The next step up is useful things, like the recent colored globe thingy. That's mostly electronics, with a little bit of hardware thrown in for good measure. Replace the PIC with an FPGA or CPLD and away you go. I once wrote a framebuffer that talked to the RAMDAC -
Re:Oh god, here we go again with the hype... (Score:2, Interesting)
You usually have to do more than a linear amount of pipelining, put it that way.
As far as aerospace applications go, I doubt this very much. Being that they are a vast sea of SRAM, charged particles and
Re:Oh god, here we go again with the hype... (Score:2)
The first I heard about all this was MIT's Oxygen project (I think it was called.) I haven't heard anything much since... I think the proof is in the pudding.
FPGAs will have their niche, but for anything truly mass-market, and for really really huge designs (like motherboards, for example) ASICs will always rule.
Re:Oh god, here we go again with the hype... (Score:3, Informative)
I could go on, but I think to do so might be uncharitable so I'll stop here.
I can't help but think... (Score:2)
billions of operations: on what? (Score:4, Interesting)
- it was a *tremendous* pain in the ass. This Star Bridge machine isn't a general-purpose solution; it's only for applications that can stand writing 100% custom software in a custom language.
- the data has to come from somewhere. So you can do 1G operations per second. What's the I/O like? Do they use a PC for a host or an SGI or
Configurable Coprocessors (Score:2, Insightful)
Hardware would just be a PCI-X card with a bunch of FPGAs thrown on, and a microcontroller to handle programming of them and PCI arbitration.
The real trick isn't the hardware, it's standardizing the software to make it readily accessible to anyone and everyone. When Quake can start using your FPGA, it'll be a happy day in the neighborhood (RIP).
To he who gets r
When they first came out . . . (Score:2)
I thought for certain they were vaporware at that point. Not sure now.
an article on the topic from the IEE (Score:2, Interesting)
1. This article is worthwhile reading:
"The future of computing-new architectures and new technologies" [iee.org]
By Paul Warren (04-Dec-2002)
The worlds of biology and physics both provide massive parallelism that can be exploited to speed up lengthy computations-with profound consequences for both everyday computing and cryptography.
2. Yes, it's been apparent for the last few years that computing is entering a new phase with diversity of computing 'substrates' as one key theme. Ameoba, Java, .NET, CORBA an
Who cares? (Score:3, Insightful)
What do I need more processing power for exactly? Seriously?
Most applications that need more grunt probably already have ASICs designed for them (e.g. graphics cards), and ASICs are much more efficient anyway; and in quantity, cheaper.
So you're looking for an application that doesn't already have any hardware for it, and can't be attacked by a bunch of cheap Athlons or Intels or other supercomputers. What exactly?
Re:Who cares? (Score:3, Insightful)
After a point, you're not just running the same software slightly faster; you're running whole new classes of software. Just think, ripping and playing audio mp3s wasn't possible on home computers of 10 years ago. Even if you'd had the software, it would have taken forever to rip stuff, and you wouldn't get realtime playback of good enough quality. Now you can rip, mix and burn on almost anything you b
Re:Who cares? (Score:2)
I compared the peak MFLOPs quoted for each. Frankly, if my laptop can't beat a ~20-year-old machine (even a supercomputer), bearing in mind Moore's law, then my laptop would be pretty lousy.
Actually, my laptop is probably considerably faster than a Cray-1 for a lot of things -- the Cray-1 gets its speed from vector processing, my laptop doesn't, so it is easier to program. And the MIPS rating for my laptop is much higher than the Cray-1's.
I figure that a Cra
Cray speed (Score:2, Informative)
Here : http://www.thocp.net/hardware/cray_1.htm
Top speed 133 MFLOPS
And from : http://www.theregister.co.uk/content/1/14840.html
PIII 1GHz: CPU: 2694 MIPS, FPU: 1333 MFLOPS
P4 1.5GHz: CPU: 2866 MIPS, FPU: 882 MFLOPS
Athlon 1GHz: CPU: 3111 MIPS, FPU: 1395 MFLOPS
Snooping around more
SGI Origin2000: 114 MFlops
Macintosh G3 ZIF/400: 93 MFlops
Macintosh G3/333: 77 MFlops
Intel Pentium II/450: 72 MFlops
Macintosh G3/300: 71 MFlops
Macintosh G3/266: 64 MFl
Why me worry? (Score:2)
Three articles, one author (Score:3, Interesting)
I can only wonder what sort of favors Daniel Lyons is receiving from Star Bridge. The only news here is that Forbes is being so blatant about whoring themselves out as a PR machine for a troubled company. No wait, that's not news either.
Be Skeptical of Forbes... (Score:3, Interesting)
Interesting (Score:2)
Once a Goonie, Always a Goonie (Score:2, Interesting)
Backus said it in 1977 (Score:2)
An article on the VIVA language, sample code (Score:2)
The aim of Forth has been to compile to silicon... (Score:2)
Smalltalk might be another IDE for FPGAs as objects can be defined which represent gates and
I think I'll shut up now and find a Xilinx manual on the web somewhere.
Already there (Score:4, Informative)
For better or worse, most of the PlayStation 2's computing power is locked up in a non-Von Neumann architecture.
So the evolution of computing to non-Von Neumann architectures isn't so much news as a gradual shift that began about 5 years ago with 3dfx, and is really starting to happen large-scale right now.
The justification for FPGAs in consumer computing devices could be seen as a generalization of the rationale behind 3D accelerators: they bring you the ability to get a 10X-100X speedup in certain key pieces of code that are inherently very parallel and have very predictable memory access patterns.
I think the timeframe for mainstream FPGA-style devices is quite far off, though. They need to evolve a lot before they'll be able to beat the combination of a Von Neumann CPU augmented with several usage-specific non-Von Neumann coprocessors (the GPU, hardware TCP/IP acceleration, hardware sound...)
Here are the major issues:
- You'll need a lot more local memory than these devices have now -- there is a very limited set of useful stuff you can compute given a 32K buffer (a la PS2) and significant setup overhead.
- The big lesson from CPUs (and I expect from GPUs in the next few years) is that things REALLY flourish once you have virtualization of all resources, with a cache hierarchy extending from registers to L1 to L2 to DRAM to hard disk. For virtualization to make sense with FPGAs, Star Bridge's quoted reprogram times (40 msec) would need to improve by about 10,000X (to roughly 4 microseconds). Without this, you can really only run one task at a time, and that task can only have a fixed number of modules that use the FPGA.
Even then, it's not clear whether the FPGAs will be able to compete with massively parallel CPUs. In 3 more process generations, you should be able to put 8 Pentium 4 class CPUs on a chip, each running at over 10 GHz, at the same cost as current
Generality (Score:3, Insightful)
One of them is a specialised web server. Fine, there are a lot of web pages out there that need serving. I can well believe that you can build an FPGA-based static-page web server which will beat the pants off a Sun/Intel server doing the same thing. But what about dynamic content? Is their DBMS as good as the latest Oracle or MySQL? Will it, say, handle the internationalisation issues that those systems will? Bet it won't. Will it run PHP or Python natively? I doubt it - I bet it hands that over to a traditional back-end processor.
As has also been said elsewhere, this kind of hype is a repeated event. A specialist machine outperforms a generalist machine at its specialist task, and journalists claim that the world has turned upside down. Connection Machine, Deep Blue, GAPP, transputer... Just a few I can call to mind.
Re:This just in... (Score:2)
1. It was a joke.
2. Mohammed Saeed al-Sahhaf is a compulsive liar.
Lighten up, who cares if BSD is dead or alive? Get over it!
In other words... (Score:2)
Re:The Sceptic (Score:2)
But does he mind being paid with cheques dated "in the future" as well?
Re:End Of handybundler Trolling Age? (Score:2)