A $1000 Supercomputer?

Sean Mooney writes "CNN is reporting that a $1000 PC that is 60,000 times faster than a PII 350 may be on the market within 18 months. Star Bridge Systems is making the field programmable gate array (FPGA) computer. These are the same guys who are making HAL, reported on earlier." I'll believe that when I see it. Although I can't think of a better way to break Moore's Law.
This discussion has been archived. No new comments can be posted.

  • DON'T BELIEVE THE HYPE.

    They spent some time in Sausalito, CA as Metalithic, promoting the same sea of FPGAs, but that time for a more specific music mixdown system. They were never able to get the system to work properly but did succeed in bilking some investors out of lots of money.

    The internet will follow these jokers forever. If I were them, I'd learn how to sort vegetables.

    Sea-of-FPGA configurations are available today from companies like IKOS http://www.ikos.com/ and are programmed using VHDL..... often used to sim BIG chips. And yes, you could run RC5 _really_ fast if you wanted to.

  • by Chilli ( 5230 ) on Tuesday June 15, 1999 @04:27PM (#1848990) Homepage
    The problem with the type of calculation that they use to predict the performance of the machine is that, given today's state of the art in parallel computing, a machine with a million processors doing 10 operations per second is not the same as 10 processors doing a million operations per second each.

    Your average C program has very little implicit parallelism (= parallelism not explicitly introduced by using some library of parallel operations or the like). Even the best compilers on this planet won't make these programs run much faster on a massively parallel computer than on a single processor (on the contrary, the additional communication overhead can easily make the execution slower with each processor that you add).

    Remember what a fuss it has been to make the Linux kernel perform well on SMPs with more than two or three processors; how do you want to make this scale to thousands and millions of parallel processing units? BTW, the last company that went for many small (and slow) processing units instead of a few very fast ones was Thinking Machines (the machine was called the CM-2). Do a search on the Web to see where they are now...

    Chilli

    PS: Such a machine can be useful for some things, called embarrassingly parallel problems/algorithms in the parallel computing community.
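
    To make the point concrete, here is a minimal C sketch (illustrative only, nothing from the article). The first loop is embarrassingly parallel: every iteration is independent, so the work could in principle be spread over any number of processing units. The second carries a dependency from each iteration to the next, so no compiler can usefully spread it across a million slow units:

        #include <stddef.h>

        /* Embarrassingly parallel: iterations are independent,
           so they can be split across any number of units. */
        void scale(double *a, const double *b, double k, size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)
                a[i] = k * b[i];
        }

        /* Inherently sequential: iteration i needs the result of
           iteration i-1 (a logistic-map recurrence), so extra
           processors cannot help at all. */
        double iterate(double x, size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)
                x = 4.0 * x * (1.0 - x);
            return x;
        }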

  • by substrate ( 2628 ) on Tuesday June 15, 1999 @04:27PM (#1848991)
or performance claims in this case. Notice that for the performance figure they compare the IBM Blue Pacific running real code to their machine running a 4-bit adder. The reason for this provides insight into the technology they're using.

    Their computer is based around FPGAs (Field Programmable Gate Arrays); in particular they are using the Xilinx family of FPGAs. These are devices composed of thousands of small logic blocks wired together through a switching network. The functionality of these small logic blocks is user-definable by setting bits in an SRAM. The connectivity between pins and the logic blocks, and between logic blocks, is also user-definable by setting bits in static RAM.

    So what they're doing is setting each of these programmable blocks to implement a 4-bit adder and wiring them together such that they're all operating at once. It isn't actually doing any useful calculation. Their performance claim is based on wiring together a bunch of useless logic and running it all in parallel. Once you start doing useful things, the amount of parallelism will drop. It'll drop a lot. FPGAs aren't very fast devices; they'll only get a few percentage points (if that) of their claimed performance on real applications.

    Porting code to this machine would be non-trivial as well. Rather than the normal programming languages computer scientists and programmers are familiar with, you're actually controlling the flow of electrical signals. They've probably got synthesis tools that will take some variant of a programming language and translate it into the native data needed to program the device. The synthesis tools are most likely very crude, and to get real performance you'd probably have to hack bits. Not fun. I say this because of my experience with synthesis tools used for ASIC design. They're fine if you're doing a boring design at maybe 50 or 100 MHz. Beyond that you're pushing their technology and it will probably break. These synthesis tools are designed by billion-dollar companies. It would take massive amounts of man-hours and money to create a well-designed synthesis package for something of this magnitude.
  • That's not what the article is saying. You just reiterated in more detail what I was saying... Thank you for proving my point...

    As to your tone: thank you for proving my point that ACs should be disallowed from posting.


    "There is no spoon" - Neo, The Matrix
    "SPOOOOOOOOON!" - The Tick, The Tick
  • Currently the fastest supercomputer available, according to an article posted here a few days ago, is the ASCI Red from Intel. It performs 1.6 trillion calculations per second. That means if you are trying to multiply matrices, do vector math, or any other type of calculation, it would be at or below this mark. If this new technology can achieve 100 trillion calculations per second, by any specialized means, it is still over 60 times faster than ASCI Red at specific tasks.

    On a different note, their website lists the possible tasks of this hypercomputer as "ultra-fast scalar processing, digital, broadband signal processing and high-speed, low-latency switching and routing." Funny, no mention of vector processing; without that it will never kill the modern supercomputer. The web site uses too many buzzwords for my liking as well:
    "massively-parallel, reconfigurable, third-order
    programmable, ultra-tightly-coupled, fully linearly-scaleable, evolvable, asymmetrical multi-processors. They are plug-compatible"

    If it works, this is a huge step forward; if not, it is a lot of hype.

  • First we have "flash BIOS" that a virus can blank to render the hardware useless, and now we have chips in development that can be physically shorted out in software. Oh yeah, there's a step forward... Infinite-monkey attacks happen in real life: if it can go wrong, some clueless newbie will stumble across it. Count on it. (That's how we debug Linux, isn't it? :)
  • The fact of the matter is, Windows emulation could be damned near impossible. Just how many people understand the whole of Windows 9x? Just slap Intel emulation in there; it will keep most people happy, including me. There's no way I would be caught without one of these things, if they turn out to be worthwhile. Buying it the first day could be risky.

    BTW, just what assembler code would this machine use? Would it have to be written in "Viva", or could it be written in x86 assembler? I'm all confused now =op



    Before criticizing a man, walk a mile in his shoes. That way, when you do criticize him, you'll be a mile away, *and* you'll have his shoes!
  • Their first "customer" was touted as Larry Wilcox. If the name doesn't ring a bell, then how about "the white guy on CHiPs"?

    If their "hypercomputer" was as good as they're saying, it's likely that they'd have somebody who is both famous and technically competent speaking for them. Not Erik Estrada's old cohort.

    Food for thought.
  • Obviously. If it's true, that is.
  • Quote:

    Viva Active Experts can become software entrepreneurs by
    organizing groups of Viva Developers to write libraries and
    application software for Viva and be paid, either in compute
    cycles or money.
  • > The problem here is a question of scale: can I
    > fit all of Quake 3's rendering pipeline into
    > the hardware? If I can, it should
    > cream a dedicated processor. If I can't, I
    > lose major amounts of speed switching the
    > gate array, or to using a less-efficient
    > general layout on one part of the array.

    > To my understanding, FPGAs are slower and
    > larger than dedicated circuitry, which limits
    > the transistor count if you're looking at a
    > reasonable die size.

    Bearing this in mind, I fail to see how useful these devices would be for something like a 3D application. By putting a 3D pipeline on an FPGA you're just using it as a dedicated 3D chip like your typical nVidia TNT, 3dfx VoodooX etc., except that your FPGA is built on slower, bigger technology compared to the dedicated-silicon competition (TNT, Voodoo etc.). Which do you think is going to perform better?

    But thinking again, perhaps FPGAs could be produced more cheaply than normal chips. It should be possible, as you only have to produce one kind of chip, instead of a different chip for CPU, FPU, DSP, 3D, etc. Then instead of buying a computer with a CPU and a DSP (sound) *and* a gfx 3D chip etc., you just get a box that's packed full of these cheap FPGAs and configure them for what you need. Since the FPGAs are so much cheaper, you just buy a lot more of them and beat 'standard' computing using sheer numbers (and parallelism). (And then all the 3D chip companies transform into software companies and live happily ever after.)

    I hope I have made some sense.

    --Simon

  • Dude, while you seem highly intelligent, I guess the obvious just went over your head. Don't worry about it. It happens. Anyhow, it seems fairly clear that he likes Linux, as do a lot of people here at Slashdot. Linux runs on a wide variety of platforms. A new one comes out, a Linux fan asks if it will run on it, and this surprises/annoys you? You're an odd little man. Dude.
  • According to the article, this HAL thingee will "run PC applications in emulation mode, in a manner similar to how the DEC Alpha runs NT, but it will run it a lot faster." Eh... I always thought NT ran as native code on the Alpha... that must explain why it is so slow...

    If you read it correctly, you will notice that this didn't come from the writer of the article; it is actually a quote from Kent Gilson, Star Bridge Systems' CTO. Well, with a CTO like that, I can just imagine what kind of product they will come up with.
  • The point is speed. Sure, mp3encoder or BladeEnc or whatever work. An FPGA could (in theory) work much faster. Just "rewire" the FPGA into a mode that encodes MP3s and you essentially have a hardware encoder - one that the next second could be a hardware Quake 3 engine (not a 3D card, but a chip designed solely for playing Quake 3).
  • Damn, I was just gonna quote that... but I'll re-quote just a bit:

    "become software entrepreneurs by organizing groups of Viva Developers"

    Wow... this sounds like Amway to me.... multi-level marketing crap.
  • Don't y'all remember this? I distinctly remember seeing this on /. like, 6 months ago at least.

    It was a whole paradigm shift, with on-the-fly FPGA re-programming and all that...

    MoNsTeR
  • The thing is, whether it works or not, their web site has been updated. They are still in business, the specs for the I/O are reasonable, and it looks like they are for real. I'd like to see a demonstration of the system to be safe, though...

    But this would make a great stand-alone rendering cluster.
  • http://slashdot.org/articles/99/02/10/0852241.shtml

    so it was Feb, not quite 6 months. blah.

    MoNsTeR
  • IIRC, it is 60,000 times faster than a P-II 350 if all you want to do is 60,000 4-bit additions. It might be reasonable at DES cracking too. But for running Quake, you're still better off with a P-II.
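
    For what it's worth, you don't even need an FPGA to do several 4-bit additions per machine operation; a plain CPU can pack them into one word. A hedged C illustration of the trick (my own sketch, not theirs) - eight independent nibble-wide adds, modulo 16, per 32-bit add:

        #include <stdio.h>

        typedef unsigned int u32;   /* assumes int is 32 bits */

        /* Eight independent 4-bit additions (mod 16) at once:
           mask the top bit of each nibble so carries cannot cross
           nibble boundaries, then patch the top bits back via XOR. */
        u32 add_nibbles(u32 a, u32 b)
        {
            u32 low = (a & 0x77777777u) + (b & 0x77777777u);
            return low ^ ((a ^ b) & 0x88888888u);
        }

        int main(void)
        {
            /* nibblewise: (2+1, 1+3) -> 0x34; (9+9) mod 16 -> 2 */
            printf("%08x\n", add_nibbles(0x21u, 0x13u));
            printf("%08x\n", add_nibbles(0x9u, 0x9u));
            return 0;
        }

    The claimed benchmark is essentially this trick scaled up to thousands of hardware lanes, which is why it says nothing about general-purpose speed.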
  • Wanted: FPGA PCI card and Linux drivers, for RC5 cracking, MPEG2/3 compression, glquake3 playing, 3D graphics........
  • Take a good look at the clock speeds on the "state of the art" in FPGAs. While a system like this could, in theory, mark a significant step forward by erasing most of the hardware/software boundary, it will still take a huge amount of effort to rebuild our existing base of computing infrastructure to take advantage of such a system. A computer like this is far more likely to find use in niche applications like routers and packet switches (e.g. putting the logic for the current packet flows into hardware) and in strange little AI projects.

    Don't start short-selling Intel and AMD yet...
  • If you read the comments on the old story, there were quite a few people that were shooting holes in Star Bridge's announcements, saying that their misuse of technical terms showed that they knew nothing about what they were trying to develop. I'm not an engineer myself, but after seeing so many people say that the computer design is full of holes, I'm guessing we can write this one off.
  • They say it runs "Unix" and NT (yeah, right!).

    Their specs say it has a 1600W power supply. Does that come with a wall plug or a set of jumper cables?
  • The FPGA speed advantage over dedicated chips would come from two sources: I/O overhead and specificity. If the TNT-esque FPGA is sitting in the 'processor core,' the geometry data the FPU-esque FPGA is spitting out doesn't have very far to go before it gets crunched into pixels in the frame buffer. If the FPGA is large enough and deep enough, it can implement larger chunks of the 3D pipeline than the 3D card can, because it doesn't have to be a general (i.e. OpenGL) solution: it will only implement those features which the rendering pipeline uses, and things that aren't accelerated by the 3D card at all. Presumably, this will make it faster than the equivalent generic accelerator.

    AFAIK, FPGAs are not cheaper than dedicated ASICs, although this company might change that...

    -_Quinn
  • Imagine for a moment that such a thing is possible. 60,000(!) times faster than a PII-350. Ok, so we get this speed by a machine that re-implements itself into a specialized hardware processor for whatever it needs to do next.

    Hmmm... that sounds like a hard program to write -- the part that re-optimizes the hardware. How many different virtual hardware processor "personalities" will it need to achieve 60K x PII speeds? Of course, in order to get full advantage from it, it will have to be done frequently. How fast will *that* be?

    I can't wait to buy the equivalent of a 21-terahertz PII in 1.5 years. I assume the "hardware compiler" will be ready as well and included in the $1K.
  • Their web site says that they are open to visitors. Maybe some slashdot readers in the SLC area should check them out.

  • This article sounded pretty convincing. Sounds like a CPU made out of FlashROM. Does it really only do small additions? Anyone know?
  • by Velox_SwiftFox ( 57902 ) on Tuesday June 15, 1999 @05:37PM (#1849025)
    Does the warranty cover damages caused by one of these machines should it attain self-awareness?

    And what about the human rights, personnel, and vacation time issues concerned with the resulting employee, should the box be owned by a corporation?

    If the system had been owned by an individual, should they file manumission papers or would the former owner now be considered a parent responsible for their new cyberchild for the first eighteen years?

    And would you want one to marry your sister?
  • Consider that Transmeta still doesn't have a webpage up; it's not unknown for a relatively young company, which hasn't established its core business and has no PR staff, to do its work with a reasonable lack of attention.

    That said, it doesn't mean they have anything either; I daresay that if the report is correct, they'll be going for the supercomputing, heavy number-crunching market, where they can attempt to recover their investment before going for the low-margin, mass-commodity PC market. There are no doubt a few applied mathematics or physics researchers raising a pint in anticipation right now.

  • I first studied FPGAs in a digital design course at university. Back then it looked interesting, but the overhead for "switching" the circuitry seemed awful.

    Now, I recall some news that reconfiguration time has since been reduced by a significant proportion. I also remember that some guys at Amiga were very keen on it. Hopefully the FPGA is more than plain old parallel stuff. Let's see if we can make it a hacker's regular hourly thought exercise. ;)

    I think reconfiguration is particularly useful if your system is a bit wiser than a traditional number-crunching procedural system. I'm not suggesting that you can get some NN to let the hardware converge to the ideal (that's too difficult a problem in itself). Sure I won't. But the thing is, if you let your software know how the FPGA can be utilized, it can make a difference.

    In particular, it occurs to any demo-coder that those tiny cute loops that do the tricks would fit nicely in a hardware design. So I think you could make your DSP (audio, video, compression, etc.) and 3D stuff really fast. However, I suppose there are other ways in which you could actually improve on the existing implementations. A key point is making your algorithms adaptive. Then they are not the usual kind of "perfect tool" instruments but ones that use some heuristics to try to find the best hardware design for the job.

    I suspect that the simplistic kind of translation [say, a 3D algorithm to an FPGA spec, then reconfiguration when the algorithm is needed (probably on one of the custom processors allocated for this task), using it as a subroutine] might be generalized to implementing a whole programming paradigm as hardware. It seems that OS and compilation systems would have to be revised to get it done effectively, but still it is very interesting in its own right. The array of possibilities might be larger than the excitement of implementing cryptography and NN apps, or fast Java VMs. When I imagine that the crucial parts of an expert system, or an inference engine, or just about any complex application out there could be done that way, I'm awed.

    Nevertheless, I don't know the theoretical "sphere" of the work precisely. It would be very satisfying, for instance, to see some work on the computational complexity induced by such devices. Stuff that says "In domain X, FPGAs are useful" preferred, not the kind of stuff that says "Generally, it's NP-complete" or "Oh no, it's undecidable"....
  • by monk ( 1958 )
    These guys are actively soliciting investors and have "sold one" (!?!), although probably not for $26 million. The incredible claims constitute felony fraud in any state should they prove false. I think we can see intent in the claims for applications. (Holography, no less!) Where's the state attorney general?
  • Is it just me, or does it seem they have several 'sources' underlined to make it seem like they link to other resources on the web? Has anyone checked the other accomplishments to see if they are correct? If one of them supposedly created the world's fastest plotter with this technology, who is using it?

    It all sounds too fishy. Notice that they have a partner in the internet search engine market. The partner, iCaveo [icaveo.com], has nothing but some intro animations and a comments page. Sounds like they are trying to attract the investor who will put money into anything related to the internet, regardless of whether the company can make any money. Even the president of the company doesn't look like someone I would trust.

  • Even if these folks had the compilers that would allow you to take large chunks of code, convert them into a hardware representation, and program the FPGA to execute them, you still have to have some DATA to feed the instruction stream! The only people that seem to understand true parallel programming models are the people at Tera Computer [tera.com]. They have the only architecture that can do a context switch on each instruction, allowing the processors to execute those instructions that happen to be executable because their operand data fetches are complete. Everyone else (Compaq (DEC), Intel, AMD, Sun, SGI, etc.) consumes huge amounts of chip real estate with primary & secondary caches rather than really solving the problem of memory latency. The old CPU/cache model IS DEAD in the long run (the chips get too hot). What will work are architectures like Tera's and/or approaches like "Processor in Memory [nsa.gov]"/"Intelligent RAM [berkeley.edu]"/"Embedded DRAM [webnexus.com]" that are innovative ways of dealing with the problem of operand latency and memory bandwidth.
  • OK, so devote one percent of the machine to an expert system which looks for new adjustments...
  • You don't really need to reprogram the array thousands of times per second - just program it once with dozens/hundreds of "virtual microprocessors". (...and don't forget the virtual-SMP OS to go with it!)

    But if that's all you're going to do with it, it would be considerably cheaper and faster to just put dozens/hundreds of real microprocessors in it.

  • Okay, I'm not currently in industry doing stuff like this; however, I have built enough machines with FPGAs and whatnot, and even a reconfigurable machine, so I know what it involves.

    Here is the first thing that makes me skeptical.

    "Eventually, reconfigurable computing [a term coined by Gilson, referring to the underlying technology behind the hypercomputer] will permeate all information systems, just because it's faster, cheaper, and better," Gilson predicts.

    Does it bug anyone else that this guy supposedly coined the term "reconfigurable computing"? I read an article in EETimes (I believe) from 1996 that used this term. Hrmpf.

    In addition, it surprises me that he thinks his company can sell hundreds of the $26 million boxes. I'm not entirely sure how many Starfires Sun is able to sell each year, but I doubt it's much more than that. I'm pretty sure it's less. Sounds like just another start-up trying to generate noise about themselves.

    While I do believe that reconfigurable computing is going to be one of the future trends, I don't think these guys can do it. People are skeptical about picking up new technology, especially something like this. Maybe if Sun or IBM were putting their weight behind it, people would. But Star Bridge Systems? It may work, but I doubt it.

  • This looks exactly like the data compressor that kid in Australia developed in 1997, which compressed a 1 gig hard drive onto a floppy. The only catch was the data was all 0's.

    What about the other kid who developed the video compressor that compressed hour-long TV shows onto a floppy, as long as the screen was black?

    Maybe the hypercomputer can process all the hundreds of billions of instructions they claim, and the whole thing is for real. But aside from the one or two highly redundant, staged instructions that run at hypercomputer speeds, don't expect anything else to run faster than a Pentium.
  • They claim that they will be able to run x86 software via emulation. Perhaps if they can reprogram the FPGAs to look like an x86 chip they can "emulate" at full speed.
  • A friend of mine has done some research (ahhh, the joys of academia) and found that using an FPGA to substitute for a processor is a really bad Idea(TM). FPGAs can take centiseconds to reprogram; this would make hard drive access seem fast. I guess, though, that would mean no more nasty RAM. Just an HD and an FPGA: use FPGA cells to build RAM when needed, and use the HD for everything else.
    My friend found that an FPGA makes a good addition to a processor for things like rendering and Photoshop/GIMP filters. He found that on dedicated repetitive tasks an FPGA is pretty good.


    "There is no spoon" - Neo, The Matrix
    "SPOOOOOOOOON!" - The Tick, The Tick
  • But you've got to admit it looks like the Heaven's Gate kids came down off their comet and put together one last web page. (It even looks like their site.)
  • their web page doesn't even make sense. They say that they have a proprietary operating system, but then on their hardware page it says that it will run either UNIX (I guess any flavor!) or Windows NT.


    What do you think Solaris, AIX, IRIX, Digital Unix, HP/UX and all them are? They are Unix OSes, and they are also proprietary products. WinNT is a proprietary product as well. I can't see what doesn't make sense there; it sounds as if you were implying something is either Unix or proprietary, not both.

    Alejo.
  • by Anonymous Coward
    Sigh. You don't get it, do you?

    First of all, as several other people have mentioned, some FPGAs can be reconfigured 1000 times a second today.

    Second, yes, it is stupid to emulate an existing CPU design instruction by instruction. But in any typical working set consisting of an OS and any number of applications, there will be code "hotspots".

    That is, tight loops that are executed a lot more often than anything else. There will also be even more cases of instruction sequences occurring in somewhat less often executed loops, all over the place. All in all, there's always some operations and sequences of operations that are more common than others.

    So instead of just emulating a generic CPU, you reconfigure the FPGA to handle the instruction sequences that take up most of the execution time at the moment directly in hardware.

    I've had programs where 80% of the processing was string compares. And you've mentioned the other obvious examples: rendering, audio processing.

    The point in this case is: Yes, a specially configured FPGA will always be more efficient FOR THAT PARTICULAR TASK. But how many people create FPGA configurations for their applications?

    However, this concept (reconfiguring to handle commonly executed sequences) will AUTOMATICALLY optimize for the rendering cases etc. It probably won't do it as well as a hand-coded algorithm would. However, when you hand-code an algorithm for an FPGA, you'll stick to only what is needed to speed up that particular task, while reconfiguring on the fly will optimize for whatever task you are currently running.

    Just like Sun's HotSpot technology does special optimizations and JIT compilation on the Java bytecode executed most often. Only in this case it isn't assembly that is generated, but microcode for the FPGA.
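
    A minimal C sketch of the profiling half of that idea (everything here is hypothetical - the names, the code-segment base, and especially the reconfiguration step, which is the speculative part and is hand-waved): sample the program counter periodically, bucket the samples by code region, and hand the hottest region to the optimizer.

        #include <stdio.h>

        #define CODE_BASE 0x08048000UL   /* hypothetical text-segment base */
        #define REGION    4096           /* bytes of code per bucket */
        #define NBUCKETS  1024           /* covers a 4 MB code segment */

        static unsigned long hits[NBUCKETS];

        /* Called from a periodic timer with the interrupted PC. */
        void sample(unsigned long pc)
        {
            unsigned long i = (pc - CODE_BASE) / REGION;
            if (i < NBUCKETS)
                hits[i]++;
        }

        /* Pick the hottest code region; a real system would then
           generate FPGA configuration data for the instruction
           sequences found there (not shown, pure speculation). */
        unsigned long hottest_region(void)
        {
            unsigned long best = 0, best_i = 0, i;
            for (i = 0; i < NBUCKETS; i++)
                if (hits[i] > best) { best = hits[i]; best_i = i; }
            return CODE_BASE + best_i * REGION;
        }

        int main(void)
        {
            int i;
            for (i = 0; i < 90; i++) sample(CODE_BASE + 0x5008);  /* hot loop */
            for (i = 0; i < 10; i++) sample(CODE_BASE + 0x91000); /* cold code */
            printf("hottest region: %#lx\n", hottest_region());
            return 0;
        }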

  • It could conceivably use a genetic algorithm/evolvable hardware approach. This would be REALLY cool (it would come up with rather unique solutions...) but I SERIOUSLY doubt it is possible. GAs take exponentially longer to "get it right" as the problem gets harder. The most complex problems I've seen solved with GAs on FPGAs were very simple signal processing, and the algorithms took something like 3 months to find the "final" solution. I can imagine that using the same hardware (even allowing for much better algorithms and better/faster FPGAs, not that they would make THAT much of a difference) to solve something as simple as tracing a ray (which is actually quite simple, especially compared to the sims that most supercomputers of this magnitude would be used for) would require MANY years of evolution (maybe a hundred plus) before it got faster than generic hardware.

    The REALLY bad thing is that if your problem changed even a tiny bit, the optimization would have to start over (probably not from scratch, but still a HUGE amount of work).
  • Probably the same one that limits the lives of our computers to about 5 years... the fact that better stuff comes out...

    But I really, really doubt that they could get a tenfold increase just from a download of software. In essence they're saying that in 5 years they can 'optimize' their software 10x.

    While CPU densities may double every 18 months, I don't think software follows the same route.
    ---------------
    Chad Okere
  • A quick look at an old Xilinx databook shows that you can build 1,600 4-bit adders in an old (1997) XC4085XL. The switching characteristics show a 1.6 ns delay for each adder.

    If you don't include interconnect delay, you can build your own 1 TeraOp supercomputer for about $100.

    (1/1.6e-9 sec × 1,600 adders = 1.0e12 adds/sec)

    Xilinx has come a long way since 1997. They now claim to have 1-million-gate FPGAs that run quite a bit faster than the old XC4085XL-09.

    But if you really want to go for the TeraOps record, I'd suggest Xilinx's latest Virtex parts, and a benchmark doing 2-bit binary NAND operations.

    It may take some additional work to get such a chip to emulate WinNT, but think of the press coverage your benchmark will get.
  • FPGAs are well known technology. Wiring enough of them together, and programming them to do some specific task, will get that specific task done fast. No news there.

    But it is a very special design. Reprogramming the FPGAs may be fast, but it is hard to program them to do a sequence of very different operations.

    This is not unlike the Connection Machine (from Thinking Machines Corp.). A full CM has 64 thousand processors, but they can only do very specific tasks. If you program a CM to do matrix multiplication, it's lightning fast (or at least it was in the days of the CM). But if you run a Perl interpreter, or any other not completely trivial or simple (matrix multiplication _is_ trivial and simple) piece of code on it, you will be _very_ disappointed.

    Of course these things are justified. Simple operations are done a lot in mathematical modelling. It will be very interesting to see what the supercomputer vendors can make of a bunch of these FPGA boxes, wired to some standard processor boxes (to do the non-trivial stuff).

    But don't think for a second that we will be putting these things on the desktop and have them running ``normal'' applications at a speed that is even comparable to a PII.
  • Compressing an audio signal with MP3 (MPEG layer 3 compression) is to compressing a video stream with MPEG-2 as growing some sea monkeys is to filling the Pacific Ocean. Compressing video is quite processor-intensive.


    Complete waste of money IFF your intention is to play Quake 3. NOT a complete waste of money if you want to do many other tasks that computers are good at.
  • No, he's talking about the x86 emulation on NT-alpha.
  • Given that they just made computing cycles 4 orders of magnitude cheaper than before, those cycles are suddenly looking a lot less valuable... ;-)
  • My PhD topic was in reconfigurable computing.

    What these guys are doing is fairly banal compared to the more interesting research being proposed.

    The speed claims that they make are based on large arrays of simple adder circuits doing no real useful work.

    I wouldn't say it was a con, but it is a lot of marketing hype and misinformation from what I can see.

    The only really interesting thing about their system is that they took massive amounts of knackered FPGAs and found a way to make a useful system from them. This is important if they can use it to hugely increase the usable yield of such devices. It also means the systems can be very cheap.

    The FPGAs they use aren't really suitable for genetic-algorithm-style exploration of configurations, as they aren't tolerant of incorrect configurations. The XC6200 from Xilinx is one of the few devices that can take erroneous bitstreams without shorting out.

    For that device, interesting stuff is being done evolving the basic logic structures. However, that research depends on parasitics and temperature effects, which are all the things that digital design has classically been trying to suppress and remove from the design process. That makes it more of a niche market, especially if you can't just re-use the bitstream you've developed on another chip, as it'll have different characteristics, even across the same process batch.

    But reconfigurable computing is a technology whose time has come. It isn't even a matter of when; it is a case of how much will be in next-generation systems. You'll be seeing a lot more systems with embedded FPGAs in the future, providing application-specific logic when and where it is needed.

  • Did you read the bios of the Star Bridge guys? The president was a car salesman (the bio spends much space bragging about his ability to build cars since a young age). The CTO, who is supposedly doing all of the technical work, doesn't have any references other than the typical wiz-kid, has-been-programming-computers-with-one-hand-tied-behind-his-back-since-he-was-6-months-old type stuff.

    Transmeta, on the other hand, is run by a former Sun executive, backed by a Microsoft cofounder, and employs a gaggle of engineers with awesome track records (a la Linus).

    Star Bridge may very well be the next great thing, but there is considerable reason to doubt that they will amount to anything. Transmeta may not be the next big thing either, but they've got as good a chance as anyone to do something interesting.

  • That was the dream 10 years ago. By now we know that even in a functional or logic language (like Lisp, Prolog, Haskell, ML, Mercury, you name it) the implicit parallelism is good for a handful of processors at best. For massively parallel systems you need programs that are specifically designed for parallel execution and that is hard work (nothing that a compiler can do).

    A good language makes it easier for the programmer to specify parallelism and easier for the compiler to exploit the parallelism, but in the end, it is a matter of program design (and I wouldn't hope for a significant change of this situation in the near future).

    Chilli

    PS: I happen to know, as I have written a PhD thesis and a number of research papers in this area. (You can get the stuff from my Web page, if you are interested. There is also a compiler project [tsukuba.ac.jp] targeting massively parallel systems.)
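
    To show what "specifically designed for parallel execution" means in practice, here is a hedged C/pthreads sketch (the function names, slice layout, and thread count are all invented for illustration): the programmer, not the compiler, carves the work into independent slices.

        #include <pthread.h>
        #include <stdio.h>

        #define N        1000000
        #define NTHREADS 4

        static double a[N], b[N];

        struct slice { int lo, hi; };

        /* Each thread scales its own slice; no communication is
           needed, which is what makes this the easy case. */
        static void *scale_slice(void *arg)
        {
            struct slice *s = arg;
            int i;
            for (i = s->lo; i < s->hi; i++)
                a[i] = 2.0 * b[i];
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            struct slice sl[NTHREADS];
            int t;
            for (t = 0; t < NTHREADS; t++) {
                sl[t].lo = t * (N / NTHREADS);
                sl[t].hi = (t + 1) * (N / NTHREADS);
                pthread_create(&tid[t], NULL, scale_slice, &sl[t]);
            }
            for (t = 0; t < NTHREADS; t++)
                pthread_join(tid[t], NULL);
            printf("done\n");
            return 0;
        }

    The decomposition - which index ranges, how many threads, where the results meet - is all in the program text; that is exactly the design work no compiler currently does for you.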

  • If you read carefully, it turns out that the computer that's 62,000 times faster (in theory, as has been pointed out already) costs several million dollars. It doesn't give specs for the PC-like computer; it just says that it's "like today's supercomputers". Disappointing. But I suppose it will still be interesting to see if their PC is any good...
  • Just out of curiosity:

    Given that I live in Europe: if I had made the above claim, and the company decided to sue, is there anything those guys could do to me?


    My opinion? FRAUD! SCAM!

  • As far as I know, this company existed in a different form out in SV. And, as a start-up for a new company, it doesn't take much time to:

    1) Find VC (if your ideas are good)
    2) Find a location (people can be convinced that this will be Something Big(tm))
    3) Build fabrication facilities (considering Xilinx makes the chips, this won't be difficult either)

    Getting into the swing of production is the important part.
  • /. already went through this one in a previous article [slashdot.org]. Summary: yes, it may be able to perform all those additions, but to compare a mass of reduced-speed logic gates to a real supercomputer is outrageous.

    I live not too far away from Star Bridge Systems. If there really were major developments, I would read about them in local newspapers more than once a year.

  • I remember reading about these guys here 6 months ago. I was stunned and amazed, and I thought, "Well, I might be able to buy one of these in ten years' time." Guess I was very wrong. I'm getting one as soon as they come out.

    BTW... the point of the earlier article was an announcement of the company's new HAL systems. This one is reporting the news that they are building PCs with this technology too. And they run Windows NT under emulation mode. Wonder if that means they run Linux. It probably does, since it would have to be Intel emulation rather than Windows emulation. So they would probably be quite useful, and easily integrated into current applications. Can't see how switches and routers could possibly have a problem integrating; they seldom closely resemble the systems that they are communicating with anyway.
  • Given the patents we've seen from Transmeta, I wonder if this is the same sort of thing they are working on? The suspense of what exactly it is that they are doing is killing me.
    Deepak Saxena
    Project Director, Linux Demo Day '99
  • Well, if it weren't for the massive mumbo jumbo and lack of any real-world stats, this might have me jumping up and down. Let me know when this thing can run QuakeX or Photoshop for a good benchmark and I might care a little more. Looks pretty cool, tho.

    Sig? Who needs a fucking sig with a name like this!
  • Just in case, maybe I'll put off getting that new computer I was planning on to kick off the final release of Quake III.

    ----

  • Clock speed is not the same issue with these systems as it is with traditional chip architectures, as I understand it. They avoid wasting clock cycles processing complex operations by performing those operations in hardware as much as possible. This is similar to how Crays get their speed; the difference is that Cray puts as much into hardware as they think reasonable when they design the chip (far more than anyone else thinks is reasonable). These systems (presumably only for software written in their native language) translate software to hardware on the fly, so everything runs mostly as hardware. But you are correct... I wouldn't write off Intel quite yet. The issue isn't clock speeds, though; it's (in my opinion) the infrastructure, such as software and various peripherals, that will prevent this from taking 60% out of Intel's market share next year.
  • Is that 3 orders of magnitude price/performance for everything? How cheap is this? What price/performance are they comparing it to?

    Additionally, it's the HAL system that's supposed to be up to 60,000 times faster, not this one.

    I don't know if I believe this, as they say they're going to focus on the supercomputers because they somehow couldn't make money on the home computers. If they sell 100 supercomputers a year for a maximum $26 million each, that's only gross revenue of $2.6 billion. Couldn't they give themselves a profit margin of 50% (not great considering the supposed 1000x improvement in price/performance), sell a lot of PCs, and make more than this in a year? At the very least, they should have investors galore trying to give them enough money to do this, or to hire enough people/places to focus on both the HAL and home use.

  • First and foremost, how does this work on a multitasking system? A system running multiple programs at once requires multiple configurations at once. This is simply not possible, and programs require a standard architecture underneath, else the programs' binary code would have to morph along with the processor itself.

    Second, if programs running on the amorphous processor themselves need to morph, what's changing the processor's configuration?

    Now, I believe that the FPGA actually has an application, although dynamic processor configuration is not truly its niche. However, suppose there is Flash ROM in a supporting BIOS that configures the processor prior to bootup?

    This would provide us with a definitely novel idea: a processor that can be hacked as easily as a kernel.

    The supporting BIOS itself would be accessible from the operating system, so that redesigning the default configuration could be done from inside the operating system. (And processor upgrades would be performed via software upgrades)

    Another possibility would be to allow a processor capable of running virtual machines directly, as opposed to software emulation. This is possibly what they were hinting at when they mentioned x86 compatibility similar to the Alpha.

    I believe this is actually a possibility about what Transmeta is up to. In fact, the two patents that Transmeta took out might actually involve the error-correction and programming methods of this type of processor.

    But one thing's for certain, I believe this WILL have immense impact in the next three years.

    ******* DISCLAIMER *******

    I am a software type, and a user. Men run screaming if I ever wrap my fingers around a soldering iron's handle. I am not qualified to actually understand this any more than I can tell what a circuit board will do by looking at it (without the printed info on the chips). I am not guaranteed to know exactly what I'm talking about.

    Okay, anyone want to... correct my ignorance?

    --
  • by Anonymous Coward
    It's not really that hard in theory. What you need to do is identify what instructions or groups of instructions take most of the CPU's time currently, and optimize that instruction or sequence of instructions.

    If you, for instance, have an application that does string compares all over the place, you'd need to be able to recognize its inner loop, and configure part of the hardware to do the same operation without decoding the same few instructions over and over (that is, you decode them once, find out that this should be handled by the special string-compare hardware, and off it goes).

    You'd need a good profiler in hardware that finds code hotspots, and that tells the optimizer which code would be most beneficial to "compile" into microcode for the FPGAs.

    Let's just say that the software isn't really the problem here. I'm more reserved about their ability to deliver on the actual hardware side (especially with regard to speed - I don't doubt their concept works in theory, but will it really be as fast as they claim?)
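
    In software terms, the "recognize the inner loop and hand it to special hardware" step would look something like a guarded dispatch. A purely illustrative C sketch (the "hardware" comparator and the threshold are imaginary; here everything is plain C):

        #include <stdio.h>
        #include <string.h>

        /* Stand-in for an FPGA block configured as a string
           comparator -- hypothetical; just plain C here. */
        static int hw_string_compare(const char *a, const char *b)
        {
            return strcmp(a, b);
        }

        /* All call sites go through this pointer.  Once the
           profiler decides string compares dominate, it flips
           the pointer so the loop stops being re-decoded in
           software and runs on the configured block instead. */
        static int (*compare)(const char *, const char *) = strcmp;

        static unsigned long ncalls;

        int profiled_compare(const char *a, const char *b)
        {
            if (++ncalls == 100000)   /* crude hotspot threshold */
                compare = hw_string_compare;
            return compare(a, b);
        }

        int main(void)
        {
            printf("%d\n", profiled_compare("foo", "bar"));
            return 0;
        }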

  • The concept may well be to do performance monitoring at the instruction level under x86 emulation. This isn't what their performance claims are based on, though, not by a long shot. I don't even see where they made that particular claim in the article. All I saw was a claim that they could emulate an x86 machine much as the DEC Alpha can, but oh, as Mr. T would say, we'll do it helluva fast!

    Right now, because of all the erroneous information they've released, my guess is they're high-tech snake-oil salesmen. I doubt very much that they coined the term Reconfigurable Computing; it's been in fairly common usage for a while. Their claim that it outperforms the IBM Blue Pacific, with the caveat "oh, we ran a different performance measure so direct comparisons are difficult," is a huge understatement. IBM tested their machine doing real work, real code, albeit on their site rather than the customer site. Star Bridge tested theirs running useless code perfectly chosen to make their machine look best.

    The question isn't whether this machine will work; the question is whether it even exists.
  • who were claiming huge performance increases over an Alpha-based Cray supercomputer by comparing their performance on 4-bit integers to the Cray's performance on 64-bit floats.

    Also remember there will be a huge penalty associated with reprogramming the FPGA. Based on the specs of the devices they are using, I would say several hours.

  • I can't connect to the CNN news site (Server error - Slashdotted?) right now, but I think this sounds very much like the crap posted back in February [slashdot.org].

    If I'm completely out of my mind or am making a fool of myself (because I haven't read the article), please bear with me.
  • by Anonymous Coward
    Sigh. It doesn't have to be reconfigured on each context switch. It has to be reconfigured over time. I think you misunderstand the idea.

    I doubt the system would even know about context switches, or about the OS at all.

    My understanding of the idea is as follows:

    For the hardware, at any given time, you have a working set. The working set is all the code that belongs to programs that are currently running.

    So the CPU profiles the code that is executed. It won't know about context switches or OSes or application boundaries at all. All it will know is that at positions X, Y, Z in memory there is code that accounted for, say, 70% of the total execution time in the last minute.

    The optimizer assumes that this code will keep running for a while longer. It then examines the code at those locations and generates specific microcode for the FPGAs to handle those cases.

    Thus, the longer a CPU intensive process runs, the more time the optimizer would spend on it.

    The more diversity in what you process, the more generic the optimizations will have to be to get a net advantage. If you switch programs every second, then no specific parts of any program will influence the execution time spent in any set of instructions that much, and time will be spent on optimizing simple, common sets of instructions.

    Needless to say, the more specialized your applications are, the more it will be able to optimize for speed.

    And programs that are long running will have more of an impact on optimizations than applications that quit after a second.

    Thus for instance the OS and system libraries will likely be heavily optimized.

    And if the system is good, it will optimize short generic instruction sequences first, not highly specific code paths.

    Oh, and the point is that no special compiler would be needed. You just compile into any instruction set that the system is configured for, and the system itself would then optimize the microcode for that instruction set.

    Actually, to benefit more from this system, creating a simpler, higher-level bytecode would probably be a great help (and simplify compilers...), since it would be a lot easier for the optimizer to generate good microcode for a small set of high-level constructs than for some low-level machine code (in the same way that it is a lot more difficult to translate efficiently from assembly for one CPU to assembly for another than it is to translate from a high-level language to assembly for either CPU).
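
    A toy example of why a higher-level bytecode would help (every opcode name here is invented for illustration): an op like VADD, "add two whole vectors," describes enough work at once to be worth mapping onto a hardware block, whereas a single x86-level add tells the optimizer almost nothing.

        #include <stdio.h>

        enum { OP_VADD, OP_PRINT, OP_HALT };

        #define N 4
        static double x[N] = {1, 2, 3, 4}, y[N] = {10, 20, 30, 40};

        int main(void)
        {
            static const int prog[] = { OP_VADD, OP_PRINT, OP_HALT };
            const int *pc = prog;
            int i;
            for (;;) {
                switch (*pc++) {
                case OP_VADD:   /* coarse op: obvious candidate for a block */
                    for (i = 0; i < N; i++)
                        x[i] += y[i];
                    break;
                case OP_PRINT:
                    for (i = 0; i < N; i++)
                        printf("%g ", x[i]);
                    printf("\n");
                    break;
                case OP_HALT:
                    return 0;
                }
            }
        }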

  • The article is old news; I've seen it on /. once before, as well as in Forbes. From what I've read, doing many things in parallel is normally faster than doing one thing very fast. The trick they're trying to pull off is the code which DYNAMICALLY programs these FPGAs. They're pretty quick to reprogram (though not blazingly-fast supercomputer quick). So if you're doing one task for a while (or many tasks for a while), which is what the mainframes where I work do, then you get a cheap, fast supercomputer. In fact, we could really use about 100,000 4-bit adders tacked onto our mainframe here.

    However, if I were to replace my workstation with it, where I multitask programs AND start new tasks and stop old tasks continuously, then the time it would take to reprogram the FPGAs would be substantial (in comparison to the number of operations it could do in that amount of time).

    This technology isn't for the benefit of every average person (yet); unfortunately they wish it were, and they misadvertise it occasionally. One of their biggest partners is a cable company which wants to use their computers for cable encoding and such - a task that just needs a ton of FPGAs.

    Enjoy
    Umbro2
  • This is from the same people as that earlier story: the guy made the supercomputer basically in his garage, you can shoot a bullet through it and it will still run, it's faster than a Cray and Deep Blue, and it's about 4 feet square. There was a previous Slashdot article, but I'm not at home on the cable, so I'm not waiting 15 minutes to search for the URL, sorry.
  • The original story was about their top-of-the-line million-dollar server that was the size of a PC.

    This is a new machine that costs $1000, from the same company.

    That would make this a completely new story.
  • Given that Transmeta's patents point to Intel emulation, etc., I'm surprised that people are pooh-poohing Star Bridge. Sorry to be a bit inflammatory, but just because Linus works for them doesn't make Transmeta any better than this company. They are probably both funded by strong backers, both have very good talent behind them, and both have the potential to turn tables. But we as /.'ers take it upon ourselves to trash SBS with the usual "wrong term here" or "bad grammers" or "kant spell dis term" instead of realising that marketroids put the web site up, and perhaps the engineers are too busy designing the system to check every word of it.
  • While the FPGA approach should have great advantages over conventional RISC/CISC approaches, two major problems have to be addressed. First, there's the too-little-too-late effect: SSE & 3DNow! instructions already accelerate the most common types of operations in hardware, so those same operations won't gain much in the FPGA by comparison -- and will probably suffer.

    Performance will be impacted in three major ways. First, the cost of reprogramming the FPGA: "thousands of times a second" simply isn't that impressive when you're talking about core processor speeds of a gigahertz in the same timeframe. Assuming 10k changes/sec, you're still looking at 0.1 ms -- on the order of 100,000 clock cycles -- per change. The problem here is a question of scale: can I fit all of Quake 3's rendering pipeline into the hardware? If I can, it should cream a dedicated processor. If I can't, I lose major amounts of speed switching the gate array, or to using a less-efficient general layout on one part of the array.

    Second, how deeply can you pipeline an FPGA on the fly? To my understanding, FPGAs are slower and larger than dedicated circuitry, which limits the transistor count if you're looking at a reasonable die size. Pipelining is necessary, even in massively parallel environments, to achieve supercomputing speeds; the bottleneck tends to be I/O, which lends the speed advantage to pipelining, which is parallel over time rather than space. (If you can execute 7 instructions simultaneously at 1/7 the speed of the 7-stage pipeline, you lose, because the operands probably won't be ready in time, inserting stall cycles.)

    Third, the compilers for this architecture will have to be absolutely amazing; as much trouble as Intel is having with EPIC compilers, I'd expect a "massively parallel, tightly coupled" FPGA system to have an even more complicated compiler. Further, in addition to the normal costs associated with context switching, the FPGA will have to switch back to the configuration it had when the process was switched out, further damaging high-end performance.

    Finally, as I mentioned above, I/O is usually the bottleneck in high-speed computing. The FPGA design doesn't offer any compelling advantage there; it doesn't matter how much of the rendering pipeline it can do in hardware if the geometry data can't get there on time.

    -_Quinn
  • The subject of the Commodore 64 came up on another thread. Well, if anyone wants a picture of what 1,666,667 ops/sec is, that's a C-128 running C-64 programs at double speed. Something like an overclocked C-64. 300 baud modems... those were the dayzzzz...
  • I wonder how fast one of these babies can run when fully overclocked.
  • If someone really had a technology that was really "60,000 times as fast as a PII-350", don't you think they'd want to sell it for more than PC prices for a while and get stinking rich? Heck, I sure would. And they even acknowledge that in the article.
  • i didn't read the original story, so i don't know to what extent this is insightful, but:

    engineers typically do not produce press releases. and when marketing people do, they sometimes try to translate statements of a technical nature into something they believe will be easier to understand. in doing so, they screw things up. i have seen this first hand.

    so i don't believe it's fair to judge any company harshly based on initial press releases. if anything, judge the morons that write the final copy.

    - pal
  • by JanneM ( 7445 ) on Tuesday June 15, 1999 @04:08PM (#1849112) Homepage
    For a start: chip designers everywhere use FPGAs to prototype their designs. No magic; they are reasonably fast (but not as fast as custom-designed chips), and _way_ more expensive. Having a large array of them would indeed make it possible to run DES at a frightening speed -- but so would a mass of standard computers. The sticking point is that a collection of FPGAs emulating a standard CPU would be way slower, for any given CPU budget, than a custom chip (like the PII, PIII or AMD K7) -- and way more expensive.

    Think about it: both Intel and AMD (and everybody else) use FPGAs for prototyping their chips. If the technology were so much more efficient, why have they not released chips built on it already?

    As for the reprogramming part of this design: translating from low-level code to actual chip surface (which this still very much is) remains largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.

    Besides, have any of you thought about the context-switch penalty of a computer that has to reprogram its logic for every process? :)

  • It's pretty obvious that these guys are a fraud. If they had a real product, they would have every major hardware company in the world lined up to buy them out for a billion dollars. Then they would have the resources to build more than the hundred machines a year they claim to be limited to.

    Also, if they had a real product, they would have some kind of proof. Like cracking RC5 keys - that would be a great proof! Build a supercomputer, design a distributed.net client for it, and then start breaking records with your demo machines.

    So the real question is what these weasels are up to. I'm sure they know that no one is dumb enough to hand over $26 million for a box full of vacuum tubes. They would have found out a long time ago that no one can award a $26 million contract without ironclad proof of technology. Besides, their web page doesn't even make sense. They say that they have a proprietary operating system, but then on their hardware page it says that it will run either UNIX (I guess any flavor!) or Windows NT.

    I suspect that they may be trying to find suckers willing to get certified in their development language, "Viva". They list a training course [starbridgesystems.com] as being available. To participate, all you have to do is sign an NDA [starbridgesystems.com] and send it right in. Of course, all training will happen over the web, so you won't be able to tell what kind of machine you are taking your training on - or complain to someone if you figure out the scam. So even if there are no suckers willing to hand over $26 million, they're probably hoping to find a thousand frustrated postal workers willing to spend $5,000 to be the first to be trained in this technology that will enable them to "ride a great tide of change as one paradigm of computing technology gives way to another". And once they are trained, they get to work for Star Bridge Systems! And they get paid in "valuable computing cycles". I'm not making this up, folks!

  • This is nothing but a modern-day variation of the bit-slice processors I worked with 20 years ago. They were at the time many thousands of times faster than microprocessors but had limited real-life computer use. They were used in specialized equipment such as disk controllers or in the process industry.
    Over time some bit-slice technology has entered mainstream processor technology, making processors what they are today.
    I think it is reasonable to assume that in 5-10 years we will see FPGA technology in mainstream computers.
  • Yes, they are stretching the truth a lot when they say 60,000 times a P-II 350. Yes, they are looking at only 4-bit operations. In general, they are talking about kicking serious butt when all you want to do is massively parellel applications.

    But more and more, the reason we are begging for more speed in our CPUs is massively parallel applications. Game rendering, voice recognition, audio mixing, etc. are all parallel applications.

    What this thing is talking about doing is adapting _on the fly_ to whatever application you are running and reprogramming itself to maximize your use of the silicon. Today's chips are mostly superscalar: there are parts of the chip dedicated to certain operations -- an integer add unit, an integer multiply unit, a memory load unit, a floating-point add unit, and so on. When you play Quake, you stress the floating-point units and leave the integer units twiddling their thumbs. All that silicon goes to waste, possibly only for a fraction of a second, but it could have performed a few MFLOPs if it had been reprogrammed to do FP.

    Intel and AMD already recognize the need to handle massively parallel applications. This is where MMX and 3Dnow! are supposed to help.

    That being said, we are looking at a whole new paradigm when we start using FPGAs. Today's languages are based on our current architecture paradigm (general-purpose CPUs), and our applications are based on today's languages. Making a change to this will be a hell of a jump. To me, that is the best reason to start this stuff out in the supercomputer world, where they have the money to rewrite software.

    I for one am ready to buy some Xilinx stock. Worst case for them is that they sell only a few thousand more FPGAs and get their name in the paper. Best case is they sell millions and become the foundation for the next generation of computers.
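
    The promised sketch of how a headline number like "60,000x a PII-350" can fall out of counting 4-bit adds. Every figure below is an illustrative guess, not a number from Star Bridge or Xilinx:

        # Counting 4-bit adds. All device counts and clock rates here are
        # illustrative assumptions, not vendor figures.
        fpgas = 280              # assume a cabinet full of FPGAs
        adders_per_fpga = 1500   # assume each is packed with 4-bit adders
        clock_hz = 50e6          # assume a modest 50 MHz FPGA clock

        peak_adds = fpgas * adders_per_fpga * clock_hz  # every adder, every cycle
        pii_350_ops = 350e6      # assume a PII-350 retires ~1 op per cycle

        print(f"Peak 4-bit adds/sec: {peak_adds:.1e}")                     # 2.1e+13
        print(f"'Speedup' over the PII: {peak_adds / pii_350_ops:,.0f}x")  # 60,000x
        # The catch: this is only a speedup if your workload *is* a sea of
        # independent 4-bit adds. Real programs aren't, so it collapses.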

  • Posted by zyberphox:

    If the hyper-machine costs me 26 million dollars for 60,000 times a PII, I'm not sure it will be worth it for the time being (18 months... so long to wait).
  • ---
    So these auto mechanics somehow figured out something that IBM's 291,000 employees were just overlooking? Not likely.
    ---

    Can we say, 'Apple'?

    Sure, Woz wasn't a mechanic, but he did rev things up in a garage in Cupertino.

    - Darchmare
    - Axis Mutatis, http://www.axismutatis.net
  • It depends on how you define 'entire' -- whether you want the entire pipeline in hardware or not. Again, it's the trade-off between reconfiguring the FPGAs to execute that 100,000-times-a-second loop blindingly fast and then doing /nothing/ while their gates are rearranged so they /can/ do the 30-times-a-second bit, versus making them more general and never having to switch at all. If you're in that inner loop for a substantial enough amount of time, you still gain speed by optimizing it, but much less than you would if you didn't have to switch out.
    The problem, like I said earlier, is scale. Can this company make an FPGA complex enough that it gains more by doing hardware acceleration of certain chunks of the algorithm than it loses by switching between those accelerations? (Alternatively, is there enough complexity in the FPGA to have a large chunk of the rendering pipeline in hardware AND a general processor core to handle the rest of the code /without/ switching away from the rendering-pipeline acceleration?) A rough break-even sketch follows this comment.

    We'll just have to wait and see.

    -_Quinn
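
    The break-even sketch for that trade-off. The 8x speedup and 20 ms reconfiguration cost are made-up numbers chosen to illustrate, not measurements:

        # When does pausing to rewire the gates pay off? Assume the
        # specialized configuration runs the loop 8x faster but costs
        # 20 ms to swap in or out. Both numbers are assumptions.
        reconfig_s = 0.020
        speedup = 8.0

        def total_time(work_s):
            # Run `work_s` worth of generic-gate work in specialized gates,
            # paying for two reconfigurations (swap in, swap back out).
            return 2 * reconfig_s + work_s / speedup

        for work_s in (0.01, 0.04, 0.5, 5.0):
            verdict = "wins" if total_time(work_s) < work_s else "loses"
            print(f"{work_s:5.2f}s of work: specializing {verdict} "
                  f"({total_time(work_s):.3f}s vs {work_s:.3f}s)")
        # Short visits to the loop lose to the switching cost; only long,
        # uninterrupted stretches in the inner loop come out ahead.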
  • Unless they plan on shipping this with a 'normal' processor core that offloads certain chunks of code to the FPGA, the FPGA must know about context switching and the O/S and all the rest of it -- registers need to be cleaned, pipelines flushed, the base address for the virtual memory reset, the works. And if its current instruction set is insufficiently generic (i.e. it just finished optimizing the rendering loop) -- it sits there until its gate reconfiguration is done. Modern systems already have most of the generic accelerations you're talking about (SSE/3DNow!/MMX; soundcard accelerators, video accelerators, etc.); the benefits of the FPGA disappear if you insist on using it in a strictly generic way.

    Regarding the idea that the processor itself will profile its working set: while it's possible, it won't work that well, and special compilers will be necessary for performance. (I'm compiling Quake3, and no matter what else happens, I need to keep this set of gates the same because we'll be returning to the rendering loop very shortly. I also need a generic set of gates to handle the game logic, over here, and I don't want anyone to try to optimize the game logic because it's not worth the effort.) How do I know it won't work well to have the processor itself handle the optimizations? Look at Intel: they've given up on hardware doing the optimizations because it doesn't work well enough to keep their processors busy. If you optimize in the compiler, you can present the FPGA with an area in RAM that contains the proper gate configuration for your program, and you get the speedup immediately, without waiting for an optimizer to kick in (which it might never). Even doing on-the-fly optimization in software, where you've got resources to spare, is insanely difficult: look at how late Sun was with its HotSpot tech. (A hypothetical sketch of what such a compiler might emit follows below.)

    -_Quinn
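
    A purely hypothetical sketch of that compiler-emitted table. The structures, names, and the "pinned" hint are all invented for illustration -- nothing here is a real Star Bridge or Xilinx interface:

        # Hypothetical: a compiler emits precompiled gate configurations
        # alongside the binary and marks which ones must stay resident.
        from dataclasses import dataclass

        @dataclass
        class GateConfig:
            name: str         # routine this configuration accelerates
            bitstream: bytes  # gate layout, produced at compile time
            pinned: bool      # compiler hint: never evict (hot inner loop)

        program_configs = [
            GateConfig("render_loop", b"<bits>", pinned=True),  # returned to constantly
            GateConfig("game_logic", b"<bits>", pinned=False),  # generic; not worth tuning
        ]

        for cfg in program_configs:
            # A loader would stream pinned configs in at startup, so the
            # speedup is there immediately -- no run-time optimizer that
            # "might never kick in".
            print(f"loading {cfg.name}: {'pinned' if cfg.pinned else 'swappable'}")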
  • Posted by Lord Kano-The Gangster Of Love:

    Governments will happily pay that much. Especially foreign governments. They can do their nuclear weapons research without ever being detected. If it's possible to cluster a few of those babies, a country like Afghanistan, Iraq, or Pakistan could play catch-up on 50 years of nuclear weapons research in a decade.

    104 million dollars is a pittance to spend for that much knowledge.

    Granted these won't be allowed to be exported for that reason, but who's to say one of those foreign powers won't send someone with enough cash to set up a dummy operation here in North America?

    LK
    Give me a break! Tons of buzzwords on their site [starbridgesystems.com] -- all underlined, but not hyperlinked (meaning: no details available).

    And sure, sell the first system for $26 million, and the following systems for $1000 each? Supposedly they've sold one system so far -- I wonder who bought it. Either the company itself, or one of their VCs, probably. That's a neat way to raise money...

  • by Anonymous Coward
    Twas a time when security was all about port scans and buffer overflows.

    Now, we will have to grapple with crackers who are trying to squooge the amorphous geometry of the processor to their advantage. Maybe you can wipe out your competitor not by stealing his data but by squooging his processor form into an inefficient shape, slowly gagging him out of business. It's a whole new range of opportunities for Bill.

    Maybe the OS will come with some kind of FPGA Squooge Alert (you heard it here first), which will dump a small error file (50MB should suffice to describe the amorphous shape) whenever a momentary configuration change -- a squooge -- is detected. A whole new dimension of security problems.

    Ugghhhh

  • It's not clear to me how this could possibly match the 60,000x increase in performance this article claims. Certainly you can't execute a single stream of instructions at anywhere close to that speed, no matter how fast you can modify the gates. If this is supposed to be a massively parallel system, then you're only going to see this kind of speedup on tasks that can be extensively parallelized. So without even seeing the details, I don't buy it. Yeah, you might approach this level for something like prime-number factoring, but I'll bet money they aren't going to achieve anywhere near this speed on everyday computing tasks. (A quick Amdahl's-law sketch below makes the point.)

    Am I missing something?
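
    One way to make that intuition precise is Amdahl's law. The 60,000x target is theirs; the parallel fractions below are just sample points:

        # Amdahl's law: speedup = 1 / ((1 - p) + p/N), where p is the
        # parallelizable fraction and N the parallel speedup on tap.
        # Even with unlimited hardware (N -> infinity), the serial part
        # caps everything.
        for p in (0.90, 0.99, 0.999, 0.99999):
            ceiling = 1 / (1 - p)  # limit as N -> infinity
            print(f"parallel fraction {p}: ceiling {ceiling:,.0f}x")
        # To even reach a 60,000x ceiling, under ~0.002% of the program
        # can be serial. Everyday code doesn't look like that.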
  • The company didn't say that the PC is 60,000 times faster; the "hypercomputer" is. The PC does 100 billion ops/sec. (BTW, how fast is this really, relative to a 500MHz P3 or G3?)

    What they said was that:
    - they will have a HAL Jr. that will fit in a suitcase and do 640 billion ops/sec
    - they've "mapped out a series of hypercomputer systems, ranging in performance from the HAL-10GrW1, capable of conducting 10 billion floating-point operations per second, to a HAL-100TrW1, which conducts 100 trillion floating point operations per second"

    Meaning: the supercomputer will eventually go up to 100 trillion ops/sec, but the PC is only 100 billion.

    Now, as I said, could someone tell me how much faster 100 billion ops a second is than a computer today? But I'm going to try to figure it out w/o really knowing.

    If I assume the 100 trillion one is the one that's 60,000 times faster than today's computers, then would the 100 billion one be 60 times faster? If they can release something 60 times faster than a P2 450 in 18 months, that would still be damn good, IMHO.
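
    Working that out with round numbers -- a quick sketch, assuming (pure handwave) that a PII-450 peaks at about one operation per clock:

        # Sanity check on that guess. The PII peak rate is an assumed
        # ~1 op/cycle, not a benchmark.
        hal_top = 100e12   # claimed top-end hypercomputer, ops/sec
        hal_pc = 100e9     # claimed $1000 PC, ops/sec
        pii_450 = 450e6    # assumed PII-450 peak, ops/sec

        print(f"top-end vs PC:  {hal_top / hal_pc:,.0f}x")   # 1,000x
        print(f"PC vs PII-450:  {hal_pc / pii_450:,.0f}x")   # ~222x
        print(f"top-end vs PII: {hal_top / pii_450:,.0f}x")  # ~222,222x
        # So if the claimed rates hold, the $1000 box would be ~222x a
        # PII-450's peak -- better than the 60x guessed above, which is
        # exactly why the claims deserve suspicion.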
