IBM Hardware

The Amazing Shrinking Supercomputer

mE123 writes "It would seem that IBM is trying to change what we all think of as supercomputers. Their new Blue Gene family of supercomputers is meant to be 6 times faster, consume 1/15 of the power, and be 1/10 the size of current models. The prototype is already number 73 (with 2 teraflops) on the list of the most powerful supercomputers, and it's only "roughly the size of a 30-inch television". They are hoping to scale it up to 360 teraflops using only 64 racks." We covered this a bit earlier, but without this level of detail.
  • Priorities.. (Score:5, Interesting)

    by lukewarmfusion ( 726141 ) on Monday November 24, 2003 @10:15AM (#7547704) Homepage Journal
    Should the priority be making faster supercomputers (but large) or smaller supercomputers (but the same speed)? This one seems to be a step in both directions, but I wonder if they're sacrificing speed for size (or vice-versa).
    • Re:Priorities.. (Score:4, Insightful)

      by slimak ( 593319 ) on Monday November 24, 2003 @10:20AM (#7547741)
Why do we need to have small, power-efficient supercomputers? Isn't the main goal of the supercomputer to be fast as hell? Granted, if this can be achieved while simultaneously minimizing power and size then by all means go for it. However, as stated by my parent, what sacrifices are being made?

      That aside, I would happily take a computer the size of a 30" TV if it was SUPER!

      • Re:Priorities.. (Score:5, Insightful)

        by kinnell ( 607819 ) on Monday November 24, 2003 @10:34AM (#7547826)
        Why do we need to have small, power-efficient supercomputers?

Very few businesses/institutions can afford, or need, an Earth Simulator. Big, power-hungry supercomputers need specialised buildings with sufficient power supply and heat dissipation capabilities. By creating a small, power-efficient supercomputer which can simply be plugged into the server room, they open up an entirely new market.

        • Re:Priorities.. (Score:5, Insightful)

          by Smidge204 ( 605297 ) on Monday November 24, 2003 @10:39AM (#7547877) Journal
          Plus, once you have a powerful, (relatively) energy efficient computer in a smaller package, you can use them as building blocks to scale a larger installation.

          Modular installation = better able to match requirements without having to build entire system from scratch = more cost effective solution for some (most?) customers.

          I think the "Imagine a Beowulf cluster of these" joke may actually pretty close to the point!
          =Smidge=
        • by John Harrison ( 223649 ) <johnharrison@@@gmail...com> on Monday November 24, 2003 @11:48AM (#7548539) Homepage Journal
          I would bet that many institutions could find a good use for a supercomputer. Airlines, for example, use them to come up with flight schedules and crew lists. Faster computers give them more flexibility. They can recalculate the schedule at will.

          If supercomputers were ubiquitous, more uses would be found. So I don't see how "need" comes into the picture. Now who can afford one? That is a good question. If they were affordable you'd see needs popping up all over.

      • Re:Priorities.. (Score:5, Insightful)

        by rwoodsco ( 215367 ) on Monday November 24, 2003 @10:39AM (#7547874)
Why do we need to have small, power-efficient supercomputers? Isn't the main goal of the supercomputer to be fast as hell? Granted, if this can be achieved while simultaneously minimizing power and size then by all means go for it. However, as stated by my parent, what sacrifices are being made?


        You need small power-efficient supercomputers so that you don't need a dedicated 100MW coal-fired power plant next door for each 10 teraflop building.

        Imagine the cooling system necessary for a building which dissipates the energy normally used by a small city!

This is why Blue Gene is cool; they realize that at the high end, power is going to become the limiting factor, and they designed their architecture accordingly.

        Bobby
      • Re:Priorities.. (Score:2, Insightful)

        by penguinoid ( 724646 )
Cause then it won't need such a sophisticated cooling system. Cooling systems are expensive, you know. There's no reason to waste power if you can help it.
      • Re:Priorities.. (Score:5, Insightful)

        by A55M0NKEY ( 554964 ) on Monday November 24, 2003 @10:56AM (#7548014) Homepage Journal
It seems that making computers small and efficient makes them fast as hell. Small = less distance for signals to travel = shorter times to wait for the signals to travel, and efficient = less heat given off = higher possible clock speeds.
      • by BigBlockMopar ( 191202 ) on Monday November 24, 2003 @11:44AM (#7548514) Homepage

Why do we need to have small, power-efficient supercomputers? Isn't the main goal of the supercomputer to be fast as hell? Granted, if this can be achieved while simultaneously minimizing power and size then by all means go for it. However, as stated by my parent, what sacrifices are being made?

        The increase in speed is related to the reduction in size.

        For a moment, let's pretend that electricity within a wire travels at the speed of light.

        Now, let's pretend that we wish to carry pulses of electricity from one end of the computer to the other at a very high speed.

        At some point, the distance the signal has to travel will become significant to the speed of the computer.

        This is already happening in PCs. If you take a close look at the motherboard in your computer, chances are you'll see weird places where the traces just zig-zag back and forth (notice the angles on them, that's not by accident either, but I'm not going to try to explain a fourth-year university course in microwave and RF design here). These zig-zags add length to the traces so that they have the same length as other traces within the same bus, and all the signals on that bus arrive at the same time. Think of them as being "equal length headers", if you're into the throb of a big-block V8.

        Length of interconnecting wires is non-trivial at this point. Stray capacitance and inductance caused by any conductor are non-trivial at this point. As a result, a terrific limiting factor to the speed of a computer is now its size.

        Power consumption is also related. Modern ICs are made of millions of MOSFET transistors which behave as switches. These switches are not perfect: during the transition between a logic high and a logic low, the transistors spend time in the linear state where they are resistive. As a result, they waste energy as heat.

        Stray capacitance and inductance - even within the junctions of the transistors themselves - slow their ability to switch instantaneously. As a result, they must be made as small as possible to reduce capacitance (C) and inductance (L).

        This also explains why newer generations of a processor can run faster than their predecessors: smaller and smaller features on the IC mean less stray C and L, which means that the transistors can switch states faster, which means that they spend less time in the linear state and therefore heat up less. This means less energy wasted as heat.
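        A quick back-of-the-envelope sketch of the distance argument (the sizes, clock rates, and the ~0.6c trace speed below are illustrative assumptions, not figures from the article):

        ```python
        # Signal propagation delay vs. clock period: why physical size limits speed.
        # All numbers here are illustrative assumptions.
        c = 3.0e8                # speed of light in vacuum, m/s
        signal_speed = 0.6 * c   # rough signal speed in a copper trace (~0.5-0.7c)

        for clock_hz, distance_m, label in [
            (1.0e9, 0.5,  "across a 0.5 m motherboard at 1 GHz"),
            (1.0e9, 0.02, "across a 2 cm chip at 1 GHz"),
            (700e6, 10.0, "across a 10 m multi-rack machine at 700 MHz"),
        ]:
            period_ns = 1e9 / clock_hz
            delay_ns = distance_m / signal_speed * 1e9
            print(f"{label}: clock period {period_ns:.2f} ns, "
                  f"one-way wire delay {delay_ns:.2f} ns "
                  f"({delay_ns / period_ns:.1f} clock cycles)")
        ```

        Even at these idealized speeds, a signal that has to cross the room costs tens of clock cycles, which is exactly why shrinking the machine buys raw speed as well as floor space.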

      • Ask yourself: why do you need a supercomputer?

        Answer: To do a very sophisticated simulation that would be too difficult or costly to conduct in real life.

        But if the supercomputer is so expensive to purchase and maintain, it might be easier and cheaper to use CAD and rapid prototyping to make a few doo-dads and knock them into each other for real, as an example.

        So if the supercomputers can't scale with the rest of computing or manufacturing, then no one will buy them (no one who doesn't want to get fired
      • Re:Priorities.. (Score:3, Insightful)

        by pmz ( 462998 )
        Why do we need to have small, power-efficient supercomputers?

        It brings them into reach of small engineering firms and university engineering, science, and math departments.

        Imagine if supercomputing goes the way of the PC: affordable and ubiquitous to those who want them. It is arguable that today's gigaflops CPUs are already supercomputers, but I guess people are always striving for more.
      • Computer, what's the weather going to be like today?

        [PROCESSING]

    • By making a smaller supercomputer you're most probably adding the potential to house lots of them together, essentially getting more TF per square metre. However, this raises the question of whether they can survive the heat.

      The other difference and potential problem when compared to a cluster is that in a cluster, if one machine fails, there are usually measures to just knock that one machine out of the network and carry on .... with smaller and smaller machines we are posed the problem that if this '30 inch TV' sized u
    • Re:Priorities.. (Score:5, Interesting)

      by sporty ( 27564 ) on Monday November 24, 2003 @10:38AM (#7547857) Homepage
      Depends.

      Do you need to find the cure for cancer via simulations faster or do you need to send a machine up on a 747?

      Different needs, different solutions.
    • Re:Priorities.. (Score:3, Insightful)

      by Andorion ( 526481 )
      Their new Blue Gene family of super computers is meant to be 6 times faster, consume 1/15 of the power and be 1/10 the size of current models

      If it's 1/10 the size and 1/15 the power and it's still faster, then we can stick 15 of them in a room, get the same power consumption, and have the larger, "much faster" computer you're looking for. This seems like a win-win direction to go in, for IBM.

      ~Berj
      • 6 times faster, consume 1/15 of the power and be 1/10 the size of current models

        You know, being 6-times faster at 1/10 the size is actually being 60-times faster, IMO. It's definitely win-win.
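        The arithmetic behind that "60 times" reading, taking the summary's ratios at face value (this is performance per unit of space and per watt, not raw speed):

        ```python
        # Performance density from the summary's headline ratios, taken at face value.
        speedup = 6          # "6 times faster"
        size_ratio = 1 / 10  # "1/10 the size"
        power_ratio = 1 / 15 # "1/15 the power"

        perf_per_volume = speedup / size_ratio  # performance per unit of space
        perf_per_watt = speedup / power_ratio   # performance per watt
        print(f"{perf_per_volume:.0f}x performance per unit of space, "
              f"{perf_per_watt:.0f}x performance per watt")  # 60x and 90x
        ```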

    • Re:Priorities.. (Score:5, Informative)

      by David McBride ( 183571 ) <david+slashdot AT dwm DOT me DOT uk> on Monday November 24, 2003 @11:14AM (#7548235) Homepage
      Space and heat dissipation are becoming very serious limiting factors in the scalability of supercomputing clusters.

      In theory, you could just keep adding more and more nodes to an existing system, and as long as your interconnects were good enough, you could scale.

      But in practice energy consumption (and getting rid of the waste heat afterwards) will hit you before you can get much further than we are today. The Big Mac G5 cluster in VA, for example, required custom cooling systems because conventional aircon units simply couldn't handle the load.

      As a result, IBM's work is *vital* for making faster supercomputers -- and the improvements they're claiming are very impressive indeed.
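      To put the heat problem in concrete terms, here's a rough conversion from rack power to cooling load (the rack count and the 20 kW per rack figure are assumptions for illustration, not numbers from the article):

      ```python
      # Rough cooling-load estimate for a cluster room.
      # All input numbers are illustrative assumptions.
      racks = 25
      kw_per_rack = 20.0                    # assumed power draw per rack
      total_kw = racks * kw_per_rack

      btu_per_hr = total_kw * 3412          # 1 kW is about 3,412 BTU/hr
      tons_of_cooling = btu_per_hr / 12000  # 1 ton of cooling = 12,000 BTU/hr
      print(f"{total_kw:.0f} kW of IT load -> {btu_per_hr:,.0f} BTU/hr "
            f"-> ~{tons_of_cooling:.0f} tons of air conditioning")
      ```

      Essentially every watt the nodes draw has to be pumped back out of the room, which is why power efficiency and physical density end up being the same problem.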
    • Size and wastefulness != faster.
      One of today's problems is that our chips are very inefficient. Think about Intel and AMD using 80-100 watts of power, while Transmeta's new 1.3 GHz chip uses only 7 watts. It is possible to build a parallel system using these and have the system be cheaper to build and run than with Intel/AMD.
      While this is going to be a real killer in terms of speed, it will hopefully make it more profitable for IBM as more companies will be able to afford these.
    • Sort of doesn't work that way. The shorter the wires, the faster the signal. The smaller the switch, the faster it flips. Etc.

      There are complications, but that's the general procedure. If you made a pentium the size of an IBM 360, it would probably be slower than a 360. (I'm assuming that you just scaled everything up...switches, path lengths, etc.)

      The real trick is when they get something small enough and organized enough that they can mass produce them. Then the price starts dropping too. This pr
    • But... does it play Ogg?

      Somebody needs to ask, and it may as well be me. I leave the obligatory Wolfpack question for others (I'm not greedy, after all).

  • Finally! (Score:5, Funny)

    by clifgriffin ( 676199 ) on Monday November 24, 2003 @10:17AM (#7547711) Homepage
    My mom wouldn't let me have one because they take up so much space!

    Clif

    Blogzine.net [blogzine.net]
    Fortress of Insanity [homeunix.org]
  • Scale and costs (Score:5, Interesting)

    by Space cowboy ( 13680 ) on Monday November 24, 2003 @10:21AM (#7547747) Journal
    So, how long will it be before these become commoditised for SMEs?

    Something that fits into the space of a 30" TV set (how about dimensions, guys ?) is presumably about half to 1/3 a standard rack in a co-lo. 2 Teraflops of processing power ought to be able to comfortably shift the bottleneck to the bandwidth, even for database-orientated sites ...

    I think people's cost expectations are going to be significantly impacted by the size of this - if it's small, it must be cheap, right ? (wrong, but try telling them...)

    Fantastic achievement, btw, kudos to the man in blue :-)

    Simon
    • Re:Scale and costs (Score:5, Informative)

      by stevesliva ( 648202 ) on Monday November 24, 2003 @10:25AM (#7547773) Journal
      No exact dimensions, but there are some photos here [ibm.com].
      • But who cares really what the outside looks like? Really. Couldn't the 3d artist do a cutaway or something? They have pictures of the prototype but there's no sense of scale or anything. I'm not saying it will, but form should not dictate function. And I don't really care for that slanting theme.
    • If you read the article, it says: "even though it occupies a mere half-rack of space". So that'd be 21U, or 21 x 1.75 in: roughly 36.75 in x 19 in x ~20 in (depending on its depth).
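      The rack-unit arithmetic, for anyone who doesn't live in a machine room (a sketch; the half-rack figure is from the article, the 42U full rack is the usual assumption):

      ```python
      # Rack units (U) to physical size. 1U = 1.75 inches of height in a 19-inch rack.
      full_rack_u = 42               # typical full rack (assumption)
      half_rack_u = full_rack_u // 2
      height_in = half_rack_u * 1.75
      print(f"Half rack = {half_rack_u}U = {height_in} in tall x 19 in wide "
            f"(depth varies by rack, roughly 20-36 in)")
      ```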
    • Databases are extremely IO limited, and always have been. There are relatively few floating point operations done in typical database applications. This would be pretty much useless for them.

      A gigantic RAID 1+0 however...
  • by LeninZhiv ( 464864 ) * on Monday November 24, 2003 @10:21AM (#7547749)
    Yes, this is amazing--but I predict that in ten more years computers will be twice as fast, ten thousand times larger, and so expensive that only the five richest kings in Europe will be able to afford one.
  • by Vyce ( 697152 ) on Monday November 24, 2003 @10:24AM (#7547770)
    I'm awaiting a supercomputer affordable by a small business...something in the top 100 for $30-$60k...then I'll be impressed. Otherwise, it makes no difference to me as I will never get to play with one. *sigh*
    • by mrtroy ( 640746 ) on Monday November 24, 2003 @10:27AM (#7547789)
      If you could make something top 100 for 30-60k, it wouldn't be top 100 for long. Because then other people would pay 200k for something twice as fast.

      You can either choose price, or speed, but not both. So do you want something for 30-60k? Or do you want something top 100?

      Your small business should take some economics :) Then maybe you wouldn't be so small anymore. Maybe you are choosing the price AND quantity you are selling...

    • Then your wait should be about 2-4 years, 10 if you want it for $2K or under

      I recently did a search of top500.org [top500.org] which has specs back to June 1993 [top500.org] and up to June 2003 [top500.org]

      BTW WHO THE HELL BROKE top500.org [top500.org]!?!? This site used to be easy to use and informative; now it is a banner ad hell that obscures the info you used to be able to get to easily, with many broken links and apologies for works in progress.

      Anyway I digress, the point is that in 1993 the fastest computer was the TMC at Los Alamos with GigaFl

  • by awb131 ( 159522 ) on Monday November 24, 2003 @10:27AM (#7547794)

    If you read the press release, they claim that previous 2 teraflop machines fill up entire rooms, with more than a dozen racks. I'm not so sure this is the case: for instance, Apple claims 798 gigaflops to a rack with the Xserve [apple.com]; by my reckoning that works out to needing 2.5 racks to get 2 teraflops. And that's just with dual 1.3 GHz G4 CPUs; I'd imagine there is an upcoming Xserve rev featuring dual 2.0 GHz G5's.

    Don't get me wrong, it's still an impressive achievement (especially if it really uses as little power as claimed).
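    Checking the parent's numbers (using only the figures quoted in the comment itself):

    ```python
    # Racks of Xserves needed to match 2 teraflops, per the parent's figures.
    gflops_per_rack = 798.0   # Apple's claimed per-rack figure, as quoted above
    target_gflops = 2000.0    # 2 teraflops
    racks_needed = target_gflops / gflops_per_rack
    print(f"{racks_needed:.1f} racks of Xserves for 2 TF")  # ~2.5 racks
    ```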

  • Take your standard technology curve (aka Moore's Law), take any specification/cost point, then move ahead an arbitrary amount of time and, wonder of wonders, it costs less, is smaller, and does more.

    Yes, one day supercomputers will fit into your wristwatch! What's more, they already do! If you use an ancient measure from, say, 50 years ago.

    It's very disappointing to see technology always reduced to whizz-bang figures that are in fact meaningless. What about the impact on our society? What about the capability for good and for bad? What do "good" and "bad" mean, anyhow? How do I know I even exist? What does "I" even mean?

    Now, that kind of stuff is worth discussing.

    OK, go ahead and mod me as a troll now, if you can't think of an intelligent answer.

    • This system was designed for low power consumption. I believe the clock speed was intentionally lowered to get a higher speed/power ratio per processor. The reason for this is that power usage over the lifetime of the system can cost more than the hardware. This makes for a lower cost per speed-lifetime unit. They made important technical changes in design.

      What this extra speed is used for is important. But it is a separate issue.
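      A toy total-cost-of-ownership comparison to illustrate the point (every number below is a made-up assumption, just to show the shape of the trade-off):

      ```python
      # Toy lifetime-cost comparison: hardware price vs. electricity over 5 years.
      # All figures are hypothetical.
      hardware_cost = 2_000_000   # dollars
      power_draw_kw = 500.0       # average draw of machine plus cooling
      electricity_rate = 0.08     # dollars per kWh
      years = 5

      hours = years * 365 * 24
      energy_cost = power_draw_kw * hours * electricity_rate
      print(f"Hardware: ${hardware_cost:,.0f}, "
            f"electricity over {years} years: ${energy_cost:,.0f}")
      ```

      Once electricity is in the same ballpark as the purchase price, halving the power draw is worth as much as a large discount on the hardware.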
    • I is a word, a metaphysical handle for a concept. That concept is your "self." Now of course, you can't actualize or understand the true nature of your "self" because all that you are, your very intelligence, is imprinted with the imperfection of language. Language has, implicit to it, definitions that are both imprecise and immutable.

      "I", then, is a word referring to the reflection of your "self" as seen through the lens of the world you've been conditioned to accept.

      "I" must exist, however, because with
    • It's very disappointing to see technology always reduced to whizz-bang figures that are in fact meaningless. What about the impact on our society? What about the capability for good and for bad? What do "good" and "bad" mean, anyhow? How do I know I even exist? What does "I" even mean?

      I will provide the answers to the important and serious questions posed above; for I believe in providing light where there is darkness...

      What about the impact on our society? Nothing. It's just a computer dude!

      What ab

    • What does "I" even mean?

      italics?
  • A true super is when you throw all your resources into a computer and make something as fast as possible. These are typically space and power consumption hogs. After a new line of supers has been investigated for a while, a slightly slower but much more resource-friendly version is produced. By the time the 2-teraflop Blues ship, frontier supercomputing will be in the 100-teraflop range.
    We went through this process in the late 1980s with the cray-clones, crayettes, etc. You got like a fifth of the power of a Cr
  • SHOCK! (Score:5, Funny)

    by RMH101 ( 636144 ) on Monday November 24, 2003 @10:39AM (#7547868)
    computers get smaller/faster!

    In other news, the price of petrol increases.

  • I just checked out the pictures of this machine. Since when do 5 racks equal a 30-inch TV? Or even, for that matter, ONE rack?
  • by Pedrito ( 94783 ) on Monday November 24, 2003 @10:48AM (#7547945)
    I'm not a big fan of super computers. I mean, it's kind of cool, but to me, it's just throwing a whole bunch of computers at the problem, more or less.

    That being the case, why aren't distributed apps considered as part of the supercomputer list? I mean, SETI@Home has got to be far and away #1 in terms of computing power. Granted, it's not in 1 integrated piece of hardware, and Berkeley doesn't own all the hardware, but I still think these things ought to be considered, at least to make it more realistic about who actually has the most computing power.

    Just my little rant.
    • by roystgnr ( 4015 ) <roy AT stogners DOT org> on Monday November 24, 2003 @12:25PM (#7548854) Homepage
      That being the case, why aren't distributed apps considered as part of the Super Computer list?

      Most of the tasks you pick a supercomputer for aren't things you can cut up into a thousand chunks and let every computer finish its chunk of the problem independently. In particular, the benchmarks (LINPACK) that determine who goes where on that supercomputer list generally measure a computer's performance at big linear algebra problems (which are what takes up most of the compute time for huge classes of real problems), and for those problems every node needs to share results with many other nodes after essentially every iteration: this means you need high bandwidth and very low latency connecting the nodes.

      Now, the supercomputer benchmarks may make things worse than they have to be: according to this [top500.org] they're measuring performance on dense matrices (where every node needs to talk to every other node), whereas many real world problems can be discretized into very sparse matrices (where each node only has to talk directly to a few of the others) instead - still, even in the sparse situation you want your computers to be separated by microseconds across your high speed interconnect rather than milliseconds across the low bandwidth internet.
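      A crude model of why latency dominates for this kind of workload (the work sizes, message counts, latencies, and bandwidths below are illustrative assumptions, not benchmark figures):

      ```python
      # Crude per-iteration time model for a distributed iterative solver:
      #   time_per_iteration ~= local_compute + messages * latency + bytes_sent / bandwidth
      # Every number below is an illustrative assumption.
      def iteration_time(local_flops, flops_per_s, n_msgs, msg_bytes, latency_s, bw_B_per_s):
          compute = local_flops / flops_per_s
          comms = n_msgs * latency_s + (n_msgs * msg_bytes) / bw_B_per_s
          return compute, comms

      local_flops = 1e7              # work each node does per iteration (assumed)
      flops_per_s = 2e9              # per-node speed (assumed)
      n_msgs, msg_bytes = 6, 80_000  # sparse case: exchange with ~6 neighbours (assumed)

      for name, latency, bw in [("cluster interconnect", 5e-6, 1e9),
                                ("home machines over the internet", 50e-3, 1e6)]:
          compute, comms = iteration_time(local_flops, flops_per_s,
                                          n_msgs, msg_bytes, latency, bw)
          print(f"{name}: compute {compute*1e3:.1f} ms, "
                f"communication {comms*1e3:.1f} ms per iteration")
      ```

      With milliseconds of latency between nodes, everyone spends almost all their time waiting, which is why embarrassingly parallel jobs like SETI@Home scale over the internet while LINPACK-style solvers do not.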
  • Bad joke... (Score:3, Funny)

    by breon.halling ( 235909 ) on Monday November 24, 2003 @10:53AM (#7547991)

    Would sales tax on these things be called a "Blue Gene Levy"? Hahahaha. Horrible, I know. ;)

  • by rarose ( 36450 ) <rob@roGAUSSbamy.com minus math_god> on Monday November 24, 2003 @10:53AM (#7547998)
    was already having to figure the propagation delay of signals (traveling at near the speed of light) into their large multirack systems. I can only imagine one of the things driving the desire for smaller supercomputers is to speed up the clock by reducing the delay across the physical size of the box.
  • by mantera ( 685223 )

    I submitted this story 10 days ago, November 14, 2003, the day IBM published their press release online, almost verbatim as I quoted the same material, and it was rejected!!

    Not only does this strike me as old news, but its publication now completely baffles me.
    • Here's an idea: (Score:2, Insightful)

      by Ayanami Rei ( 621112 )
      Probably lots of people submitted that article. Slashdot editors, not being complete idiots, had the same reaction as a lot of the posters here, to paraphrase: "Shock! IBM makes smaller, faster, clusterable computer!" So they featured this in a group article on the 14th about a bunch of similar articles.

      Later on, after about 20 more people submitted it, they gave in and posted it directly. They generally credit the person whose submission prompted them to post it, rather than the whole group.
  • gee (Score:2, Redundant)

    by b17bmbr ( 608864 )
    technology is getting faster and better and cheaper. damn. who'da thunk it.
  • by Anonymous Coward on Monday November 24, 2003 @11:01AM (#7548072)
    I work on the project.

    We're packing 1024 compute nodes (each node having two CPU cores) into a rack. The nodes are small and based on the PowerPC 440, with beefed up floating point. It has to be air cooled - water is a PITA.

    The finished machine will still be quite large - 64 racks with miles of cables. And that doesn't count disk drives. There isn't a single disk drive on the thing - the customer provides the filesystem, which will also be another beefy set of machines. It requires a new building.

    The machine featured in the article is just half a rack. It is still respectable, coming in at #73 on top500.org. Might be quite useful for business and small-scale scientific work in its current form. (This is far more than my alma mater had access to.)
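    Working through the numbers in this comment and the summary (the clock rate and flops-per-cycle below are my assumptions based on what has been reported publicly about the PowerPC 440 design, not statements from this poster):

    ```python
    # Back-of-the-envelope peak for the full 64-rack machine.
    # Clock and flops/cycle are assumptions, not from the article.
    racks = 64
    nodes_per_rack = 1024
    cores_per_node = 2
    clock_hz = 700e6     # assumed PowerPC 440 clock
    flops_per_cycle = 4  # assumed: dual FPU pipelines with fused multiply-add

    cores = racks * nodes_per_rack * cores_per_node
    peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
    print(f"{cores:,} cores, ~{peak_tflops:.0f} TFLOPS peak")
    ```

    Which, under those assumptions, lines up nicely with the 360-teraflop figure in the summary.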
    • A question:
      The original articles from AP (news.yahoo.com and many other places) mentioned something about the walls/sides being tilted 17 degrees to speed up airflow. True, or just a hoax?
      Can you comment on that or is this covered by your NDA?

      Thanks in advance.

  • by mustangsal66 ( 580843 ) on Monday November 24, 2003 @11:02AM (#7548078)
    That's one hell of a p0rn server. Now compile apache, and connect to a pair of DS-3s...

    hmm... how many Counterstrike servers will it run at the same time...

    (Note: The above is meant to be foolish and meaningless. Any other interpretation is pure coincidence. The names have been changed to protect the innocent.)

  • Blue Gene (Score:2, Informative)

    by Quiberon ( 633716 )
    You can find most of what you want to know on IBM Research [ibm.com] or US Department of Energy [llnl.gov] (search for bluegene). I think both can survive slashdotting.
  • Difficult to program (Score:3, Informative)

    by nimrod_me ( 650667 ) on Monday November 24, 2003 @11:29AM (#7548392)
    These new supercomputers are all massively parallel systems. They work well for specialized numerical algorithms and were designed with these algorithms in mind.

    It is much more difficult to use them for most applications most of us can think of. For example, VLSI CAD software (simulation/analysis/synthesis) is very compute intensive. However, these systems usually do not even take advantage of the multiple CPUs in a typical general-purpose SMP system. You have to manually partition designs and sometimes lose the advantages of global optimization.

    So don't run and order your new Blue Gene yet :)

  • by Nuclear_Loser ( 663599 ) on Monday November 24, 2003 @11:58AM (#7548633)
    Finally a computer exists that can easily fit in my apartment with enough power to play Doom III at 30 fps.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday November 24, 2003 @11:59AM (#7548640) Homepage Journal
    I think it's hilarious that we still talk in terms of computers taking up a mere half a tennis court. Once upon a time, computers took up an entire room - and they still do.
    • By definition, a supercomputer is a computer or machine that can solve problems an ordinary computer can't solve.

      So you won't see a supercomputer under your desk, simply because as long as there is space it's possible to build a larger computer that does things your computer can't do.

  • by Jeremy Erwin ( 2054 ) on Monday November 24, 2003 @12:00PM (#7548658) Journal
    This set of slides [llnl.gov] compares some of the architecture of the BlueGene/L to other ASCI machines.
  • Old OLD news.... (Score:2, Informative)

    by TheProteus ( 15398 )
    Seems to me there was once a guy named Seymour, who could do the fast *and* small thing quite well. Since we're talking in terms of Televisions...

    Big screen #1 [ugwarehouse.org]

    40" HDTV [ugwarehouse.org] and a A size perspective [ugwarehouse.org]

    We have DEFINITELY been down this road before folks. I don't see why it's so hard to do this, unless you're using COTS components. Hence, the point of "engineering" - not cramming a bunch of stuff in boxes/packages into bigger boxes and packages.
  • by kabocox ( 199019 ) on Monday November 24, 2003 @12:24PM (#7548852)
    This one little computer is small and efficient, and all the waste heat is easily taken care of. Now imagine not just one of these, but a whole building of them. Our heat problem crops right back up.

    IBM knows what it is doing.
  • now just make one that's PDA sized :)
  • Can't compete. (Score:3, Interesting)

    by blair1q ( 305137 ) on Monday November 24, 2003 @12:46PM (#7549040) Journal
    They have no hope of bucking the Earth Simulator and taking the real crown, so they're pretending the rules have changed.
  • by UnknowingFool ( 672806 ) on Monday November 24, 2003 @01:34PM (#7549459)
    Their new Blue Gene family of super computers is meant to be 6 times faster, consume 1/15 of the power and be 1/10 the size of current models.

    While progress is being made in making supercomputers more efficient in terms of power usage and space, the widespread adoption of supercomputers is still really hampered by functionality. The majority of supercomputers are used for modeling, simulations, or code breaking. This limits their usage to academic and government institutions. These breakthroughs only help those kinds of institutions afford a supercomputer. I would think that most businesses have little use for that kind of raw computing power. Their computing bottlenecks are more related to transactions per unit time as opposed to calculations per unit time.

    • So you say people who don't need a supercomputer don't need a faster supercomputer. You are really bright :)

      Businesses don't have computing bottlenecks; they only have IO and disk bottlenecks. Even the 700k tpm machines with 70+ RAID channels don't have 100% CPU load...
  • I'm more than half-way serious, though. Just TRY to imagine a Beowulf cluster of these. (Yes, I know it's already a cluster. So are the individual cell columns within your brain.)

    I do think that this is the wrong approach for the long term, but for now...

    What I see for the long term is some chip maker implementing a complete Beowulf node on a chip. And using a Beowulf bus for the connection lines...though you might design it so that it could link directly to "nearest neighbors" to the four sides (cor
  • in green please.
  • The supercomputer thing is cool and useful and everything, but what I'm really waiting for is someone to bring true 'intelligence' to computers. Despite all the progress that has been made in the last 50 (?) years of computing, our present-day machines can only be described as truly gifted idiot savants. They can blaze through a list of instructions faster than ever before, but are helpless in assigning any meaning to those instructions, or learning from them.

    For example say you have 2 graphics packages install
