IBM Hardware

IBM Unveils Fastest Microprocessor Ever 292

adeelarshad82 writes "IBM revealed details of its 5.2-GHz chip, the fastest microprocessor ever announced. Costing hundreds of thousands of dollars, the z196 will power IBM's Z-series of mainframes. The z196 contains 1.4 billion transistors on a chip measuring 512 square millimeters, fabricated on 45-nm PD SOI technology. It contains a 64KB L1 instruction cache, a 128KB L1 data cache, and a 1.5MB private L2 cache per core, plus a pair of co-processors used for cryptographic operations. IBM is set to ship the chip in September."
This discussion has been archived. No new comments can be posted.

  • Required (Score:4, Funny)

    by Anonymous Coward on Thursday September 02, 2010 @06:59AM (#33447772)

    But will it run ... a Beowulf cluster of ...

    [Comment terminated : memelock detected]

  • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Thursday September 02, 2010 @07:01AM (#33447790) Journal

    So what is this beast supposed to be, a 64 core machine?

    Didn't we retire the GHz wars 5 years ago? I know, AMD-style "more done per cycle", but isn't a quad-core 3.1 GHz chip, even with 20% coordination overhead, faster?

    • by Haedrian ( 1676506 ) on Thursday September 02, 2010 @07:05AM (#33447818)

      The thing is that if you have 2 (say) 1.6 GHz processors, they aren't as 'powerful' as one 3.2 GHz processor.

      For one, there are overheads - resources shared between them, pipeline effects - stuff which I've forgotten (computer engineering problems).

      But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do.
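
      To put rough numbers on that last point, here is a minimal Amdahl's-law sketch in Python; the parallel fractions are illustrative, not z196 measurements:

      ```python
      # Amdahl's law: overall speedup from n cores when only a fraction p
      # of the work can run in parallel. Illustrative numbers only.
      def amdahl_speedup(p, n):
          return 1.0 / ((1.0 - p) + p / n)

      # A half-serial workload barely benefits from piling on cores...
      print(amdahl_speedup(0.50, 64))  # ~1.97x on 64 cores
      # ...while a faster clock speeds up the serial part as well.
      print(amdahl_speedup(0.95, 64))  # ~15.4x when 95% is parallel
      ```

      Past some core count, the only way to speed up the serial fraction is a faster core - which is where a 5.2GHz part earns its keep.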

      • But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do.

        I'm betting the code used on these z196 systems is multi-threaded. Shit, if you're paying hundreds of thousands of dollars per CPU you can afford some top notch programmers. With two co-processors used for cryptographic operations per chip I'd say they were after a bigger prize than, say, hardcore gamers ;-)

        BTW, TFA mentions L1 cache per core but doesn't mention how many cores this chip scales up to. Could it be just one?

        • Re: (Score:3, Interesting)

          by Carewolf ( 581105 )

          BTW, TFA mentions L1 cache per core but doesn't mention how many cores this chip scales up to. Could it be just one?

          It later mentions using 128Mbyte just for level 1 cache, so that would be around 1024 cores.

        • Re: (Score:3, Insightful)

          But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do.

          I'm betting the code used on these z196 systems is multi-threaded. Shit, if you're paying hundreds of thousands of dollars per CPU you can afford some top notch programmers.

          Actually I think this mainframe is for getting the last little bit of performance out of thirty-year-old COBOL code. And the original top notch programmers are long dead.

          • by mickwd ( 196449 )

            Actually I think this mainframe is for getting the last little bit of performance out of thirty-year-old COBOL code. And the original top notch programmers are long dead.

            Considering that life expectancy in the developed world is in the region of 80 years, there is a reasonable chance that programmers who were under 50 when they wrote code thirty years ago are still alive.

            They may have little recollection of what they did 30 years ago, but to say they are all "long dead" is something of an exaggeration.

            • I've never met a programmer over 50. I must therefore conclude that they all perish mysteriously upon their 50th birthday. Something like the planet of grim reapers from Futurama is how I prefer to envision it.

          • More processors = Share the Legacy.

        • Shit, if you're paying hundreds of thousands of dollars per CPU

          You aren't. FTA, the complete systems will cost hundreds of thousands of dollars, up to a few million. Not the individual CPUs.

          • by bws111 ( 1216812 ) on Thursday September 02, 2010 @08:02AM (#33448362)

            When configured to run Linux, each core costs approx $125K. When configured for z/OS, each core costs approx $250K. A complete system (not including any storage or software) can cost up to around $30M.

            • by hitmark ( 640295 )

              And all the hardware will be there no matter what package you choose, and an "upgrade" will involve an IBM representative coming over to move a jumper.

              • Re: (Score:3, Informative)

                by Jeremy Erwin ( 2054 )

                Actually, IBM can upgrade mainframes over the internet. It can also downgrade them, if the lessee so chooses. The extra chips are used for failover.

        • Shit, if you're paying hundreds of thousands of dollars per CPU you can afford some top notch programmers.

          If you're paying hundreds of thousands of dollars for a multi-GHz CPU then it's probably because you're trying to make up for the product of crap programmers, not the other way round.

          • by AHuxley ( 892839 )
            product of crap programmers
            Sorry to ask, but who does IBM see using this?
            At this price point, and with data sets that need sorting - do you go with cheaper clusters, or more expensive, faster, unique chips, depending on the math?
            • Sorry to ask, but who does IBM see using this?

              People with legacy mainframe programs that they don't want to port (translation: that they don't dare touch).

            • by LWATCDR ( 28044 ) on Thursday September 02, 2010 @09:26AM (#33449882) Homepage Journal

              Banks, credit card companies, hospitals, insurance companies...
              Cheap clusters are great, but they are not always the best tool for the job.
              Very large traditional datasets involving lots of high-value transactions, with 5 9s uptime requirements, do not tend to scale well to COTS clusters.
              IBM mainframes have uptimes measured in years if not decades.
              They have hot-swappable everything, including CPUs, so you can do upgrades with zero downtime.
              Also you need to take a look at the costs involved. The cost to throw out a working software system that has been used for decades, and then the cost to redesign it to work on a cluster of x86 boxes, will be huge.
              Not to mention the investment in making it fault tolerant and, if it is used in certain markets, the cost of auditing the software.
              Not to mention that zSystems tend to be really secure. There are just not a lot of exploits on zSystems.

              When downtime can cost millions of dollars, hardware costs are just not that big of a deal.
              Now if you are starting from scratch you may save money by going with a cluster - but then again you may not, depending on just how good your programmers are.
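
              For reference, here is what a "5 9s" requirement works out to, as a quick back-of-the-envelope Python calculation:

              ```python
              # Downtime budget per year for a given availability percentage.
              SECONDS_PER_YEAR = 365.25 * 24 * 3600

              for nines in (99.9, 99.99, 99.999):
                  downtime_min = SECONDS_PER_YEAR * (1 - nines / 100) / 60
                  print(f"{nines}% uptime -> {downtime_min:.1f} min of downtime/year")

              # 99.9%   -> ~526 minutes (almost 9 hours)
              # 99.99%  -> ~52.6 minutes
              # 99.999% -> ~5.3 minutes
              ```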

        • by cgenman ( 325138 )

          They say it's an old CISC architecture. This is probably the sort of system that runs horribly outdated and un-updatable code, like the tax system.

          • by LWATCDR ( 28044 ) on Thursday September 02, 2010 @09:50AM (#33450448) Homepage Journal

            "They say it's an old CISC architecture. This is probably the sort of system that runs horribly outdated and un-updatable code, like the tax system."
            You mean like Windows?
            The X86 is also an old CISC architecture.

            Actually the Power line is RISC anyway. When it is used in a ZMachine the old style 360/370/390 CISC ISA is translated to RISC and then executed.
            Before you go "ew", that is what modern x86 chips do, as well as ARM when using the Thumb instruction set. The zSystem ISA is so high-end it is almost a high-level language, so the translation doesn't really affect performance much at all. Also, that old CISC architecture is much better than the mess that we have on x86.
            I am not sure how IBM does the translation. On the System/38, AS/400, and System i, the translation was done during the IPL, aka Initial Program Load. On the Zs it may be done as a JIT, but I am not sure.
            Honestly I love the idea and wish that Linux would adopt it. You could then have one binary that would work on any Linux system on any CPU.
            The AS/400 way kept a native binary copy along with the TIMI copy. When the program was run the first time it would translate the TIMI copy into the native segment. Yes, the first time you ran the program it might take a bit to start, but after that it would run at full speed and start fast. Of course you could also add a binary segment for the ISA of your choice when you first released the code.

            All in all, those old mainframes and minis had a lot of brilliant tech we still don't have today on our PCs.
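
            A toy sketch of that translate-once idea in Python; the file layout and the translate() step are hypothetical stand-ins, not how TIMI actually worked:

            ```python
            import os

            def translate(portable_path):
                # Hypothetical stand-in for the portable->native translation pass.
                with open(portable_path, "rb") as f:
                    return f.read().upper()  # pretend this emits native code

            def load_program(portable_path):
                # Translate only on first use, then cache the "native segment"
                # next to the portable copy, AS/400-style.
                native_path = portable_path + ".native"
                if not os.path.exists(native_path):   # first run: slow start
                    with open(native_path, "wb") as f:
                        f.write(translate(portable_path))
                with open(native_path, "rb") as f:    # later runs: full speed
                    return f.read()
            ```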

            • Re: (Score:3, Informative)

              by TheRaven64 ( 641858 )

              The X86 is also an old CISC architecture.

              Actually x86 is a new CISC architecture. The System/360 architecture predates it by well over a decade. x86 was about the last CISC ISA to be developed outside of a few tiny niches.

              Actually the Power line is RISC anyway. When it is used in a ZMachine the old style 360/370/390 CISC ISA is translated to RISC and then executed

              Umm, no. POWER is RISC (well, RISC purists would say that's stretching the point), but POWER and System/z are completely unrelated. The POWER6 and z10, and POWER7 and this chip, were designed by cooperating teams, so they share some execution units, but they are very different architectures. This is not a POWER CPU running a S

      • Re: (Score:2, Insightful)

        by asliarun ( 636603 )

        The thing is that if you have 2 (say) 1.6 GHz processors, they aren't as 'powerful' as one 3.2 GHz processor.

        For one, there are overheads - resources shared between them, pipeline effects - stuff which I've forgotten (computer engineering problems).

        But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do.

        OK, firstly the OP should have said that this is the microprocessor with the highest clock speed. Calling it the fastest CPU is extremely misleading. In most modern CPUs, clockspeed is NOT related to throughput. An Intel Sandy Bridge or Nehalem CPU, for example, may run its 4 cores at a clockspeed of 3.2GHz, but each core is easily 4-5 times faster than a 3.2GHz Pentium 4 core.

        Secondly, many of the bottlenecks that you allude to are no longer major bottlenecks. CPU interconnect bandw

        • by mickwd ( 196449 ) on Thursday September 02, 2010 @08:33AM (#33448850)

          "clockspeed is NOT related to throughput"

          Of course it is. It is not, however, the only factor, and other factors may indeed (and commonly do) outweigh it.

          "IBM may have created a very highly clocked CPU and given it tons of transistors, but I seriously doubt if it will compete with a modern day server CPU from Intel or even AMD."

          I think you underestimate IBM's technical ability. They do have some idea of what they're doing.

          "pure performance maybe, but definitely not price-performance or performance-per-watt"

          That's like saying a Ferrari is a poor performance car because it can't compete against a Ford Focus on cost-per-max-speed or miles-per-gallon.

          • That's like saying a Ferrari is a poor performance car because it can't compete against a Ford Focus on cost-per-max-speed or miles-per-gallon.

            I doubt that IBM mainframes suffer from the equivalent of engine fires. [jalopnik.com]

          • Re: (Score:3, Interesting)

            by asliarun ( 636603 )

            "clockspeed is NOT related to throughput"

            Of course it is. It is not, however, the only factor, and other factors may indeed (and commonly do) outweigh it.

            You took my comment out of context. I was responding to the original post that focused purely on clockspeed as a magic mantra. What you say is only true if you are talking about a clock speed increase within the same microarchitecture, ceteris paribus. Making a blanket claim that we have the fastest CPU because we have clocked it at 5GHz means nothing. I could overclock a P4 to 5GHz using exotic cooling and my laptop would still probably beat it in terms of performance.

            I think you underestimate IBM's technical ability. They do have some idea of what they're doing.

            Of course they do. I wasn't talking trash ab

          • But the Ferrari has the Focus beat all to hell in the important blowjobs-per-dollar category.
            • Re: (Score:3, Funny)

              Not saying I'd recommend it, but if that's the measure you want to use then I'd say a cheap clunker could probably beat the both of them*.

              *Take the rest of the cash you would have spent buying either one and spend that on blowjobs.

      • Yup... there are so many dependencies on application and OS code that hardware capability matters very little.

        I recently tried to tune a workload on a pSeries system. We gave it half a processor and 2 virtuals (with the Power version of hyperthreading so it saw 4 processors). Performance was a dog. Load was only 60% of capacity though. We doubled the number of virtual processors but kept the overall entitlement. Load dropped to 40%. Added another couple virtuals and load dropped to 25%. No increase in thro

    • by dsavi ( 1540343 )
      I was wondering about this - why did the GHz wars end, anyway? Did the chip makers hit a wall or something? At the rate it was going, I thought we'd have 5GHz+ processors by now.
      Yeah, I'm uninformed.
      • by Anonymous Coward on Thursday September 02, 2010 @07:54AM (#33448264)

        More or less. They hit two walls - fabricating chips that could run faster while retaining an acceptable yield, and dealing with the heat such chips produced.

        The fastest general-sale chips were the P4s - the end of their line marked the end of the gigahertz wars, as Intel switched from ramping up the clock to ramping up per-cycle efficiency with the Core 2 and a complete architecture overhaul. As a result a 2GHz Core 2 Duo will outperform a 4GHz P4 dual-core under most conditions: better pipeline organisation, and larger, better-managed caches.

        Clock rate is no longer the key variable in comparing processors, unless they are of the same microarchitecture.
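
        To make the arithmetic explicit: throughput is roughly instructions-per-cycle times clock. The IPC figures below are rough illustrations, not measured values:

        ```python
        # perf ~ IPC x clock, in billions of instructions/sec (very roughly)
        def perf(ipc, ghz):
            return ipc * ghz

        p4    = perf(0.8, 4.0)  # deep NetBurst pipeline: high clock, low IPC
        core2 = perf(2.0, 2.0)  # Core 2: half the clock, far higher IPC
        print(p4, core2)        # 3.2 vs 4.0 -- the lower-clocked chip wins
        ```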

        • by gorzek ( 647352 ) <gorzek.gmail@com> on Thursday September 02, 2010 @10:25AM (#33451170) Homepage Journal

          Yeah, it's kind of funny how today's Intel desktop processors trace their lineage to the Pentium M, which was a mobile chip. When the Pentium 4 came around, the Pentium Pro (Pentium II, Pentium III) architecture was pretty much relegated to the mobile market while the Pentium 4 represented their desktop line. As you said, they ran into heat (and power) issues with the Pentium 4s and basically had no more room for expansion there. They went back to the Pentium M, which was doing pretty nicely in the notebook space, and since it was low-power and efficient it became the basis for their future desktop CPUs - the Core line, in particular. They just stopped playing up the clock speed because that architecture's clock speeds were substantially lower than the Pentium 4's, despite being able to do more work. I read once that a Pentium M could do about 40% more work than a Pentium 4 of the same clock, so in essence a 2GHz Pentium M was about as powerful as a 2.8GHz P4.

          Switching everything over to the low-power and parallel-friendly Pentium M line is probably one of the smartest things Intel ever did. They would've dug their own grave had they stuck with building on Pentium 4 to the bitter end.

        • Re: (Score:3, Informative)

          by knarf ( 34928 )

          Clock rate is no longer the key variable in comparing processors, unless they are of the same microarchitecture.

          Clock rate has *never* been the key variable in comparing processors. Even back in the heady days of 1 MHz 6502/6510 vs 4 MHz Z80 the comparison was useless - the 6510 does way more per cycle than the Z80 and ends up being comparable speed-wise.

      • There's also the problem of feeding such a monster processor and keeping it synced up with the rest of the machine. On top of that, servers tend to cope better with more cores than with faster ones after a certain point, which is presumably well before 5GHz. Since servers are typically more concerned with large numbers of connections, chances are that a quad-core running at 2GHz would outperform a single 5GHz core; scale that up as needed to the number of cores. Of course freq
    • by JamesP ( 688957 )

      You actually can go faster without x86 bogging you down

    • IBM BlueGene/L - runs at 700 MHz ... 596 TFLOPS

      Cray XT5 - runs at 2.6 GHz ... 2331 TFLOPS

      Both of these are slower in Hz than the PC I am using to type this ...

    • Re: (Score:3, Insightful)

      by Jeremy Erwin ( 2054 )

      It's quad-core: 24MB of L3 cache and 96MB of L4 cache.
      source [theregister.co.uk]

  • Price: RTFA (Score:5, Informative)

    by miketheanimal ( 914328 ) on Thursday September 02, 2010 @07:03AM (#33447802)
    The Z-series mainframes cost hundreds of thousands (or even over a million) dollars, not the chips. As it says in the article.
    • Re: (Score:2, Informative)

      by jtollefson ( 1675120 )
      They're very expensive, but for Enterprise-scale workloads they're cheaper than the comparable distributed system. The cost entirely depends on how many cores you're running and, more importantly, your monthly usage. IBM bills you for your Iron based on an average of how much you used it that month. There's a reason why mainframes run so quick and fast: they're the only system where all processing, from user ISPF interaction all the way to data processing, is tracked. All that processing turns into your fi
  • by account_deleted ( 4530225 ) on Thursday September 02, 2010 @07:03AM (#33447808)
    Comment removed based on user account deletion
    • by fuzzyfuzzyfungus ( 1223518 ) on Thursday September 02, 2010 @07:33AM (#33448072) Journal
      The PowerMac G6 would be pretty impressive. The PowerBook G6 manual would include the following phrase:

      "Please note: The revolutionary new MagsafePro 3-Phase/480 power connector is not backwards compatible with the Magsafe connectors of prior, non-containerized Mac Portables."
    • Re: (Score:3, Informative)

      Unfortunately this chip will most likely go into workstations and servers. In order for IBM to make a desktop version, it will have to make a custom chip to handle things like video, sound, etc. This will lead to the same logistical problems for Apple that it had before. Manufacturing companies do not want to keep excess inventories, whether it was Apple or IBM. If Apple needs more, it will have to wait while IBM rearranges their manufacturing schedules to compensate. Also even if Apple orders millions of t
      • by BrentH ( 1154987 )
        IBM uses HyperTransport as the interconnect, right? That would imply that you can slap any old AMD chipset onto such a chip, which has all the desktop features you need.
      • by splutty ( 43475 )

        Uhm...

        We're talking about Z-series mainframes. These are absolute beasts, with all the cooling, memory and processing speed that would leave a desktop in the dust without any problems whatsoever.

        However putting this sort of hardware in a desktop is extremely prohibitive for a ton of reasons, one of the most important being cooling. You'd need a room just for that...

    • by TheRaven64 ( 641858 ) on Thursday September 02, 2010 @08:21AM (#33448646) Journal

      Wrong chip family. This is the Z-series mainframe chip, using an instruction set that is backwards compatible with the System/360 stuff from back in 1964 (the architecture of the future, as the marketing material trying to persuade my university to upgrade their IBM 1620 put it). The PowerMacs were using PowerPC chips, which use the same instruction set as the POWER CPUs from IBM (they used to be similar, with a common subset; now they are identical).

      The chip that this is replacing, the z10, was designed concurrently with the POWER6. They share a number of common features, including a lot of the same execution engines (both have the same hardware BCD units, for example, as well as more common arithmetic units), but they are very different in a number of other aspects, including the instruction set, cache design, and inter-processor interconnect, because they are designed for different workloads.

      I've not read much about this chip yet, but I think it shares some design elements with the POWER7, in the same way that the z10 did with the POWER6.

      In short, while some of the R&D money spent on this CPU made it into chips that could, potentially, run OS X, this chip itself could not without a major rewrite.

    • It's not a PowerPC chip anyways. It's zSystem architecture, which is actually the modern-day descendant of what was originally the System/360.

  • But it will be obsolete by the end of the month.
  • IBM defines the z196 as one of the few remaining CISC chips, which allows for bulky, large programs that can require much more memory to execute in than RISC chips, including the PowerPC and ARM embedded processors, among others.

    For CISC you need more bytes per instruction, because there are more instructions. With RISC your executable has more instructions but they each use less storage.

    I am not sure I believe their implication that CISC is better for humongous commercial applications. Sounds like marketing-speak aimed at management to me.

    • Essentially all desktop and laptop computers use CISC chips, and they are fast and cheap. RISC is a neat theory, but these days, as processors get decoupled from their ISAs for various reasons, it doesn't seem to matter much. You choose the ISA for reasons of binary compatibility or features or the like, and it'll work just fine with the chip.

      Also it is not true that CISC needs more bytes per instruction, at least not all implementations. With x86 you find instructions are variable leng

    • Actually, CISC uses less memory in general, but has traditionally been slower. CISC CPUs came out when memory was extremely expensive relative to CPU speed; cheaper memory is what made RISC (with its larger footprint but faster speed) possible. Nowadays it really doesn't matter much, and CISC is probably better now that memory bandwidth is the big bottleneck. However, our CISC designs are not exactly modern; if you were to do a modern CISC design you would probably end up with something more akin to ARM's

    • CISC and RISC are marketing terms that incorporate a lot of loosely connected design elements. Most CISC architectures use variable-length instruction encodings. On x86, for example, a number of common instructions are a single byte, while the longest ones are 15 bytes. A RISC architecture typically has fixed-length instructions, typically either 4 or 8 bytes (although ARM chips tend to also support Thumb and Thumb-2 instruction sets which use a 2-byte encoding).

      This is why x86 chips need smaller inst
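
      A back-of-the-envelope illustration of that density difference; the instruction mix is invented for the example:

      ```python
      # Hypothetical 1000-operation hot path, encoded two ways.
      # Variable-length x86-style: many common 1-3 byte instructions.
      x86_bytes = 400 * 1 + 400 * 3 + 200 * 6  # short, medium, long encodings
      # Classic fixed-length RISC: simpler ops mean more instructions,
      # at 4 bytes each.
      risc_bytes = 1300 * 4

      print(x86_bytes)   # 2800 bytes -- denser, so smaller I-caches suffice
      print(risc_bytes)  # 5200 bytes
      ```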

    • This is a mainframe; it is designed to run many virtual machines, natively.

      Memory in this context is cheap, fast and readily available - all the things it is not on most RISC systems.

      If the chips are designed properly (which, with IBM, they will be) then for the tasks they are designed for they can be faster ... for other tasks they may well be slower.

  • by bobdotorg ( 598873 ) on Thursday September 02, 2010 @07:08AM (#33447848)

    The chip uses 1,079 different instructions

    Can't even imagine writing in assembly code for this monster. I miss dinking around with a nice 6502 system.

    • The chip uses 1,079 different instructions

      Can't even imagine writing in assembly code for this monster. I miss dinking around with a nice 6502 system.

      Yeah the 6502 is nice and friendly. I taught myself how to hand assemble on the 6502 when I was 12 or 13.

    • I miss dinking around with a nice 6502 system.

      Start playing with ARM then; its design was somewhat inspired by the 65xx series, and there are plenty of affordable ARM-based systems available.

      • by sznupi ( 719324 )

        Or something "lower" among many popular microcontroller families. AVR is quite pleasant, for example.

    • These days, compilers take care of almost everything. Things have gotten complex to the extent that a programmer trying to do it all in assembly will probably do a worse job than a good compiler. Chips have many, many tools to solve their problems.

      That isn't to say it is never done, in some programs there may be some hand optimized assembly for various super speed critical functions. However even then it is most likely written in a high level language, compiled to assembly (you can order most compilers to do

      • Be that as it may, there are also a few of us who simply enjoy the art of assembler. Sue me; I'm a romantic.
  • Announcing a 5.2 GHz, 1.4 billion-transistor processor at "Hot Chips 2010" just makes sense. Strangely, no power numbers were given...

  • Intel's NetBurst architecture (of Pentium 4 fame) featured the 'Rapid Execution Engine', which consisted of two ALUs running at double the clock speed; on a 3.8 GHz Pentium 4, that would be 7.6 GHz.

    Granted, that is not the entire CPU, but still...

  • Those slackers, where's my 3GHz G5? Huh?

    *sigh* FINE, begin the switch back....


    -Steve

    Sent from my iPad 2
  • My 386DX has an external maths coprocessor => it can only do floating-point functions :(
    However, mine's now a bit faster - I overclocked it from 33MHz to 52MHz ... yours does what, 5.2GHz? -> Surely my M series supersedes your G series... right? ...
    right?

  • It's crazy that an architecture developed in the '60s lives on in System z today. IBM bet the company on the S/360 product line. I think the investment has paid off - and still does!

  • Wait....what? (Score:3, Insightful)

    by antifoidulus ( 807088 ) on Thursday September 02, 2010 @07:41AM (#33448146) Homepage Journal
    It contains a 64KB L1 instruction cache, a 128KB L1 data cache, a 1.5MB private L2 cache per core, plus a pair of co-processors used for cryptographic operations. In a four-node system, 19.5 MB of SRAM are used for L1 private cache, 144MB for L2 private cache, 576MB of eDRAM for L3 cache, and a whopping 768MB of eDRAM for a level-four cache. All this is used to ensure that the processor finds and executes its instructions before searching for them in main memory, a task which can force the system to essentially wait for the data to be found--dramatically slowing a system that is designed to be as fast as possible.

    I'm assuming the cache referred to in the second paragraph is off-chip cache; otherwise it would sort of negate the first sentence... It would have been nice if the article had actually said that, though.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Considering the ratio between the two sets of figures is ~96, it seems that the "four-node system" contains 96 cores with their own L1 and L2 caches, but shared L3 and L4 caches.
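
      Working the same ratio through with the summary's per-core numbers (taking the quoted figures at face value):

      ```python
      # Private L2 is 1.5MB per core; the four-node figure is 144MB of L2.
      cores = 144 / 1.5
      print(cores)  # 96.0 cores across four nodes

      # Cross-check against L1: (64KB I-cache + 128KB D-cache) per core.
      l1_total_mb = cores * (64 + 128) / 1024
      print(l1_total_mb)  # 18.0 -- close to the quoted 19.5MB of L1 SRAM
      ```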

  • 64KB L1 (Score:2, Funny)

    by dmomo ( 256005 )

    That ought to be enough instruction cache for anybody.

  • The codename for this processor was "Ming Mecca".

    -Laxitive

  • Really, this article kind of makes all of last week's comments about the speed of light limiting processors to 3GHz a bit pointless, doesn't it? I know the discussions were correct in principle, but this just goes to show that such problems can be engineered around.
    • by Ecuador ( 740021 ) on Thursday September 02, 2010 @08:09AM (#33448456) Homepage

      The comments were about the fact that at 3GHz light travels 10cm per clock cycle, which limits how far apart you can have 2 items on a bus if you want them to communicate within 1 clock cycle. There is no "light speed barrier" or anything of the sort; however, at these frequencies you design knowing that it will take measurable time for an electric signal to propagate. For example, for this particular system whose core runs at 5.2GHz, if you try to send a signal to an external memory that is, say, 11-12cm away, it will take about two clock cycles just for the signal to travel that distance.
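
      The numbers, for anyone who wants to check them; note this is an upper bound, since signals in silicon and copper propagate slower than c:

      ```python
      # How far light travels in one clock cycle at a given frequency.
      C = 299_792_458.0  # speed of light, m/s

      for ghz in (3.0, 5.2):
          cm_per_cycle = C / (ghz * 1e9) * 100
          print(f"{ghz} GHz: {cm_per_cycle:.1f} cm per clock cycle")

      # 3.0 GHz: 10.0 cm per clock cycle
      # 5.2 GHz: 5.8 cm per clock cycle -- so memory ~11-12cm away is
      # about two cycles one-way, as described above.
      ```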

    • A lot of nonsense was spoken in that thread, but the issue is real. The time taken for light to travel is not yet a problem, but the skew is. Most communication between parts of a chip is parallel. If the connections are not precisely matched then signals arrive at slightly different times, and the clock period cannot be shorter than the worst-case skew, or signals stop arriving in the same time slice. A similar limit also affects fibre optics, due to total internal reflection causing paths taken

  • If you go to the IBM announcement, which describes the system in more detail than the linked article - http://www-03.ibm.com/press/us/en/pressrelease/32414.wss [ibm.com] - it says of the new zEnterprise 196: "From a performance standpoint, the zEnterprise System is the most powerful commercial IBM system ever. The core server in the zEnterprise System -- called zEnterprise 196 -- contains 96 of the world's fastest, most powerful microprocessors, capable of executing more than 50 billion instructions per second. That's rough
  • How much would it cost for me to put together a system with the same computing power, using off-the-shelf products, like a Xeon chip, or something? How long would it take for me to save $1 million in electricity, or whatever?

  • ...my trusty VIC-20 was clocked at a mere 1.02 MHz.

    It's truly amazing how far we've come in so short a time.

    (Well, maybe not so short for you whippersnappers...)

  • Bad Golf? (Score:3, Funny)

    by thogard ( 43403 ) on Thursday September 02, 2010 @10:20AM (#33451030) Homepage

    Had a golf game ended differently, would we be seeing these in Power Macs?

  • by FreeBSD evangelist ( 873412 ) on Thursday September 02, 2010 @11:37AM (#33452676)
    From TFA:

    IBM also previously claimed the title of fastest microprocessor with the POWER6 chip, which ran at speeds of up to 4.6 to 4.7 GHz, and its own z10, a 2008 chip which ran at speeds of up to 4.4 GHz.

    I seem to recall that one of the official reasons Apple gave for the switch from Power to Intel was that IBM couldn't/wouldn't deliver a fast enough processor.
