IBM Hardware

IBM Unveils Fastest Microprocessor Ever

adeelarshad82 writes "IBM has revealed details of its 5.2-GHz z196, the fastest microprocessor ever announced. The chip, which will cost hundreds of thousands of dollars, will power IBM's Z-series of mainframes. The z196 contains 1.4 billion transistors on a chip measuring 512 square millimeters, fabricated on 45-nm PD SOI technology. It contains a 64KB L1 instruction cache, a 128KB L1 data cache, a 1.5MB private L2 cache per core, plus a pair of co-processors used for cryptographic operations. IBM is set to ship the chip in September."
  • by bobdotorg ( 598873 ) on Thursday September 02, 2010 @08:08AM (#33447848)

    The chip uses 1,079 different instructions

    Can't even imagine writing in assembly code for this monster. I miss dinking around with a nice 6502 system.

  • by Spad ( 470073 ) <`slashdot' `at' `spad.co.uk'> on Thursday September 02, 2010 @08:09AM (#33447850) Homepage

    Yes, but their article comments are much closer to Youtube than Slashdot.

  • by the linux geek ( 799780 ) on Thursday September 02, 2010 @08:15AM (#33447914)
    except possibly in clock speed. I'm fairly sure that an 8-core 4.25GHz Power7 is as fast or faster if the workload is properly threaded, which any enterprise server or mainframe workload should be. On the other hand, on single-thread or few-thread workloads, the z196 probably has a bit of an edge, despite a large portion of its instruction set being microcoded.
  • by MichaelSmith ( 789609 ) on Thursday September 02, 2010 @08:24AM (#33448000) Homepage Journal

    "But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do."

    I'm betting the code used on these z196 systems is multi-threaded. Shit, if you're paying hundreds of thousands of dollars per CPU you can afford some top notch programmers.

    Actually I think this mainframe is for getting the last little bit of performance out of thirty year old cobol code. And the original top notch programmers are long dead.

  • by asliarun ( 636603 ) on Thursday September 02, 2010 @08:39AM (#33448126)

    "The thing is that if you have 2 (say) 1.6 GHz processors, they aren't as 'powerful' as one 3.2 GHz processor."

    "For one - there are overheads, certain stuff common between them, pipelines - stuff which I forgot (computer engineering related problems)."

    "But the main thing is that not all programs are multi-threaded, and a program with a single thread can only run on one processor. So yeah, GHz are still useful. Maybe for large single-thread batch processing - which is the kind of thing a mainframe would do."

    OK, firstly the OP should have said that this is the microprocessor with the highest clock speed. Calling it the fastest CPU is extremely misleading. In most modern CPUs, clockspeed is NOT related to throughput. The Intel Sandy Bridge or Nehalem CPU for example may be running its 4 cores at a clockspeed of 3.2GHz but overall, each core in the CPU is easily 4-5 times faster than a 3.2GHz Pentium4 core.

    Secondly, many of the bottlenecks that you allude to are no longer major bottlenecks. CPU interconnect bandwidth and memory bandwidth are now large enough that this is no longer an issue - the days of FSB saturation are over. Of course, there are exceptions to every rule, but I mean this for most workloads.

    Yes, you are correct as far as single threaded workloads are concerned. Nonetheless, you cannot even compare two different CPUs on a clockspeed basis, especially those with completely different architectures, even for single threaded workloads. IBM may have created a very highly clocked CPU and given it tons of transistors, but I seriously doubt if it will compete with a modern day server CPU from Intel or even AMD (pure performance maybe, but definitely not price-performance or performance-per-watt). I strongly suspect that it will probably succeed because of its RAS features, overall system bandwidth, and platform, not because of its raw clockspeed or performance.
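
    The single-thread argument running through this sub-thread is essentially Amdahl's law; below is a minimal Python sketch of the arithmetic, where the 80% parallel fraction is purely an illustrative assumption, not a measured workload:

    ```python
    # Amdahl's law: effective speedup from n cores when only a fraction p
    # of the work can run in parallel. All numbers are illustrative.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.80                                # assumed parallel fraction
    two_slow = 1.6 * amdahl_speedup(p, 2)   # two 1.6 GHz cores
    one_fast = 3.2 * amdahl_speedup(p, 1)   # one 3.2 GHz core

    print(f"2 x 1.6 GHz effective rate: {two_slow:.2f}")  # ~2.67
    print(f"1 x 3.2 GHz effective rate: {one_fast:.2f}")  # 3.20
    ```

    Only as p approaches 1.0 do the two slower cores catch up, which is why clock speed still matters for the single-threaded batch jobs mentioned above.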

  • Wait....what? (Score:3, Insightful)

    by antifoidulus ( 807088 ) on Thursday September 02, 2010 @08:41AM (#33448146) Homepage Journal
    "It contains a 64KB L1 instruction cache, a 128KB L1 data cache, a 1.5MB private L2 cache per core, plus a pair of co-processors used for cryptographic operations. In a four-node system, 19.5 MB of SRAM are used for L1 private cache, 144MB for L2 private cache, 576MB of eDRAM for L3 cache, and a whopping 768MB of eDRAM for a level-four cache. All this is used to ensure that the processor finds and executes its instructions before searching for them in main memory, a task which can force the system to essentially wait for the data to be found--dramatically slowing a system that is designed to be as fast as possible."

    I'm assuming the cache referred to in the second paragraph is off-chip cache, otherwise it would sort of negate the first sentence... Would be nice if the article had actually said that, though.
  • by Anonymous Coward on Thursday September 02, 2010 @08:54AM (#33448264)

    More or less. They hit two walls - fabricating chips that could run faster while retaining an acceptable yield, and dealing with the heat such chips produced.

    The highest-clocked general-sale chips were the P4s - the end of their line marked the end of the gigahertz wars, as Intel switched from ramping up the clock to ramping up per-cycle efficiency with the Core 2 and a complete architecture overhaul. As a result a 2GHz Core 2 Duo will outperform a 4GHz dual-core P4 under most conditions: better pipeline organisation, and larger, better-managed caches.

    Clock rate is no longer the key variable in comparing processors, unless they are of the same microarchitecture.
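
    That per-cycle-efficiency point boils down to throughput being roughly IPC times clock rate; a small sketch, with the IPC figures made up purely for illustration:

    ```python
    # Throughput ~ instructions per cycle (IPC) * clock rate (GHz).
    # The IPC values below are illustrative assumptions, not published figures.
    def throughput_gips(ipc: float, ghz: float) -> float:
        return ipc * ghz  # billions of instructions per second

    p4_style = throughput_gips(ipc=0.5, ghz=4.0)     # deep pipeline, low IPC
    core2_style = throughput_gips(ipc=1.5, ghz=2.0)  # wider core, higher IPC

    print(p4_style, core2_style)  # 2.0 vs 3.0: the lower-clocked chip wins
    ```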

  • Re:Wait....what? (Score:3, Insightful)

    by Anonymous Coward on Thursday September 02, 2010 @09:08AM (#33448454)

    Considering the ratio between the two sets of figures is ~96, it seems that the "four-node system" contains 96 cores with their own L1 and L2 caches, but shared L3 and L4 caches.
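
    Taking the article's cache figures at face value, the division is easy to check; a quick sketch using only the numbers quoted above:

    ```python
    # Per-core cache sizes vs. the four-node totals from the article, in KB.
    l1_per_core = 64 + 128        # KB: L1 instruction + L1 data
    l2_per_core = 1.5 * 1024      # KB: private L2

    l1_total = 19.5 * 1024        # KB: L1 across the four-node system
    l2_total = 144 * 1024         # KB: L2 across the four-node system

    print(l2_total / l2_per_core)  # 96.0  -> exactly 96 cores
    print(l1_total / l1_per_core)  # 104.0 -> a bit above 96
    ```

    The L2 numbers divide out to exactly 96; the L1 total comes out a little higher, so "approximately 96" is about as precise as the article's figures allow.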

  • by mickwd ( 196449 ) on Thursday September 02, 2010 @09:33AM (#33448850)

    "clockspeed is NOT related to throughput"

    Of course it is. It is not, however, the only factor, and other factors may indeed (and commonly do) outweigh it.

    "IBM may have created a very highly clocked CPU and given it tons of transistors, but I seriously doubt if it will compete with a modern day server CPU from Intel or even AMD."

    I think you underestimate IBM's technical ability. They do have some idea of what they're doing.

    "pure performance maybe, but definitely not price-performance or performance-per-watt"

    That's like saying a Ferrari is a poor performance car because it can't compete against a Ford Focus on cost-per-max-speed or miles-per-gallon.

  • by Jeremy Erwin ( 2054 ) on Thursday September 02, 2010 @09:57AM (#33449292) Journal

    It's quad core. 24 MB of L3 Cache, and 96 MB of L4 Cache.
    source [theregister.co.uk]

  • by Anonymous Coward on Thursday September 02, 2010 @11:25AM (#33451168)

    Mainframes are engineered fundamentally around two things: Reliability and IOPS.

    When it comes to basic tasks, it isn't often that a large server ends up CPU bound (especially database servers). Instead what usually becomes the bottleneck is I/O and RAM.

    Reliability is where mainframes take the cake. Some use multiple CPUs to execute the same instructions to make sure the output is correct. Mainframes have redundant versions of virtually everything. Because they have been doing VM since the dawn of computing, an LPAR might occasionally need to be kicked, but a full IPL of a mainframe is exceedingly rare.

    IBM System z machines are on one end of the spectrum. They cost an arm and a leg, but if someone has a lot of 1U servers or even blades, it might be better to just dump the rackfuls of those machines and go with some big iron and LPARs. The TCO of a machine isn't just the price tag of the box, nor the licenses or service fees. One factor people forget is how many admins are needed to keep things going. Some companies are far better off with a mainframe and some Linux admins as opposed to rackfuls of Windows machines that require an army of MS-ITPs to keep running.

    Believe it or not, mainframes have advanced along with the times. They have always been reliable and boring. COBOL is long gone except for way legacy stuff. Instead, you still have Oracle, WebSphere, JBoss, and many other behind-the-scenes applications which are not flashy, but are business critical.

    Mainframes also come with their own viewpoint. On one hand, a company can buy enough x86 servers with clustering, redundancy, failover capability, and other items to bring the downtime of those servers to an acceptable level. On the other hand, a company can pay the ticket to the System z series and have one machine with an extremely high MTBF and less need of an HA cluster. Even with all the clustering and redundancy of x86 machines, there is only so much lipstick you can put on a pig before it turns into an oinking ball of wax, so if someone wants to go the x86 route, it will require a lot more employees to keep things running.
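
    The reliability trade-off in that last paragraph can be made concrete with a toy availability calculation; every figure below is an illustrative assumption, not a vendor number:

    ```python
    # Availability = MTBF / (MTBF + MTTR). All figures are made up for illustration.
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        return mtbf_hours / (mtbf_hours + mttr_hours)

    big_iron = availability(mtbf_hours=500_000, mttr_hours=4)  # one very reliable box
    one_x86 = availability(mtbf_hours=20_000, mttr_hours=8)    # a single commodity server

    # A two-node failover pair is only down when both nodes are down at once
    # (ignoring failover time and correlated failures, which matter in practice).
    ha_pair = 1 - (1 - one_x86) ** 2

    print(f"mainframe:   {big_iron:.6f}")
    print(f"single x86:  {one_x86:.6f}")
    print(f"x86 HA pair: {ha_pair:.6f}")
    ```

    Under these made-up numbers the HA pair looks fine on paper, which is the parent's point: the real difference shows up in the number of boxes, and the number of people, needed to get there.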

  • Re:Required (Score:3, Insightful)

    by MobileTatsu-NJG ( 946591 ) on Thursday September 02, 2010 @11:51AM (#33451692)

    [Comment terminated : memelock detected]

    If Slashdot ever gets this working I'll instantly subscribe.
