IBM's New Processors To Exceed 5GHz
Jordin Normisky writes to mention the news, via ZDNet Asia, that IBM's new Power6 processor will be unveiled next month at a conference in San Francisco. They're also planning to announce a second-generation Cell, both of which are expected to run faster than 5GHz. From the article: "In addition, the [Power6] chip 'consumes under 100 watts in power-sensitive applications,' a power range comparable to mainstream 95-watt AMD Opteron chips and 80-watt Intel Xeon chips. Power6 has 700 million transistors and measures 341 square millimeters, according to the program. The smaller that a chip's surface area is, the more that can be carved out of a single silicon wafer, reducing per-chip manufacturing costs and therefore making a computer more competitive. Power6, like the second-generation Cell, is built with a manufacturing process with 65-nanometer circuitry elements, letting more electronics be squeezed onto a given surface area. "
avoiding the obvious? (Score:5, Insightful)
Why don't they seem to be making any kind of performance comparisons? Talking about physical size and power consumption compared to Intel and AMD is great, but it seems weird that there's no mention of real-world performance against those same competitors. Even a rough estimate would be interesting.
Size matters (Score:5, Insightful)
Boy, howdy, are you out of the loop. I work on those suckers, and believe you me, the chip cost is not trivial.
Do the math: the cost of a 300 mm wafer in a 65 nm process runs well over $5000 (exactly how much is a Deep Dark Secret). Ignoring geometric yield loss, that's about 70,000 mm² of potential dice per wafer. If one chip is 350 square mm, you're getting about 200 per wafer, or $25 per chip in fab cost. Yield also drops off steeply with die size: think in terms of losing ten to twenty dice per wafer regardless of die size, which hits large dice proportionally harder, and that adds to the fab cost too.
That's the bare minimum, assuming there aren't any bad lots, etc. It adds up fast.
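If you want to play with those numbers, the same back-of-the-envelope math fits in a few lines of Python. The $5000 wafer cost is the guess above (the real figure being that Deep Dark Secret), and edge dice and yield loss are ignored:

    import math

    wafer_cost = 5000.0     # dollars; "well over $5000" is a guess
    wafer_diameter = 300.0  # mm
    die_area = 350.0        # mm^2, roughly a Power6-sized die

    wafer_area = math.pi * (wafer_diameter / 2) ** 2  # ~70,686 mm^2
    gross_dies = int(wafer_area // die_area)          # ~201, ignoring edge loss
    cost_per_die = wafer_cost / gross_dies            # ~$25 before yield loss

    print(gross_dies, round(cost_per_die, 2))         # 201 24.88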
Re:Macintoshes (Score:5, Insightful)
The issue is that IBM makes supercomputers and Motorola makes cell phones, and they design their chips accordingly. Apple, making neither of these things, couldn't persuade either of them to build a low-power, fast, cheap CPU suitable for a laptop and keep updating it for such a small market. Intel, on the other hand, spends most of its engineering effort trying to solve exactly this problem, so its business interests are aligned with Apple's. IBM and Motorola didn't really care about Apple at all, and would happily spend their R&D money on designing things like this chip instead of making a G5 that would fit in a laptop.
Re:65 nm hardly to brag about (Score:2, Insightful)
Fair enough.
But do these chips come with 32MB of L3 cache, have the fastest Fibre Channel bus interconnect on the market, and allow for extremely flexible, true hardware virtualization across multiple OS platforms?
Performance comparisons between x86 and RISC chips are, in my opinion, really not valid. What you really want to look at is system workload. Scalability is where the POWER chips really perform, and these chips are designed for the high-end server market.
see for yourself [ibm.com]
Re:And here I thought... (Score:3, Insightful)
AMD is all about the platform now. That's why they purchased ATI. It's about bringing the CPU, GPU, and other specialized processors together using a fast, flexible bus (HyperTransport).
AMD is also about low cost. Remember that current Athlon 64 CPUs have about half as many transistors as their Core 2 Duo counterparts. CPU + GPU + northbridge in a single chip (AMD Fusion) will have a huge impact in the low-end market.
The fact is, 90% of the time, CPU performance doesn't matter anymore. Most applications are either disk-bound or user-input-bound now. The exceptions are media encoding/decoding (at the high end), scientific/technical computing (CAD/CAM, simulation, etc.), and gaming.
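If you want to sanity-check that claim on your own machine, a crude approach is to time a CPU-heavy loop against a disk-heavy one. This is only a sketch: the file name and sizes are made up, and OS page caching can easily make the disk side look faster than it really is.

    import time

    def cpu_bound(n=10**7):
        # pure computation, no I/O
        total = 0
        for i in range(n):
            total += i * i
        return total

    def disk_bound(path="scratch.bin", size=100 * 2**20):
        # write and re-read 100 MB; the page cache may absorb most of this
        with open(path, "wb") as f:
            f.write(b"\0" * size)
        with open(path, "rb") as f:
            f.read()

    for name, fn in (("cpu", cpu_bound), ("disk", disk_bound)):
        start = time.perf_counter()
        fn()
        print(name, round(time.perf_counter() - start, 3), "s")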
Re:I'll bet apples pissed. (Score:3, Insightful)
Let's see IBM actually roll out those babies, and look at what yields they get, how cool they really run, and in what ways the design has suffered to reach those kinds of clock speeds.
Re:And here I thought... (Score:1, Insightful)
Big endian was around first!
Converting back and forth continuously causes problems for hardware and firmware people. Almost all protocols, and the embedded processors on adapters, are big endian. At least in English with Arabic numerals, when you write a number in hex on a piece of paper, it is big endian. When you look at a big-endian memory dump, it matches what is on the paper. When you look at a little-endian memory dump, there's an extra, unneeded layer of complexity: you have to do the conversion on a per-field basis. When 2- and 4-byte fields are mixed, it can be a PITA. So I think we should have stuck with everything being big endian.
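To make the per-field headache concrete, here's a small Python sketch; the 0x12345678 value and the 2-byte/4-byte header layout are just made-up examples:

    import struct

    value = 0x12345678

    print(struct.pack(">I", value).hex())  # 12345678 -- big endian, matches the paper
    print(struct.pack("<I", value).hex())  # 78563412 -- little endian, bytes reversed

    # Mixed 2- and 4-byte fields: each field has to be swapped separately.
    wire = struct.pack(">HI", 0xBEEF, value)  # big endian, like most wire protocols
    host = struct.pack("<HI", 0xBEEF, value)  # little-endian host memory
    print(wire.hex())  # beef12345678
    print(host.hex())  # efbe78563412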
Re:And here I thought... (Score:3, Insightful)
Killer app? Well, nothing, unless it takes full advantage of the capabilities of the system and cards. Perhaps Blender will become the killer app for 3D modeling when it gets native support or a plugin. Who knows? It all depends on what the programmers/management/company wants to support.