34 Design Flaws in 20 Days of Intel Core Duo
Pray_4_Mojo writes "Geek.com is reporting that Intel's errata (bug) documentation shows that the Intel Core Duo chip has 34 known issues found in the 20 days since the launch of the iMac Core Duo (you can read the list), with plans to fix only one of them. While bugs in hardware are nothing new (the P4 has 64 known issues, and at this time Intel does not plan to fix a single one), this marks one of the first times that Intel has released a processor with known bugs, and some of the bugs are of higher severity than in the past. Also alarming is the rate at which the flaws have been found: about one and a half per day since the launch of the iMac Core Duo."
Up front (Score:5, Interesting)
Does anyone know.... (Score:3, Interesting)
Jaysyn
Statistics (Score:3, Interesting)
Re:AMD errata (Score:3, Interesting)
Why is this listed under Apple? (Score:1, Interesting)
Re:20 days? (Score:2, Interesting)
Re:One MAJOR flaw (Score:3, Interesting)
Mac PowerBooks and G5s are WIDELY used as THE computers for editing film on. The new MacBook does not properly run Final Cut Pro 4, one of the biggest names in editing software. BIG mistake, Apple, big mistake.
Ummm, because some company is going to run out and buy new machines right away and expect the software to have been ported, even though anyone who follows either video editing or Apple news knows Final Cut Pro was announced to be ported in March? Do people really use iMacs for pro video editing? I'd think they would be going with towers, which work fine now and will likely not be Intel-based before March, or with PowerBooks, which won't ship until February, only a month before Final Cut Pro is ported. The only people who might get burned by this are the clueless.
Time for article moderation. (Score:3, Interesting)
I want to be able to moderate articles for depth, due diligence, and bias.
This one's going to sit at top level for quite some time, pulling in everyone until they read the comments and discover they shouldn't have bothered.
Re:Faster? Or under pressure from Apple? (Score:3, Interesting)
What was the (newsworthy or not) bug-per-CPU-per-release count BEFORE switching to Intel? What happened to all that new-fangled "chip simulation" stuff? Seems if this errata is not just typos and such, then the SIMulation needs some STIMulation to be more useful.
I wonder if AppTel did a "test design" before the Apple side of the house went to market. As for "finding the bugs faster", I am wondering if Apple found them and told Intel, "Fix 'em or we go back to IBM, even if IBM charges more money to take us back -- but you can be sure we won't pay YOU over flaws we specced to be avoided...", assuming Apple could foresee and document what to avoid.
As for Intel being "more honest", heck, I am willing to assume Apple has a better branding position than Intel, and Apple is not going to stand for Intel using its mammoth inventory and factory count to roll over people. Any heavy computer user -- particularly Mac users who make money by USING their computers in small businesses -- will not tolerate Intel chips if things don't turn around.
And, finally, I imagine Jobs will do a war dance on Intel if they think they can get away with fixing only ONE bug. But if they are firm on fixing only ONE "BUG", then maybe they have refunds, refurbs, exchanges, chip swaps... and/or a new chip in the pipeline...
Re:Up front (Score:3, Interesting)
Re:A flawed design kept alife. (Score:5, Interesting)
Ah. Ok. So then -- do these "known better archtectures [sic]" have no bugs then? Significantly fewer bugs? Are the bugs less severe? And how do they compare to the Intel/AMD architectures in terms of speed? I can assure you that I can make a chip that is 100% bug free -- it's also going to run somewhere in the vicinity of the original 8008.
Frankly, I doubt you know all that much about the real ISA that Intel or AMD execute on their cores. The x86 instructions are never executed directly -- they're translated into an internal-only ISA that doesn't look anything even vaguely like the x86 ISA.
I'm so sick and tired of all these kids out of college whining about the x86 ISA. And yeah, I was there once too. But you know what? That decrepit, horrible, ghastly ISA has outlasted every single competitor, has been upgraded from 8 bits to 64 bits without losing backwards compatibility, and runs far, far faster than every chip that's tried to take away the title. And costs less. Intel's proven the doom 'n' gloom wrong every time -- including with their latest transition off the NetBurst architecture. AMD has as well (I give Intel props because for decades they were the only real designers for the x86 ISA; AMD is pretty much responsible for the latest incarnation as x86-64, though).
If you look at any of the modern chip architectures, none of them fall nicely and neatly into "CISC" or "RISC". The Power architecture is awfully CISC-like in some ways. The x86 (the classic CISC) doesn't use a complex ISA internally; it has pipelining, branch prediction, caching, etc. -- all classic RISC subsystems that were never supposed to work on CISC. Everyone is multi-core now (to various extents).
The x86 architecture isn't going anywhere. If anything Apple's move should've reinforced this concept -- the fact of the matter is that Intel spends more in R&D than every other (general purpose) chip maker on the planet. Combined. And sells their product for less. That kind of R&D budget makes up for a lot of paper shortcomings.
Welcome to the real world.
Re:3 Reasons (Score:3, Interesting)
My problem with the iPod versioning then becomes "the serial number on the thing is too damned hard to read".
Even on my ThinkPads -- yes, there is a "ThinkPad R-31" label, but that is hardly enough when you need detailed technical support; that is why there is easily available "real" type information (e.g. 2656-MU5) when you get down to technical support.
Given the R&D costs... (Score:4, Interesting)
You also have to bear in mind that designs are modular and have limited connections, so N transistors is not a meaningful number -- you should only be concerned with the number of modules and the number of interconnects. (e.g., a 32-bit register will obviously take more transistors than an 8-bit register, but both are simply cut-and-paste copies of a 1-bit register. So long as you have the 1-bit form correct, there is no increase in complexity no matter how wide the register becomes.)
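To illustrate that cut-and-paste point in software terms, here is a toy Python sketch (the BitCell and Register classes are invented for illustration, not real RTL): once the one-bit cell is verified, any width built by replicating it adds no new logic to check.

```python
class BitCell:
    """A single-bit storage element: the only logic that needs verifying."""
    def __init__(self):
        self.value = 0

    def write(self, bit):
        self.value = bit & 1

    def read(self):
        return self.value


class Register:
    """An N-bit register is just N copies of the verified 1-bit cell."""
    def __init__(self, width):
        self.cells = [BitCell() for _ in range(width)]

    def write(self, value):
        for i, cell in enumerate(self.cells):
            cell.write((value >> i) & 1)

    def read(self):
        return sum(cell.read() << i for i, cell in enumerate(self.cells))


r8, r32 = Register(8), Register(32)   # same cell, different widths
r32.write(0xDEADBEEF)
assert r32.read() == 0xDEADBEEF
```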
As for the interconnects -- if you have N modules, you have an upper limit of N! possible interactions, if you can string any possible combination together. That's a big number, even for small values of N. But most of those don't exist. You cannot generally feed the output of one operation directly into the input of another. There are some special cases where there is a chain of events, but it is not something you can program with total freedom. Many operations just produce a result which is pushed back into the registers. Thus, N modules will produce only a little more than N interactions of interest. That is a much more manageable number.
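Putting rough numbers on that (a back-of-the-envelope sketch; the module count N = 12 is an arbitrary assumption):

```python
from math import factorial

N = 12                 # hypothetical module count
print(factorial(N))    # 479001600 -- orderings if any chain of modules were legal
print(N * (N - 1))     # 132 -- even allowing every pairwise connection is tiny by comparison
print(N)               # ~N interactions of interest, per the argument above
```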
Then you need to consider that processors aren't "open floor plan". They are highly segmented. The term "floating point unit" literally does refer to a definable segment of the chip that is designed for floating point work. Again, from the standpoint of reliability, you can test each unit independently before doing an integrated test, so unit tests don't need to concern themselves with overall complexity or the number of other units out there.
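The same idea in software testing terms (a minimal Python sketch; the FPU and Chip models and their operations are made up for illustration): each unit is exercised alone before the integrated test ever runs.

```python
class FPU:
    """Toy model of a floating point unit, testable in isolation."""
    def add(self, a, b):
        return a + b

    def mul(self, a, b):
        return a * b


class Chip:
    """Integration level: wires verified units together."""
    def __init__(self):
        self.fpu = FPU()

    def fma(self, a, b, c):
        return self.fpu.add(self.fpu.mul(a, b), c)


def test_fpu_unit():
    # Unit test: exercises the FPU alone, ignoring every other module.
    fpu = FPU()
    assert fpu.add(1.5, 2.5) == 4.0
    assert fpu.mul(3.0, 2.0) == 6.0


def test_chip_integration():
    # Integration test: only needed once each unit has passed on its own.
    assert Chip().fma(2.0, 3.0, 1.0) == 7.0


test_fpu_unit()
test_chip_integration()
print("all tests passed")
```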
Next up is the cost of a recall. Recalls are expensive. From a pure profit standpoint, you want to spend less on QA than you'd spend on a recall, but the less you spend on QA, the more you are likely to end up spending on that recall. The ideal is to reduce the number of potentially serious bugs to the point where any further initial clean-up will cost more than the money lost in cleaning up afterwards. Less QA than that will cost more than it saves. More QA than that will also cost more than it saves, unless it expands the market (i.e., the chip becomes good enough to be used in mission-critical systems such as life-support or fly-by-wire systems), but is sometimes good to do anyway for PR reasons.
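That trade-off can be sketched as a simple expected-cost minimization. All the figures below are invented assumptions, purely for illustration: QA cost grows linearly with effort, and the chance of a recall-triggering bug escaping decays exponentially as QA effort rises.

```python
from math import exp

RECALL_COST = 500e6          # hypothetical cost of a full recall, in dollars

def expected_total(qa_effort):
    qa_cost = 10e6 * qa_effort          # dollars spent on validation (assumed linear)
    p_recall = exp(-0.5 * qa_effort)    # probability a serious bug escapes (assumed)
    return qa_cost + p_recall * RECALL_COST

best = min(range(40), key=expected_total)
print(best, expected_total(best))   # effort level that minimizes expected total cost
```

Below that effort level, the expected recall bill dominates; above it, extra QA costs more than the risk it removes -- exactly the "ideal point" described above.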
Finally, not all transistors are "important". Once you know the cache algorithm works, the actual cache memory is irrelevant -- memory is rarely implemented "incorrectly", it doesn't "do" anything (the active part is the algorithm), it's just heap.
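A software analogy (a minimal Python sketch with a made-up LRUCache class): the replacement policy is the part that can be buggy, while the backing store just holds whatever it is given.

```python
from collections import OrderedDict

class LRUCache:
    """The replacement *policy* is the part worth verifying;
    the backing store is passive -- "just heap"."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # passive storage

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)      # policy: mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # policy: evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
assert cache.get("b") is None   # "b" was least recently used, so it was evicted
```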
With modern software verification tools, chip validation suites and the high level of understanding of microelectronics, an average of one bug for every four or five instructions is high. I would consider a chip with a third as many bugs to be only just acceptable for home use, and a thirtieth as many for operations in which any significant number of people would be put at risk. The extra cost would be minimal (compared to all the other costs) and would still be much less than the cost to Intel of the Pentium divide bug or to Transmeta of the flaws in their initial Crusoe chips.
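For a rough check of those ratios (the ~150-instruction count for the core x86 set is a loose assumption on my part; 34 is the errata count from the story):

```python
errata = 34
instructions = 150             # assumed rough size of the core instruction set

print(instructions / errata)   # ~4.4 -> roughly one bug per four or five instructions
print(round(errata / 3))       # ~11 errata: the "just acceptable for home use" bar
print(round(errata / 30))      # ~1 erratum: the bar for safety-critical systems
```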
Re:A flawed design kept alife. (Score:3, Interesting)
Up to about 15 years ago x86 was OK. Up to about 10 years ago it was bearable. Ever since then it's been a mere roadblock for software and hardware development -- one that has to be steered around with much effort on a daily basis. Most people just don't notice it anymore, because they got used to it. Intel builds Ford Model Ts. They have a big advantage in manufacturing methods and in economy of scale. And it sure has its merits. But even the Model T wasn't built for 20 years. If Ford did what Intel does, we'd still have to start the car at the front with a lever. And actually we do. We start in real mode.
Re:Thank you (Score:3, Interesting)
I have a hard time believing this is true.
I might believe that AMD usually has more bugs, or has more bugs cumulatively, but the number of bugs, being a RANDOM variable, is quite unlikely to be so well behaved that the number of Intel bugs has NEVER exceeded the number of AMD bugs. I would like to see a source for your statement.