Intel iMac Hardware

34 Design Flaws in 20 Days of Intel Core Duo

Pray_4_Mojo writes "Geek.com is reporting that Intel's errata (bug) documentation shows that the Intel Core Duo chip has 34 known issues found in the 20 days since the launch of the iMac Core Duo (you can read the list), with plans to fix only one of them. While bugs in hardware are nothing new (the P4 has 64 known issues, and at this time Intel does not plan to fix a single one), this marks one of the first times that Intel released a processor with known bugs, and some of the bugs are of higher severity than in the past. Also alarming is the rate at which the flaws have been found: one and a half per day since the launch of the iMac Core Duo."
This discussion has been archived. No new comments can be posted.

  • Faster (Score:3, Insightful)

    by mysqlrocks ( 783488 ) on Tuesday January 24, 2006 @12:29PM (#14548952) Homepage Journal
    Maybe they're just getting faster/better at finding bugs?
  • by sczimme ( 603413 ) on Tuesday January 24, 2006 @12:31PM (#14548970)

    this marks one of the first times that Intel released a processor with known bugs

    No: either it is the first time or it is not. There can be only one... first time.

    and some of the bugs are of higher severity then in the past

    then != than

  • 20 days? (Score:5, Insightful)

    by Anonymous Coward on Tuesday January 24, 2006 @12:32PM (#14548983)
    It's a little dishonest to use the phrasing "Core Duo chip has 34 known issues found in the 20 days since the launch of the iMac Core Duo."

    Most of these bugs were found well before the release of the Core Duo. Many of the bugs are listed as having been observed by Intel only. That means the verification teams did hit these issues, either with a very bizarre code setup or by doing something that's probably not technically legal anyway. The odds of seeing most of them on an end-user platform are very low.
  • by GeekDork ( 194851 ) on Tuesday January 24, 2006 @12:37PM (#14549041)

    Now, this would have been interesting or informative if you had provided a link to that PDF. Pretty please?

  • by Angostura ( 703910 ) on Tuesday January 24, 2006 @12:39PM (#14549060)
    This news would be a lot more interesting if I knew the size of the errata list for the G4 or the G5. I think it unlikely that there are zero unfixed bugs.

    Anyone? Bueller?
  • Sensationalized (Score:2, Insightful)

    by emerrill ( 110518 ) on Tuesday January 24, 2006 @12:39PM (#14549075)
    Geek.com has pumped up these problems by doing its own analysis and claiming 'show stopper' on many of them, yet there are already machines in the wild that seem to have no problem with many of them - like their claim that machines wouldn't be able to wake from sleep because of one of the bugs. Their analysis is a lot of FUD.
  • by Golias ( 176380 ) on Tuesday January 24, 2006 @12:40PM (#14549085)
    this marks one of the first times that Intel released a processor with known bugs

    No: either it is the first time or it is not. There can be only one... first time.


    I disagree with the mod who marked you "Off-topic." It may look like you are just being a grammar nazi, but you raise a valid point.

    Saying "this marks one of the first times that Intel released a processor with known bugs" is pretty much the same as saying, "this is not the first time that Intel has released a processor with known bugs, but I want it to sound like alarmingly bad news for Apple."
  • by TheRaven64 ( 641858 ) on Tuesday January 24, 2006 @12:42PM (#14549100) Journal
    Not quite the same. All that has been kept the same is the interface, not the implementation. It's the equivalent of having to keep an API/ABI stable. It can cause problems (see the WMF features for more information), but it's also often useful - Win3.0 apps running on Windows XP, for example, or UNIX code from the '80s compiling and running on Linux / BSD.

    The problem with x86 comes from the fact that a large number of instructions interact in relatively complex ways with others. Changing a small amount of silicon can change a side-effect of an instruction, which is then a bug. An ISA such as Alpha eliminated this by keeping inter-instruction interactions to a minimum (no condition registers, etc).

  • by sterno ( 16320 ) on Tuesday January 24, 2006 @12:42PM (#14549104) Homepage
    So not only how many bugs in Athlon, etc, but also...

    How many bugs in other Pentium chips?
    What was the rate of discovery of bugs in other chips?

    Keep in mind that in Intel's entire history they've released exactly one desktop processor with a bug severe enough to require a recall, and most of the bugs are easily worked around, including that one. Hell, I've got an old P60 that I was using as a router until the last year or so; it worked just fine, and it was always amusing to see Linux notice the FDIV bug on boot.
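
    For what it's worth, the FDIV check Linux does at boot amounts to one carefully chosen division. Here is a minimal sketch of the same idea in C - the constants are the well-known test values, but the kernel's real check is written in x87 assembly, so treat this as an illustration rather than the actual boot code:

        #include <math.h>
        #include <stdio.h>

        /* Classic FDIV sanity check: divide two constants known to trip the
         * flawed original Pentium, multiply back, and compare. On a correct
         * FPU the error is negligible; on a flawed Pentium it is roughly 256. */
        int main(void)
        {
            volatile double x = 4195835.0;
            volatile double y = 3145727.0;
            double err = fabs(x - (x / y) * y);

            if (err > 1.0)
                printf("FDIV bug detected (error = %g)\n", err);
            else
                printf("FPU divide looks fine (error = %g)\n", err);
            return 0;
        }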
       
  • by s31523 ( 926314 ) on Tuesday January 24, 2006 @12:50PM (#14549191)
    Being in the Aerospace/Defense industry, I find this disconcerting, especially for those of us who deal with the FAA and the infamous DO-178B. More demanding systems are forcing us to use more powerful processors, and if those are plagued with "known issues" it may be a problem getting through certification by some governing agency, especially now that DO-254 has reared its ugly head... Has Intel gone the way of Microsoft? Delivering early to gain market share even though the product has severe quality issues, and then taking the "well, it's not a critical security flaw" line?
  • Re:Faster (Score:5, Insightful)

    by Surt ( 22457 ) on Tuesday January 24, 2006 @12:52PM (#14549210) Homepage Journal
    It seems likely that given the increasing complexity, the error rate is going to rise proportionally. I mean, how many errors do you expect in a 100,000 transistor chip vs a 100,000,000 transistor chip?
  • Re:Faster (Score:2, Insightful)

    by Golias ( 176380 ) on Tuesday January 24, 2006 @12:52PM (#14549218)
    And we know that there are no plans to fix these "show stopper" bugs because geek.com says so. Also, we know they are "show stopper" bugs because geek.com says so.

    34 is actually a very tiny bug list for a bleeding-edge CPU.
  • Re:Up front (Score:1, Insightful)

    by A beautiful mind ( 821714 ) on Tuesday January 24, 2006 @12:52PM (#14549220)
    Always the optimist, eh? :)
  • by emerrill ( 110518 ) on Tuesday January 24, 2006 @12:57PM (#14549264)
    That assumes that Intel wants the safety-critical market for this processor. In most cases, when you develop in this sector, you have to use hardware that is specifically designed for these applications. Developing chips that can be certified for SC applications can be a pain in the ass, and they may simply not care to do that for this chip.
  • by CountBrass ( 590228 ) on Tuesday January 24, 2006 @12:59PM (#14549278)
    No, that's what you get when you build something really complicated. The clever bit is that they still work despite the errors.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday January 24, 2006 @01:18PM (#14549465)
    Comment removed based on user account deletion
  • Re:Faster (Score:3, Insightful)

    by c_forq ( 924234 ) <forquerc+slash@gmail.com> on Tuesday January 24, 2006 @01:19PM (#14549478)
    Future chips. This batch may have them until it is no longer produced, but I would imagine any revisions or new families of chips will take these past mistakes into account.
  • Re:Faster (Score:5, Insightful)

    by Golias ( 176380 ) on Tuesday January 24, 2006 @01:19PM (#14549480)
    What I am saying is that in general, what's the use of getting better and faster at finding bugs if there aren't plans to fix it?

    Because the purpose of finding silicon bugs is almost never to fix them. Fixing CPU bugs is often impractical. You find the flaws so you can route around them. This is the case with every consumer chip on the market, including the one you are using to read this right now.
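
    To make "route around them" concrete, here is a rough sketch of the kind of quirk table system software keeps for this. The erratum name, the affected stepping and the workaround below are all hypothetical, but real kernels do key errata workarounds on CPUID family/model/stepping in much the same way:

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical erratum quirk table: match the running CPU's
         * family/model/stepping and apply a workaround instead of a fix. */
        struct cpu_id { unsigned family, model, stepping; };

        struct erratum_quirk {
            struct cpu_id affected;
            const char   *name;
            void        (*apply)(void);   /* workaround, not a fix */
        };

        static void disable_hypothetical_feature(void)
        {
            /* e.g. mask a feature bit, add a fence, or pin a chipset setting */
            printf("workaround applied: hypothetical feature disabled\n");
        }

        static const struct erratum_quirk quirks[] = {
            { { 6, 14, 8 }, "XY-01", disable_hypothetical_feature },
        };

        void apply_quirks(struct cpu_id cpu)
        {
            for (size_t i = 0; i < sizeof quirks / sizeof quirks[0]; i++) {
                const struct erratum_quirk *q = &quirks[i];
                if (q->affected.family   == cpu.family &&
                    q->affected.model    == cpu.model  &&
                    q->affected.stepping == cpu.stepping) {
                    printf("erratum %s present on this stepping\n", q->name);
                    q->apply();
                }
            }
        }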
  • by podperson ( 592944 ) on Tuesday January 24, 2006 @01:26PM (#14549565) Homepage
    I've heard rumors that some small PC manufacturers, such as Dell and Gateway, are selling computers using this CPU.
  • Re:Faster (Score:4, Insightful)

    by diegocgteleline.es ( 653730 ) on Tuesday January 24, 2006 @01:42PM (#14549725)
    Indeed! It's as if it were much easier to find bugs right after the first release of a CPU, and even easier when it's the debut of a completely new architecture like the Core Duo! It'd be like posting links to the AMD errata docs!

    As if bugs in CPUs were something new... I want to know how many bugs were found in the first 20 days after the release of other Intel architectures and the Opteron; otherwise I can't know whether the Core Duo is a bad CPU compared with others or not. This article just looks like anti-Intel FUD from AMD fanboys (Intel made a good CPU even with the bugs, deal with it; AMD is not going to give away free CPUs to you for being a fanboy).

    And I doubt that there's any CPU manufacturer at all that releases CPUs without any "known bug"; many CPU bugs are fixed with microcode updates delivered via new BIOS versions. There's a reason why both AMD and Intel CPUs allow the microcode to be updated - they don't include features for fun.
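
    As an illustration of the microcode point: on a sufficiently recent Linux kernel you can see which microcode revision the CPU is currently running by scanning /proc/cpuinfo. A small sketch in C (the "microcode" field is assumed to be present; older kernels don't expose it):

        #include <stdio.h>
        #include <string.h>

        /* Print the "microcode" line from /proc/cpuinfo, if the kernel
         * exposes one (e.g. "microcode : 0xf0"). */
        int main(void)
        {
            FILE *f = fopen("/proc/cpuinfo", "r");
            char line[256];

            if (!f) {
                perror("fopen /proc/cpuinfo");
                return 1;
            }
            while (fgets(line, sizeof line, f)) {
                if (strncmp(line, "microcode", 9) == 0) {
                    fputs(line, stdout);
                    break;
                }
            }
            fclose(f);
            return 0;
        }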
  • Re:Up front (Score:5, Insightful)

    by ciroknight ( 601098 ) on Tuesday January 24, 2006 @03:10PM (#14550511)
    Take a look at the error list for a second. Over 50% of them are caused by dropping the processor into Debug mode, with over 75% of them only being observed by Intel themselves. Now, certainly there are more bugs reported so far, but does that mean that there are actually more bugs, or that Intel is getting better at finding bugs and reporting them?

    Only time will tell.
  • by OOGG_THE_CAVEMAN ( 609069 ) on Tuesday January 24, 2006 @03:59PM (#14550911)
    I think your estimates are *way* off.

    Silicon fab facilities are extremely expensive and capital intensive, but they produce shitloads of chips. The process scales; making 1000 wafers in these fabs is as easy as making one.

    Engineering analysis of complex IC designs is a perfect example of combinatorial explosion. Each bit of state in the chip doubles the state space in which bugs can exist. Yes, *most* of that state is in the cache which has regularity in its structure, but that regularity didn't happen by accident: it was *designed* that way.

    You can only test to a spec, and if the spec is imperfect and has gaps, you will leave space for bugs. Given that specs are written by engineers, they cannot be anywhere near complete for anything other than the most trivial circuits; the infrastructure used to support engineering of non-trivial circuits could itself have bugs.

    The part of the spec that covers the cache is simple, and can conceivably be error-free and well-tested, perhaps even with methods that are amenable to mathematical proof. But that's not where the errors crop up. The errors crop up in the hugely complex mechanisms that handle all the pipelining, branch prediction, translation to microinstructions, handling of interrupts, etc., etc., which are not highly regular and modular, are not easy to spec, and are not easy to approach with formal methods.
  • by aeoo ( 568706 ) on Tuesday January 24, 2006 @05:15PM (#14551634) Journal
    This is like reporting that the sun set again or that slashdotters have no love life.

    This is getting annoying. I, for one, am happily married and have a fulfilling love life. It's a silly and outdated stereotype that "slashdotters have no love life" and we should just drop it.
  • Re:Up front (Score:3, Insightful)

    by Kadin2048 ( 468275 ) <slashdot.kadin@xox y . net> on Tuesday January 24, 2006 @10:38PM (#14553904) Homepage Journal
    That's a poor attitude to take. Almost certainly they did testing before they went to production and started making masks and all the rest -- but a responsible company doesn't just stop doing testing the moment the product rolls out the door.

    I work on a very large software project. In some ways, it's not unlike designing hardware; we have a very slow, inflexible release schedule. Once a release starts being rolled out to the users, it's done. While theoretically there might be a way to do an "emergency patch" in some extremely severe circumstance (followed by a ritual sacrifice of everyone involved), in practice it would be almost impossible. But that doesn't mean that we stop testing software once it goes into production -- and the fact that we still test production versions doesn't mean that we don't do a lot of in-house testing, either.

    You test, test, test before the product gets rolled out -- whether it's hardware or software -- and then you continue to test afterwards. What changes is your ability to fix things. Before the product has been frozen and you're committed, you can actually fix bugs. Afterwards, you are limited to impact mitigation and providing workarounds for your support teams. Not as good as actually eliminating the bug, but I think as a user it would be better to know about a bug in advance and be provided with a workflow that avoids it, than run into it on your own and be stuck.

    Frankly I think it would be irresponsible for a company not to continue testing, as long as they have the resources to do so. That's called maintenance.

    Furthermore, there is a certain point you get to (at least in my experience) where you can keep hammering out bugs and eventually start creating new bugs as the result of your own fixes. It's a never-ending process; there will always be one more bug. This idea that anyone could produce a totally bug-free product on a large scale (the size of a modern microprocessor or a huge software project) if they just threw enough resources at the problem is incorrect and dangerous. At some point you have to stop fixing things and release the product -- especially if your goal is to make money and stay in business.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...