Intel Hardware

Intel Reveals Itanium 2 Glitch

NeoChichiri writes "News.com is running an article about glitches in Intel's Itanium 2 chips. Even though it doesn't affect all chips, they have still stopped shipments of the new 450 servers until the problem is resolved. Apparently it has to be 'a specific set of operations in a specific sequence with specific data.' Intel is saying that it affects the 900 MHz and 1 GHz Itanium 2 chips and that it will not affect the upcoming 1.5 GHz Itanium 2 6M chips." Until the next iteration of the chip arrives, though, Oliver Wendell Jones writes, "they recommend working around the problem by underclocking the processor to run at 800 MHz instead of its default 900 MHz or 1 GHz."
  • by hobbesmaster ( 592205 ) on Monday May 12, 2003 @05:22PM (#5940203)
    The Itanic 2 appears to be going down like the first...
  • Glitch? (Score:4, Interesting)

    by Ramjet350 ( 582868 ) on Monday May 12, 2003 @05:23PM (#5940223)
    Is it a glitch or did they sell chips that can't run at the rated speed?
  • Mmmm (Score:4, Funny)

    by thebatlab ( 468898 ) on Monday May 12, 2003 @05:25PM (#5940236)
    *in Homer Simpson's voice* 'mmmmmm.....Itanium 2 chops.......glazhzhzhz'
  • by fadeaway ( 531137 ) on Monday May 12, 2003 @05:25PM (#5940237)
    Bye-bye fans and thermal paste, hello heaters and insulation!
  • Microcode? (Score:5, Interesting)

    by chill ( 34294 ) on Monday May 12, 2003 @05:28PM (#5940252) Journal
    Is this something that could be addressed by a microcode update? I've always wondered exactly what can be done with the kernel's support for microcode updates. (Rough sketch below.)

    On a side note -- who exactly didn't expect something like this? Intel has a history of this sort of thing -- from the 80486DX not being able to add properly, and IBM having to halt shipments of PS/2 machines, to the Pentium F00F bug and others. Buying first-run Intel chips is like playing dice with your business. Give them a few production runs to work out the bugs...
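    A rough sketch of the idea, for the curious: a userland check that compares the CPU's family/model/stepping from /proc/cpuinfo against an errata table. Everything in the table below is a made-up placeholder, not real Intel errata data, and the field names assume an x86 Linux /proc/cpuinfo; an actual fix would still mean loading the vendor's binary microcode blob through the kernel driver (e.g. with microcode_ctl).

      # Sketch: does this CPU's stepping appear in a (hypothetical) errata table?
      ERRATA = {
          # (vendor, family, model): first stepping with the fix -- placeholder data
          ("GenuineIntel", 15, 2): 7,
      }

      def cpu_signature(path="/proc/cpuinfo"):
          """Vendor, family, model and stepping of the first CPU listed."""
          info = {}
          with open(path) as f:
              for line in f:
                  key, _, value = line.partition(":")
                  info.setdefault(key.strip(), value.strip())
          return (info.get("vendor_id"),
                  int(info.get("cpu family", -1)),
                  int(info.get("model", -1)),
                  int(info.get("stepping", -1)))

      vendor, family, model, stepping = cpu_signature()
      fixed_in = ERRATA.get((vendor, family, model))
      if fixed_in is not None and stepping < fixed_in:
          print("stepping %d predates the fix -- microcode update advisable" % stepping)
      else:
          print("no entry in the errata table for this CPU")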
    • by binaryDigit ( 557647 ) on Monday May 12, 2003 @05:35PM (#5940323)
      who exactly didn't expect something like this? Intel has a history of this sort of thing

      Of course when it happens to Intel, then EVERYBODY knows about it. My question is, how prevalent is this sort of thing throughout the CPU industry? Anyone know of other "mistakes" by the other major players? It's hard to imagine that only Intel makes these kinds of goofs, especially with the complexity of today's chips. As an example, wouldn't Mot's failure to scale up the G4 PPC chips be considered an "error"? They just caught it early enough not to ship any chips and say "oh, we're sorry, our G4s won't go as fast as we originally stated, wait another year and a half or so and we'll get it all sorted out". Didn't they also do a similar thing with the 68040?
      • I agree I'd like to see some stats on "bugs" in hardware.

        I think Moto's problem is that they're too busy making cell phones to worry about PPC.
      • by vadim_t ( 324782 ) on Monday May 12, 2003 @05:59PM (#5940509) Homepage
        Not very uncommon, really. Here are some AMD bugs [216.239.57.104], for example. I think the deal is that the Itanium has a rather serious problem that went undetected for a long time. Itanium-based computers can cost about $20,000, which is why it's a big deal. If you have such a system, you're probably running something important on it.
      • by questamor ( 653018 ) on Monday May 12, 2003 @06:02PM (#5940527)
        The 68040 bug affected quite a few LC040 machines, which made running FPU emulation on them horrid. Basically, trapping calls to the FPU in order to emulate them in software doesn't work as it should. It's b0rked, and most Apple 68LC040 machines just cannot fully emulate an FPU. That wasn't such a problem with the MacOS at the time, as it didn't need an FPU for any functions, nor did most apps.

        Running a normal Linux or NetBSD on one of these machines is asking for pain, however.
      • Failure to scale up G4 chips? The question is, does Motorola care about supplying processors to a tiny market with razor-thin margins? You can be pretty damn sure that Apple doesn't hand over much of that premium Mac price to Motorola, and development costs for processors are pretty high. Apple should have just gotten with the program and switched to x86.
        • I think I might have just stayed passive if you hadn't made a reference to Apple's per-processor price.

          What do you know about how much Apple pays Mot? Mot's processor business is just that - their own business - and they are responsible for what they decide to charge Apple.

      • Remember Sun's ECC cache bug? It was front-page news (not on /.) for days and days and days....
        • Remember Sun's ECC cache bug?

          Fortunately, that was just a supplier issue, where IBM was giving Sun bad cache RAM. This problem certainly caused a lot of unhappy customers, but it was a straightforward resolution compared to fixing or patching the CPU itself.

          I've read that the UltraSPARC CPUs themselves tend to have very low errata rates, like a half dozen or so for the UltraSPARC II compared to dozens for Intel's Pentium chips. This is probably the result of Sun's long development and testing cycles.
      • Every CPU has errata. After all, CPUs are just software. Ever heard of software that doesn't have bugs? (Besides the Space Shuttle.) Sometimes the manufacturer tells you about it; sometimes they don't.
      • I know that the SPARC processor in my Ultra 1 has some sort of 64-bit instruction bug that's bad enough that Sun defaulted the firmware in Ultra 1s to 32-bit mode. You have to change a jumper on the motherboard (hard to get to after opening the case) in order to reflash the firmware and run 64-bit. I believe the bug is an instruction you can call that crashes the system. Someone else can add more details; I just run NetBSD/sparc64 on the machine and it's not publicly accessible.
      • Sun: there was a thing with the SPARC IIIs, I believe, where you had to download a microcode update that turned off certain FP functions (which were busted), resulting in a speed hit.
      • Of course when it happens to Intel, then EVERYBODY knows about it. My question is, how prevalent is this sort of thing throughout the CPU industry?

        Very prevalent. A recent /. story spoke of MMU bugs in the 68K series. UltraSPARC CPUs have had cache corruption bugs. I know somebody who was frustrated (for several weeks!) by a register corruption bug in a microcontroller. These bugs are sometimes "fixed" by changing code generators (e.g., compilers) to avoid the problematic sequences.

        I'm not very familiar w

        While Motorola may not have been able to scale up the G4s as fast as they wanted, that's not nearly as bad an error as the Itanium 2 "glitch" referenced in the article, or something like the F00F bug on the original Pentium. Why? Because Mot caught it in-house! They didn't release chips that couldn't reach their rated clock speed in a normal operating environment, then advise their customers (who may have paid a fortune for their hot new systems) to underclock until it works.

        Most chipmakers don't always me

    • Re:Microcode? (Score:2, Informative)

      by Anonymous Coward
      It's not just Intel. How about Motorola leaving out critical instructions in the PPC603 and crippling every machine with one compared to the PPC601? Or the G3 floating-point debacle, where Excel spreadsheets would consistently show errors? What about AMD and their first-run overheating problems? Running hot is one thing; burning up even with adequate cooling is another.

      The best option is not to restrict yourself to certain "runs" but to just see the performance of a run yourself. The aforementioned PPC601 w
      • Not ppc603s (Score:5, Informative)

        by questamor ( 653018 ) on Monday May 12, 2003 @06:49PM (#5940890)
        How about Motorola leaving out critical instructions in the PPC603 and crippling every machine with one compared to the PPC601?

        That's a very, very big reinterpretation of the facts. PPC603 machines were designed for low cost and low heat. One of the ways to achieve that was to remove instructions that were not needed - legacy instructions from before the PPC601 that were never meant to be part of the PowerPC architecture proper. They were not 'critical' and did not cripple anything. PPC603 CPUs ended up working exactly as designed: cheaper, less energy-hungry CPUs.

        the G3 floating-point debacle, where Excel spreadsheets would consistently show errors

        You made a typo there. "Pentium" is not spelled "G3"

      • The 601 had extra instructions for compatibility with POWER. It was part of the architecture to remove some instructions, planned and agreed to by Apple, IBM and Motorola before the 601 was made.
    • Re:Microcode? (Score:4, Insightful)

      by WndrBr3d ( 219963 ) * on Monday May 12, 2003 @05:59PM (#5940503) Homepage Journal
      I have to agree with your side note. I make the technology decisions here at my company and have a strict policy when it comes to upgrading: Microsoft OSes I refuse to deploy on our systems until SP1 is released (because we all know it's coming sooner or later). We just last month upgraded to Windows XP.

      I suppose the same argument can be applied to everything in life. Cars, televisions, DVD players... you name it. You just need to get a feel for how things age before you invest in them for the long term.
    • The problem that Intel is experiencing is not unique. Within the last 5 years, Sun Microsystems has experienced significant problems with its own processors. Please read "Sun suffers UltraSparc II cache crash headache [theregister.co.uk]".

      In terms of reliability, the Itanium II is no worse than the UltraSPARC series of chips. Both Itanium and UltraSPARC face the daunting task of debugging 100+ million transistors. Ensuring that the fabricated chip is bug free is virtually impossible. So, both companies have substant

      • by Mr. Piddle ( 567882 ) on Monday May 12, 2003 @08:01PM (#5941334)
        You really are a troll tonight!

        Please read "Sun suffers UltraSparc II cache crash headache [theregister.co.uk]"

        This was a problem with the cache RAM and not the CPU itself. It was traced to a supplier (IBM), who was selling a defective product.

        In terms of reliability, the Itanium II is no worse than the UltraSPARC series of chips.

        There is no data to back this up. I know you don't have it, and I certainly don't have it. The only people who really have it (Intel and Sun) probably won't give it to us, so this ends here.

        However, since so many people pay attention to the flaws in Intel chips, they are likely to have less bugs than other chips.

        This is not true. Intel is pressured by time-to-market more than other suppliers, especially with respect to the Pentium line. Sun has obviously decided to delay product launches to work out issues (e.g., the UltraSPARC IIIi) because their customers expect reliability over other concerns. Hardware doesn't really follow the "all bugs are shallow" mantra of the open-source movement; we mainly have to have faith in the manufacturer's simulation and test labs.

        In any event, the performance of the Itanium II is at least 1 order of magnitude greater than the UltraSPARC III and (soon) IV.

        Do you even know what "order of magnitude" means? You are claiming that if the UltraSPARC III scores 975 on something, the Itanium II would score 9750??? For a given clock, it is true that the Itanium II is faster than the US III, but by a fraction--not a factor of ten!

        Also, the US IV, by definition, will be almost twice as fast as the US III for throughput, because it is two US III chips in one.

        You really don't know what the facts are.
        • Also, the US IV, by definition, will be almost twice as fast as the US III for throughput, because it is two US III chips in one.

          First, the UltraSPARC IV will be an out-of-order CPU. Any comparison with the in-order UltraSPARC III ends here.

          Second, "two chips in one" is misleading. It will be a CMP chip: multiple cores on one die, sharing external interfaces and higher levels of cache.

          Thirdly, the performance gain of doubling the number of cores per die (or the number of CPUs in a system) doesn't mean it

          • Second, "two chips in one" is misleading. It will be a CMP chip: multiple cores on one die...

            Okay, "dual core" is more accurate than "dual chip".

            Thirdly, the performance gain of doubling the number of cores per die (or the number of CPUs in a system) doesn't mean it can provide twice the throughput.

            For a large number of applications, it can, and the Solaris kernel's fine-grained threading improves the odds greatly. For applications that saturate the processor's external bus, then it is certainly possibl
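            To put rough numbers on the "twice the throughput" question above, a minimal Amdahl's-law sketch; the parallel fraction p here is an illustrative assumption, not measured UltraSPARC data.

              # Amdahl's law: speedup from n cores when only a fraction p
              # of the workload actually runs in parallel.
              def speedup(cores, p):
                  return 1.0 / ((1.0 - p) + p / cores)

              for p in (0.99, 0.95, 0.75, 0.50):
                  print("p=%.2f: 2 cores -> %.2fx" % (p, speedup(2, p)))
              # prints 1.98x, 1.90x, 1.60x and 1.33x respectively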
    • Of course that could be said about anything. I own several AMD systems. The first ones I bought, when the Athlon was initially released, had many problems. The motherboards would only work properly with certain types of video cards, etc. After a couple generations these systems are far more stable and trustworthy.

      I believe all of this is related to our greedy decisions to release products prematurely. After all, we're only making these products so we can make money. I doubt a single exec at Intel cares
      • QA is what prevents this sort of thing from happening. And in my experience QA should be the buffer between your scientists, engineers and your customers. But most companies don't want to pay for the proper QA and most don't even want to pay for the proper R&D. They just want to sell their products, even if the name is the only thing carrying them.
    • My Creative Nomad MuVo had a hardware problem where it wouldn't work with my nice new shiny nForce2 mobo, and five months later Creative released a firmware update that solved the problem.
  • by gilesjuk ( 604902 ) <giles.jones@nospaM.zen.co.uk> on Monday May 12, 2003 @05:28PM (#5940253)
    Perhaps they should put some silver stuff over the serial number. Welcome to the Intel Itanium scratchcard lotto: those with bad chips win a new one :)
  • What, this is going to affect all 6 people that own this chip?
    • The point is that this is the stuff they want consumers and companies to buy... Here ya go, buy our great 64-bit chip, that has so many problems, it must be made by Intel...

      I've heard almost nothing about AMD's 64-bit chip, and I'd still rather buy *it* than Intel's offerings.

      Actually, I'd still much rather have a new Alpha system. Excuse me while I drool. I guess Intel can't even compare to the chip they based their own new chip on.
    • What, this is going to affect all 6 people that own this chip?

      No, all 6.666666666666666666666 people.

  • Deja Vu (Score:3, Interesting)

    by Jason1729 ( 561790 ) on Monday May 12, 2003 @05:28PM (#5940259)
    Apparently it has to be 'a specific set of operations in a specific sequence with specific data.'

    This sounds similar to the way they described the floating-point divide error in the original Pentium. How long until they start giving odds on the chances of someone seeing the problem in normal use? (Some back-of-envelope numbers below.)

    Jason
    ProfQuotes [profquotes.com]
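    On the odds question, a back-of-envelope sketch: Intel put the FDIV failure rate at roughly 1 in 9 billion random divides, so how soon you would actually hit it depends entirely on your divide rate. The workload rates below are invented for illustration.

      # Expected time to the first bad divide, using Intel's 1-in-9e9 FDIV figure.
      P_ERROR = 1.0 / 9e9                      # chance any single divide is wrong

      for divides_per_sec in (1e3, 1e6, 1e8):  # assumed workload rates
          seconds = 1.0 / (P_ERROR * divides_per_sec)
          print("%10.0f div/s -> first error in ~%g hours"
                % (divides_per_sec, seconds / 3600.0))
      # ~2500 hours, ~2.5 hours, ~0.025 hours (90 seconds) respectively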
  • When I clicked to read more of this story, I got an Intel ad at the bottom. Gee, what great timing...

  • by overbom ( 461949 ) <overbom@NOspaM.yahoo.com> on Monday May 12, 2003 @05:30PM (#5940270)
    whenever they come out with a new design, they tend to have all sorts of f00fy little problems with it.
  • Underclock? (Score:4, Interesting)

    by phorm ( 591458 ) on Monday May 12, 2003 @05:30PM (#5940277) Journal
    "they recommend working around the problem by underclocking the processor to run at 800 MHz instead of it's default 900 MHz or 1 GHz."

    Why not just buy the lower-clocked CPUs then? Will Intel replace the crap chips when a revision with a fix comes around?

    "If the customer feels it's the right solution, we'll exchange processors with ones that aren't affected," she said. Intel has developed a simple software test that can determine whether a chip is affected.

    Meaning what? Lower-end chips that aren't affected, or a fixed version of the same chip? If it's the same chip, who wouldn't think it's the right solution? The article doesn't indicate whether the problem is actually solved either, just that it seems to be somewhat of an anomaly that doesn't affect all chips. (A hypothetical sketch of such a self-test follows below.)

    Not a good day for Intel, and probably another reason why you don't immediately need that "Newest on the shelf" CPU, whether for your home machine or a server. Besides, by the time this chip is assuredly fixed, a faster revision will probably be out at a comparable price.
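    The article doesn't say what Intel's "simple software test" actually does, so the following is only a hypothetical sketch of the general idea: run a fixed, deterministic operation sequence many times and flag any run that disagrees with a reference result. None of the constants below have anything to do with the real erratum.

      # Hypothetical consistency self-test: deterministic code must always give
      # the same answer, so any disagreement points at the hardware. A real
      # tool would ship a precomputed known-good reference value.
      def workload(seed):
          x = seed
          for i in range(1, 100000):
              x = (x * 2654435761 + i) % (2 ** 32)   # fixed integer op mix
              x ^= x >> 13
          return x

      def self_test(runs=100):
          reference = workload(12345)
          return all(workload(12345) == reference for _ in range(runs))

      print("consistent" if self_test() else "POSSIBLY AFFECTED")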
  • by ethnocidal ( 606830 ) on Monday May 12, 2003 @05:32PM (#5940288) Homepage
    Underclocking is typically necessary if a part needs more voltage than is allowed for with the default configuration. This is why when you overclock, the converse is generally required; you can get better overclocks by increasing voltage.

    Obviously, Intel are not going to encourage people to increase the voltage of their processors in order to run them at the default speeds, as this can run the risk of thermal damage to the chip with insufficient cooling, or overly high voltages. It may however still represent an option for system administrators who are keen to retain the performance of the chip.
  • by Photar ( 5491 ) <photar@photar.nMOSCOWet minus city> on Monday May 12, 2003 @05:33PM (#5940293) Homepage
    When you consider all the bugs that come through in higher-level programming, where everything is object-oriented and human-readable, it really comes as a surprise that you don't see more bugs in hardware, considering the complexity of the problem and its low-level nature.
    • by kindofblue ( 308225 ) on Monday May 12, 2003 @06:04PM (#5940536)
      I readily agree. As a software person, it boggles my mind that the hardware doesn't fail hourly. If Microsoft (or Oracle or any software company) were held to the same standards as Intel, AMD, etc., they wouldn't exist. Intel and the CPU companies have been working against the immutable laws of physics, whereas software companies only have to manage their own incompetence and beat back their business departments, IMO.
    • by AxelTorvalds ( 544851 ) on Monday May 12, 2003 @07:42PM (#5941238)
      All due respect, but I know how they make chips. They use software to do it, and that's why chips are so reliable: a human doesn't put each gate into place. A chip is also designed with test in mind, and there are whole industries and standards surrounding that. Try to name something remotely close to a JTAG interface for software. I believe hardware is more reliable than software, but that's really because once you etch a piece of silicon it's pretty damn hard to fix it. Don't get me wrong, though: I trust the chip a lot more than the software in most cases. I expect a compiler bug long before I expect to have stumbled onto the magic code stream that doesn't compute correctly, and I expect my own errors before that.

      This kind of bug is a little different, though. We're not talking about a stuck gate that only gets tickled during a single ALU operation, or an instruction retired too early, or a register bigfooted too early, or anything like that. We're talking about clocking issues and fundamental timing issues in Intel's "server grade" platform. There are accepted standards and practices for how aggressive to be, and some vendors can tell you with amazing detail how reliable their chips are, and in what conditions. With clocks in particular some vendors can be picky; I've seen hard hitters scope up boxes and refuse to support hardware they sold because it was clocked out of spec. (Think about the edge of a clock and clock quality: a 1.2 GHz clock rate isn't enough, the signal has to actually reach the clock's levels before it switches back, and the transitions themselves take time -- see the numbers sketched below.) It sounds like Intel is either ignoring those practices, or trying to write their own book, or the IA64 is a bigger disaster than anyone there wants to even hint at. There is a fairly limited class of errors where underclocking the chip fixes the problem, and most of those are related to the chip being aggressively clocked to begin with. It's ironic: on IBM's POWER4 line of processors they added extra cache room for parity (at the expense of potential performance) and made the leads beefier (again at the expense of higher clock speeds) because the platform is a server platform that places a premium on reliability. It sounds like Intel has been making PC chips too long and isn't ready for server-grade chips.

      Their party line has been that they will keep working at it until it's ready, that they aren't expecting it to move a lot of chips, and so on. Right now they have walked down a road where they have invested billions (at least hundreds of millions) in an unproven technology. They have crossed the line to the point that there won't be $1500 IA64 products for years and years. They have pitched it as a server-grade platform. And it underachieves in every area and hasn't taken the world by storm nearly as much as they said it would. So bad is it that HP, their blood brother in that mess, has continued the PA-RISC and Alpha lines past the point they promised when they originally adopted the IA64. The only reason I can imagine for them clocking it that aggressively is that it's the only way to make it perform remotely like they claimed it would. I'm not going to guess about Intel's dirty laundry, but I'd say the stakes are a little higher than they look on the surface for the IA64 - either that or there are some incompetents running the show.
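      To put numbers on the clock-quality point above: at these frequencies the edges eat a real fraction of every period. The 100 ps rise/fall figures below are round-number assumptions, not Intel specs.

        # Fraction of a clock period left as stable level time once the rising
        # and falling edges (assumed 100 ps each) are subtracted.
        def usable_fraction(freq_mhz, rise_ps=100.0, fall_ps=100.0):
            period_ps = 1e6 / freq_mhz           # 1000 MHz -> 1000 ps period
            return (period_ps - rise_ps - fall_ps) / period_ps

        for mhz in (800, 1000, 1200):
            print("%4d MHz: %.0f%% of the period is usable level time"
                  % (mhz, 100 * usable_fraction(mhz)))
        # 800 MHz: 84%, 1000 MHz: 80%, 1200 MHz: 76%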

  • Intel disclosed an electrical problem Monday that can cause computers using its flagship Itanium 2 processor to behave erratically or crash.

    Hmmm...wonder if BMW is using these chips [news.com.au]?
  • Ironic? (Score:5, Interesting)

    by Jonathan the Nerd ( 98459 ) on Monday May 12, 2003 @05:34PM (#5940314) Homepage
    Does anyone else find it ironic that when Intel makes one mistake in a processor, everyone jumps on them for making a bad product, but software companies can sell products with thousands of bugs in them and people accept this as normal? Sure, we complain about buggy software, but I don't think anyone here expects any software to be completely bug-free. Why are Intel and other chip manufacturers held to such a high standard? Or, more importantly, why are software companies not held to the same high standards? If Intel and AMD can make incredibly complex processors that are (usually) completely bug-free, why can't any software company in the world make any product that even comes close to being free of defects?
    • It's the main component of a computer. Besides, for software, it's much easier to update (bug fix). If your processor is messed up, it's a lot worse.

    • Re:Ironic? (Score:2, Insightful)

      by ziggy_zero ( 462010 )
      Because software is a fuckload easier to fix - free downloadable patches, etc.

      With hardware like a processor, you'd most likely have to actually replace the part that's broken.

      I agree that software companies should be held to a higher standard, but they can get away with it because the bugs are easier to fix.
    • you can't dl a patch for a chip. 'nuff said. And has AMD had even close to as many bugs as Intel?
      • you can't dl a patch for a chip

        I guess you haven't heard of microcode patches...

        And has AMD had even close to as many bugs as Intel?

        Yes. Every CPU on the market has bugs. I remember the Palomino had a nasty bug with cache coherency that AMD was reluctant to fix. The only difference is that Intel is in a lot more systems than AMD, so it is a lot more noticeable.
        • I guess you haven't heard of microcode patches...

          I've heard they don't work for a lot of things. Is this one of them? They talk about having to provide replacements, so I assumed it wasn't.

          Yes. Every CPU on the market has bugs. I remember the Palomino had a nasty bug with cache coherency that AMD was reluctant to fix. The only difference is that Intel is in a lot more systems than AMD, so it is a lot more noticeable.
          Ok, thx for the info. I guess this is the downside for Intel of getting all the press.
    • Re:Ironic? (Score:3, Insightful)

      It's called managing expectations, and no one excels at the black art more than Microsoft.

      Everyone expected WinXP to be crap, and they were so relieved that it wasn't as bad as they thought that they forgot to complain about the problems that do exist, as evidenced by the number of people who say "WinXP is great, compared to Win98 it's very stable and pretty fast, even though I did have to buy a new PC to run it, but that's just progress, isn't it?" when you ask them what they think of it.
      • as evidenced by the number of people who say "WinXP is great, compared to Win98 it's very stable and pretty fast, even though I did have to buy a new PC to run it, but that's just progress, isn't it?" when you ask them what they think of it.

        Going a bit offtopic, but that is progress, isn't it? XP is a desktop OS that's used by people at home a lot. People want their pretty colours, games and multimedia features. I'm not saying that MS software isn't a little bit bloated, but it is not as bad as Linux peopl
    • Re:Ironic? (Score:5, Insightful)

      by Bombcar ( 16057 ) <racbmob.bombcar@com> on Monday May 12, 2003 @06:30PM (#5940759) Homepage Journal
      The big problem is when something fails SILENTLY! That's what the BSOD and the kernel oopsies are for! If the system has corrupt data, that is very, very bad - worse than losing data. So if the hardware has a bug, it will pass corrupt data around, and then things fail..... Google around for what happens with bad RAM, and learn about Happy Fun Bugs!
    • Re:Ironic? (Score:4, Insightful)

      by slashdot_commentator ( 444053 ) on Monday May 12, 2003 @06:36PM (#5940807) Journal

      While I do understand your sympathy towards hardware manufacturers, there is one obvious difference between accepting software bugs and accepting hardware bugs. The software bug can be fixed with a patch. The $200 software now works; we can accept that. When the CPU is buggy, the only way it gets corrected is if the manufacturer is willing to replace the CPU. BIG difference.

      I agree completely that software products should be held to a higher standard. But we haven't seen integrity in the industry, so all that's left to fix the problem would be to sic the lawyers on them. I don't see that as fixing the problem...
  • Hey Intel! (Score:5, Funny)

    by craenor ( 623901 ) on Monday May 12, 2003 @05:43PM (#5940384) Homepage
    I have about 6 years experience in Quality Assurance, with emphasis on electronics, manufacturing processes and attention to detail.

    You know...if you're looking for anyone that is.
  • Bad Joke. (Score:3, Funny)

    by rf0 ( 159958 ) <rghf@fsck.me.uk> on Monday May 12, 2003 @05:44PM (#5940395) Homepage
    How long does it take an Itanium to count to 10?

    I don't know but will let you know when it gets there

    OMG I don't believe I just wrote that

    rus
  • Anybody else have flashbacks to the Pentium FDIV bug and this excellent post [google.com.mx]?
  • by dprice ( 74762 ) <daprice@NOsPam.pobox.com> on Monday May 12, 2003 @05:56PM (#5940479) Homepage

    There isn't much detailed information about the exact conditions that bring out the bug, but they do state that the bug is electrical, that some unspecified combination of instructions and data patterns is needed, and that reducing the clock frequency avoids the problem. I can think of several things that might cause it. These are just guesses (rough numbers on a couple of them below).

    One possibility is that there is a slow timing path in the logic that is marginally meeting the 900MHz or 1GHz clock speed. Going to 800 MHz gives the slow path more margin. This is the easy answer.

    Another possibility is that they have some part of the chip that has insufficient metal to deliver power to the logic gates. The right combination of activity might cause enough voltage droop to cause logic errors. Slowing the clock reduces the power consumption in CMOS chips.

    They might have a crosstalk problem between some signals that could flip bits when the right activity and frequency are combined. Slowing the clock can shift the relative positions of signal transitions.

    Eventually more details might surface, but Intel is probably keeping it quiet so that people don't write code to maliciously crash servers.
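    The first two guesses are easy to put rough numbers on: a path's slack is just the clock period minus the path delay, and CMOS dynamic power scales roughly as P = aCV^2f, so dropping from 1 GHz to 800 MHz trims switching power by about 20%. The 1150 ps critical path below is invented to match the symptom (fails at 900 MHz and 1 GHz, passes at 800 MHz).

      # Timing slack at each clock speed, with an invented 1150 ps critical path.
      PATH_DELAY_PS = 1150.0

      def period_ps(freq_mhz):
          return 1e6 / freq_mhz      # 1 GHz -> 1000 ps, 800 MHz -> 1250 ps

      for mhz in (1000, 900, 800):
          slack = period_ps(mhz) - PATH_DELAY_PS
          print("%4d MHz: slack %+7.1f ps -> %s"
                % (mhz, slack, "OK" if slack > 0 else "VIOLATION"))

      # Dynamic power scales ~linearly with f (alpha, C, V held constant):
      print("switching power at 800 MHz vs 1 GHz: %.0f%%" % (100.0 * 800 / 1000))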

  • by handy_vandal ( 606174 ) on Monday May 12, 2003 @05:57PM (#5940489) Homepage Journal
    "Open the Itanium register sets, HAL."

    "I'm sorry, Dave. I can't do that ...."
  • Mwhahaha (Score:5, Funny)

    by nate nice ( 672391 ) on Monday May 12, 2003 @06:09PM (#5940580) Journal
    Finally, the electrical engineers are to blame. I knew my code was correct!

  • by bani ( 467531 ) on Monday May 12, 2003 @06:11PM (#5940590)
    "Until we're sure the issues are 100 percent resolved, we're going to keep holding back shipments with the 450," IBM spokeswoman Lisa Lanspery said. "We have a policy of zero tolerance for undetected data corruption" at a customer site, she said.

    so detected data corruption is just fine, then...? :-)
  • by smartdreamer ( 666870 ) on Monday May 12, 2003 @06:12PM (#5940598)
    Maybe this is not a bug, maybe this is just Intel's new anti-overclocking technology!
  • Hmm... (Score:3, Funny)

    by Cervantes ( 612861 ) on Monday May 12, 2003 @06:14PM (#5940623) Journal
    Ok, I guess the joke is now no longer:

    Intel Inside: Get 99.98765374% from your PC!

    Instead, it's now:

    Intel Inside: Get 99.98765374% from your ...>>NO CARRIER
  • by glenebob ( 414078 ) on Monday May 12, 2003 @06:22PM (#5940698)
    "Intel is saying that affects the 900MHz and 1 GHz Itanium 2 chops"
    So... is /. an early adopter?
  • by Mooncaller ( 669824 ) on Monday May 12, 2003 @06:51PM (#5940903)
    Itanium is a very new architecture. It has the potential for kicking i386 chips in the butt once it has a chance to grow up. With anything as radicaly new as the Itanium, there is a high probability of unexpected problems. AMD has not had this sort of problem resently because they don't have any balls. All they ever do basicaly amounts to minor tweeks of a stable design. Even their 64 bit extensions fall into this catagory.

    The type of problem Intel is dealing with could very well be in a new class. I have a hunch that it has to due with either unexpected capacitive coupling ( possibly related to an in-spec extreme of the process variation) or thermal transients causing timing skew. These types of phenomena are nearly impossible to model, especial if its tied to a particular set of process deviations. That is why manufacturer do such extensive qualification testing. Unfortunatly this testing can not be done untill there are enough units to test ( like in the 1000s). This does not happen untill the device is ready for production. Technicaly, this is the Pilot phase of development.

    One needs to give Intel some credit for learning a lesson from the Pentium fiascos ( not just the math error, but also the original ( 5V) 90Mhz burn-up issue). At least they are doing the right thing now. Corporations, like people, sometimes need to learn the hard way. Unfortunatly, though people usually retain their lessons, Corporations sometimes need to relearn them, especialy when being run by greedy BODs ( or board members with hidden agendas). AMD has yet to learn this particular lesson. One of these days, they will try to cover up a problem and its not going to work. They have gotten away with some stuff already because everyone loves to hate Intel ( me included, 68000 and PowerPC for me!)

    Unless your familiar with LSI semiconductor manufacturing, you should not be commenting. Because you don't have a clue as to what is going on. The posts I've read so far, remind me of what a class of 10 year olds would right in criticing Joseph Conrads "Heart of Darkness".
    • >>Unfortunatly this testing can not be done untill there are enough units to test

      >>AMD has not had this sort of problem resently

      Does that mean they're jealous of Intel's problems and resent not having them?

      >>The posts I've read so far, remind me of what a class of 10 year olds would right in criticing


      Wow. With your mad spelling and grammar skills you ought to know exactly what 10 year olds are capable of.

      But seriously though, Intel sells these chips to a completely different ma
    • Give ME a break (Score:3, Informative)

      by m11533 ( 263900 )
      While I have no particular animosity toward Intel - other than thinking it's important for there always to be competition to push them - I do not think they need to be let off the hook. Itanium has been around a very long time. You may think of it as new technology, but that is more because of its lack of acceptance in the marketplace than because it has only recently been released. What was happening all these years since Itanium was initially launched?

      Additionally, while the Itanium instruction set takes a diff
    • I have a hunch that it has to due with either unexpected capacitive coupling ( possibly related to an in-spec extreme of the process variation) or thermal transients causing timing skew.

      In that case we need to change the gravitonic phase, reduce the tectronic radiation and then increase the nucleonic flux.
  • by Anonymous Coward
    Because I truly believe that 1 * 1 == 2.
  • They recommend working around the problem by underclocking the processor to run at 800 MHz instead of its default 900 MHz or 1 GHz

    I just want to see them recommend this AFTER they start incorporating their new patented anti-clock speed changing technology into all of their chips.

  • Intel QA (Score:3, Funny)

    by sharkey ( 16670 ) on Monday May 12, 2003 @10:05PM (#5941976)
    Where quality is job 1.99904274017.
  • by 1nv4d3r ( 642775 ) on Monday May 12, 2003 @10:49PM (#5942157)
    Luckily, all of the Itanium 2 owners have been contacted, and both of them had not yet experienced data corruption.
  • I hate to say it, but after removing the second smoldering AMD processor from my chassis, I don't think I'll be swaying from Intel anytime soon. And I thought I was getting a deal with the KT133A chipset... *shrug*

    Sans the liquid variants, there doesn't seem to be any such thing as 'adequate' cooling on an AMD T-Bird in Texas during the summer. Sure, the last few AMD processor generations seem relatively bug-free, but what's the point of a 'flawless' processor if it only lasts me a year?

    Sigh.....*waits

    • Try the stock retail fan. They last me just fine, and my machine runs in 100+ degree weather a lot. If your CPUs continue to die, there is a different problem. While thermal protection was a missing feature of AMD some time ago, the problem has since been addressed on a number of fronts. Personally, I'd rather go with a fairly bug-free chip that's cheaper and more powerful than one with a corporate logo attached to its 50 gazillion stage pipeline.
