Intel Skylake Bug Causes PCs To Freeze During Complex Workloads (arstechnica.com) 122
chalsall writes: Intel has confirmed an in-the-wild bug that can freeze its Skylake processors. The company is pushing out a BIOS fix. Ars reports: "No reason has been given as to why the bug occurs, but it's confirmed to affect both Linux and Windows-based systems. Prime95, which has historically been used to benchmark and stress-test computers, uses Fast Fourier Transforms to multiply extremely large numbers. A particular exponent size, 14,942,209, has been found to cause the system crashes. While the bug was discovered using Prime95, it could affect other industries that rely on complex computational workloads, such as scientific and financial institutions. GIMPS noted that its Prime95 software "works perfectly normal" on all other Intel processors of past generations."
Deja Voo of the Pentium 5 FDIV bug (Score:4, Insightful)
Re:Deja Voo of the Pentium 5 FDIV bug (Score:5, Informative)
Well 'Deja Vu' and you can leave '5' off.
For an analogous screw-up, you only need to look at Haswell/Broadwell and the TSX feature, which they retroactively disabled due to a defect.
The FDIV bug was noteworthy because the state of things was such that they didn't have much recourse other than replacing the processors. We haven't seen a defect that required physically recalling processors at such a scale since, though there have been a number of similarly disastrous issues that would have gone the same way if not for the fact that they could push a microcode change to disable something or work around it...
Re: (Score:3)
Re: (Score:3)
Well, that was the first response. Eventually, though, they bit the bullet
"Monday, December 19 [1994] we changed out policy completely. We decided to replace anybody's part who wanted it replaced... replacing people's chips by the hundreds of thousands... We created a service network to handle the physical replacement for people who didn't ant to do it themselves."
-- from
Re: (Score:3)
Re: (Score:2)
did you see the price tag of those HP or Compaq desktops? (Big) Businesses (of the 90s) don't do clones.
And even if it meant we had some PhD billing us at $500 an hour, you can bet since he was a consultant he had the 386sx16, and maybe after a few months of begging we got him a 387sx. Pentium??? LOL
Re: (Score:1)
Well 'Deja Vu' and you can leave '5' off.
For an analogous screw-up, you only need to look at Haswell/Broadwell and the TSX feature, which they retroactively disabled due to a defect.
The FDIV bug was noteworthy because the state of things was such that they didn't have much recourse other than replacing the processors. We haven't seen a defect that required physically recalling processors at such a scale since, though there have been a number of similarly disastrous issues that would have gone the same way if not for the fact that they could push a microcode change to disable something or work around it...
That's because after FDIV, they put in a shit ton of work developing survivability features so that problems could be worked around. This is a good thing.
Re: (Score:2)
Probably the TSX problems are closer in some respects, in that the fix comes in microcode. With F00F, the OSes had to work around the issue one way or another.
Re: (Score:2)
Kimchi, Satan-san? re: FOOF! (Score:1)
indeed. Streng was bold.
http://blogs.sciencemag.org/pipeline/archives/2010/02/23/things_i_wont_work_with_dioxygen_difluoride
see also
https://what-if.xkcd.com/40/
Re: (Score:2)
Re: (Score:3, Interesting)
Nah, we blame this one on the NSA, to wit:
It only happens when running complex calculations like Mersenne primes. Who runs such calculations? It isn't the good citizens looking at their Facebook whatever it is that they look at. It's people doing crypto, ie, Terrorists.
So how do we stop Terrorists? Don't let them do complex crypto calculations.
QED.
Re: (Score:3)
Re: (Score:1)
They were only looking for weapons of math destruction.
Re: only looking for (Score:2)
weapons of math destruction.
Re:Deja Voo of the Pentium 5 FDIV bug (Score:5, Informative)
Re: Deja Voo of the Pentium 5 FDIV bug (Score:2)
It's not a bug. It's a "specification update". Get it right. Clearly you were using the wrong specification.
Re:Deja Voo of the Pentium 5 FDIV bug (Score:4, Funny)
Old-timers will remember the Pentium 5 FDIV bug
5? That was the 80 4.999999583694 86 processor was it not?
now all my Intel 585.879436603 jokes go faster! (Score:2)
and run simultaneously on 7.9335 threads, too!
Re: (Score:2)
Re: (Score:2)
Old-timers will remember the Pentium 5 FDIV bug [wikipedia.org] where the chip sometimes yielded incorrect results for complex mathematical calculations.
Does the following make sense?
The engineers brought back the above code because the people who knew about it, and why it should not be used, had retired. That retirement allowed for its re-introduction. No, Intel will not be accepting returns for Skylake; it will be a microcode patch. The microcode patch is a backdoor input to the CPU that allows fixing instructions, and breaking security.
Lack of competition (Score:1, Troll)
Re:Lack of competition (Score:5, Insightful)
If you saw the actual errata list for processors on launch day, regardless of manufacturer, your jaw would drop. A lot of nasties get cleaned up in subsequent revisions (mask changes), but in the meantime patches show up for the BIOS, libraries, and compilers so that the user never sees the warts. With billions of transistors there will be design errors that even Intel will not catch during verification or characterization. The fact that a BIOS fix will take care of it is a sign that it is not that egregious.
If you want to avoid this kind of stuff you should wait a few months after any major shakeup to buy.
Re:Lack of competition (Score:5, Informative)
Go see page 21 for example:
http://www.intel.com/content/d... [intel.com]
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
The only difference between the low end and high end chips is the number of flaws in the core coming off the die. It's impossible to get a consistent yield on a wafer. Minor electrical variances, impurities in the materials, flaws in the machines that do the manufacturing, etc. The chip maker has to test each and every chip that is produced to sort them into a wide variety of performance bins. The ones that have the fewest flaws and can run the fastest get put in the most expensive bins. The ones with f
Re: (Score:2)
Simulation testing is very difficult. It is many orders of magnitude slower than the actual device. At some point, you have to ask "should we do 2 more months of simulation on this or just spend a million $ or so to fabricate some samples with the newest tiny geometry?" So you fabricate, find 50 errors missed in simulation, fix those and start simulations again. Fabricate again (whoops! there goes another million) and find that there are flaws caused by the fixes, flaws hidden by the previous flaws, newly d
Re: (Score:2)
As with software, one should wait until the product has had its first revision.
Oddly, we think of Intel CPUs and chipsets as rock solid and operating systems as garbage based on Vista, ME, and 8.1. Perhaps doing the same and buying older hardware would be wise too.
My Gigabyte board, for example, I am disappointed in, and the same goes for Asus when Z97/Haswell was new. Both are top brands, but both were extremely unstable and buggy. The Asus Sabertooth is unusable, and the Gigabyte got somewhat stable after four updates.
Re: (Score:2, Interesting)
Everything is getting faster. Development cycles are getting shorter, schedules are getting tighter, margins are being trimmed down and testing is taking some of that hit. Software is already brutally paced to the point that customers are now performing QA. We're having to train our customers how to use Bugzilla and we somehow accept this as "Ok". Eventually the pacing will become so brutal that version 2 won't even use the same codebase as version 1. Posting bugs will become useless. Software development v
Re: Lack of competition (Score:2)
I find neither Gigabyte nor Asus to be "top" motherboard manufacturers. At best they are premium value boards (cheap boards with some premium features enabled). I have found them consistently to be buggy and sometimes even outright useless. The last time I bought them, I actually returned an Asus board because it 'supported' ECC RAM but didn't actually implement it (simply disabled it).
I buy SuperMicro boards, not always on the edge but consistently configurable and very good support if any bugs do arise. I
Re: (Score:2)
I've found Gigabyte to be okay, but I've never understood why people like Asus so much. Their stuff is way too flaky and unreliable to command the premium prices you'll pay for it. It's too bad that Intel stopped making motherboards (at least ones in standard form factors). They generally weren't terribly friendly to overclockers and could be a bit conservative on what settings they exposed but they tended to be pretty stable and well supported.
Re: (Score:2)
Re: (Score:2)
I told him I had canceled the PC orders I placed and would not buy more of them until the situation was resolved. A short while later, Intel changed their tune and also started being more open about the bugs in their processors.
Before I was born, Britain had never had a female prime minister, America had never had a black president, and the Shah still ruled Iran.
My birth clearly changed all of this...
Re: (Score:2)
For a given value of performance expectation, as purchased.
One might be a little bit cheesed to discover that the entire hardware floating-point subsystem has been replaced with an on-chip emulator, which additionally wires down half of your L2 cache to host the microcode execution vectors and/or byte codes.
In the spirit of good will and transparency, I hope to see Intel recirculate the original sample chips to all the hardwa
Re: (Score:3)
That is why I never buy the (new/lat)est stuff. I'll get the old and more stable stuff.
I'm typing this on an AMD machine (Score:2)
you insensitive clod!
Nothing to worry about (Score:2, Offtopic)
It probably just means the NSA is already using your processor's compute capacity as part of their vast decryption botnet. The fix should improve resource management so you won't notice it in future.
Re: (Score:1)
Thanks, that got me one step closer, and then I found these on the Intel website:
http://www.intel.com/content/www/us/en/processors/core/core-i3-processor.html?wapkw=skylake
http://www.intel.com/content/www/us/en/test/manju-test/core-i5-processor.html?wapkw=skylake
So, it looks like the processor names (what I can find in the system specs) are i3-6x00 and i5-6x00, etc.
I don't have anything that new, so I am OK.
When hardware must just work (Score:5, Informative)
Re: (Score:2)
I know the 720p version of this movie would send one Intel multi-core CPU into shutdown. That was with 3D TV and an NVidia 3D Vision setup. The same graphics boards and display had no problem with another motherboard/CPU combination. Still wondering whether it was the CPU or the cooling. No problems with anything GPU related.
http://www.3dtv.at/Movies/Skyd... [3dtv.at]
Re:When hardware must just work (Score:5, Interesting)
I work in ASIC design, though I am on the analog side of things. There are more people doing verification than design, by roughly 2:1. I am told that in the smaller nodes and more complex designs the ratio is even higher. Basically you can slap down some RTL code (Verilog or VHDL) quickly, but torturing it through all exceptions is very hard. Then you have to synthesize and build it, which can introduce all sorts of timing and parasitic problems that have to be double-checked. Finally, test vectors have to be created to double-check the functionality of every transistor in the design to assure that what was built matches the masks.
It is truly phenomenal that anything with billions of gates ever works at all, let alone with the high yield and relatively low error count we have come to expect.
Re:When hardware must just work (Score:5, Interesting)
I've done this.
First, billions of transistors is actually easy - most of the transistors in a modern CPU are actually spent on caches and other memory. Logic itself doesn't have as high a transistor density as you might think. In fact, in practically all ASIC designs there's so much extra silicon space that they put extra gates there that do nothing but are tied to a logic value. These spare transistors serve to provide "rework" room for the design. If you look at most steppings, you start with A0, then you have A1, A2, ... B0, B1, ... etc. Well, going from A0 to A1 is basically just a metal mask change - they don't change the transistor masks (each mask costs around $100K, and 10-layer metal designs often have 30+ masks, so a $3M cost before the first silicon is patterned). Instead, they rewire the transistors using this spare sea of transistors to fix the issues - hopefully only needing to change 5, maybe 10 masks tops ($1M). When you go from Ax to B0, that implies a completely new mask set - either there are too many fixes, or the design is being revised.
As for simulation, it's multi-stage. First each block is individually tested and simulated, then it's all brought together and software-simulated to check for easy-to-spot faults and to have full inner visibility into why things are the way they are. The complexity of modern CPUs and SoCs means this runs at only around 1 Hz, usually less, so it's reserved for initial testing and sanity-checking test vectors.
The next step is to put it on an accelerator - systems like Cadence's Palladium, which can get your clock speeds up to the hundreds-of-Hz range. The simulation isn't as visible and the timings can be off, but you can functionally check most of the blocks and, with careful probe design, bring error cases back to the software model to understand what's going on.
The next stage is FPGA simulation - you're testing the logic itself on FPGAs (we're talking about the ones that easily cost $30K each, and no, you need at least 4 or 8 of them or more - that's a quarter million dollars in FPGAs!). But the system moves into the kHz range, even up to 1 MHz, which despite its slowness is actually fast enough to boot an OS like Windows or Linux or run test software, so software development for drivers and such can begin. Visibility is limited to whatever probes you could install and whatever debugging tools your FPGA toolset has.
Then it's all laid out and routed and all that, and software simulations are run to verify timings - ensuring there are no setup and hold violations in the final floorplan.
And it's not as bad as you think - each block is quite independent and as long as the interface contract is held (setup and hold, timings and other things for the block), the tools will tell you how close you are to violating the specs for each block. So you can test each block in isolation and as long as the interface contract is held, be assured it will work.
Of course, it won't catch integration errors like ground bounce or other such things. It's more akin to building a space shuttle or an airplane - with the right design, you can get something that works.
Re: (Score:1)
As a CPU designer, those were my thoughts as well. tlhIngan is/was probably involved in ASIC design but doesn't understand the significance of verifying architectural correctness or the sophistication of DFT practices.
Re: (Score:1)
learn to write mom(s) in the possessive form, douchebag's.
I do not have a dog in this fight, but I would just like to point out that "mom basement" is as valid a term as "man cave".
Re: (Score:2)
Those would be rather gross synthesis or layout errors. These kinds of "miswirings" are almost impossible with properly modularized HDL. If you get the "memory affects registers" kind of a bug, there's something very wrong somewhere in the tools, but it's super unlikely that it'd be a design error.
Re: (Score:2)
Do you have a Czar of Bandgaps, and do you dread temperature-dependent startup problems yet? :)
Does the same thing with Autocad (Score:1)
Just got an MSI with 32GB of RAM and a Skylake processor because I need to manipulate large Autocad files. For no reason, my laptop would lock up and nothing would be in the dump logs. I could not figure it out... until now.
Re: (Score:1)
You were running windows?
Re: (Score:2)
I think you might want to look elsewhere for your problems. I've got an MSI z170a motherboard, an i7 6700K and 32GB RAM, which I use to manipulate large Autocad files... and I have had absolutely no issues at all.
Re: (Score:2)
Why TFA states BIOS? (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Intel needs to push the microcode update through the BIOS. You can't do it via an OS update. So hopefully your motherboard manufacturer picks up on this.
How often does Joe Sixpack update his BIOS? I mean, really? It makes sense to patch the CPU at startup, as most of these users have updates enabled by default because their computer came that way when they turned it on.
Re: (Score:2)
Re: (Score:2)
No, they don't. Yes you can. Maybe - meh.
That's one hell of a heat sink (Score:2)
The CPU makes the PC freeze? If they could just crank this bug down a bit, it could revolutionize the server cooling industry.
At 3ghz 1 in a billion is 3 times a second (Score:5, Interesting)
Just saw this video
https://www.youtube.com/watch?v=eDmv0sDB1Ak
Gives some insight into the insanely complex nature of processor design and how absurdly reliable processors need to be. Modern computers pretty much expect the CPU to be flawless, and that's a daunting task considering their complexity and the staggering number of computations they perform even in ordinary day-to-day use.
An error that occurs once in a billion operations will happen 3 times a second at 3 GHz.
So yeah. Some bugs are gonna happen. Thankfully most can be fixed with microcode updates.
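For what it's worth, the back-of-the-envelope arithmetic behind that headline figure, assuming one operation per clock cycle (itself a simplification):

$3 \times 10^{9}\ \text{ops/s} \times 10^{-9}\ \text{errors/op} = 3\ \text{errors/s}$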
Re: (Score:2)
No, it doesn't work like that. (Score:1)
Most processor bugs have nothing to do with the frequency of execution; they're caused by a unique set of circumstances. So when someone says it will happen once out of every billion operations, they're making the assumption that you will set up that unique case one out of every billion times. This depends heavily on what you're doing with the processor. For example, this bug is a math-related operation, and chances are that if you put it in one of Google's or Netflix's web servers it would never hit the bug for th
Phew, OS X is safe! (Score:1)
Well, count my lucky stars that OS X isn't affected! Mac master race wins again! I'm guessing there are no Prime95 Mac users, so therefore I must be safe, right? Right?
On a slightly more serious note, how does one BIOS-update the CPU on a Mac? Does Apple roll it into their updates? Just curious.
Re: (Score:2)
Apple calls these sorts of things "firmware updates" (yes, that is a generic name). Things like this are included, as are updates for Ethernet chipsets, FireWire routers (there are 3 on the Mac Pro), and even, rarely, firmware for the GPU. Additionally, there are sometimes "SMC" updates for the part of the computer that manages power and sleep behavior.
Re: (Score:2)
Guilty as charged, but I'm going to go with "Everyone, I found the OS X manifestation of this bug!!"
More correct than you realize (Score:2)
Correct, but for the wrong reason:
There are currently no Apple products that utilize a Skylake CPU.
Correction (Score:2)
I was incorrect. The 2015 iMac has a Skylake CPU.
Re: (Score:3)
How exactly does one use "Fast Fourier Transforms to multiply extremely large numbers" and when exactly did Prime95 become an industry?
The most common way to multiply numbers larger than the register size of the machine (e.g., 4000-bit numbers) is to express them the way most people multiply numbers of more than one digit, relative to some base R:
(c0 + c1*R + c2*R^2 + c3*R^3 + ...) * (d0 + d1*R + d2*R^2 + d3*R^3 + ...) = (p0 + p1*R + p2*R^2 + p3*R^3 + ...)
where R is 10 for humans; for a computer, R is some power of 2 (because computers like that).
A basic observation of the math is that the product of digits computed this way is very similar to a linear convolution
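For the curious, here is a minimal sketch of that digit-convolution idea in Python with NumPy. The radix, helper names, and test values are made up for illustration; real code such as Prime95 chooses the transform size and radix to keep floating-point rounding error under control, which this toy version does not attempt.

import numpy as np

def fft_multiply(a, b, radix=256):
    # Split each nonnegative integer into base-`radix` digits,
    # least significant first: a == sum(ca[i] * radix**i).
    def digits(n):
        out = []
        while n:
            out.append(n % radix)
            n //= radix
        return out or [0]

    ca, cb = digits(a), digits(b)

    # Pad to a power of two big enough to hold the full linear convolution.
    size = 1
    while size < len(ca) + len(cb):
        size *= 2

    # Convolving the digit sequences is the same as multiplying the
    # polynomials in R above; the FFT turns that convolution into a
    # cheap pointwise multiplication in the frequency domain.
    fa = np.fft.rfft(np.array(ca, dtype=float), size)
    fb = np.fft.rfft(np.array(cb, dtype=float), size)
    prod = np.fft.irfft(fa * fb, size)

    # Round each convolution output back to an integer digit and recombine.
    # (Python ints are arbitrary precision, so explicit carry propagation
    # isn't needed here; real implementations carry word by word.)
    return sum(int(round(float(x))) * radix ** i for i, x in enumerate(prod))

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321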
Re: (Score:2, Informative)
That's the technical explanation, but the mathematical one is actually fairly simple - you convert the multiplication to an addition. There are several ways to do this - logarithms are one common way (A*B = inverse log(log(A) + log(B)) ), but so is convolution, or realizing that addition and multiplication in say, the time domain becomes multiplication and addition in the frequency domain, respectively.
So if you have two numbers, you do the FFT of them to convert the domains, then you add them up, and then
Re:Prime95 is now an industry? (Score:5, Informative)
FWIW, your "mathematical" explanation is totally bogus. You appear to have literally no idea what you are saying.
The reason the FFT works for modular multiplication of *integers* with thousands of bits is that you can pick a radix and a convolution size for the multi-digit convolution where you don't lose any precision in those thousands of bits. Using a "logarithm" algorithm would require nearly 10x the precision to do modular multiplication on integers, and using hw floating point (even long doubles) would be totally useless because it doesn't carry that much precision.
Also, addition and multiplication in the time domain does NOT magically become multiplication and addition in the frequency domain. Convolution in the time domain becomes multiplication in the frequency domain (that's how the FFT algorithm works: FFT multiply iFFT becomes cheaper than digit convolution when the size of the problem becomes large).
Finally, although it might be technically possible to use the DCT in a typical video decoder to do some trivial digit convolution, the precision of a typical video decoder's DCT is only 14-16 bits and limited to 8 points, which isn't enough precision to do squat for the modular multiplication needed to search for very large Mersenne primes (which is what the Prime95 program does). Of course you can't even get to the 1D DCT used in GPU hardware accelerators (they are generally hardwired to do 2D DCT only, and modern compression algorithms don't even use the DCT anymore).
Sorry to rain on your parade, but leaving stream of consciousness BS like that around unchallenged risks it getting modded up and makes it harder for people to distinguish the real shit from the BS...
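To back up the convolution-theorem point with numbers, here's a quick NumPy check that convolving two sequences directly matches pointwise multiplication in the frequency domain; the sequences are arbitrary, purely an illustration.

import numpy as np

a = np.array([3.0, 1.0, 4.0, 1.0])
b = np.array([2.0, 7.0, 1.0])

# Direct linear convolution of the two sequences.
direct = np.convolve(a, b)

# Same thing via the FFT: zero-pad so circular convolution equals
# linear convolution, multiply pointwise, transform back.
n = len(a) + len(b) - 1
via_fft = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

assert np.allclose(direct, via_fft)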
Re: (Score:2)
It's incorrect to say that the FFT is "using sines". The FFT uses complex exponentials as basis functions, while the DCT uses real cosine functions. The major practical difference between the two is the discontinuities at the boundaries present in the FFT but absent in the DCT. That's what makes the DCT easier to apply in compression jobs.
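For reference, the basis functions in question, with the DCT written in its common DCT-II form (the specific variant is an assumption; the parent doesn't name one):

$\text{DFT: } X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N} \qquad \text{DCT-II: } X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right) k\right]$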
Re: (Score:2)
are we at risk? (Score:2)
While the bug was discovered using Prime95, it could affect other industries that rely on complex computational workloads, such as scientific and financial institutions.
How about porn?