Intel Math Hardware

Where Intel Processors Fail At Math (Again)

rastos1 writes: In a recent blog post, software developer Bruce Dawson pointed out some issues with the way the FSIN instruction is described in the "Intel® 64 and IA-32 Architectures Software Developer's Manual," noting that the result of FSIN can be very inaccurate in some cases when compared to the exact mathematical value of the sine function.

Dawson says, "I was shocked when I discovered this. Both the fsin instruction and Intel's documentation are hugely inaccurate, and the inaccurate documentation has led to poor decisions being made. ... Intel has known for years that these instructions are not as accurate as promised. They are now making updates to their documentation. Updating the instruction is not a realistic option."

Intel processors have had a problem with math in the past, too.
This discussion has been archived. No new comments can be posted.

  • by Spy Handler ( 822350 ) on Friday October 10, 2014 @03:52PM (#48114779) Homepage Journal

    with new maths

    • by Falos ( 2905315 ) on Friday October 10, 2014 @03:57PM (#48114827)
      1+1=3 for particularly large values of 1
  • What this mean... (Score:5, Interesting)

    by __aaclcg7560 ( 824291 ) on Friday October 10, 2014 @03:53PM (#48114789)
    I should get an AMD CPU and put the extra money towards a graphics card since GPUs do math extremely well in parallel.
    • by Austerity Empowers ( 669817 ) on Friday October 10, 2014 @04:25PM (#48115055)

      I would test that theory first. I have a hunch some GPUs are going to take shortcuts with math that someone like the guy who wrote this article will object to.

      • Re:What this mean... (Score:5, Interesting)

        by ais523 ( 1172701 ) <ais523(524\)(525)x)@bham.ac.uk> on Friday October 10, 2014 @06:45PM (#48116255)
        GPUs used to take mathematical shortcuts all the time. More recently, though, with the scientific community starting to use GPU clusters for computation, the main GPU manufacturers have been adding mathematically precise circuitry (and may well use it by default, on the basis that there's no point in having both an accurate and an inaccurate codepath).
      • Re:What this mean... (Score:5, Informative)

        by Frobnicator ( 565869 ) on Friday October 10, 2014 @07:04PM (#48116375) Journal

        You might take a look at the article and at Intel's reply.

        The issue is in sine, cosine, and similar trig functions, with an actual error of 4e-21. That error scales, of course.

        Intel's documentation change basically says you should scale and reduce your numbers first before running the functions.

        Consider what that level of error means. If you were measuring with a meter stick, you could be measuring an electron's charge radius with several precision bits left over. If you were measuring the distance between the Sun and Proxima Centauri, you could do it in millimeters and have accuracy to spare.

        Even though I've run HPC simulations most of my career, we've seldom needed more than around six decimal digits of precision; that's akin to variations of human hair width when working at the meter level. It's only a problem when someone throws some strange scale into the mix; we're running physics on the kg-m-s scale, and suddenly someone complains that their use of microseconds and nanometers breaks the physics engine. We answer simply, "Yes. Yes, it does." If you need to operate in both scales, you need a different library that handles it.

        Finally, even the actual article admits this is mostly about documentation. "The absolute error in the range I was looking at was fairly constant at about 4e-21, which is quite small. For many purposes this will not matter. ... for the domains where that matters the misleading documentation can easily be a problem." He then points out that a bunch of existing math libraries know about it. He mentions that high precision libraries have different solutions and always have. He mentions that most scientists who need it use better, high precision libraries. And he details that this is really just the rough approximation done on the FPU, which already plays fast-and-loose by switching between 53-bit and 64-bit floating point values and has been documented as being only good for that kind of approximation since the 1980s. Everybody who works professionally with floating point for any amount of time already knows that the entire x86 family (including AMD and Intel), dating back to the original coprocessor, is terrible if you need high precision.
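        As a quick sanity check of those scales (a rough Python sketch; the 4e-21 figure is the article's absolute error, while the light-year and 4.24-light-year values are rounded reference figures, not anything from the thread):

          err = 4e-21                      # absolute error quoted from the article
          light_year_m = 9.4607e15         # meters per light-year (rounded)
          proxima_m = 4.24 * light_year_m  # Sun to Proxima Centauri, roughly 4e16 m

          print(err * 1.0)        # over a 1 m measurement: ~4e-21 m, far below atomic scales
          print(err * proxima_m)  # over the Sun-Proxima distance: ~1.6e-4 m, a fraction of a millimeter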

    • by rasmusbr ( 2186518 ) on Friday October 10, 2014 @04:26PM (#48115067)

      AMD CPUs reportedly return exactly the same values as Intel CPUs. I'm guessing they do so for compatibility reasons, so that any workarounds that software developers have implemented work as expected.

      • AMD CPUs reportedly return exactly the same values as Intel CPUs.

        What, for transcendental functions? That's both impractical and useless. No "compatibility reasons" could potentially justify this. If you take the table maker's dilemma into consideration, there's absolutely no reason to standardize on specific implementations of transcendental functions - there's no fundamental simple way of "doing them right". I don't believe that IEEE-754 even standardizes anything beyond basic operations, and for good reason; the thing you're proposing could easily lull numerical develop

        • by ais523 ( 1172701 )
          As a simple example, some games use a log of actions in order to store their save files; they replay the actions in order to reconstruct the state of the game. Even if those actions involve floating-point computations, the saves are still typically reproducible with the same executable (given that these games are normally written to avoid the use of uninitialized memory, and the like). If a processor starts handling floating-point differently, suddenly everyone's save files will be broken.
      • Compatibility is the reason these instructions will never be fixed. AMD implemented correct behavior in the K5 family. It was then reverted to preserve compatibility.
    • by armanox ( 826486 )

      Get a processor that isn't crippled in the Floating Point department, if you really care that much.

      Intel sucks less than AMD does on that one (1/3 speed on Core i-series (1st-3rd gen tested) vs 1/5 on FX). FP performance is much better on other types of processors - MIPS, Itanium, and POWER all show much better FP performance than x86-based processors (and SPARC runs faster than AMD).

      • Do you have any benchmarks to back that up? The ones I see from SPEC [specbench.org] pretty much have Intel dominating in FP (compared to what few Itanium and POWER results there are).

      • by Aryden ( 1872756 )
        This is not easily done when you look at enterprise-issue equipment. Dell pushes Intel over anything else, and that's what corps are getting thousands of.
    • Re: (Score:2, Interesting)

      Comment removed based on user account deletion
      • Intel has superior process technology, which results in lower power dissipation. At $0.12/kWh, a 1-watt difference is about $1 a year.
        • Re: (Score:3, Interesting)

          Comment removed based on user account deletion
          • In an office environment it adds up fast when you have 100 computers. Also, you need a more expensive power supply if you go with AMD systems. True, the first i7 sucked 200 watts of power, but the 4770K is really efficient.

    • Actually, GPUs do maths very inaccurately. They use non-standard floating point implementations that trade accuracy for speed.

      • Wrong, at least the new cores are IEEE-754 compliant. AMD's GCN cores give exact results wherever exact results are guaranteed by the standard. Presumably nVidia is doing the same, otherwise they wouldn't be putting Teslas into supercomputers, now would they?
        • by suutar ( 1860506 )

          Perhaps, but keep in mind the standard doesn't specify everything. According to TFA, this Intel issue with fsin does not technically violate the IEEE-754 standard on how sin() should be calculated.

    • But generally only single-precision. GPUs are made for speed, not accuracy - gamers want their frame rate high, and don't care if a few pixels are shifted one space to the right.

  • by i kan reed ( 749298 ) on Friday October 10, 2014 @03:56PM (#48114823) Homepage Journal

    The main goal for floating-point coprocessor sine calculations is to get a good enough result in a set number of cycles.

    Given that fully approximating sine takes about as many concrete operations as bits in the value, getting it exactly right isn't usually a trade-off people want to make.

    There's a reason the C standard specifies that mathematical trig functions are platform dependent. If you want it precise, do it yourself to the level of precision you need.

    • by TWX ( 665546 ) on Friday October 10, 2014 @04:04PM (#48114891)
      From what I gather, though, the problem is that Intel didn't acknowledge in the documentation how poor the instruction was for scientific use. This is fine for home and probably most general-purpose business use, but it becomes a problem when accuracy is more critical. If those who develop software that relies on sine functionality don't know about this, then errors in the results of their programs will actually matter.

      This won't matter to a gamer playing some first-person shooter.
      • Well, they would if the shooter were designed to apply, say, the character rotation as a delta versus as an absolute. That operation uses a lot of sin/cos; most games are designed such that the angle is stored, the delta updates the angle, and the rotation is reapplied on update. Versus rotating the vertices based on the delta from the update, and saving the result (until the next update). Do the latter too much and eventually your object looks like poo. Mathematically, it's perfectly acceptable, but practic

        • If you are doing any floating point calculations and assuming exact results, you're going to get yourself in trouble. The issue is that FSIN is less accurate than advertised, not that it's not 100% accurate.

          Anyone who deals with floating point math very quickly learns about error accumulation and how to deal with it.
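          A tiny Python illustration of that accumulation (the repeated-0.1 sum is the classic example; math.fsum is the standard library's compensated summation):

            import math

            vals = [0.1] * 10

            print(sum(vals))         # 0.9999999999999999 -- naive summation accumulates rounding error
            print(math.fsum(vals))   # 1.0 -- compensated summation keeps the error in check
            print(sum(vals) == 1.0)  # False, which bites anyone assuming exact results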

        • The problem is with sin(x) for x near pi. Range reduction subtracts pi, and the value of pi doesn't have enough bits in the Intel fsin instruction. In gaming, nothing is going to rotate pi radians between updates, so this deficiency won't show up in gaming applications where incremental rotations are used.
      • by ledow ( 319597 ) on Friday October 10, 2014 @04:43PM (#48115197) Homepage

        Sorry, but anyone relying on this for scientific use where the answer matters should be using software that gives them the accuracy they want, and, ultimately, they are the only people who will realise whether the result is correct "enough" for their process.

        Some idiot researcher who expects Excel or an FPU instruction to be accurate for sin to more than 10 decimal places is going to turn up SO MANY anomalies in their data that they'll stick out like a sore thumb.

        Nobody serious would do this. Any serious calculation requires an error calculation to go with it. There's a reason that there are entire software suites and libraries for arbitrary precision arithmetic.

        I'm a maths graduate. I'll tell you now that I wouldn't rely on an FPU instruction to be anywhere near accurate. If I was doing anything serious, I'd be plugging into Maple, Matlab, Mathematica and similar, which DO NOT rely on hardware instructions. And just because two numbers "add up" on the computer, that's FAR from a formal proof or even a value you could pass to an engineer.

        Nobody's doing that. That's why Intel have managed to "get away" with those instructions being like that for, what? Decades? If you want to rotate an object in 3D space for a game, you used to use the FPU. Now you use the GPU. And NEITHER are reliable except for where it really doesn't matter (i.e. whether you're at a bearing of 0.00001 degrees or 0.00002 degrees).

        Fuck, within a handful of base processor floating point instructions you can lose all accuracy if you're not careful.

        • by Shinobi ( 19308 )

          "I'll tell you now that I wouldn't rely on a FPU instruction to be anywhere near accurate. If I was doing anything serious, I'd be plugging into Maple, Matlab, Mathematica and similar who DO NOT rely on hardware instructions. And just because two numbers "add up" on the computer, that's FAR from a formal proof or even a value you could pass to an engineer."

          That depends on what kind of FPU you are using. The Power 6/Power 7 Decimal Floating Point unit is sufficiently accurate for engineering use.

        • Hello,

          As a maths grad working with computers, you probably have to rely on documentation for any tool you're using, right? The article is claiming the documentation is inaccurate. If we can't rely on the documentation to be accurate, what can we rely on? Maple, Matlab, and Mathematica ALSO rely on the documentation being accurate. If they told you one precision, and you got another, might you not complain, and want that information widely spread so they're more apt to fix it?

          Also, I've noticed that Math

    • Personally I think that floating point binary has its advantages as it allows you to do lots of calculations really fast. However, with the number of financial and money processing applications out there, it's amazing that more languages don't have better support for decimal numbers. Even simple numbers like 0.1 can't be properly represented with floating point numbers. .Net has a native data type called decimal [microsoft.com] that uses decimal floating point and is accurate to 28 or 29 digits, which makes it a gr
      • by Anonymous Coward

        .Net has a native data type called decimal [microsoft.com] that uses decimal floating point and is accurate to 28 or 29 digits, which makes it a great thing to use when dealing with money. I wish more languages would support something similar.

        They do:
        https://docs.python.org/2/library/decimal.html
        http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

        Just because your world is limited to .NET does not mean there aren't other things out there... Did you even bother looking it up?
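        For reference, a short sketch of the Python decimal module linked above (its default context precision is 28 significant digits, comparable to the .NET decimal type mentioned in the parent):

          from decimal import Decimal, getcontext

          print(0.1 + 0.2)                        # 0.30000000000000004 -- binary floats cannot represent 0.1 exactly
          print(Decimal("0.1") + Decimal("0.2"))  # 0.3 -- exact in decimal arithmetic

          getcontext().prec = 28                  # the default precision; adjustable per context
          print(Decimal(1) / Decimal(7))          # 0.1428571428571428571428571429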

      • This is the #1 reason that banks use COBOL, and IBM makes Power processors with high-speed BCD arithmetic instructions.
        • The x86 instruction set has BCD instructions too. I don't think they see much use now - processors are fast enough that you can do decimal math without them. But they were needed at the time.

        • by itzly ( 3699663 )
          BCD makes no sense in the real world. It's much easier to process everything in binary, and only do the decimal/binary conversion at the input and output, which is going to be bandwidth-limited anyway.
    • by raymorris ( 2726007 ) on Friday October 10, 2014 @04:12PM (#48114957) Journal

      The documentation says that the result will be correct up to the last decimal place. So if the CPU says the answer is:

      0.123 456 789 123 456 789

      You have a close approximation, accurate to the 17th decimal place, according to the documentation.
      The problem is, the correct answer may be:
      0.123 444 555 666 777 888

      The documentation says it's fairly precise. In truth, it's only good to the fourth decimal place in some cases, whereas Intel documented the function to be accurate to 66 bits or so.

      • Well, no.

        From TFA, the absolute error closely approximates 0.000000000000000000004.

        So you'll only see a relative error as large as you're showing (off in the fifth decimal place), if the correct answer is something like 0.000000000000000012345, which might show up as 0.000000000000000012344.

        • by raymorris ( 2726007 ) on Friday October 10, 2014 @05:37PM (#48115749) Journal

          Here's an example from TFA:

          tan(1.5707963267948966193)

          actual:   -39867976298117107068
          x87 FPU:  -36893488147419103232
          error:    743622037674500958.81 ulp

          • tan(1.5707963267948966193)

            And the example is rubbish.

            That number 1.5707963267948966193 is a decimal number. Before you get anywhere near calculating the tangent of that number, you have to convert it to a binary number. The error in that conversion will be about 2^-64. That means the argument that you pass to the FTAN instruction isn't actually 1.5707963267948966193, but a number that is different from this by up to 2^-64. The error that you get by not passing 1.5707963267948966193 but a slightly differ

    • by Beck_Neard ( 3612467 ) on Friday October 10, 2014 @04:28PM (#48115083)

      The error is not small. If you read the article, on certain very reasonable inputs (not pathological at all), you can sometimes wind up with only _four_ bits being correct.

      Many scientific applications absolutely depend on fast hardware sine implementation. As you said, getting it exactly right isn't a tradeoff that people usually want to make.

      This has nothing to do with the C standard. Intel's own documentation was incorrect, making 'YMMV' completely moot.

      • by whit3 ( 318913 )

        The error is not small. If you read the article, on certain very reasonable inputs (not pathological at all), you can sometimes wind up with only _four_ bits being correct.

        The issue here is that any computed sine value outside the first quadrant (input values 0 to pi/2) is computed by reducing the input quantity. The function is periodic, so adding or subtracting any multiple of the period (2 pi) from the input value is mathematically valid. So, the error is made to be small for each value in that fir

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      The problem isn't caused by the actual sine calculation, but by the preparatory range reduction, where the input value is mapped to the +/-PI/2 interval. It appears that a "hard coded" approximate value of PI is the culprit, because the approximation is only accurate to 66 bits, but for the correct result it would have to be correct to 128 bits. AMD at one point made processors which used the full precision value of PI and returned correct results for fsin(x). It broke software, so AMD "fixed" it by breaking it in microcode.

      • Pi? No wonder! Pi is wrong [youtube.com]. They should be using Tau.
      • I recall working with numerical methods from about 40 years ago, and all of the calculations that required a call to sin were range reduced to the region of +/- pi/4 anyway. The reason is that the Taylor series expansions for sine and cosine are most accurate in the region of zero, and for values in excess of pi/4, it is more accurate to do a transformation and implement a different call.

        It is likely that serious numerical code already handles this condition inside its internal algorithms.
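        A minimal Python sketch of why that reduction matters: a Maclaurin series for sine is only trustworthy near zero (purely illustrative; this is not the polynomial any real libm or FPU uses):

          import math

          def taylor_sin(x, terms=10):
              # Maclaurin series for sin(x); accurate only for small |x|
              result, term = 0.0, x
              for n in range(terms):
                  result += term
                  term *= -x * x / ((2 * n + 2) * (2 * n + 3))
              return result

          print(taylor_sin(0.5), math.sin(0.5))    # agree to full double precision near zero
          print(taylor_sin(30.0), math.sin(30.0))  # wildly wrong without range reduction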

        • I recall working with numerical methods from about 40 years ago, and all of the calculations that required a call to sin were range reduced to the region of +/- pi/4 anyway. The reason is that the Taylor series expansions for sine and cosine are most accurate in the region of zero, and for values in excess of pi/4, it is more accurate to do a transformation and implement a different call.

          If you read the article, that's what the Intel processor does. Instead of an infinitely precise value of pi, it uses pi rounded to 66 bits (which is 13 bits more than normal double precision arithmetic would use, and 2 bits more than extended precision arithmetic would use). So all these people getting all excited about an error in Intel's FPU most likely wouldn't be capable of implementing the sine function anywhere near as precisely as FSIN does.

      • > AMD at one point made processors which used the full precision value of PI and returned correct results for fsin(x). It broke software, so AMD "fixed" it by breaking it in microcode.

        Do you have a link for that please? Thanks.

      • So, manually subtract a more accurate value of pi from the argument and pass that to fsin()? Sounds trivial enough for anyone really concerned about accuracy and speed.

    • Are the FSIN results more or less accurate than the trig tables inside the covers of math textbooks?
    • If you want it precise, do it yourself to the level of precision you need.

      People just don't realize that FPUs are **inherently** approximations - anyone's FPU; it's not Intel-specific. There are inaccuracies converting to and from binary, there are inaccuracies depending on the relative magnitude of operands, there are inaccuracies due to rounding, etc ...

      Want to know one way to tell if a calculator app is implemented using the FPU? Try 0.5 - 0.4 - 0.1; you may not get zero if an FPU is used. That is why handheld calculators often implement calculations using decimal math rather than binary.
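      That test is easy to reproduce in Python; with binary doubles the result comes out as exactly -2^-55 rather than zero, while the standard library's decimal module behaves like a decimal calculator:

        from decimal import Decimal

        print(0.5 - 0.4 - 0.1)                                   # -2.7755575615628914e-17, i.e. -2**-55, not zero
        print(Decimal("0.5") - Decimal("0.4") - Decimal("0.1"))  # 0.0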

    • Given that fully approximating sine takes about as many concrete operations as bits in the value

      I'm not sure that this is even true - the way I understand it, nobody even knows how many operations it takes in the general case, if by "fully approximating sine" you mean "returning a result no more than half-ulp from the correct value for every valid input".

      • I'm not sure that this is even true - the way I understand it, nobody even knows how many operations it takes in the general case, if by "fully approximating sine" you mean "returning a result no more than half-ulp from the correct value for every valid input".

        That's actually solved (for double precision). Google for crlibm.

    • I know nothing of processing, at least at this level. You often post good stuff here. Can you explain why computer processors cannot do the functions in the same way that calculators do? Maybe I'm asking the question in the wrong way, but you know what I mean (I hope).
      • by vakuona ( 788200 )

        They can. But they make tradeoffs because computers are expected to do the same calculations much faster (or were in the past, and now have to keep making the same mistakes for compatibility reasons).

      • I know nothing of processing, at least at this level. You often post good stuff here. Can you explain why computer processors cannot do the functions in the same way that calculators do? Maybe I'm asking the question in the wrong way, but you know what I mean (I hope).

        The whole problem is that you may have been reading an article written by a blogger who made some rather idiotic assumptions about the precision of the sine function.

        Here's the reality: If you look at posts on stackoverflow.com you will find to your horror that many "programmers" apparently don't even know the difference between "float" and "double" and do calculations with six digits of precision instead of 15, out of stupidity. Intel processors also support "extended precision" which gives you >18 digits

  • Bad intel (Score:3, Funny)

    by Tablizer ( 95088 ) on Friday October 10, 2014 @04:04PM (#48114895) Journal

    We already know it's a sin to eat pi.

  • Inaccurate headline (Score:5, Informative)

    by Loki_1929 ( 550940 ) on Friday October 10, 2014 @04:20PM (#48115005) Journal

    The headline is quite inaccurate. The processors are doing what they're designed to do: approximate the results of certain operations to a "good enough" value to achieve an optimal result-to-work ratio. Sort of like how the NFL measures first downs with a stick, a chain, and some eyeballs rather than bringing in a research team armed with scanning electron microscopes to tell us how many Planck lengths short of the first down they were.

    This is a documentation failure. They're fixing the documentation. For anyone who would actually care about perfect accuracy in these kinds of operations, there are any number of different solutions to achieve the desired, more accurate result. The headline and the summary make it seem as though there's a problem with the processor which is simply incorrect.

    • For anyone who would actually care about perfect accuracy in these kinds of operations, there are any number of different solutions to achieve the desired, more accurate result.

      ...like using an AMD K5? [computer.org]

      ;-)

    • by fermion ( 181285 )
      The headline is accurate. For instance, if you have a student calculate a value and they give you 10 decimal places, and only four decimal places are correct, then that student has failed at math. The most fundamental error one can make when reporting values in math is reporting more accuracy than is justified by the computational method. The Intel manual says that all digits reported, except for the last, are accurate. Since this is not the case, Intel has failed at math. Even with the change in the man
      • by Ichijo ( 607641 )

        Unfortunately, IEEE 754 doesn't provide a way to indicate the level of precision (number of significant figures) of the answer.

  • The Intel engineers watched Superman III, and they have a plan.

    • The Intel engineers watched Superman III, and they have a plan.

      They're going to override the security?

  • Division is futile. You will be approximated.
  • and the chip ain't one.
  • Round off the usual suspects...

  • by jmv ( 93421 ) on Friday October 10, 2014 @08:42PM (#48116983) Homepage

    There's nothing I find particularly alarming here and the behaviour is in fact pretty much what I would expect for computing sin(x). Sure, maybe the doc needs updating, but nobody would really expect fsin to do much better than what it does. And in fact, if you wanted to maintain good accuracy even for large values (up to the double-precision range), then you would need a 2048-bit subtraction just for the range reduction! As far as I can tell, the accuracy up to pi/2 is pretty good. If you want good accuracy beyond that, you better do the range reduction yourself. In general, I would also argue that if you have accuracy issues with fsin, then your code is probably broken to begin with.
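    To put a rough number on that, here is a Python sketch comparing reduction by the rounded double-precision 2*pi against an essentially exact reduction done with rational arithmetic (the 50-decimal 2*pi literal and the 1e15 test value are just illustrative choices, not anything from the article):

      import math
      from fractions import Fraction

      TWO_PI = Fraction("6.2831853071795864769252867665590057683943387987502")  # 2*pi to 50 decimals

      x = 1.0e15  # a large but perfectly valid double-precision argument

      naive = math.fmod(x, 2 * math.pi)    # reduces by the *rounded* 2*pi; the error grows with x
      exact = float(Fraction(x) % TWO_PI)  # float -> Fraction is lossless, so this reduction is essentially exact

      print(math.sin(naive))  # noticeably different from the two values below
      print(math.sin(exact))
      print(math.sin(x))      # on most platforms libm already reduces correctly, so this matches the line above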

  • by gnasher719 ( 869701 ) on Friday October 10, 2014 @10:47PM (#48117559)
    It seems that the blogger didn't actually read the documentation that he claimed to read. The exact behaviour is documented in "Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture" of March 2012 on page 8-31. I don't have an older copy of that manual anymore, but I have written code according to that exact documentation sometime around 2001, so I am quite confident that it was in the 2001 version of the document.

    This is what the documentation says: "The internal value of π that the x87 FPU uses for argument reduction and other computations is as follows: π = 0.f * 2^2 where: f = C90FDAA2 2168C234 C". A more precise approximation according to Wikipedia would have been f = C90FDAA2 2168C234 C4C6 4...; the difference between pi and the approximation used by Intel is about 0.0764 * 2^-64.

    If you let x = pi, then people would ordinarily expect that sin (x) = 0. That, however, would be wrong. Storing pi into a floating-point number produces a rounding error. Rounded to extended precision (64 bit mantissa) instead of the usual double precision (53 bit mantissa), it produces a result of 4 * 0.C90FDAA2 2168C235 instead of 4 * 0.C90FDAA2 2168C234 C4C6 4...; this is too large by 4 * (1 - 0.C4C64...) * 2^-64. The sine of that number would also be 4 * (1 - 0.C4C64...) * 2^-64.

    But FSIN doesn't subtract pi from that number x, instead it subtracts 4 * 0.C90FDAA2 2168C234 C. So we get a result of 4 * (1 - 0.C) * 2^-64 instead of 4 * (1 - 0.C4C64...) * 2^-64. That's what he complains about. The reality is that the correct result would have been zero, but we couldn't get that because trying to assign pi even to an extended precision number gives some rounding error.

    Now in practice, if you calculate an argument for the sine function, and that argument is close to pi, even if you manage to get a correctly rounded extended precision result, you must expect a rounding error up to 2^-63, and therefore an error in the result up to 2^-63, even if the calculation of the result is perfect. FSIN gives a result that is about 0.0764 * 2^-64 away from that, so the inevitable error caused by rounding the argument is increased by a massive 3.82 percent. Doing the calculation in double precision, as almost everyone does, makes the rounding error 2048 times larger and FSIN is now 0.00185 percent worse than optimal.
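    The first half of that argument is easy to reproduce in Python, where math.pi is the double nearest pi: sin(math.pi) is not zero, and what comes back is essentially pi minus its double-precision rounding (the 50-decimal pi literal below is only there to compute that gap):

      import math
      from fractions import Fraction

      PI_50 = Fraction("3.14159265358979323846264338327950288419716939937511")  # pi to 50 decimals

      gap = float(PI_50 - Fraction(math.pi))  # how far the double math.pi falls short of the real pi

      print(math.sin(math.pi))  # ~1.2246467991473532e-16, not 0
      print(gap)                # essentially the same number, since sin(pi - gap) ~ gap for tiny gap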
  • /fp:strict
    Failing that, use a pen and paper.

  • by munch117 ( 214551 ) on Saturday October 11, 2014 @03:04AM (#48118371)

    Dawson points to an 'optimisation' in gcc 4.3: constant folding is done using the higher-precision MPFR library [gnu.org]. At least the gcc developers seem to think it's an optimisation, but unless it's disabled by default, it is actually a bug. In the absence of undefined behaviour, optimisations must not change observable behaviour. And, as Dawson demonstrates, this one does.

    If you need MPFR precision, you should use MPFR explicitly.
