Next-Gen Intel Chip Brings Big Gains For Floating-Point Apps

An anonymous reader writes "Tom's Hardware has published a lengthy article and a set of benchmarks on the new 'Haswell' CPUs from Intel. It's just a performance preview, but it isn't just more of the same. While it delivers the expected 10-15% speedup at the same clock speed for integer applications, floating-point applications are almost twice as fast, which might be important for digital imaging applications and scientific computing." The serious performance increase has a few caveats: you have to use either AVX2 or FMA3, and then only in code that takes advantage of vectorization. Floating-point operations using AVX or plain old SSE3 see more modest increases in performance (in line with the integer gains).
  • Would that improve hashing speeds in, say, Bitcoin?
    • by slashmydots ( 2189826 ) on Monday March 18, 2013 @04:05PM (#43207459)
Slightly, but haven't you been keeping up with the latest hardware? My pair of Sapphire 5830 graphics cards would top out at about 435 MH/s at a total system wattage of around 520 W. The new Jalapeno chips from Butterfly Labs will do 4500 MH/s using 2 watts total system power. For comparison, my i5-2400 managed 14 MH/s at 95 W or so. So the Jalapeno is about 321x faster while drawing about 1/47th the power; combined, I believe that's 15,267.864x more efficient.
Can the Jalapeno chips do anything else when the Bitcoin market crashes? At least with the video cards I can still drive displays with them.

        • They had officially classified it as a coffee warmer
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Would that improve hashing speeds in, say, Bitcoin?

      Bitcoin is based on SHA256 hashing, which has zero floating point operations. So no, this will not impact Bitcoin mining at all.
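      For illustration, a minimal C sketch of a few SHA-256 building blocks (as defined in FIPS 180-4), showing that the whole compression function is 32-bit integer logic and never touches the FPU:

      #include <stdint.h>

      /* SHA-256 primitives: rotates, shifts, ANDs and XORs only.
         No floating point anywhere in the algorithm. */
      static uint32_t rotr(uint32_t x, unsigned n) { return (x >> n) | (x << (32 - n)); }
      static uint32_t ch(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (~x & z); }
      static uint32_t maj(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (x & z) ^ (y & z); }
      static uint32_t big_sigma0(uint32_t x) { return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22); }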

  • by bluegutang ( 2814641 ) on Monday March 18, 2013 @03:55PM (#43207361)

    " Next-Gen Intel Chip Brings Big Gains For Floating-Point Apps "

    How much of a gain? More or less than 0.00013572067699?

    • FTFS:

While it delivers the expected 10-15% speedup at the same clock speed for integer applications, floating-point applications are almost twice as fast

      HTH

      • Re:Let's see... (Score:5, Informative)

        by 0100010001010011 ( 652467 ) on Monday March 18, 2013 @04:06PM (#43207461)

It's a joke. The Intel P5 Pentium FPU had a bug (the infamous FDIV bug) where

4195835 / 3145727 = 1.333739068902037589

when the correct answer is 1.333820449136241002.
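        For the curious, the classic self-test for the bug fits in a few lines of C; on a correct FPU it prints 0, while a flawed P5 reportedly printed roughly 256 (a sketch; any C compiler will do):

        #include <stdio.h>

        int main(void) {
            double x = 4195835.0, y = 3145727.0;
            /* A flawed P5 got x/y wrong in the 5th significant
               digit, so this residual came out nonzero. */
            printf("%g\n", x - (x / y) * y);
            return 0;
        }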

        • Oh right, that bug an Intel rep laughably claimed one would only encounter once every 2,500 years or so. I'd forgotten about that.

      • by raymorris ( 2726007 ) on Monday March 18, 2013 @04:16PM (#43207539) Journal

While it delivers the expected 10-15% speedup at the same clock speed for integer applications, floating-point applications are almost twice as fast. HTH

        Integer and floating point are separately implemented in the hardware, so an improvement to one often doesn't apply to the other. You can add integers by counting on your fingers. To do that with floating point, you have to cut your fingers into fractions of fingers - a very different process.
        See: http://en.wikipedia.org/wiki/FMA3 [wikipedia.org]
        It's common to have an accumulator like this:

        X = X + (Y * Z)

To compute that in floating point, the processor normally does:

A = ROUND(Y * Z)
X = ROUND(X + A)

        Each ROUND() is necessary because the processor only has 64 bits in which to store the endless digits after the decimal point. FMA can fuse the multiply and the add, getting rid of one rounding step, and the intermediate variable:

X = ROUND(X + (Y * Z))

That makes it faster. Since integers don't get rounded to the available precision, the optimization doesn't apply to them. The above processor would do Y*Z, then +X, then round, then X=. A CPU designer can make that faster by including either an "add and multiply" circuit, an "add and round" circuit, or a "round and assign" circuit. Any set of operations can be done in two clock cycles if the maker decides to include a hardware circuit for it.
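        A small C99 sketch of the ROUND() point above: fma() from math.h performs the fused operation, and can even recover exactly the bits that the two-step version throws away (the values here are made up to make the lost bits visible; link with -lm):

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double y = 1.0 + 0x1p-27; /* just over 1.0 */
            double z = y;
            double p = y * z;         /* A = ROUND(Y*Z): one rounding happens here */
            double e = fma(y, z, -p); /* the exact bits ROUND() discarded (2^-54) */
            printf("rounded product: %.17g\n", p);
            printf("discarded bits:  %g\n", e);
            return 0;
        }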

    • Okay, so how will it compare w/ the Itanium?
  • by GlobalEcho ( 26240 ) on Monday March 18, 2013 @04:02PM (#43207431)

    I hope there's really a new Mac Pro coming [ibtimes.com] and that it has these chips in it! I do a heck of a lot of PDE solving, statistics and simulations, and would love to have a screamin' machine again.

    • by Anonymous Coward on Monday March 18, 2013 @04:14PM (#43207527)

      Do you really need a Mac for that? If not, it seems you're limiting your potential by having to wait for the holy artifacts to be released.

    • by semi-extrinsic ( 1997002 ) <asmunder@nOSPAm.stud.ntnu.no> on Monday March 18, 2013 @04:18PM (#43207555)
If you're doing numerics, what the fuck (if you'll pardon my French) are you doing buying Apple? I'm working on two-phase Navier-Stokes solvers myself, and I just bought a new rig consisting of 3 boxes, each with an Intel Core i7 @ 3.7 GHz, 12 GB RAM, an SSD and a big-ass cooling system. In total that cost less than the Mac Pro with a single Core i7 @ 3.3 GHz listed in that article. You're paying 3x more than you should, and what extra do you get? A shiny case? Puh-lease.
      • He gets to tell his friends he bought an apple... apparently he keeps friends that care.

      • by Anonymous Coward

        Most physics researchers (source: physics PhD) use Mac desktops/laptops and Linux servers. Macs are perfect environments for a mix of coding and general computing, with good support for *nix tools. Anything serious gets done on a cluster. I've seen this in several universities, all of them top tier (e.g. Oxford, Imperial, UCL, Warwick), so it's not isolated.

        But hey, this is Slashdot.

Most of the people in the physics department here use Windows desktops, but pretty much all of the numerics people use Linux desktops. Naturally, all of the computing clusters are Linux. It seems that virtually all laptops are Macs though, which is curious. Possibly people would like to use Macs on the desktop but there is some barrier (e.g., purchasing or IT administration policies)? I'll have to find out!
You're paying at least double for the same hardware on a Mac. The Mac cited in the article has 2x 6-core Xeons @ 2.4 GHz. Those (assuming E5645s) can be had for ~$575 each, with a motherboard at ~$275. Everything else is pocket change; a whole rig with SSDs etc. could be had for under $1700.

But I'm sure someone somewhere will explain why the aluminum makes the extra $2000 for the Mac worth it.

          • by Jeremi ( 14640 )

But I'm sure someone somewhere will explain why the aluminum makes the extra $2000 for the Mac worth it.

            The case is very nice, but it's not worth $2000 extra.

            The ability to run MacOS/X (without "hackintosh" style shenanigans) is really nice, and is worth $2000 extra if you have that kind of money lying around (or, more realistically, if your employer does).

            If you think $2000 extra is too much to spend, you're probably right. On the other hand, plenty of people will spend an extra $20,000 on a nicer brand of car; sometimes people want what they want, and are willing to pay extra for it.

            • On the other hand, plenty of people will spend an extra $20,000 on a nicer brand of car; sometimes people want what they want, and are willing to pay extra for it.

The problem with this notion is that often the people are not buying a nicer brand of car, they're buying a prettier brand of car. A Lexus is just a Toyota with more asphalt and the same shit construction and the same shit handling. But a BMW costs the same as a Lexus and, well, they've been built like shit since the eighties, but they're actually worth driving. For their extra $2000 they could have got something substantively better, but all they've done is buy a shinier Toyota with some options they could have...

            • The ability to run MacOS/X (without "hackintosh" style shenanigans) is really nice, and is worth $2000 extra if you have that kind of money lying around

Which doesn't explain why a lower-end Mac costs only $1000. And whether it's worth $2000 extra is about as subjective as it gets, particularly when I doubt you can name a capability that OSX has that Windows does not, or a benchmark showing a substantial performance difference.

Why not just run a Debian or RH flavor and be done with it if you really want a *nix?

              • by Jeremi ( 14640 )

                particularly when I doubt you can name a capability that OSX has that Windows does not

                Built-in bash shell and Unix environment by default is what does it for me. (I know you can sort of fake it using Cygwin and whatnot on a Windows box, but I'd rather pay the extra money and not have to fake it). I was a die-hard BeOS user back in the day, and MacOS/X is the closest thing to the BeOS user experience that is readily available now.

Why not just run a Debian or RH flavor and be done with it if you really want a *nix?

Because I also want to be able to buy and use commercial software. Linux/Unix are fine, but it's also nice to be able to get software X you want rather than having to...

      • by fyngyrz ( 762201 )

Not to put too fine a point on it, he gets OSX, the OSX ecosystem, the vast majority of the *nix ecosystem, the ability to VM several varieties of the Windows ecosystem *or* any one of a number of pure *nix ecosystems, all in parallel if he likes, the ability to drive a bunch of monitors (I've got six on mine), all manner of connectivity, and yes, perhaps last and even perhaps least, probably one of the best cases out there. It's not just shiny; it's bloody awesome.

        I don't even *like* Apple the company --

      • If you're doing numerics, what the fuck (if you'll pardon my French) are you doing buying Apple?

        Fair question. It turns out, PDE solving etc. isn't all I do, so while I like my machine to be reasonably fast at the numerics, I require it to work well as a general-purpose computer, too. To me, Windows, Linux and FreeBSD fail to meet that criterion.

I do small-to-medium problems locally without having to think about remote execution issues, and then farm truly heavy numerics out to parallel processing farms like anybody else (aside from the PDE solvers, much of what I do is embarrassingly parallel). It...

        • by cpotoso ( 606303 )
You would still do a lot better getting an iMac for your regular software and a Linux machine for the computation. X11 makes it all transparent too. And still spend less... See my post above.
I agree with you from a price point of view, but workflow efficiency is very important to me, more so than workstation power.

At one of my jobs, a powerful Linux workstation is my primary machine and we use a Linux compute farm, so I am keenly aware of the shortcomings of both the Linux user environment and of the hassle involved in dealing with remote jobs. If one doesn't have a very wide variety of calculations, or the calculations rarely change, then remote is no big deal. Otherwise it is a real time sink.

    • by spire3661 ( 1038968 ) on Monday March 18, 2013 @04:20PM (#43207567) Journal
      Why not just do that on real workstation hardware and tap into it remotely?
      • Why not just do that on real workstation hardware and tap into it remotely?

What 'real workstation' is left? The only workstations available these days are x64 workstations. SPARC, POWER, MIPS and even Itanium workstations are dead. Where exactly could one buy a RISC workstation anymore, if one wanted to get one, install the latest and greatest version of Debian or *BSD, and run with it? Everything is now Intel/AMD, and all the CPUs that had superior floating point are either dead or exclusive to servers that would cost millions.

    • The Mac Pros use Xeon chips, which are usually updated about 1 year after the mainstream Core processors are out.

    • by cpotoso ( 606303 )
???? Why do you need a Mac for that? I run Mac laptops and even iMacs. I even have a Mac Pro from 2006 (at that time a good deal: 8 Xeon cores @ 3 GHz, not much more expensive than the equivalent Dell). Last month I got a Dell Precision workstation with 2 hex-core Xeons (+ hyperthreading, making them effectively 24 cores; don't scream at me, I have benchmarked MY programs and for all practical purposes it acts as 24 CPUs) for just over $2k (including 32 GB RAM, 3 TB disk). Runs Linux nicely and the parallelism beats a...
  • by MasseKid ( 1294554 ) on Monday March 18, 2013 @04:04PM (#43207449)
For problems where you need floating point AND the workload is not multithread-friendly AND you need large computing power AND the code is specially written, this will be of great use. However, most massive computing problems like this are multithread-friendly, and this will still be roughly an order of magnitude away from the speeds you can get by using a GPU.
The good thing about manufacturers speeding up SSE/AVX/etc. is that the linear algebra libraries (specifically the ATLAS implementation of BLAS and LAPACK) usually release code that makes use of the new hawtness within about six months of release. Do you know how much software relies on BLAS and LAPACK for speed?
Intel's C/C++ and Fortran compilers are exceedingly efficient at vectorization, and are of course updated to use Intel's new instructions. It does take a while for software to be recompiled with them, but you can see some real gains in a lot of things without special work.

I also think people who do GPGPU get a little over-focused on it and think it is the solution to all problems. You find that some things, like graphics rendering, are extremely fast on the stream processors that make up a modern GPU. However you...

        • by Anonymous Coward

The downside of using Intel's compiler is that it will revert to using the 80286 instruction set if you happen to run the code on an AMD chip.

    • by godrik ( 1287354 )

Intel Xeon Phi relies on AVX (version 1, I believe), and using AVX gets you a good improvement over not using it, for both sequential and parallel code. Of course, sequential code on Xeon Phi is typically slower than on a regular Sandy Bridge processor.

Many applications can use 16 float operations simultaneously. Certainly many video codecs and physics engines.

GPUs can be good for many computations, but there are many cases where they are not so good. Most pointer-chasing types of applications tend not to...

      • by godrik ( 1287354 )

Replying to self: Xeon Phi uses larger lanes than AVX. It is 512 bits in Xeon Phi and 256 bits in AVX; I got the names mixed up.

      • by Bengie ( 1121981 )
        http://software.intel.com/en-us/articles/intel-xeon-phi-coprocessor-codename-knights-corner [intel.com]

        An important component of the Intel Xeon Phi coprocessor’s core is its vector processing unit (VPU), shown in Figure 5. The VPU features a novel 512-bit SIMD instruction set, officially known as Intel® Initial Many Core Instructions (Intel® IMCI). Thus, the VPU can execute 16 single-precision (SP) or 8 double-precision (DP) operations per cycle. The VPU also supports Fused Multiply-Add (FMA) instructions and hence can execute 32 SP or 16 DP floating point operations per cycle. It also provides support for integers.

        • by godrik ( 1287354 )

My bad, I realized later that AVX was the new instruction set for Sandy Bridge and not for Xeon Phi. The AVX (version whatever) and IMCI instructions are quite similar (gather/scatter, fused multiply-add, swizzling/permute). Their main difference is the SIMD width.

My overall point remains valid: doing floating-point arithmetic in packs of 256 bits is broadly useful.
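          As a concrete illustration of those 256-bit packs, here is a minimal sketch of the accumulator from upthread (X = X + Y*Z) done 8 floats at a time with AVX/FMA3 intrinsics. The function and array names are made up, n is assumed to be a multiple of 8 and the arrays 32-byte aligned, and it needs something like gcc -O2 -mavx2 -mfma:

          #include <immintrin.h>

          void fmadd_arrays(float *x, const float *y, const float *z, int n)
          {
              for (int i = 0; i < n; i += 8) {
                  __m256 vy = _mm256_load_ps(y + i); /* load 8 floats per operand */
                  __m256 vz = _mm256_load_ps(z + i);
                  __m256 vx = _mm256_load_ps(x + i);
                  vx = _mm256_fmadd_ps(vy, vz, vx);  /* 8 fused multiply-adds, 1 instruction */
                  _mm256_store_ps(x + i, vx);
              }
          }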

OpenCL is suboptimal on NVIDIA only because NVIDIA refuses to keep their support up to date, as doing so would cut into their vendor lock-in attempt with CUDA.

        I honestly think everybody doing serious manycore computing should use OpenCL. NVIDIA underperforms with that? Their problem. Ditch them.

    • by Bengie ( 1121981 )
Not all multi-threaded code is large-matrix-friendly, and GPUs need large matrix math to become useful.
    • Yeah, pretty much. Basically, they just doubled the width of the vector execution units. Obviously, that will double the FLOPS for vectorized code. In other news, 8 cores can do twice the work of 4 cores, if your code is multithreaded properly.
The thing that interests me most about this generation is the progress towards a single-chip solution. Ultrabooks and tablets can get a multi-chip package with the PCH (the last remnant of the old chipset) soldered alongside the CPU/GPU die. It shouldn't take long until everything is fabbed onto one piece of silicon, reducing power requirements and gadget size.
  • wtf? fma3? (Score:2, Offtopic)

Could someone tell me how many separate instruction sets, pipelines, and register files I get in a mainline CPU these days? I turned away for a second and completely lost track.

What happens with the 10 that you aren't using? Just sitting there reducing the yield?

While speed for single and double floats is all well and good, I wonder: when will there finally be hardware support for 128-bit (quadruple precision) floats? [wikipedia.org]

    • by godrik ( 1287354 )

What is the use for them? For "personal" use, floats are all you will ever need. Many physics computations stay in single precision to avoid doubling the memory usage. I guess fluid mechanics computations use doubles, but is there really a use for quads? Who needs that kind of precision?

Three years ago I was doing a SPICE simulation (SPICE uses doubles) for a radio receiver. The simulation ran into digital noise before the receiver would have, and it essentially ruined the critical part of the simulation. Software 128-bit floats are unacceptably slow.
Here's an old paper describing octuple precision on the PowerPC G4 [apple.com]

Many problems in number theory and the computational and physical sciences, especially in recent times, require more floating point precision than is commonly available in fundamental computer hardware. For example, the new science of “experimental mathematics,” whereby algebraic truths are foreshadowed, even discovered numerically, requires much more than single (32-bit) or double (64-bit) precision.

That paper references Bailey's 2000 paper on quad-double algorithms [lbl.gov], which alludes to "pure mathematics, study of mathematical constants, cryptography, and computational geometry"...

What is the use for them? For "personal" use, floats are all you will ever need. Many physics computations stay in single precision to avoid doubling the memory usage. I guess fluid mechanics computations use doubles, but is there really a use for quads? Who needs that kind of precision?

Not all uses are personal, and the fact that some physics calculations trade precision for memory doesn't mean that all of them do.

        One example could be matrix inversions with somewhat ill-conditioned matrices. When you know you're going to lose 14 digits of precision inverting the matrix, you'd better have a lot of headroom. Cue quad floats.

The car analogy that comes to mind: people often do sound mixing with 32-bit audio even though 16-bit audio is perfectly fine for listening to the final product.

    • by Twinbee ( 767046 )
I would have hoped more bits were given to the exponent in quad precision. It gets 15 bits, compared to double precision's 11.

So many bits, and almost all of them go to the fraction; a real shame.
While speed for single and double floats is all well and good, I wonder: when will there finally be hardware support for 128-bit (quadruple precision) floats?

It was there on PowerPC for many years, and with Haswell it will be there for x86 as well. FMA is all you need for efficient 128-bit arithmetic.
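      The trick the parent seems to be alluding to is usually called double-double arithmetic: carry a value as an unevaluated sum of two doubles (about 106 significand bits), and use FMA to obtain the exact rounding error of a product in a single instruction. A sketch of the standard TwoProdFMA building block, assuming a C99 libm:

      #include <math.h>

      typedef struct { double hi, lo; } dd; /* value is hi + lo */

      static dd two_prod(double a, double b)
      {
          dd r;
          r.hi = a * b;            /* rounded product */
          r.lo = fma(a, b, -r.hi); /* exact error: a*b - round(a*b) */
          return r;
      }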

  • The serious [floating point] performance increase has a few caveats: you have to use either AVX2 or FMA3,

Isn't AVX2 just the integer version of AVX, like SSE2 added integer versions of the SSE floating-point instructions? If so, that sentence doesn't make sense.

    • by godrik ( 1287354 )

      No, there is more to it:

      * Expansion of most integer AVX instructions to 256 bits
      * 3-operand general-purpose bit manipulation and multiply
      * Gather support, enabling vector elements to be loaded from non-contiguous memory locations
      * DWORD- and QWORD-granularity any-to-any permutes
      * Vector shifts
* 3-operand fused multiply-accumulate support
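      The gather item is the interesting one for irregular code. A minimal sketch (the wrapper name is made up; needs AVX2, e.g. gcc -mavx2) that loads 8 floats from arbitrary slots of a table in a single instruction, where the scale argument of 4 is sizeof(float):

      #include <immintrin.h>

      __m256 gather8(const float *table, __m256i idx)
      {
          /* 8 independent loads, table[idx[0..7]], in one AVX2 instruction */
          return _mm256_i32gather_ps(table, idx, 4);
      }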

  • "As you see in the red bar, the task is finished much faster on Haswell. It’s close, but not quite 2x." Sorry to ruin it for everyone but the RED bar is integer not floating point.
  • FMA4 (Score:4, Informative)

    by ssam ( 2723487 ) on Monday March 18, 2013 @05:31PM (#43208271)

Pah. AMD has had FMA4 since 2011.

The power to get Crysis 3 at 30 fps is here!

  • "hey kids, our CPU is twice as fast as the next guys!"*

    *(you must rewrite your code to do twice as much stuff at once)
    **(which has been true for like, 15 years ever since SSE + friends made it into the PC market)
    ***(which means developers have to spend time writing non-portable optimization code)

  • GT3 (Score:3, Interesting)

    by edxwelch ( 600979 ) on Monday March 18, 2013 @06:00PM (#43208593)

AMD lost the CPU race a long time ago, but it still beats Intel at integrated graphics. Now it looks like Haswell could win that battle too.
The article shows GT2 to be 15% - 50% faster than the old HD4000. That's still a bit slower than Trinity, but GT3 has double the execution units of GT2, potentially blowing away anything AMD could offer.

  • by Anonymous Coward

When AVX came out, it was supposed to be a major speedup. Guess what: lots of things are still faster in SSE2/3.

Many of the new registers appear to speed things up, but what isn't readily apparent is that there haven't always been matching improvements in the memory ports.

The major speedups are going to come from cleaning up the way instructions are handled and the memory lanes in the chip, not just from throwing more registers at us.

    This guy (Agner Fog) is the best reference on the net for what's going on in these chips:
    http://www

Will gcc use AVX or FMA3 if I write normal code in C++? How about Java and Python/numpy? Could it be that Python actually gets faster than C++ if gcc doesn't take advantage of these technologies?
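    For what it's worth, gcc can auto-vectorize simple loops if asked. A sketch (flag spellings vary across gcc versions, and whether a*x[i]+y[i] is contracted into an FMA depends on -ffp-contract/-ffast-math):

    /* saxpy.c: a loop the auto-vectorizer handles well.
       Try: gcc -O3 -std=c99 -march=core-avx2 -ffast-math -S saxpy.c
       and look for vfmadd/vmulps instructions in the output. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }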
