Intel Haswell CPUs Debut, Put To the Test

jjslash writes "Intel's Haswell architecture is finally available in the flagship Core i7-4770K and Core i7-4950HQ processors. This is a very volatile time for Intel. In an ARM-less vacuum, Intel's Haswell architecture would likely be the most amazing thing to happen to the tech industry in years. Haswell mobile processors are slated to bring about the single largest improvement in battery life in Intel history. In graphics, Haswell completely redefines the expectations for processor graphics. On the desktop however, Haswell is just a bit more efficient, but no longer much faster when going from one generation to another." Reader wesbascas puts some numbers on what "just a bit" means here: "Just as leaked copies of the chip have already shown, the i7-4770K only presents an incremental ~10% performance increase over the Ivy Bridge-based Core i7-3770K. Overclocking potential also remains in the same 4.3 GHz to 4.6 GHz ballpark."
  • by rev0lt ( 1950662 ) on Saturday June 01, 2013 @10:34AM (#43883133)
    For me, this is by far the biggest architectural improvement I see in this line of processors (check http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions [wikipedia.org] and http://software.intel.com/sites/default/files/m/9/2/3/41604 [intel.com] for more information). If it sticks, it will help solve a lot of multi-core shared-memory software development issues.
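
    Roughly, the RTM half of TSX is used like this - a minimal sketch, assuming the GCC/ICC-style _xbegin()/_xend() intrinsics from immintrin.h (compile with -mrtm); the fallback lock routines are hypothetical placeholders:

        #include <immintrin.h>

        extern void take_fallback_lock(void);    /* hypothetical */
        extern void release_fallback_lock(void); /* hypothetical */

        void increment(int *counter) {
            unsigned status = _xbegin();       /* start a hardware transaction */
            if (status == _XBEGIN_STARTED) {
                (*counter)++;                  /* runs transactionally */
                _xend();                       /* commit */
            } else {
                /* Aborted. A production version would also read the fallback
                   lock inside the transaction to avoid racing with it. */
                take_fallback_lock();
                (*counter)++;
                release_fallback_lock();
            }
        }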
    • by Z00L00K ( 682162 ) on Saturday June 01, 2013 @11:05AM (#43883353) Homepage Journal

      It's an interesting addition which can be useful for some.

      But when it comes to general performance improvement it's rather disappointing. It looks like they have fine-tuned the current architecture without actually adding something that increases performance at the rate we have seen over the last decades. To some extent it looks like we have hit a ceiling with the current overall computer architecture, and new approaches are needed. The clock frequency is basically the same as for the decade-old P4, and the number of cores running on a chip seems to be limited too, at least compared to other architectures.

      One interesting path for improving performance is what Xilinx has done with their Zynq-7000, which combines ARM cores with an FPGA, but it will require a change in the way computers are designed.

      • I'm pretty sure that if AMD brought out something competitive, Intel would find a way to come out with faster processors, just like they did when the original Athlon kicked ass. Suddenly they were able to manufacture processors that weren't just 33 MHz faster than the previous ones.

        • I dearly hope this meager performance improvement gives AMD a chance to catch up a bit.

        • by Frobnicator ( 565869 ) on Saturday June 01, 2013 @11:52PM (#43887139) Journal

          If all you care about is the perspective of the boring desktop business app, then this processor doesn't have much to excite you. Of course, that's just one field. Sending a few database queries over the wire or updating your text boxes doesn't exactly saturate a quad-core box. Business desktop apps don't really see much improvement no matter what.

          For data-heavy, cache-intensive, and parallel-intensive programs the processor looks to offer quite a lot. HPC developers like that.

          For notebooks and low-power devices the processor is wonderful. If you are paying the power bill for a data center, the energy use will add up. Accountants and laptop users will like that.

          The option to have graphics integrated into the chip this way means better SoC options. Embedded developers will like that.

          Many fields will see great things out of this chip.

          If you are fixated on the world of desktop business software, you still get an incremental ~10% improvement. Unlike technologies such as SIMD, you get it without changing a line of code. So now you can add 10% more text boxes to fill out, or maybe pick up some more wasteful coding habits.

      • by cnettel ( 836611 )

        But when it comes to general performance improvement it's rather disappointing. It looks like they have fine-tuned the current architecture without actually adding something that increases performance at the rate we have seen over the last decades. To some extent it looks like we have hit a ceiling with the current overall computer architecture, and new approaches are needed. The clock frequency is basically the same as for the decade-old P4, and the number of cores running on a chip seems to be limited too, at least compared to other architectures.

        Even single-threaded performance, even if you normalize for identical frequencies, has increased since Core 2. That it has increased since the Pentium 4 goes without saying. We do not see the same increases in instructions per second that we used to, but we still see increases. This page [anandtech.com] (and the next) from Anandtech was quite illuminating.

        • by bored ( 40072 )

          You should look at that page again; I'm not sure his benchmarking is "fair". If he wants to compare the older CPUs, it might have been amusing to use the five-year-old version of the code too. That way, you're not seeing the effects of code optimized for the latest CPUs at the expense of the old ones.

          Ignoring things that are using SSE modes not available on the Core 2 (Cinebench, etc.!).

          The i7-4770 is actually clocked at 3.9 GHz when running a single core. So for something like 7-Zip, 4807/3.9 = 1232 units/GHz. Vs t

          • by cnettel ( 836611 )

            Of course we should use instruction sets that are available. However, I think you need to defend your point on the compiler issue. Most precompiled software is optimized for a pretty old common target. I would be highly surprised if default code emitted by e.g. GCC is slower on a Core now than it was five years ago. This is also due to the fact that Conroe all the way to Haswell share so many characteristics. Netburst vs. Core entailed far more cases where you would actually prefer differences in emitted c

            • by bored ( 40072 )

              Most precompiled software is optimized for a pretty old common target. I would be highly surprised if default code emitted by e.g. GCC is slower on a Core now than it was five years ago.

              Most software is not like half of these benchmarks. I agree that if you build a generic binary for x86_64 with gcc, it's basically running the same code on all CPUs. But povray, Cinebench and x264 are all highly optimized. It wouldn't surprise me if all of them are built with icc and have been tweaked one way or another by Intel.

              PovRay an ope

              • I've seen code that takes different execution paths depending on what machine it's running on.
                • by bored ( 40072 )

                  That is fairly common when optimizing things.

                  Especially if you're optimizing for different subsets of the instruction set. TrueCrypt, for sure, is using AES-NI paths on CPUs that support it; otherwise it's using generic ones.
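
                  For illustration, runtime dispatch of that sort might look like the following sketch, keying off the CPUID leaf-1 ECX bit that reports AES-NI (via GCC/Clang's cpuid.h); the two encrypt routines are hypothetical:

                      #include <cpuid.h>

                      extern void encrypt_aesni(void *buf);   /* hypothetical */
                      extern void encrypt_generic(void *buf); /* hypothetical */

                      void encrypt(void *buf) {
                          unsigned a, b, c, d;
                          /* CPUID leaf 1: ECX bit 25 reports AES-NI support. */
                          if (__get_cpuid(1, &a, &b, &c, &d) && (c & (1u << 25)))
                              encrypt_aesni(buf);
                          else
                              encrypt_generic(buf);
                      }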

                  My comment was more in regard to gcc, which I don't believe I've ever seen automatically generate multiple arch-dependent code paths. I think gcc assumes you will do multiple compilation passes with different -march settings and glue them together if you want something better than the generic targ

        • by bored ( 40072 )

          Just to reply again: some of these benchmarks are obvious bullshit. Like the AES one - the new CPUs have AES-NI instructions for accelerating AES.

          So yes, if you happen to be doing AES, and you're running code that can take advantage of AES-NI, then the new CPUs are going to fly. But the whole benchmark is so tilted it's not even funny. Why not use a benchmark that renders some SSL-encoded web pages? Because the benchmark is going to be bottlenecked by the network stack and the rendering engine. Not the tiny pe

          • Why not use a benchmark that renders some SSL-encoded web pages? Because the benchmark is going to be bottlenecked by the network stack and the rendering engine. Not the tiny percentage of the time the CPU spends doing AES.

            This benchmark is not quite relevant to the average user, but SSL can be a large bottleneck in web servers -- it is certainly relevant there.

            • by bored ( 40072 )

              The app I work on needs a couple GB/sec (not bits) of encryption bandwidth, so I love having AES-NI... But we have encryption/compression acceleration hardware in the machine, which means we won't be using AES-NI, because the CPUs haven't increased their compression speeds to match their newfound encryption speeds. Plus, we (like many other products, including TrueCrypt) have always offered the user the ability to encrypt with algorithms that are more CPU-friendly than AES.

              Anyway, two points. In the hi

        • by Z00L00K ( 682162 )

          Of course the performance has improved since the P4, but the point is that it has been by tweaking things like caches, parallel execution with discarding of unwanted branches, prefetching, various types of pipelining, out-of-order execution and so on.

          All this means that in order to achieve high performance you have an architecture that does a lot of stuff that eventually gets thrown away because it was done just in case. The catch here is that it costs energy and builds complexity. The benefit is of course

          • by cnettel ( 836611 )
            I wouldn't agree. The out-of-order depth has increased in later architectures, but not by that much. Rather, branch prediction is better, so we have fewer mispredicts. We have wider execution units (thus necessitating the higher depth), but that's a "real" improvement in performance. The number of instructions retired per clock per core under ideal conditions is higher. Hyperthreading has been kept, but it is implemented in a smarter manner with a more dynamic sharing of resources. All of this is happenin
      • by TheRaven64 ( 641858 ) on Saturday June 01, 2013 @01:00PM (#43884139) Journal
        The hardware lock elision stuff is going to be more than just a little bit useful. It means that software that uses coarse-grained locking can get the same sort of performance as software using fine-grained locking and close to the performance of software written specifically to support transactional memory. It will be interesting to see if Intel's cross-licensing agreements with other chip makers includes the relevant patents. If it's something that is widely adopted, then it is likely to change how we write parallel software. If not, then it will just make certain categories of code significantly more scalable on Intel than other CPUs.
      • by gnasher719 ( 869701 ) on Saturday June 01, 2013 @05:35PM (#43885673)

        The clock frequency is basically the same as for the decade old P4, the number of running cores on a chip seems to be limited too, at least compared to other architectures.

        The P4 was an architecture where clock frequency was the design goal, with no regard for actual performance. Any non-trivial operation used lots of cycles, because one cycle was just too short to do useful work. A simple shift instruction was four cycles; an integer multiplication, nine cycles. Since Banias, the design goal has been performance, not clock speed. The amount of work done in a cycle has vastly increased. The clock speeds of the P4 and current processors are simply not comparable.

        On the Macintosh side, Apple shipped pre-release Intel Macs with 3.6 GHz P4s to developers. The first real hardware with 1.83 GHz Core Duos ran _faster_. But if you look at benchmarks, current high-end consumer Macs run about 15 times faster again!

        To maybe make a stronger impression: If Apple replaces the processor in the fastest iMac with a Haswell chip, you'll get a computer that would make it into the top 100 of the June 2000 "Top 500 Supercomputer" list. That's how fast a modern Intel computer is, compared to a P4.

      • by rev0lt ( 1950662 )

        But when it comes to general performance improvement it's rather disappointing.

        If you run benchmarks compiled for previous architectures, it is to be expected that they suck - or at least, show the same baseline performance.

        To some extent it looks like we have hit a ceiling in increased performance with the current overall computer architecture and that new approaches are needed

        We have. To go beyond that, we'll need a completely new technology stack. However, the latest trends are short pipelines, specialized instructions, decreasing gate sizes, and smarter software. They do work quite well, up to a point. If you need more than that, you can probably still squeeze out some performance by trimming/optimizing the software layer.

        The clock frequency is basically the same as for the decade old P4

        As it would be expecte

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Saturday June 01, 2013 @12:26PM (#43883915) Homepage

      A more immediately useful feature is backwards-compatible hardware lock elision. Before taking a lock, you emit an instruction which is a NOP for older CPUs but causes Haswell to ignore the lock and create an implicit transaction. Instant scalability improvement to just about every app out there with contention, without having to distribute Haswell-specific binaries.
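
      As a sketch of what that looks like in code - assuming GCC 4.8+'s HLE-flagged atomic builtins (compile with -mhle), which emit the XACQUIRE/XRELEASE prefixes described above; older CPUs treat them as harmless no-op prefixes:

          /* Spinlock whose acquire/release carry the HLE hints; on Haswell the
             lock is elided and the critical section runs as a transaction. */
          void hle_lock(int *l) {
              while (__atomic_exchange_n(l, 1,
                         __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
                  while (*l)
                      __builtin_ia32_pause();  /* spin until the lock looks free */
          }

          void hle_unlock(int *l) {
              __atomic_store_n(l, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
          }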

      My favorite feature, though, is scatter/gather support for SIMD. Scatter/gather is very important because up until now, loading memory from several locations for SIMD use has been a pain in the ass, involving costly shuffles and often requiring you to load more than you actually immediately wanted, possibly forcing you to spill registers. It's really not something you want to do, but sometimes there are no good alternatives. I'll be super interested to see benchmarks taking this into account.
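
      For instance, a minimal AVX2 gather sketch (compile with -mavx2; the table and index names are just illustrative):

          #include <immintrin.h>

          /* Load eight floats from eight arbitrary indices with a single
             vgatherdps instead of eight scalar loads plus shuffles. */
          __m256 gather8(const float *table, const int idx[8]) {
              __m256i vidx = _mm256_loadu_si256((const __m256i *)idx);
              return _mm256_i32gather_ps(table, vidx, 4);  /* scale = 4 bytes */
          }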

      • My favorite feature, though, is scatter/gather support for SIMD. Scatter/gather is very important because up until now, loading memory from several locations for SIMD use has been a pain in the ass, involving costly shuffles and often requiring you to load more than you actually immediately wanted, possibly forcing you to spill registers.

        Scatter/gather will never be as efficient as aligned sequential reads and writes, so all this has bought you is a bit more efficiency on already highly inefficient data layouts.

        I see this problem time and again, where a programmer that just doesn't get it is trying to use SSE in sub-optimal ways and then only getting marginal performance improvements over their non-SSE code. It turns into one big waste of time and effort. Essentially, if your data is AoS instead of SoA, then you've already decided that y

        • I see this problem time and again, where a programmer that just doesn't get it is trying to use SSE

          Do you teach? I don't see that very often outside of SIMD newbies. Curiosities aside, as I said:

          It's really not something you want to do, but sometimes there are no good alternatives.

          There are some algorithms, like EWA/Jinc image resampling, which simply don't translate well to SIMD without gather support. Keeping with the image processing theme, sRGB gamma decompression will be lightning fast with an L1-friendly 1KiB LUT -- I bet faster than any low-precision approximation in use today.
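
          A rough sketch of that sRGB idea, assuming a precomputed 256-entry float table (256 x 4 bytes = 1 KiB) mapping 8-bit sRGB to linear light; the names are hypothetical:

              #include <immintrin.h>

              /* Decode eight 8-bit sRGB samples to linear floats with one AVX2
                 gather from a 1 KiB, L1-resident LUT. */
              __m256 srgb_decode8(const float srgb_to_linear[256],
                                  const unsigned char *px) {
                  __m128i bytes = _mm_loadl_epi64((const __m128i *)px);
                  __m256i idx   = _mm256_cvtepu8_epi32(bytes); /* zero-extend */
                  return _mm256_i32gather_ps(srgb_to_linear, idx, 4);
              }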

          This is not about trying to fit AoS into a SIMD world. This is about enabling some things in SIMD which simp

        • Are you assuming that all programmers are working on simple 1:1 transformations of data? It is impossible to encode anything with a summation term without using a gather operation. If there is a projective transformation in the algorithm (i.e. a change of representation onto different axes / number of dimensions) then it is impossible to encode efficiently without a scatter. Perhaps there are more algorithms out there that are suitable for vector architectures than you are familiar with?

      • by mczak ( 575986 )
        That is not quite correct: Haswell (or any CPU which supports AVX2) can do gather, but not scatter. I agree this can be useful. I'm not sure how much faster it'll actually be in practice compared to using multiple loads (note that ever since SSE4.1 you typically don't need any shuffles to achieve the same thing with multiple loads, since you can use pinsrx instructions to directly fill up those vectors - hence your concerns about increased register usage are unfounded), since the instruction is microcoded in the cpu
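
        The multiple-load alternative described above might look like this sketch, using SSE4.1's pinsrd via _mm_insert_epi32 (table and index names are illustrative):

            #include <smmintrin.h>

            /* Fill a 4-lane vector from scattered indices, no shuffles needed. */
            __m128i load4(const int *table, int i0, int i1, int i2, int i3) {
                __m128i v = _mm_cvtsi32_si128(table[i0]);
                v = _mm_insert_epi32(v, table[i1], 1);
                v = _mm_insert_epi32(v, table[i2], 2);
                v = _mm_insert_epi32(v, table[i3], 3);
                return v;
            }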
    • For some as-yet-unknown reason, the unlocked "K" Haswells (and maybe others) have so far all been listed as coming without TSX.
    • Let's see if I understand this correctly: if I use a lock to protect an array of data, and two threads access the array simultaneously but read/write different cache lines of the array, then they will both think they got the lock, but no (time-consuming) lock operation has actually happened. But if they access the same cache line of the array, then both threads are restored to the point where they tried to take the lock, and this time the lock is performed for real on both processors, with one having to wait. And
      • by rev0lt ( 1950662 )
        That is, in summary, what HLE does. But RTM requires you to use explicit instructions to signal transactions. Using this approach, you do need code modification and lose backwards compatibility, but gain in granularity and simplicity of implementation.
  • by Technician ( 215283 ) on Saturday June 01, 2013 @10:36AM (#43883141)

    Hmm, Performance per Watt seems to have been glossed over.

    The possibility of a fanless media center PC, the ability of a server farm to eliminate over half the cooling cost, and long battery life in a gaming-class laptop seem not to be the focus of the article.

    Gee, it's only X percent faster...

    • by gl4ss ( 559668 )

      Hmm, Performance per Watt seems to have been glossed over.

      The possibility of a fanless media center PC, the ability of a server farm to eliminate over half the cooling cost, and long battery life in a gaming-class laptop seem not to be the focus of the article.

      Gee, it's only X percent faster...

      still uses plenty of juice when gaming..

    • by Molochi ( 555357 )

      Yeah, they only tested the power consumption of the new unlocked/enthusiast desktop CPU (4770K) with an 84W TDP.

      Toward the beginning of the article they say that there will be a 35W i5 Haswell for socket 1150. No mention of specs, though.

    • by Kjella ( 173770 ) on Saturday June 01, 2013 @03:29PM (#43884945) Homepage

      Anandtech tested it [anandtech.com]: idle power is down, probably due to the new voltage regulator (FIVR), but active power... 113% of the performance for 111.8% of the power, so performance per watt is essentially unchanged. If what you need is CPU power then you're better off waiting for an IVB-E hex-core in Q3; in threaded applications a quad-core Haswell won't touch a hex-core Ivy Bridge - it's trailing Sandy Bridge hex-cores as well. If you're not interested in the graphics or battery life, it's a giant yawn.

      That said, the GT3e graphics for mobile look to carve out a solid niche in the notebook market, the R-series desktop processors (GT3e graphics, BGA only) are probably compelling for AIOs that don't have room for graphics upgrades anyway, and the lower idle wattage should be good for all laptops with Haswell graphics. None of the processors launched now have the new idle states for ultramobile/tablets, so we'll have to wait to see the effect of those. Anandtech tested the i7-4950HQ and it was impressive how a 47W mobile chip consistently beat AMD's A10-5800 100W desktop APU in gaming benchmarks. Of course it's going to sell in a price range of its own, but AMD just lost the crown here.

      As a CPU in a regular tower with discrete graphics it's at best incremental, but I think the full launch lineup hits all of Intel's main competitors - it's threatening AMD's and Nvidia's low-end discrete card sales, it's threatening AMD's APU sales, and the lower idle power is promising for the lower-power parts that will compete with ARM. They're just not winning much against the i7-3770K, but then they're also fighting against themselves in that market; the FX-8350 is not even close. The 8-series chipset finally brings six SATA3 ports, so the main chipset-wise AMD advantage also disappeared.

  • by Anonymous Coward

    Intel's Haswell architecture would likely be the most amazing thing to happen to the tech industry in years.

    Seems I've been hearing this about each of their "tocks".... Nehalem, then Sandy Bridge, and now again with Haswell.

    Their process is good, and that kind of advertising may even be warranted, but the hype is really getting repetitive.

  • by Anonymous Coward on Saturday June 01, 2013 @10:41AM (#43883183)

    The lack of phenomenal hardware improvements may annoy the nerds, but the mass-market PC is being killed by the abysmal software environment. People are fleeing to tablets and phones, and with them the cloud, because maintaining a PC has become just about impossible for laymen. The slowness of a desktop that hasn't seen professional maintenance is astonishing, if it is still working at all. Viruses and trojans aside, every bit of software comes with its own updater, many of which are poorly implemented resource and attention hogs. If the updater doesn't do you in, it's the bundled adware, sometimes installed by the update "service". The PC is stiff and stone cold, a host overwhelmed and killed by its parasites. Time to put it 6ft underground.

    • by PopeRatzo ( 965947 ) on Saturday June 01, 2013 @01:31PM (#43884341) Journal

      Time to put it 6ft underground.

      This was the giveaway.

      What do you care if there are still people who would rather use a desktop PC that's not inside a walled garden and actually get work done? Why do you insist that the PC platform has to be killed off? Isn't there room in the world for more than one set of computing needs?

      This notion, that only the most popular form of anything should exist, pops up strangely often around here. The iPad is phenomenal, so Android tablets should just disappear from the market. The iPhone is popular, so no Windows phones can be allowed. That sort of thing.

      Friend, I can understand that you'd rather work on a tablet and have someone else make decisions about what you can and cannot have, what you can and cannot do, but why in the world are you so insistent that no one else be able to make their own choice?

      I don't get you.

      • About allowing both sides to exist: I understand there is a need for diversity, but it is generally only appreciated by the geek community... and even here, people always secretly play favorites. Given that anything that can go wrong will do so, we end up with garbage decisions being incorporated by both sides little by little. Being broad here: think of the loss of 4:3 screens/resolutions, crappy A/V software now coming to mobile OSs, app stores now unavoidable on Windows and Mac desktops, mobile GUIs taking over

    • by etash ( 1907284 )
      Do you know how many people have declared the PC to be dead? It's usually either people who have "better" solutions to offer, or the useful idiots who believe them. Ever tried doing some real work on a tablet? Like video editing, image editing, or mundane tasks like Excel and word editing? How about video games?
      • The workstation isn't going to die any time soon, but it is being marginalized by game consoles on the one hand and portable devices on the other. I perform more and more tasks on my phone because it is close to hand and because there's an app for that. I can now reasonably get some information on an address faster by firing up maps on my phone (which is a 2011 model, and not exactly hot shit) than by walking into the next room, or look up a Wikipedia article on the same basis.

        The hobbyist computer is bein

        • by Seumas ( 6865 )

          I don't buy the "consoles are killing PCs" argument for one second (also, people have spent the last two years claiming that tablets and phones are killing consoles, so there you go).

          Steam *alone* has something like 40 or 50 million users and between 2.5 and 6 million concurrent users playing a game at any one moment.

          We don't even need to talk about the depth and breadth of games available on PC that simply don't exist on other platforms. Just with the user numbers alone -- and only the Steam ones, here --

          • I don't buy the "consoles are killing PCs" argument for one second (also, people have spent the last two years claiming that tablets and phones are killing consoles, so there you go).

            That's not the argument.

            Steam *alone* has something like 40 or 50 million users and between 2.5 and 6 million concurrent users playing a game at any one moment.

            And a number of them will probably buy a Steambox in the not-too-distant future instead of upgrading their PC.

            We don't even need to talk about the depth and breadth of games available on PC that simply don't exist on other platforms.

            Personal computers have not gone away. They are simply diversifying.

      • Depends on your definition of a "tablet". I've been using a Surface Pro as both a tablet and a PC for the last few months and love it: editing 5K RED footage, creating rough composites, recording narration, Excel, word editing, and playing games. And then I just pop off the keyboard and use it as a tablet for web browsing on the couch.

    • by Seumas ( 6865 )

      The 40 or 50 some-odd million Steam gamers, and the often 6 million concurrent online Steam gamers, would beg to differ with you; and, anecdotally, the number of new people I hear every day saying that they're building a PC gaming rig for the first time in their lives is surprising and seems to be growing. There are a ton of gamers out there and they have always been a heavy push for the PC market. They aren't going anywhere. They continue to be just as important and just as numerous, even in a world full of consoles and

  • by maroberts ( 15852 ) on Saturday June 01, 2013 @10:52AM (#43883253) Homepage Journal

    With AMD's CPU/GPU solutions?

    • by Molochi ( 555357 )

      They compared the 4770K to the A10. The A10 was still faster in games; the 4770K was faster in OpenCL.

      Tom's also said Intel will have a faster integrated graphics setup (Iris Pro), but it will be exclusive to the BGA offerings and not offered on LGA1150 CPUs.

  • by gnasher719 ( 869701 ) on Saturday June 01, 2013 @11:13AM (#43883425)
    In the last few years, Intel has been adding new instructions that will give major performance gains when they are used. For example, Haswell can do two fused multiply-adds with four double- or eight single-precision operands each per cycle per core, but no current code will use this. We'll get the advantage when HPC code is recompiled (in a few months' time), and when general code assumes that everyone has this feature (in five years' time). But on the other hand, we _now_ get the advantages of features they added five years ago.
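
    For example, with the FMA3 intrinsics (compile with -mfma), one fused multiply-add per instruction on four doubles - the kind of code that only appears once software is rebuilt for Haswell:

        #include <immintrin.h>

        /* a*b + c on four doubles with a single rounding step; Haswell can
           issue two of these per cycle per core. */
        __m256d fma4(__m256d a, __m256d b, __m256d c) {
            return _mm256_fmadd_pd(a, b, c);
        }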
    • Re: (Score:3, Informative)

      by Anonymous Coward

      This is absolutely the most overlooked aspect of Haswell. As an incremental improvement, it's less than stellar, but in certain areas it effectively doubles performance over the previous generation. Aside from FMA for floating-point apps, the integer 256-bit SIMD pipeline is now effectively feature-complete.

      Your point about waiting for recompiles is a good one - all the more reason we should be moving to adaptable metaprogramming systems for HPC, rather than a constant manual reworking of codebases. Proje

      • For the vast majority of even HPC code, it means compiler rework and math library development. The vast majority of the benefit can be achieved by dropping in a new library without rebuilding the application. In your example, it would be the interpreter, which actually tends to be the last thing to receive this attention.

  • I don't know what the OP is talking about when saying it has only the same overclocking potential.

    The -K series has an unlocked base clock strap, allowing it to be set as high as 166MHz without destabilizing the other controllers on the die. This should allow for considerably more fine-tuning.

    Also, the theoretical maximum overclock is 8GHz (80x100MHz, 64x125MHz, or 48x166MHz). 4.6GHz may still be a reasonable goal for an air-cooled system, but there is certainly more potential.
  • by stms ( 1132653 ) on Saturday June 01, 2013 @11:21AM (#43883473)

    How hot do these chips get? I have to throttle my (not overclocked) 3770K when I do media encodes because it gets too damn hot. I've been meaning to get a better cooler; I just haven't gotten around to it.

    • by DaveGod ( 703167 )

      Hmm, does the Intel-supplied fan make quite a racket when it starts getting hot? When it gets hot, the fan should speed up considerably compared to idle. Also, the system should be throttling the CPU automatically if it gets too hot. If these are not happening, I suggest checking your BIOS settings (I assume you are not running any tweaking software supplied by the manufacturer, which is usually very clunky). Another possibility is that the hot air is not being exhausted.

      If you end up getting a new cooler, have a

      • by stms ( 1132653 )

        It gets a bit louder, but not to the point that it's too bothersome. My mobo is a GIGABYTE GA-Z77X-UD5H; all I did when I built my system was upgrade the firmware and configure boot options. Thanks for the tip on the cooler. I was considering the Noctua NH-D14, which is a bit more pricey, but if it keeps my chip from frying it'll be worth it.

    • by Kjella ( 173770 )

      Sounds like you should check your fan. My CPU was hitting 99C under load, and the reason was that the cheap plastic attaching the cooler to the motherboard had failed from fatigue; the fan was loose on one side but still marginally attached to the processor, providing enough cooling for light loads. No properly attached cooler should have a problem with a 77W processor, not even the cheap OEM ones. The reason I noticed was that it was getting slightly unstable, so since I didn't want to do any more thermal dam

  • by zrelativity ( 963547 ) on Saturday June 01, 2013 @11:49AM (#43883677)
    Why would Intel care about raw CPU performance? They have no competition from AMD in CPU performance. The GPU performance may not be as good as the A10's, but it has improved, and that's what matters for Intel.

    Intel has for a little while correctly perceived that the risk to their business is from the shift in computing to mobile devices, and they are addressing this issue. One thing Intel has always been very good at - and I'm a great admirer of them for this - is that when they perceive a risk, they steer their giant ship very rapidly into the headwind and tackle that threat. Their process technology lead also gives them a huge advantage.

    Over the next couple of years the battlefront will be mobile and server devices, and desktop processors will become second-class citizens. Maybe this will give some lifeline to AMD, but AMD is so far behind on performance.

    • A monopoly must always strive to slightly outdo itself, so that it may motivate its captive market to continue consuming. Regardless of the technical challenges inherent in improving performance now, a 10% improvement really says "here's something we can force the next cycle of bleeding-edge adopters to take up."
  • Get A Clue, Intel (Score:5, Insightful)

    by Jane Q. Public ( 1010737 ) on Saturday June 01, 2013 @11:55AM (#43883715)
    While I am all for advances in CPUs, I seriously wish Intel would go back to a naming scheme for its CPUs that made any kind of sense to the average buyer (or even the technically-oriented buyer). I have grown really weary of having to look at tables of CPU specifications every time I shop around for computers.

    Intel's naming scheme -- especially in recent years -- has been a mishmash of names and numbers without any obvious coherence. Get a clue, Intel. You're hurting your own market.

    If I didn't have to run OS X in my business, I'd buy AMD just for that reason. Their desktop CPUs may not be quite up to the latest Intel, but they are certainly adequate and the price is better.
  • TL;DR: Haswell is OK on the desktop, but nothing special; roughly 5%-10% better than Ivy Bridge. If you're on Sandy Bridge or better already, it's probably not worth upgrading. This architecture was designed for laptops first and foremost. Low power consumption/TDP on mobile parts, and a better GPU, are the big selling points. Apple will get much better integrated graphics so they don't need an Nvidia chip for their top rMBP, but they'll pay through the nose for it.

    This is what all the rumors and leaks over th

  • Most of the noise in my old P4-based system is from the power supply and the video card. Not that I game any more, but you need 3D acceleration to run the latest and greatest desktops under Linux.

    The lower power draw of Haswell and the more-than-adequate 3D support would fit my needs just about perfectly. That 10% performance boost that the article sneers at would mean the machine would only run roughly 16 times as fast as my current box.

    I think I could live with that. A quiet, low-power system that

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors

Working...