
Intel Officially Lifts the Veil On Ivy Bridge

New submitter zackmerles writes "Tom's Hardware takes the newly released, top-of-the-line Ivy Bridge Core i7-3770K for a spin. All Core i7 Ivy Bridge CPUs come with Intel HD Graphics 4000, which, despite its DirectX 11 support, provides only a modest boost over Sandy Bridge's Intel HD Graphics 3000. However, the new architecture tops the charts for low power consumption, which should make the Ivy Bridge mobile offerings more desirable. In CPU performance, the new Ivy Bridge Core i7 is only marginally better than last generation's Core i7-2700K. Essentially, Ivy Bridge is not the fantastic follow-up to Sandy Bridge that many enthusiasts had hoped for, but an incremental improvement. In the end, desktop users who skipped Sandy Bridge to hold out for Ivy Bridge probably shouldn't have. On the other hand, since Intel priced the new Core i7-3770K and Core i5-3570K the same as their Sandy Bridge counterparts, there is no reason to purchase the previous-generation chips." Reader jjslash points out that coverage is available from all the usual suspects — pick your favorite: AnandTech, TechSpot, Hot Hardware, ExtremeTech, and Overclockers.
  • by girlintraining ( 1395911 ) on Monday April 23, 2012 @01:11PM (#39773017)
    So, Intel, a company with no real competition in the market right now, has produced a product that offers only a very slight performance boost, and relied on tons of marketing to drum up anticipation for this mediocre offering. And then priced it the same as existing offerings as an apology to those who waited. Actually, that sounds about par for the course these days. The only real news in CPUs and motherboards has been that they've gone multicore and continue to increase bandwidth. And now that they can't squeeze any more performance out of the designs, they're working on decreasing energy consumption.
    • by I.M.O.G. ( 811163 ) <> on Monday April 23, 2012 @01:20PM (#39773119) Homepage

      For people familiar with Intel's Tick-Tock cadence, this should not come as much of a surprise. Some people may have gotten caught up in marketing and expected more, but this is a "Tick", which brings a process shrink, power savings, and a modest performance increase. It is just about delivering that, though perhaps on the softer side of things.

      Sandy Bridge was a Tock - a BIG performance improvement. Haswell should be a Tock - a BIG performance improvement.

      On the tick, they set more modest performance goals, and focus on getting the process shrink right and tuning things up. On the tock, they should knock our socks off. So maybe Ivy Bridge is disappointing, but perhaps familiarity with their product development strategy helps to manage expectations.
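A minimal sketch of that cadence as data, with the tick/tock labels and process nodes as commonly reported (purely illustrative):

```python
# Intel's tick-tock cadence: a "tick" shrinks the process for an existing
# microarchitecture; a "tock" introduces a new microarchitecture on the
# now-proven process. Nodes in nm, as commonly reported.
cadence = [
    ("Nehalem",      "tock", 45),
    ("Westmere",     "tick", 32),  # shrink of Nehalem
    ("Sandy Bridge", "tock", 32),  # new microarchitecture, same node
    ("Ivy Bridge",   "tick", 22),  # shrink of Sandy Bridge (tri-gate)
    ("Haswell",      "tock", 22),  # expected: new microarchitecture
]

def step_kind(name):
    """Return 'tick' or 'tock' for a named generation."""
    for gen, kind, _nm in cadence:
        if gen == name:
            return kind
    raise KeyError(name)

print(step_kind("Ivy Bridge"))  # tick
```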

      • by SJHillman ( 1966756 ) on Monday April 23, 2012 @01:25PM (#39773189)

        Sounds a little like Microsoft's method.

        Win 95 - Tock
        Win 98 - Tick
        Win Me - Sproing
        Win 2000 - Tock
        Win XP - Tock
        XP SP1 - Tick
        XP SP2 - Tock
        XP SP3 - Tick
        Vista - Tock sprooooing
        Win 7 - Tick
        Win 8 - Tock (maybe)

        • Terrible analogy, given that there's no analog in software for the alternation between manufacturing-process and microarchitecture design steps that Tick/Tock represents.

      • For people familiar with Intel's Tick-Tock cadence - this should not come as much surprise....

        It comes as a great surprise. Normally, the process shrink delivers a clock boost, a power-efficiency improvement, or both. Normally there is also a speedup due to additional superscalar hardware, but Intel explained that one away as "improved graphics". Well, where did the clock boost go, then? Power efficiency? OK, all missing in action. So, the big unwritten subtext here is: Intel's 22nm node has got problems. Big problems. Trigate not working out so well?

        • Higher clock speeds use more power. Intel hasn't gone much above 3.3 GHz for years, with 3.7 GHz (I believe) being the top clock rate they have ever done. You expect them to change that now, when the focus is on higher efficiency, more cores, and lower power usage?

          It doesn't represent a problem at all, and for the record, all of the benchmarks I've seen on HotHardware (linky []) show it as being faster than Sandy Bridge, so there's that speedup you're complaining about.

          They never said that there would be a clock boost
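The clock-versus-power tradeoff behind this argument is usually approximated by the dynamic-power relation P ∝ C·V²·f; a rough sketch, with the frequency and voltage numbers purely illustrative:

```python
# Dynamic power scales linearly with frequency and quadratically with
# voltage: P ~ C * V^2 * f. Real chips add static/leakage power on top,
# so this is a lower bound on the cost of a clock bump.
def relative_dynamic_power(freq_scale, voltage_scale):
    return freq_scale * voltage_scale ** 2

# A 3.3 -> 3.7 GHz bump (~12% faster clock) that also needs, say, a 5%
# voltage bump compounds into a substantially larger power increase:
scale = relative_dynamic_power(3.7 / 3.3, 1.05)
print(f"{scale:.2f}x dynamic power")  # 1.24x power for a 1.12x clock
```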

          • You twisted my words up entirely. Let me put it more simply: I am underwhelmed by the "tock" this time. As is every commentator with a clue. This process node appears to be a fail for Intel.

            • And, oh yes, I am underwhelmed by the "tick". On the face of it, Intel would have accomplished more with another go around at 28nm.

              Now for the Intel fanboys in the thread, let's shed some authoritative light [] on the subject.

            • Well, where did the clock boost go then?

              Normally, also a speedup due to additional superscalar hardware, but Intel explained that one away as "improved graphics".

              Your words, not mine. You wanted a clock boost (which they really haven't done for about six years now, and did not promise), and a speedup (which they delivered). You claimed that power efficiency is, to quote your post, "missing in action" (even though it isn't).

              You seem to have assumed they reneged on all their promises despite the reality of the situation, for no other apparent reason than that you wanted something to rail about. Maybe check the sources before buying into the Slashdot spin BS.

        • by Calos ( 2281322 ) on Monday April 23, 2012 @03:06PM (#39774543)

          Power efficiency? OK, all missing in action.
          Per some of the articles, power consumption is down nearly 20W between the two generations.

          So, the big unwritten subtext here is: Intel's 22nm node has got problems. Big problems. Trigate not working out so well?
          Far too early to tell. The fact that they introduced a brand-new, immensely complex process into manufacturing and it is working so well actually says a lot of good about how the trigate process is faring. It will, of course, need some tuning and massaging. But it is already performing as well as/slightly better than the previous generation on its first release, at lower power (at least per Anand).

          IVB is also farking small, which as the process matures, should mean more parts and lower prices.
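The size point can be put in numbers using the die figures that circulated at launch (approximate, as reported by review sites):

```python
# Approximate launch-era figures for the quad-core parts, as reported:
# Sandy Bridge: ~995M transistors in ~216 mm^2
# Ivy Bridge:   ~1.4B transistors in ~160 mm^2
snb_transistors, snb_area_mm2 = 995e6, 216.0
ivb_transistors, ivb_area_mm2 = 1.4e9, 160.0

snb_density = snb_transistors / snb_area_mm2  # transistors per mm^2
ivb_density = ivb_transistors / ivb_area_mm2

# The 22nm shrink roughly doubles transistor density:
print(f"{ivb_density / snb_density:.2f}x density")  # 1.90x density
```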

        • This brings up a question I have been wanting to ask, and since I'm sure this article is full of hardware guys, maybe someone here can answer: how low CAN you go before electron leakage makes it no longer worth it? I mean, the whole reason we went multicore is that both Intel and AMD hit a MHz wall, where even slight increases equaled tons of extra power and heat, so are we approaching that with shrinkage? If not, how far do you think we can continue to shrink before we end up with it just not being worth it?


          • Re: (Score:3, Interesting)

            by drhank1980 ( 1225872 )
            I saw a presentation a couple of years ago at SPIE in which Intel showed cross sections from a sub-10nm process. They had completely wrapped the gate around the device to get those to work, so the transistors were just tubes. In the same presentation, they were also showing that the current/voltage improvements between the 32nm node and the 22nm node were much more like the improvements from the 130nm to the 90nm nodes (65nm to 45nm to 32nm have all leaked too much to get much bang for the buck on the shrink
          • by cheesybagel ( 670288 ) on Tuesday April 24, 2012 @04:35AM (#39779919)

            Leakage was handled in several ways. Materials technology in semiconductor manufacturing (in particular CPU manufacturing) advanced a lot in the last decade and a half. It used to be that gates were all made from polysilicon. Eventually, as transistors got smaller, closer to the nanoscale, there was work done on new materials (so-called low-k and high-k materials). You probably heard names such as Black Diamond low-k or hafnium high-k (aka metal gates) along the way. These reduced the leakage issue. Instead of using aluminum for the wires, today we use copper to reduce power consumption, because copper is a better conductor. Then there is germanium doping to produce so-called 'strained silicon', so that the silicon atoms are further apart, improving electron mobility. With these material changes and a couple of design changes, today's processors are clocking higher than they were 10 years ago, even if not at the rate Intel used to predict back then. You have probably noticed by now that we are either hitting or close to hitting 4 GHz on CPUs, while not so long ago they used to be 2 GHz or less with regular air cooling.

            Today people are doing chip stacking (e.g. on cellphones it is common to stack the DRAM and Flash on top of the CPU module) to make the system more compact. Then there are people working on so-called vertical transistors and trigate transistors instead of regular planar transistors. Ivy Bridge, for example, is the first processor featuring trigate transistors, which is one reason for its low power consumption and reduced leakage compared to Sandy Bridge. It has been more trouble than usual, but it seems everything is OK for the next two process shrinks to work in technological terms. Ultimately we will see the whole system on a chip; CPU/GPU integration is simply the first step, with DRAM probably following soon afterwards.

        • It offers a significant power reduction (~22%), plus a slight boost in IPC, same clock rates, and a notable boost in IGP performance (~30%). For instance, i7 3770K (77W TDP, and HD 4000) vs i7 2700K (95W TDP and HD 3000) []. Both are quad core, 8 thread, 3.5GHz with max turbo of 3.9GHz, and 8MB L3 cache. On the mobile CPU side, a new i7 3612QM, 35W quad core, 8 thread, 6MB L3 cache, and HD 4000 graphics, compared to at least 45W TDP on all prior quad core mobile i7 CPUs (with slower IGP).
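For what it's worth, the TDP numbers quoted here work out to just under a 19% reduction, so the ~22% figure presumably comes from measured platform power rather than TDP. The arithmetic:

```python
# TDP is a thermal design target rather than measured draw, but it is the
# only like-for-like spec number available for the two parts.
tdp_2700k = 95  # W, Sandy Bridge i7-2700K
tdp_3770k = 77  # W, Ivy Bridge i7-3770K

reduction = (tdp_2700k - tdp_3770k) / tdp_2700k
print(f"TDP reduction: {reduction:.1%}")  # TDP reduction: 18.9%
```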

      • Okay, so I have been planning a long term computer strategy since 2006 when I got a decent first gen Quad Core.

        So hopefully if I can hold out that long, I should wait for the Tock - Haswell architecture, at the same time waiting for the Post-Win8-Metro consensus, which might just be either a Tock for Microsoft or maybe even a paradigm explosion into Apple and/or Linux if by some Mayan Miracle Microsoft implodes as a company. Or, if there is no "Windows 9", then I'll have to think about what to do then.

      • It avoids a problem many companies have: fighting with a new design and a new process at the same time, and ending up with a product that gets delayed, has issues, etc. They either use a new design on a stable process, or an untested process with a proven design.

        Sometimes a new process can give a moderate sized performance bump due to higher clock speeds, but that doesn't always happen.

        It does reduce power consumption though and that is always nice.

      • by rev0lt ( 1950662 )

        Haswell should be a Tock - a BIG performance improvement.

        Don't expect a raw BIG performance improvement. Not like P4 vs. Core 2. But Haswell DOES implement a breakthrough: Transactional Synchronization Extensions. It translates to "transactional memory for threads". It adds a couple of new instructions (they may even be available on current CPUs, as they are already documented in the development manuals) to control thread context. Check [] for details.
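TSX is a hardware feature (new instructions delimiting transactional regions), so it can't be demonstrated portably, but its optimistic run-then-retry-on-conflict flavor can be mimicked in plain software. A conceptual sketch only; the class and method names are made up for illustration and have nothing to do with the actual instruction set:

```python
import threading

class VersionedCell:
    """Software mock-up of optimistic, transaction-style updates."""
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self.version = 0
        self.value = value

    def transact(self, update):
        """Compute an update speculatively; retry if the cell changed underneath us."""
        while True:
            seen_version, seen_value = self.version, self.value
            new_value = update(seen_value)        # speculative work, no lock held
            with self._lock:                      # short commit section
                if self.version == seen_version:  # no interference: commit
                    self.value = new_value
                    self.version += 1
                    return new_value
            # conflict detected: loop and retry, like an aborted transaction

cell = VersionedCell(41)
print(cell.transact(lambda v: v + 1))  # 42
```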

    • Yes and no. Power consumption is important to some people. The graphics boost will help the majority of consumers that rely on integrated video. Gamers won't care about either but are interested in the performance increase which is slightly better. At the same price as the previous generation, a potential customer has few reasons to buy Sandy Bridge unless they are more focused on cost.
      • Power consumption is important to some people... Gamers won't care about either...

        Wrong. Power consumption determines cooling requirements, which determine fan noise. Power consumption also determines whether your long-suffering power supply needs yet another upgrade. We are already in circuit-breaker-blowing territory on a lot of gaming rigs.

        • I would think a gaming rig being able to blow a breaker would be a source of pride to some gamers. MOAR POWER! Ivy Bridge shaves about 18W TDP for some of the new chips compared to Sandy Bridge equivalents.
    • Re: (Score:3, Insightful)

      by timeOday ( 582209 )
      A 50% GPU improvement [] over Sandy Bridge is VERY significant.
      • by 0123456 ( 636235 ) on Monday April 23, 2012 @01:45PM (#39773475)

        A 50% GPU improvement [] over Sandy Bridge is VERY significant.

        Not particularly. A 50% faster GPU will still suck for gamers and will be irrelevant to non-gamers.

        • Intel's onboard GPUs are good enough to play games these days. No you won't be cranking up the graphics detail, but they'll do the trick to play many games. You might notice on the linked page it is running Skyrim in medium detail at a playable framerate. That is a modern game title. The other games tested are similar. None of them are running stellar, but they are doing 30fps at medium quality.

          For non-gamers, well more and more is making use of the GPU. All that shiny UI stuff is done on the GPU, and all t

          • by 0123456 ( 636235 )

            Sure, but low resolution and medium quality is of no interest to most gamers. Being able to play a game badly isn't really a big selling point when you can play it well for another $100.

          • by zlives ( 2009072 )

            Not for gaming

            "More disheartening is the fact that you have to dial down to 1280x720 and use detail settings that make five-year-old consoles look good."

        • Intel HD graphics 2000 is sufficient to play Starcraft 2 at reasonable levels, I think 50% improvement over the already 50% faster HD 3000 would be very welcome for gamers.

      • Very significant for Intel, not for users. AMD is leading in that segment by far, still being the only relevant option in integrated graphics. And Trinity will only widen that gap. Sandy Bridge was already enough for desktop effects, video playback, and legacy gaming, and Ivy is good for exactly the same things. The performance gains, impressive as they are (50% is a major leap), aren't that significant for any of those tasks, nor do they improve serious usage by a lot. And, given that Llano has far superior performan

        • Not really true – the HD 4000 is fast enough to beat all Llanos except the top-end one. That, plus the fact that it's possible to buy a faster Intel CPU *and* a discrete Radeon 6570 for less than the top-end Llano with its integrated 6550, makes Llano pretty much irrelevant.

          • That's not my point. My point is that HD4000 isn't significantly better than HD3000 because there's a monstrous leap in needed performance between "casual use" and "serious use". It's especially not enough for an i7, being too bottlenecky. If they make it to the Celerons without too much gimping, though, Intel will pack a reasonable punch in the low end.

            Also, Llano won't be competing against Ivy, Trinity will. We will have to see how good of an iGPU it has, but seems like they have achieved the same 50% inc

            • Funny enough, what I am seeing in all the B&M stores is NOT Llano but Bobcat. The power usage of Bobcat is better, while its graphics are powerful enough that you can play L4D at 1366x768, which is what most laptops are coming with anyway. The problem right now for AMD isn't Intel so much as it is AMD: they put too much faith in Bulldozer, and it's a turkey, and by killing Thuban they gave away a lot of the low-end desktop market to Pentiums. The nice thing about Thuban is they could simply switch off cores and fil

      • A 50% GPU improvement [] over Sandy Bridge is VERY significant.

        Compared to other Intel. But compared to AMD and NVidia it still sucks major donkey poo.

    • by Jeremi ( 14640 ) on Monday April 23, 2012 @01:52PM (#39773585) Homepage

      And now that they can't squeeze any more performance out of the designs, they're working on decreasing energy consumption.

      Is it really because they can't squeeze out more performance, or is it because decreased energy consumption is primarily what consumers are demanding these days?

      I can't remember the last time I heard anyone complaining about their CPU being too slow (barring software problems), but people still wish their laptop/tablet had longer battery life.

      • by vegge ( 184413 )
        Indeed. Lower power consumption has driven my last 3 or 4 CPU purchases. The 45-watt Athlon in my desktop means it can now get by without a CPU fan, with slower case fans and a smaller, fanless power supply. Power consumption is maybe even more important for media center PCs; they're generally on all the time...
        • by jedidiah ( 1196 )

          An HTPC only needs to be on when you're using it.

          Even MythTV supports the idea of putting a backend to sleep when it's not being used. Putting a frontend to sleep is pretty trivial. You hit the off switch.

          Intel GPUs continue to be disappointing: something you either try to ignore or work around (by upgrading to AMD or Nvidia).

        • I have been waiting for Ivy Bridge to show up in the MacBook Pro, because power draw is huge when on batteries.

          But a "tock" which I feel nobody has mentioned, and which is almost the sole reason I am patiently waiting for the next MBP, is 4K screen resolution. I feel that "retina display"-type DPI becomes possible with this feature. The next release of OS X shows development aimed at utilizing 4K potential.

          Gaming may be poor performance since GPUs may have to get a substantial overhaul and nobody probably
    • Right, a company with no competition releasing better products for the same price... That's really unfair on consumers.

    • by gman003 ( 1693318 ) on Monday April 23, 2012 @01:59PM (#39773691)

      Yes, IB isn't a massive improvement on SB. But it's also worth stating what Intel did right:
      Same price
      Compatible with old sockets/motherboards

      And who said every generation of processors had to be a significant improvement? Toyota puts out essentially the same car every year for a decade, with only minor, incremental improvements. There's no reason why you can't do the same for processors. The only downside is for people who like to brag about having the very-latest processor.

      Personally, I'm going to be grabbing an Ivy Bridge laptop, if only because my old, reliable Core 2 laptop finally died. And I'll probably skip over Haswell, maybe Broadwell too, before upgrading again.

      Long story short, if you've got a Sandy Bridge, you don't need to upgrade yet. If you've got a Nehalem and some spare cash, an upgrade may (or may not) be useful. If you're on something before that, IB is the chip to upgrade to.

      PS: I'm not really a fanboy for either company (I've used both extensively - the Phenoms were great, and even my old Athlon 900 still sees service now and again), but AMD really doesn't have any attractive higher-end options. The Fusion processors look good compared to Intel's low-power options, though - I seriously considered getting a small Fusion laptop and then building a more powerful SB or IB desktop at home, but decided single-device was better.

    • by LWATCDR ( 28044 )

      Maybe, or you may just be seeing a mature product segment.
      Look at airliners. They are not getting any faster, for the most part, but incrementally more efficient. Every technology reaches a point of maturity where improvements become incremental. The i7 right now is fast enough for the vast majority of users' needs; what will be interesting is to see how the i5 and i3 do.

    • by LordLimecat ( 1103839 ) on Monday April 23, 2012 @02:18PM (#39773955)

      Except the summary seems wrong by its own sources:


      Since late last year Ivy Bridge seems to be the architecture everyone is waiting for. Although Intel is only anticipating a 10–15% processing performance bump when compared to Sandy Bridge,

      Which is what they have been saying for about a year now, and what everyone expected. And for the record, 15% speed boost at the same clock with lower power usage is not insignificant, at all.


      Ivy Bridge is a tick+, as we've already established. ... The end result is a reasonable increase in CPU performance (for a tick), a big step in GPU performance, and a decrease in power consumption.


      For raw numbers, the top HD 4000 only has 16 shaders, but the underlying architecture is completely new. ... Intel is claiming about 2x the graphics performance from 33% more units. We don't think these claims are out of line for the general case.

      Way to go, summary, you successfully implied that the chip was a flop when your sources indicate it hit its target, has substantially better GPU performance, and has a launch price in line with its current lineup. Slashdot truly is master of the art of spin.

    • by tlhIngan ( 30335 )

      So, Intel, a company with no real competition right now in the market, has produced a product that offers only a very slight performance boost, and relied on tons of marketing to drum up anticipation for this mediocre offering. And then priced it the same as existing offerings as an apology to those who waited. Actually, that sounds about par for the course these days. The only real news in cpus and motherboards has been that they've gone multicore and continue to increase bandwidth. And now that they can't

    • by Mashiki ( 184564 )

      So, Intel, a company with no real competition right now in the market, has produced a product that offers only a very slight performance boost, and relied on tons of marketing to drum up anticipation for this mediocre offering.

      I'm guessing you weren't working in the industry back in the '90s, when we had Cyrix in the game too. Even back then, during the MHz race, Intel would do this if only to one-up the competition. Hell, it was even worse during the socket/slot fiasco.

    • by hairyfeet ( 841228 ) <> on Monday April 23, 2012 @05:04PM (#39775849) Journal

      Well there HAS been some innovation, just not much. Intel finally accepted that truly piss poor graphics simply won't cut it (although I still wouldn't consider them great, they are a lot better than say the 945 shitpiles they used to push) and of course what AMD is doing is showing a shift in direction, pairing more minimal CPUs like Bobcat with a much more powerful GPU.

      And THAT to me is the real question we are gonna see answered in the next couple of years: is the GPU or the CPU more important in mobile? The interesting thing is that Intel and AMD have each chosen a different side of the debate, and they both have interesting points. AMD believes that with A/V and gaming the push should be on the GPU, which on the consumer side makes sense, as home users are much more likely to be watching HD movies than, say, working a large spreadsheet. Intel believes that with an uber-powerful CPU the GPU frankly doesn't have to be that great, and they too have a point, as many of the jobs the GPU does can be done by the CPU if it has enough cycles.

      Personally I believe what we are gonna end up with is a split, with AMD taking the home users who are more price sensitive and more multimedia heavy while Intel takes the workstation and business users who are more likely to be doing CPU heavy tasks. I have been noticing this trend in the B&M stores where all the consumer machines, both desktop and laptop, are AMD Fusion whereas the business section is dominated by Core based laptops.

      But in any case, the next couple of years will be interesting to watch. I just hope AMD is able to keep a horse in the race, as we have seen in the past how terrible a monopoly is for a market, and the whole Intel tick/tock strategy didn't really come about until they got worried about the Athlon. Intel can afford to coast for the most part and simply concentrate on lowering the power of what they already have, as there hasn't been a "killer app" that has needed more power in quite a while, whereas AMD has a real turkey with Bulldozer, and the moron who killed Thuban left them with no real alternatives other than Bobcat, so if they don't either come out with a new design or fix Faildozer they could end up toast.

      All I know is, as a system builder, when I can't get any more Socket AM3 chips I'll be going to Intel. Bulldozer really is a bad chip, as bad if not worse than Phenom I. It's too expensive; it's a bunch of triples and quads with hardware-accelerated hyperthreading that they are having to sell as hexas and octos because of how much the chips cost to make, and the performance actually improves when you kill the hyperthreading. As much as I love competition, anyone with eyes can see that even an Intel dual-core Sandy frankly curb-stomps Bulldozer, and I'm sure Ivy will just make that beatdown even more obvious. Congrats, Intel designers, you have a killer design on your hands.

  • Review Roundup (Score:5, Informative)

    by I.M.O.G. ( 811163 ) <> on Monday April 23, 2012 @01:15PM (#39773075) Homepage
    A roundup of reviews from the usual major sites as well as others not mentioned in the summary above: Overclockers Review [], Anandtech Review [], Anandtech Undervolting/Overclocking [], HardwareSecrets [], Bit-tech [], PCPer [], Tweaktown [], Hard OCP [], The Inquirer [], Techspot [], Computer Shopper [], Tom's Hardware [], ExtremeTech [], PC Mag [], Overclockers Club [], and Guru 3d []
    • The Tech Report has chimed in with its own review [], which contains a unique look at gaming performance with the integrated graphics and discrete GPUs. There's also a dedicated overclocking article [] that looks at the experience on four different motherboards.
    • X-bit Labs [] review.

      Not much new stuff in there compared to other reviews. I miss the days when they accurately measured CPU and GPU power consumption... Now it's just meaningless "total power".

  • In the end, those desktop users who decided to skip Sandy Bridge to hold out for Ivy Bridge, probably shouldn't have.

    Well, that rather depends on how many Ivy Bridge recalls there will be, doesn't it?

    • When building a new PC I went for the Pentium G620. It's pretty much the lowest-end Sandy Bridge CPU in existence. I've been running this model for a while now.

      With Ivy Bridge coming out, hopefully the prices on Sandy Bridge CPUs will come down. Maybe I could move to an i3 on the cheap, then. Or perhaps I'll even wait for Haswell; Sandy Bridge CPUs will probably be dirt cheap by then.

  • If we put cost aside for a moment, the new Intel CPU keeps to the Moore's law standard, and the performance-per-watt ratio is accordingly increased compared to the previous CPU. So I don't get why people complain.
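As a reminder of what that standard means, Moore's law is just compound doubling of transistor budget; a tiny illustrative calculation, assuming the usual two-year doubling period:

```python
# Compound-growth form of Moore's law: transistor budget doubles roughly
# every two years. Illustrative bookkeeping, not a prediction.
def transistors_after(start, years, doubling_period=2.0):
    return start * 2 ** (years / doubling_period)

# Starting from ~1 billion transistors, four years (two doublings) later:
print(f"{transistors_after(1e9, 4):.0e}")  # 4e+09
```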
  • by jbeaupre ( 752124 ) on Monday April 23, 2012 @03:04PM (#39774515)

    For those of us who need a reminder: []

    Yeah, it's Wikipedia. But it's short and to the point.
