
Intel Officially Lifts the Veil On Ivy Bridge

Posted by samzenpus
from the check-it-out dept.
New submitter zackmerles writes "Tom's Hardware takes the newly released, top-of-the-line Ivy Bridge Core i7-3770K for a spin. All Core i7 Ivy Bridge CPUs come with Intel HD Graphics 4000, which, despite DirectX 11 support, provides only a modest boost over Sandy Bridge's Intel HD Graphics 3000. However, the new architecture tops the charts for low power consumption, which should make the Ivy Bridge mobile offerings more desirable. In CPU performance, the new Ivy Bridge Core i7 is only marginally better than last generation's Core i7-2700K. Essentially, Ivy Bridge is not the fantastic follow-up to Sandy Bridge that many enthusiasts had hoped for, but an incremental improvement. In the end, those desktop users who decided to skip Sandy Bridge to hold out for Ivy Bridge probably shouldn't have. On the other hand, since Intel priced the new Core i7-3770K and Core i5-3570K the same as their Sandy Bridge counterparts, there is no reason to purchase the previous-generation chips." Reader jjslash points out that coverage is available from all the usual suspects — pick your favorite: AnandTech, TechSpot, Hot Hardware, ExtremeTech, and Overclockers.
Comments Filter:
  • Re:HD 4000 (Score:5, Interesting)

    by h4rr4r (612664) on Monday April 23, 2012 @01:28PM (#39773231)

    The vast majority of users will use it. Intel integrated has been a good enough solution for most users for a long time now.

    It would cost more to fab a chip without it, since they would be making so few. Would you pay extra for that?

    This is a normal tick in the Intel tick-tock cycle. You will get that 50%-100% with Haswell.

  • by Calos (2281322) on Monday April 23, 2012 @03:06PM (#39774543)

    Power efficiency? OK, all missing in action.
    Per some of the articles, power consumption is down nearly 20W between the two generations.

    So, the big unwritten subtext here is: Intel's 22nm node has got problems. Big problems. Trigate not working out so well?
    Far too early to tell. The fact that they introduced a brand-new, immensely complex process into manufacturing and it is working so well actually says a lot of good about how the trigate process is faring. It will, of course, need some tuning and massaging. But it is already performing as well as, or slightly better than, the previous generation on its first release, at lower power (at least per Anand).

    IVB is also farking small, which, as the process matures, should mean more parts per wafer and lower prices.

  • by hairyfeet (841228) <bassbeast1968@ g m a i l.com> on Monday April 23, 2012 @05:04PM (#39775849) Journal

    Well there HAS been some innovation, just not much. Intel finally accepted that truly piss-poor graphics simply won't cut it (although I still wouldn't consider them great, they are a lot better than, say, the 945 shitpiles they used to push), and of course what AMD is doing shows a shift in direction: pairing more minimal CPUs like Bobcat with a much more powerful GPU.

    And THAT to me is the real question we are gonna see answered in the next couple of years: is the GPU or the CPU more important in mobile? The interesting thing is that Intel and AMD have each chosen a different side of the debate, and they both have interesting points. AMD believes that with A/V and gaming the push should be on the GPU, which on the consumer side makes sense, as home users are much more likely to be watching HD movies than, say, working a large spreadsheet. Intel believes that with an uber-powerful CPU the GPU frankly doesn't have to be that great, and they too have a point, as many of the jobs the GPU does can be done by the CPU if it has enough cycles.

    Personally I believe what we are gonna end up with is a split, with AMD taking the home users, who are more price-sensitive and more multimedia-heavy, while Intel takes the workstation and business users, who are more likely to be doing CPU-heavy tasks. I have been noticing this trend in the B&M stores, where all the consumer machines, both desktop and laptop, are AMD Fusion, whereas the business section is dominated by Core-based laptops.

    But in any case the next couple of years will be interesting to watch. I just hope AMD is able to keep a horse in the race, as we have seen in the past how terrible a monopoly is for a market, and the whole Intel tick/tock strategy didn't really come about until they got worried about the Athlon. Intel can afford to coast for the most part and simply concentrate on lowering the power of what they already have, as there hasn't been a "killer app" that has needed more power in quite a while. AMD, on the other hand, has a real turkey with Bulldozer, and the moron that killed Thuban left them with no real alternatives other than Bobcat, so if they don't either come out with a new design or fix faildozer they could end up toast.

    All I know is, as a system builder, when I can't get any more Socket AM3 chips I'll be going to Intel. Bulldozer really is a bad chip, as bad if not worse than Phenom I. It's too expensive; it's a bunch of triples and quads with hardware-accelerated hyperthreading that they are having to sell as hexas and octos because of how much the chips cost to make, and the performance actually improves when you kill hyperthreading. As much as I love competition, anyone with eyes can see even an Intel dual-core Sandy frankly curb-stomps Bulldozer, and I'm sure Ivy will just make that beatdown even more obvious. Congrats, Intel designers, you have a killer design on your hands.
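    The "kill hyperthreading" experiment above can be tried on Linux, where Bulldozer's paired cores in a module (like Intel's SMT threads) show up as thread siblings in sysfs. A rough sketch, assuming the standard kernel sysfs layout; `extra_siblings` is an illustrative helper, and actually offlining a CPU requires root:

```shell
#!/bin/sh
# For each CPU, read its thread_siblings_list (e.g. "0,4" or "0-1") and
# report every sibling after the first -- the ones you would take offline
# to test performance without "hyperthreading".

# Print all entries of a siblings list except the first.
# Handles both comma lists ("0,4") and ranges ("0-1").
extra_siblings() {
    echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
        seq "$lo" "${hi:-$lo}"          # expand "a-b" ranges; bare "a" stays "a"
    done | sort -n | uniq | tail -n +2  # drop the first (kept) sibling
}

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -r "$cpu/topology/thread_siblings_list" ] || continue
    for sib in $(extra_siblings "$(cat "$cpu/topology/thread_siblings_list")"); do
        # As root, this would do it:  echo 0 > /sys/devices/system/cpu/cpu$sib/online
        echo "would offline cpu$sib"
    done
done | sort -u
```

    Rebooting (or echoing 1 back into the `online` files) restores the siblings, so it is a cheap before/after benchmark.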

  • by drhank1980 (1225872) on Tuesday April 24, 2012 @12:23AM (#39779005)
    I saw a presentation a couple of years ago at SPIE in which Intel showed cross-sections from a sub-10nm process. They had wrapped the gate completely around the device to get those to work, so the transistors were just tubes. In the same presentation, they also showed that the current/voltage improvements between the 32nm node and the 22nm node were much more like the improvements from 130nm to 90nm (65nm, 45nm, and 32nm all leaked too much to get much bang for the buck from the shrinks), so theoretically the next-generation 22nm Haswell may see some clock improvements again. But we will have to see, as there are significant challenges in shrinking the first layer of metal interconnect that may sink any improvements in transistor performance.

    Also at this same conference, the TSMC CEO was very confident that they could make devices that worked well at 7-8nm; the real question was whether you could manufacture those cost-effectively, as EUV lithography is too slow and triple-patterning 193nm immersion is going to be very expensive.
