Intel Hardware

Intel To Debut Limited-Run Ivy Bridge Processor 86

abhatt writes "Intel is set to debut the most power efficient chip in the world — a limited edition 'Ivy Bridge' processor in the upcoming annual Consumer Electronics Show in Las Vegas. Only a select group of tablet and ultrabook vendors will receive the limited Ivy Bridge chips. From the article: 'Intel did not say how far below 10 watts these special "Y" series Ivy Bridge processors will go, though Intel vice president Kirk Skaugen is expected to talk about the processors at CES. These Ivy Bridge chips were first mentioned at Intel's annual developer conference last year but it wasn't clear at that time if Intel and its partners would go forward with designs. But it appears that some PC vendors will have select models in the coming months, according to Intel.'"
  • Why not servers? (Score:5, Insightful)

    by jackb_guppy ( 204733 ) on Thursday January 03, 2013 @09:18PM (#42470085)

    We need to cut the power and heat of NOCs. Why only build these for the junk market of throwaway toys?

    • by WarJolt ( 990309 )

      Just because it's efficient in a tablet/laptop doesn't make it efficient in a data center.

      • Re:Why not servers? (Score:5, Interesting)

        by TheGratefulNet ( 143330 ) on Thursday January 03, 2013 @09:38PM (#42470309)

        a Bay Area company (that got bought by AMD) makes its business using Atoms and Atom-like CPUs in datacenter 'IO clusters'.

        not all DCs need compute power. Often it's about IO, and you don't need fast CPUs for IO-bound tasks.

        • You're just looking at the wrong CPU. An Atom processor may be controlling the general function, but some other processor(s) are handling the hard work.

          High speed IO still requires high speed processing.

          • by gl4ss ( 559668 )

            high width io.

            shitloads of connections, not much of processing per connection.

            • Most high-speed connections these days are serial, not parallel. As you ramp up the speed, the complexity of keeping the signals from the different wires synchronised gets harder. Above a certain threshold, it's easier to make a serial connection ten times faster than to keep ten wires synchronised.
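The tradeoff described above can be sketched with some back-of-the-envelope arithmetic. The 25% skew rule of thumb and the per-lane rates below are illustrative assumptions, not figures from any interface specification:

```python
# Rough skew-budget arithmetic for a parallel bus. As the per-lane rate
# climbs, the allowable lane-to-lane skew shrinks linearly, which is one
# reason fast links go serial instead of wide.

def bit_period_ps(gbits_per_s):
    """Bit period in picoseconds at a given per-lane data rate."""
    return 1e12 / (gbits_per_s * 1e9)

for rate in (1, 5, 10):
    # Assume (for illustration) skew must stay under 25% of a bit period.
    budget = 0.25 * bit_period_ps(rate)
    print(f"{rate} Gb/s per lane: bit period {bit_period_ps(rate):.0f} ps, "
          f"skew budget ~{budget:.0f} ps")
```

At 1 Gb/s a bit lasts 1000 ps and the budget is generous; at 10 Gb/s the same rule leaves only about 25 ps, which is why keeping ten wires aligned becomes harder than simply making one wire faster.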
        • by gfody ( 514448 )
          that company is seamicro [seamicro.com] btw
    • I think they need more CPU power and maybe more IO than some of the very low-end chipsets.

      Also, what about ECC RAM?

    • Re: (Score:2, Troll)

      Probably because the price vs. performance vs. power triangle is focused almost entirely on power. Which is OK for something like an overpriced MacBook Air, but datacentres would want more of a balance.

    • by jd2112 ( 1535857 )
      Because it is still more efficient to buy a few big honkin' servers and virtualize as many of your workloads as possible.
    • We have hundreds of NOCs all over the country, and power isn't our problem. Heat, however, definitely is. But most of the equipment we have does not have Intel chips in it. I think most of the heat comes from Cisco gear. A large Cisco core router puts out orders of magnitude more heat than any Intel chip could possibly put out. When our guys bring those things up to do testing on their desk, the fans sound like large vacuum cleaners.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Well, why do a limited run at all?

      Maybe they get crap yields, or have to do aggressive binning to meet the specs.

      Either way it looks like a warning shot to stave off the growth of ARM in server and netbook sectors. "We don't want to serve this market because it would hurt our profits. But, y'know... we could."

      • As I understand it--and I'm not up on the latest and greatest, granted--Intel is coming out with a new family of processors sometime this spring (Haswell, I believe they're calling it) which are better than the Ivy Bridge CPUs regarding power/heat.

        So I would imagine that this is a stop-gap type of thing.

        • If Sandy Bridge to Ivy Bridge was any indication, Haswell will be a 22nm "tock" aimed at performance but it'll be followed by Broadwell that takes it down to 14nm and gives remarkable power and thermal performance improvements.

            • You're right that Haswell is a tock aimed at boosting performance. But the resulting chips are expected to perform only about 5% better than Ivy Bridge, while consuming significantly less power.

            Everything is about power consumption these days.

    • by hamjudo ( 64140 ) on Thursday January 03, 2013 @11:03PM (#42471039) Homepage Journal
      If they could make enough of these wonder chips to satisfy the projected demand, they wouldn't bother with a "Limited Edition". They're limiting sales to match their manufacturing capacity. They don't want to cannibalize potential Atom design wins with this chip that they can't yet make in high enough quantity. Expect the "Limited Edition" moniker and associated high price to go away "real soon now".

      Once they can make the things in sufficient quantity they will undoubtedly make versions with server features. Most server buyers don't need or want on chip graphics, but do want ECC.

      • If I had to guess this is probably being done to test out production facilities that will be used for Haswell. They can make limited runs with this special version of Ivy Bridge and start to generate more interest in the low power CPUs while also getting some good data for the real productions runs of Haswell in a few months.
        • by Anonymous Coward

          If I had to guess this is probably being done to test out production facilities that will be used for Haswell. They can make limited runs with this special version of Ivy Bridge and start to generate more interest in the low power CPUs while also getting some good data for the real productions runs of Haswell in a few months.

          You're almost right, but missing the whole picture.

          All Ivy Bridge CPUs are made on the exact same production facilities which will be used for Haswell. Intel's "tick-tock" strategy is to alternate between doing process shrinks and new designs. When they bring a new process online, at first they manufacture a shrunk version of an older processor design, to reduce the number of unknown problems they might have to deal with. After about a year of learning, they switch over to a new design which is fully optimized for that process.

    • Because for servers you'd be better off buying a handful of faster, higher-power chips than a pile of slower, low-power chips?

    • Presumably they can't produce these chips at that scale yet. I would be stunned if Intel didn't announce a CPU with a similar profile in a later microarchitecture.

  • by Anonymous Coward

    I wonder how true this ACTUALLY is? Are we talking x86 flop/watt comparisons, or...?

  • ... that apple buy all of them for the Macbook Air?
  • Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce the crippled part - the premium part would actually cost less without the extra crippling capability.

    • by Anonymous Coward

      Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse

      Yea a fuse. Wait what?

      I don't think that as a company they're very comfortable with this whole "power thing"

      That's why Ivy Bridge already basically kills anything remotely comparable in terms of power usage?

      I will never understand the brand hate people exhibit, in this case against Intel by an AMD fan.

      • by Pinhedd ( 1661735 ) on Thursday January 03, 2013 @11:15PM (#42471135)

        By a 'fuse' he's talking about the selective factory or post factory programming of a chip.

        Intel has only 5 different pieces of silicon serving 150+ different Sandy Bridge and Sandy Bridge-E processors; the same is true for Ivy Bridge.

        When the fabrication process is finished, the chips on each wafer are tested for quality. Chips that fail completely are discarded. Chips that have flaws in a core or cache segment will have that core or cache segment disabled. This allows a faulty chip to be sold as a lower-end model.

        Similarly, if demand for a lower end model is higher than the supply of the lower end models, higher quality chips can have parts disabled so that they may be repackaged as a lower end product for marketing purposes.

        All of this is done at the factory before the chip is inserted into a processor package. An additional step invented by IBM allows firmware upgrades themselves to reprogram the chip, possibly reactivating parts that were deactivated at the factory, or changing CPU parameters so that older firmware revisions cannot be installed (this is done with the PS3).

    • by Gadget_Guy ( 627405 ) on Thursday January 03, 2013 @11:40PM (#42471325)

      There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part.

      Sometimes it actually turns out cheaper to make it this way. They could design a low-end CPU to sell cheaper than their premium product, but the cost of setting up an entirely different fabrication line for that CPU might actually be more than just including a switch in their higher-end processor to cripple the chip. The costs include having to reconfigure the production line for a different line of wafers.

      Using the same chip design means that they can still sell the CPUs that fail quality control testing. If one of the cores fails in a quad-core CPU, they can just turn that one off and sell it as a dual-core part. So instead of increasing the price of the premium chip by having the "fuse" as you put it, they are making the chip cheaper because it reduces the wastage of having to discard the failed processors.

    • by Sycraft-fu ( 314770 ) on Thursday January 03, 2013 @11:44PM (#42471351)

      Intel is NOT crippling Ivy Bridge processors. Rather, what happens is that minor variations in the silicon wafer mean that different chips come out with different characteristics. It doesn't take much to change things either; we are talking about things with features just 22nm wide, where little things have large effects.

      When you get a wafer of chips, you have to test and bin them. Some just flat out won't work. There'll have been some kind of defect on the wafer and it screws the chip over. You toss those. Some will work, but not in the range you want; again, those get tossed. Some will work but not completely; parts will be damaged. For processors you usually have to toss them; GPUs will often disable the affected areas and bin the part as a lower-end model.

      Of the chips that do work, they'll have different characteristics in terms of what clock speed they can handle before having issues and what their resistance is, and thus their power usage.

      What's happening here is Intel is taking the best of the best, resistance-wise, and binning them for a new line. They discovered that some IB chips have much lower power usage than they thought (if properly frequency limited) and thus are selling a special line for power-critical applications.

      They can't just "make all the chips better" or something. This is normal manufacturing variation and as a practical matter Intel has some of the best fab processes out there and thus best yields.

      CPU speeds are sometimes an artificial limit (though often not, because not only must a chip be capable of a given speed, it has to do it at its TDP spec) but power usage is not. It uses what it uses.
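The binning process described in the comments above can be sketched as a toy simulation. The speed grades, the Gaussian variation model, and the scrap threshold are all invented for illustration, not Intel's actual numbers:

```python
import random

# Toy model of speed binning: each die's maximum stable frequency varies
# due to process variation; a die is sold at the highest speed grade it
# qualifies for, and dies below every grade are scrapped.
random.seed(42)

BINS_GHZ = [3.5, 3.2, 2.9]  # advertised speed grades, fastest first

def bin_die(fmax_ghz):
    """Return the highest speed grade the die sustains, or None (scrap)."""
    for grade in BINS_GHZ:
        if fmax_ghz >= grade:
            return grade
    return None

# Simulate a wafer run: most dies land near the process target, with a
# spread; the tails become the premium parts and the scrap.
dies = [random.gauss(3.3, 0.25) for _ in range(10_000)]
counts = {}
for fmax in dies:
    grade = bin_die(fmax)
    counts[grade] = counts.get(grade, 0) + 1

for grade in BINS_GHZ + [None]:
    print(grade, counts.get(grade, 0))
```

The same sorting logic applies to power binning: substitute leakage current for frequency and the "best of the best" tail of the distribution becomes a special low-power SKU, exactly the mechanism being described for these Y-series parts.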

      • by dtdmrr ( 1136777 )

        I highly doubt that's the case. I suspect the defect/variation distributions of few if any generations of Intel chips have actually matched the distribution of market demand. Going back at least to the 386, they have artificially crippled higher-end models (beyond what was necessary from defects), to provide different price/feature/performance points for consumers. The SX line was just DX chips with the internal floating-point unit disabled.

        We might feel a little less cheated if Intel actually designed

    • Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce the crippled part - the premium part would actually cost less without the extra crippling capability.

      While you're correct that Intel relies heavily on testing chips, disabling whatever doesn't work/lowering the clock speed until it works, and selling it as a cheaper product, that's really a cost-saving measure, not a revenue-boosting one. When a chip rolls out with half the cores broken, they'd much rather sell it as a cheap processor than throw it away. AMD does the same thing - even more, actually. As does Nvidia, and pretty much any company that produces enough chips.

      These chips are likely a high bin, n

      • but for the consumer market, the *only* thing they have going for them right now is Fusion, having a powerful GPU on the same die as a half-decent CPU.

        Really?

        The top end non-Fusion CPU generally lands between the i5 and the much more expensive i7, sometimes beating the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single-threaded now.

        Piledriver is much better, and the performance is much more on a par with Intel now for many things.

        These days an awful lot of stuff is multithr

          The top end non-Fusion CPU generally lands between the i5 and the much more expensive i7, sometimes beating the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single-threaded now.

          No, no it doesn't. It gets beaten by the i3 3220 for everything except for very multithreaded tasks, where it roughly draws with it:
          http://www.anandtech.com/bench/Product/675?vs=677 [anandtech.com]

          Piledriver is not much better - it's the same architecture as trinity, but with the GPU stripped off.

          There actually really aren't that many tasks where multithreading makes up the difference, as you can see from a comparison of a top end piledriver, against a cheaper i5 that consumes about half the power:
          http://www.anandtech.com [anandtech.com]

          The top end non-Fusion CPU generally lands between the i5 and the much more expensive i7, sometimes beating the i7 in multithreaded benchmarks

          Wish it were true, but it's not. Very little that AMD has is "close" to the i7s. What AMD has is value, pretty decent graphics cores, and top-end core counts.

  • by antifoidulus ( 807088 ) on Thursday January 03, 2013 @10:13PM (#42470653) Homepage Journal
    Some "Limited Edition", doesn't even come with a free Gordon Moore in battle uniform statue....
  • by CuteSteveJobs ( 1343851 ) on Thursday January 03, 2013 @10:22PM (#42470705)
    Intel really needs to get its act together: its Atom processors are a decent low-power x86 solution, but as usual Intel has delivered them with crappy 3D graphics, to the point that graphical benchmarks can't even run on them, let alone any recent computer games. For the Atom Cedar Trail release they didn't even do DX10 drivers, and sheepishly back-specced it to the now-outdated DX9. ARM tablets can deliver decent 3D, so why can't Intel? Even AMD can provide 3D graphics for low-power PCs. Why can't Intel? And Intel wonders why it's becoming irrelevant to the future of computing!?

    No DX10 for you!
    http://semiaccurate.com/2012/01/03/intel-thinks-cedar-trail-is-a-dog-reading-between-bullet-points/#.UOY58uRJNxA [semiaccurate.com]

    Windows must live with DX9. Linux can't do anything at all...
    http://tanguy.ortolo.eu/blog/article56/beware-newest-intel-atom [ortolo.eu]

    Oh, and did I mention it doesn't work on Windows 8?
    http://communities.intel.com/message/175674 [intel.com]
    http://www.eightforums.com/hardware-drivers/12305-intel-gma-3600-3650-windows-8-driver.html [eightforums.com]
    http://answers.microsoft.com/en-us/windows/forum/windows_8-hardware/windows-8-on-intel-atom-d2700dc-graphics-driver/2a6015d3-af92-453d-b0c2-20cc56b764de [microsoft.com]
    • So what is your solution then? Does Intel need to come out with a range of very low powered CPUs based on their main Ivy Bridge processors with better performance than their Atom line? Do you think that they could announce this, and then we could discuss the story here on Slashdot?

      You can see where I am smugly going here. That is exactly what TFA was all about. In fact, it also said:

      Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.

      • That is exactly what TFA was all about

        Thank you for this.

        It's true, though, that the current Atom chipset is poorly considered. They even broke VGA text mode, for Pete's sake. FreeBSD 9.1 has the patch, at least, but boy was I surprised last spring!

      • Well, one solution would be for Intel to lift some of the restrictions on the Atom CPU, which are mainly in place because Intel fears that they could otherwise cut into their other more profitable CPU lines. Though I see that you can now buy an Atom board with a PCIe x16 slot, so I guess Intel may be seeing the light, or perhaps just feeling some pressure from AMD's Bobcat line in that segment.

      • > You can see where I am smugly going here. That is exactly what TFA was all about. In act, it also said:
        >> Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.

        I own quite a few Atom PCs and in terms of performance, I think Atoms are quite okay. They're not big on grunt, but still sufficiently powerful to do anything you throw at them EXCEPT 3D. That's their big weakness. Office, web services, software
        • That is true. When I was looking for a netbook, I chose an AMD based one precisely for the better 3D. I now use it as a gaming system. As long as you are happy to play games from around 2005, then it performs fine. But it is far beyond what the Atom based netbooks can do.

          There are occasions where I think that it is the CPU that is limiting a game rather than the GPU. I wonder in those situations how the Atom systems would fare.

          I just wish that you could still find these AMD netbooks around. Netbooks are

    • Intel's onboard SB / IB graphics are pretty darn competitive for an "integrated" solution. I believe they support DX10, and certainly are sufficient for most games on a "modest" setting.

  • by Anonymous Coward
    This is an Intel parlor trick to draw attention away from other vendors who have something new and interesting to offer in the sub 10W power envelope. The fact that they are pulling these shenanigans leads me to suspect AMD will have something interesting to show off at CES.
    • by Anonymous Coward
      Yep. January. The time of year when Intel trots out "100s of design wins" for "fabulous consumer technology" you're never going to see. It's part of the run-up to CES. This year they have an imaginary cable box, which is new, and imaginary tablets, which are not. It's in the Autumn that we get amazingly cool server technologies we're never going to see. If I was them, I'd reverse this as autumn is when the ramp to a big consumer holiday happens and winter is the best time to be warning enterprise cust
    • I'm not aware of other players in the sub-10W region who can provide performance competitive with ANY of Intel's Core line.

      Before responding, possibly go take a look at performance-per-watt comparisons between a modern Intel and anything ARM; they're in different leagues.

  • Did anyone else read "Intel To Debut Limited-Run Ivy Bridge Processor" and ponder why anyone would want a processor that was guaranteed to run only a limited number of times? Perhaps this is the new monetization (I hate that word) strategy, where you are forced to buy desktop processor hours on subscription. Or perhaps the limited-runs could be along the lines of "5 runs with DVD/Bluray player or non-Microsoft OS running" to give the MPAA/Microsoft something else to prop up profits with?
