Intel To Debut Limited-Run Ivy Bridge Processor
abhatt writes "Intel is set to debut the most power-efficient chip in the world, a limited-edition 'Ivy Bridge' processor, at the upcoming annual Consumer Electronics Show in Las Vegas. Only a select group of tablet and ultrabook vendors will receive the limited Ivy Bridge chips. From the article: 'Intel did not say how far below 10 watts these special "Y" series Ivy Bridge processors will go, though Intel vice president Kirk Skaugen is expected to talk about the processors at CES.
These Ivy Bridge chips were first mentioned at Intel's annual developer conference last year, but it wasn't clear at the time whether Intel and its partners would go forward with designs. It now appears that some PC vendors will have select models in the coming months, according to Intel.'"
Why not servers? (Score:5, Insightful)
We need to cut the power and heat of NOCs. Why only build these for the junk market of throwaway toys?
Re: (Score:3)
Just because it's efficient in a tablet/laptop doesn't make it efficient in a data center.
Re:Why not servers? (Score:5, Interesting)
A Bay Area company (that got bought by AMD) makes its business using Atoms and Atom-like CPUs in datacenter 'IO clusters'.
Not all datacenters need compute power. Often it's about IO, and you don't need fast CPUs for IO-bound tasks.
Re: (Score:2)
You're just looking at the wrong CPU. An Atom processor may be controlling the general functions, but some other processor(s) are handling the hard work.
High speed IO still requires high speed processing.
Re: (Score:2)
High-width IO. Shitloads of connections, not much processing per connection.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I think they need more cpu power and maybe more IO (Score:2)
I think they need more CPU power and maybe more IO than some of the very low end chipsets.
Also, what about ECC RAM?
Re: (Score:2, Troll)
Probably because the price vs. performance vs. power triangle is focused almost entirely on power, which is OK for something like an overpriced MacBook Air, but datacentres would want more of a balance.
Re: (Score:2)
Re: (Score:3)
We have hundreds of NOCs all over the country and power isn't our problem. Heat, however, definitely is. But most of the equipment we have does not have Intel chips in it. I think most of the heat comes from Cisco stuff. A large Cisco core router puts out orders of magnitude more heat than any Intel chip could possibly put out. When our guys bring those things up to do testing on their desks, the fans sound like large vacuum cleaners or something.
Re: (Score:1)
Get a heat exchanger and put the heat to good use somewhere else.
(Or, lamer and more stupid I guess (if you don't combine it with a heat exchanger), do as Facebook does and build it up north: http://www.idg.se/2.1085/1.475671/video-facebook-datacenter-fran-luften [www.idg.se])
Re: (Score:2)
Sez you.
Re: (Score:2, Insightful)
Well, why do a limited run at all?
Maybe they get crap yields, or have to do aggressive binning to meet the specs.
Either way it looks like a warning shot to stave off the growth of ARM in server and netbook sectors. "We don't want to serve this market because it would hurt our profits. But, y'know... we could."
Re: (Score:2)
As I understand it--and I'm not up on the latest and greatest, granted--Intel is coming out with a new family of processors sometime this spring (Haswell, I believe they're calling it) that is better than the Ivy Bridge CPUs in terms of power and heat.
So I would imagine that this is a stop-gap type of thing.
Re: (Score:3)
If Sandy Bridge to Ivy Bridge was any indication, Haswell will be a 22nm "tock" aimed at performance, but it'll be followed by Broadwell, which takes it down to 14nm and should give remarkable power and thermal improvements.
Re: (Score:2)
You're right that Haswell is a tock aimed at boosting performance... but the resulting chips are expected to deliver only about 5% higher performance than Ivy Bridge, with significantly reduced power consumption.
Everything is about power consumption these days.
Limited Edition is a Euphemism (Score:4, Interesting)
Once they can make these things in sufficient quantity they will undoubtedly make versions with server features. Most server buyers don't need or want on-chip graphics, but do want ECC.
Re: (Score:3)
Re: (Score:1)
If I had to guess, this is probably being done to test out production facilities that will be used for Haswell. They can make limited runs with this special version of Ivy Bridge and start to generate more interest in the low power CPUs, while also getting some good data for the real production runs of Haswell in a few months.
You're almost right, but missing the whole picture.
All Ivy Bridge CPUs are made on the exact same production facilities which will be used for Haswell. Intel's "tick-tock" strategy is to alternate between doing process shrinks and new designs. When they bring a new process online, at first they manufacture a shrunk version of an older processor design, to reduce the number of unknown problems they might have to deal with. After about a year of learning, they switch over to a new design which is fully opt
Re: (Score:2)
Because for servers you'd be better off buying a handful of faster, higher-power chips than a pile of slower, low-power chips?
Re: (Score:2)
Presumably they can't produce these chips at that scale yet. I would be stunned if Intel didn't announce a CPU with a similar profile in a later microarchitecture.
Re: (Score:2)
Re: (Score:2)
Static power gives you the upper bound for how much power will be consumed over a given period. Benchmarks will give you the workload per period. Math will help you bridge the two.
Wattage has been a standard way of comparing the power usage of chips for a long time.
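For what it's worth, the bridging math is trivial. Here's a back-of-the-envelope sketch in Python, using TDP as the pessimistic power upper bound and completely made-up throughput numbers (nothing below is a real Intel or benchmark spec):

```python
# Toy "work per joule" comparison for two hypothetical chips.
# TDP is treated as the upper bound on power draw, as described above;
# the throughput figures are invented placeholders, not measurements.

def work_per_joule(tdp_watts: float, ops_per_second: float) -> float:
    """Upper-bound power (W) + measured throughput (ops/s) -> ops per joule."""
    return ops_per_second / tdp_watts

# Hypothetical 10 W low-power part vs. a hypothetical 35 W mobile part.
low_power = work_per_joule(tdp_watts=10.0, ops_per_second=40e9)
standard = work_per_joule(tdp_watts=35.0, ops_per_second=90e9)

print(f"low-power part: {low_power:.2e} ops per joule")
print(f"standard part:  {standard:.2e} ops per joule")
print("low-power part is more efficient" if low_power > standard
      else "standard part is more efficient")
```

Actual draw under a given workload will be below TDP, so this only gives you a worst-case efficiency figure, but it's enough to compare parts.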
"Most power efficient chip in the world" (Score:1)
I wonder how true this ACTUALLY is? Are we talking x86 flop/watt comparisons, or...?
what's the bet... (Score:2)
Re: (Score:2)
Apple already uses higher-end Ivy Bridge chips than these in the MacBook Air... Why would they downgrade them?
Re: (Score:2)
Welcome to the new Value Add (Score:1)
Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce th
Re: (Score:1)
Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse
Yeah, a fuse. Wait, what?
I don't think that as a company they're very comfortable with this whole "power thing"
That's why Ivy Bridge already basically kills anything remotely comparable in terms of power usage?
I will never understand the brand hate people exhibit, in this case against Intel by an AMD fan.
Re:Welcome to the new Value Add (Score:5, Insightful)
By a 'fuse' he's talking about the selective factory or post-factory programming of a chip.
Intel has only 5 different pieces of silicon serving 150+ different Sandy Bridge and Sandy Bridge-E processors; the same is true for Ivy Bridge.
When the fabrication process is finished, the chips on each wafer are tested for quality. Chips that fail completely are discarded. Chips that have flaws in a core or cache segment will have that core or cache segment disabled. This allows a faulty chip to be sold as a lower end model.
Similarly, if demand for a lower end model is higher than the supply of the lower end models, higher quality chips can have parts disabled so that they may be repackaged as a lower end product for marketing purposes.
All of this is done at the factory before the chip is inserted into a processor package. An additional step invented by IBM allows firmware upgrades themselves to reprogram the chip, possibly reactivating parts that were deactivated at the factory, or changing CPU parameters so that older firmware revisions cannot be installed (this is done with the PS3).
Re:Welcome to the new Value Add (Score:5, Informative)
There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part.
Sometimes it actually turns out cheaper to make it this way. They could design a low end CPU to sell cheaper than their premium product, but the cost of running an entirely different fabrication line for that CPU might actually be more than just including a switch in their higher end processor to cripple the chip. The costs include having to reconfigure the production line for a different line of wafers.
Using the same chip design means that they can still sell the CPUs that fail quality control testing. If one of the cores fails in a quad core CPU, they can just turn that one off and sell it as a dual core part. So instead of increasing the price of the premium chip by having the "fuse" as you put it, they are making the chip cheaper, because it reduces the wastage of having to discard the failed processors.
Re: (Score:2)
Welcome to the concept of chip manufacturing (Score:5, Informative)
Intel is NOT crippling Ivy Bridge processors. Rather, what happens is that minor variations in the silicon wafer mean that different chips come out with different characteristics. It doesn't take much to change things either; we are talking about parts with features just 22nm wide, where little things have large effects.
When you get a wafer of chips, you have to test and bin them. Some just flat out won't work. There'll have been some kind of defect on the wafer and it screws the chip over. You toss those. Some will work, but not in the range you want, again those get tossed. Some will work but not completely, parts will be damaged. For processors you usually have to toss them, GPUs often will disable the affected areas and bin it as a lower end part.
Of the chips that do work, they'll have different characteristics in terms of what clock speed they can handle before having issues and what their resistance is, and thus their power usage.
What's happening here is Intel is taking the best of the best, resistance-wise, and binning them for a new line. They discovered that some IB chips have much lower power usage than they thought (if properly frequency limited) and thus are selling a special line for power critical applications.
They can't just "make all the chips better" or something. This is normal manufacturing variation and as a practical matter Intel has some of the best fab processes out there and thus best yields.
CPU speeds are sometimes an artificial limit (though often not, because not only must a chip be capable of a given speed, it has to do it within its TDP spec), but power usage is not. It uses what it uses.
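For anyone who hasn't seen binning described before, here's a purely illustrative Python sketch of the decision flow described above. The test fields, thresholds, and SKU names are all invented for illustration, not Intel's actual criteria:

```python
# Toy model of post-fab binning: each die is tested, then either discarded,
# sold with parts disabled, or assigned to a frequency/power bin.
# All thresholds and SKU names below are made up.

from dataclasses import dataclass

@dataclass
class DieTest:
    functional_cores: int   # cores that passed testing (out of 4)
    max_stable_ghz: float   # highest clock that passed at the TDP target
    leakage_watts: float    # measured static/leakage power

def bin_die(d: DieTest) -> str:
    if d.functional_cores == 0:
        return "discard"
    if d.functional_cores < 4:
        # Disable the bad cores and sell as a lower-end dual-core part.
        return "dual-core SKU (cores fused off)"
    if d.leakage_watts < 1.0 and d.max_stable_ghz >= 1.5:
        # The lowest-leakage dice get pulled out for a low-power line.
        return "low-power SKU"
    if d.max_stable_ghz >= 3.0:
        return "high-clock quad SKU"
    return "standard quad SKU"

for die in [DieTest(4, 3.4, 2.5), DieTest(3, 3.0, 2.0),
            DieTest(4, 2.1, 0.7), DieTest(0, 0.0, 0.0)]:
    print(die, "->", bin_die(die))
```

The point is that the "special" low-power parts come out of the same wafers as everything else; the testing just sorts them out.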
Re: (Score:1)
I highly doubt that's the case. I suspect the defect/variation distributions of few if any generations of Intel chips have actually matched the distribution of market demand. Going back at least to the 486, they have artificially crippled higher end models (beyond what was necessary from defects) to provide different price/feature/performance points for consumers. The SX line was just DX chips with the internal floating point unit disabled.
We might feel a little less cheated if Intel actually designed
Re: (Score:3)
Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce the crippled part - the premium part would actually cost less without the extra crippling capability.
While you're correct that Intel relies heavily on testing chips, disabling whatever doesn't work/lowering the clock speed until it works, and selling it as a cheaper product, that's really a cost-saving measure, not a revenue-boosting one. When a chip rolls out with half the cores broken, they'd much rather sell it as a cheap processor than throw it away. AMD does the same thing - even more, actually. As does Nvidia, and pretty much any company that produces enough chips.
These chips are likely a high bin, n
Re: (Score:1)
but for the consumer market, the *only* thing they have going for them right now is Fusion, having a powerful GPU on the same die as a half-decent CPU.
Really?
The top end non fusion CPU generally comes in between the i5 and close to the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single threaded now.
Piledriver is much better, and the performance is much more on a par with Intel now for many things.
These days an awful lot of stuff is multithr
Re: (Score:3)
The top end non fusion CPU generally comes in between the i5 and close to the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single threaded now.
No, no it doesn't. It gets beaten by the i3 3220 for everything except for very multithreaded tasks, where it roughly draws with it:
http://www.anandtech.com/bench/Product/675?vs=677 [anandtech.com]
Piledriver is not much better - it's the same architecture as Trinity, but with the GPU stripped off.
There actually really aren't that many tasks where multithreading makes up the difference, as you can see from a comparison of a top end piledriver, against a cheaper i5 that consumes about half the power:
http://www.anandtech.com [anandtech.com]
Re: (Score:2)
The top end non fusion CPU generally comes in between the i5 and close to the much more expensive i7
Oh wait, you've compared a fusion processor to an i3 rather than the non fusion ones I was talking about.
Score: -1 Lying
Re: (Score:2)
Score: -1 Lying
For you? I agree. After all, the first link you posted was this:
http://www.anandtech.com/bench/Product/675?vs=677 [anandtech.com]
Which compares the Fusion A10-5800K to an i3. You also claimed "piledriver is not much better" where it actually wins by a wide margin in every single benchmark.
http://www.anandtech.com/bench/Product/675?vs=697 [anandtech.com]
Re: (Score:3)
No, I claimed that the A10-5800k *is* a piledriver. Which it is.
You made three assertions.
1. That fusion was between an i5 and i7 in terms of speed
2. That piledriver was faster still.
3. That you were talking about the FX line, not the fusion line all along.
All are false.
1. Is false because the fastest fusion chip (the A10-5800K) is only roughly as fast as the i3-3220, nowhere near as fast as an i5 or i7.
2. Is false because the A10 *is* a piledriver chip. There are faster piledriver chips out there, I com
Re: (Score:2)
1. That fusion was between an i5 and i7 in terms of speed
No, read my post. I said "non fusion".
2. That piledriver was faster still.
Than fusion, but not than the i5 or i7.
The phrase "Piledriver is much better" is in comparison to Bulldozer. It does not make any sense in relation to fusion, since the A10 is a Piledriver-microarchitecture-based core.
3. That you were talking about the FX line, not the fusion line all along.
Yes. Actually read my post. Of course, I said "non fusion" so I could have been talk
Re: (Score:2)
The top end non fusion CPU generally comes in between the i5 and close to the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks
Wish it were true, but it's not. Very little that AMD has is "close" to the i7s. What AMD has is value, and pretty decent graphics cores, as well as top end core counts.
They aren't even trying (Score:4, Funny)
Intel needs to embrace 3D to remain relevant (Score:3)
No DX10 for you!
http://semiaccurate.com/2012/01/03/intel-thinks-cedar-trail-is-a-dog-reading-between-bullet-points/#.UOY58uRJNxA [semiaccurate.com]
Windows must live with DX9. Linux can't do anything at all...
http://tanguy.ortolo.eu/blog/article56/beware-newest-intel-atom [ortolo.eu]
Oh, and did I mention it doesn't work on Windows 8?
http://communities.intel.com/message/175674 [intel.com]
http://www.eightforums.com/hardware-drivers/12305-intel-gma-3600-3650-windows-8-driver.html [eightforums.com]
http://answers.microsoft.com/en-us/windows/forum/windows_8-hardware/windows-8-on-intel-atom-d2700dc-graphics-driver/2a6015d3-af92-453d-b0c2-20cc56b764de [microsoft.com]
Re: (Score:2)
So what is your solution then? Does Intel need to come out with a range of very low powered CPUs based on their main Ivy Bridge processors with better performance than their Atom line? Do you think that they could announce this, and then we could discuss the story here on Slashdot?
You can see where I am smugly going here. That is exactly what TFA was all about. In fact, it also said:
Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.
Re: (Score:2)
That is exactly what TFA was all about
Thank you for this.
It's true, though, that the current Atom chipset is poorly considered. They even broke VGA text mode, for Pete's sake. FreeBSD 9.1 has the patch, at least, but boy was I surprised last spring!
Re: (Score:2)
Well, one solution would be for Intel to lift some of the restrictions on the Atom CPU, which are mainly in place because Intel fears that they could otherwise cut into their other, more profitable CPU lines. Though I see that you can now buy an Atom board with a PCIe x16 slot, so I guess Intel may be seeing the light, or perhaps just feeling some pressure from AMD's Bobcat line in that segment.
Re: (Score:2)
>> Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.
I own quite a few Atom PCs, and in terms of performance, I think Atoms are quite okay. They're not big on grunt, but still sufficiently powerful to do anything you throw at them EXCEPT 3D. That's their big weakness. Office, web services, software
Re: (Score:2)
That is true. When I was looking for a netbook, I chose an AMD-based one precisely for the better 3D. I now use it as a gaming system. As long as you are happy to play games from around 2005, it performs fine. But it is far beyond what the Atom-based netbooks can do.
There are occasions where I think that it is the CPU that is limiting a game rather than the GPU. I wonder in those situations how the Atom systems would fare.
I just wish that you could still find these AMD netbooks around. Netbooks are
Z2760 only supports DirectX 9 (Score:2)
Here's a review of a Z
Re: (Score:2)
PS. If you have been trolling me, well played!
Re: (Score:3)
Intel's onboard SB / IB graphics are pretty darn competitive for an "integrated" solution. I believe they support DX10, and certainly are sufficient for most games on a "modest" setting.
Smoke and Mirrors (Score:1)
Re: (Score:1)
Re: (Score:2)
I'm not aware of other players in the sub-10W region who can provide performance competitive with ANY of Intel's Core line.
Before responding, possibly go take a look at performance-per-watt comparisons between a modern Intel and anything ARM; they're in different leagues.
Limited run? (Score:1)
Did anyone else read "Intel To Debut Limited-Run Ivy Bridge Processor" and ponder why anyone would want a processor that was guaranteed to run only a limited number of times? Perhaps this is the new monetization (I hate that word) strategy, where you are forced to buy desktop processor hours on subscription. Or perhaps the limited runs could be along the lines of "5 runs with a DVD/Blu-ray player or non-Microsoft OS running" to give the MPAA/Microsoft something else to prop up profits with?
Re: (Score:2)
Their success is mostly because they produce some of the best and most efficient chips in any market.
You can argue that they got there by virtue of their HUGE R&D budget, which was funded by the shady behavior you mention, but at this point choosing someone other than Intel would be a principled, rather than technical, decision (unless you need a high core count or extremely low power usage).