Intel On Track For 32 nm Manufacturing
yaksha writes "Intel said on Wednesday that it has completed the development phase of its next manufacturing process that will shrink chip circuits to 32 nanometers.
The milestone means that Intel will be able to push faster, more efficient chips starting in the fourth quarter.
In a statement, Intel said it will provide more technical details at the International Electron Devices Meeting next week in San Francisco. Bottom line: shrinking to 32 nanometers is one more step in its 'tick-tock' strategy, which aims to create a new architecture with a new manufacturing process every 12 months. Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw-dropping that customers can't resist."
The new ones are impressive (Score:2)
Re:The new ones are impressive (Score:4, Informative)
Re: (Score:2, Informative)
Re: (Score:1)
Now if only Windows supported RAID in software.
Re:The new ones are impressive (Score:5, Informative)
It does, here is a RAID 5 example: http://support.microsoft.com/kb/323434 [microsoft.com]
Re: (Score:3, Insightful)
Yes, but unless they've changed stuff lately, he can't use RAID 5 on his boot disk - only mirroring is supported, and only sorta at that.
Though with the way SSDs are going, I'd seriously consider putting the OS on an SSD, then going with the RAID array for everything else.
And have things really changed so much that true hardware RAID is slower? I'm aware that there are RAID devices that depend on the CPU much like winmodems did, but surely a good RAID card still beats software?
Re: (Score:3, Informative)
Re: (Score:2)
Not a bad idea, but SSDs have reached usable sizes - around $100 for 32GB, enough for an OS and most program files. Just install the games, user directories, and other multimedia stuff on the RAID array.
Heh, I wonder how large and cheap a SSD made with a 32 nm process would be.
Re: (Score:2)
Know of any recovery CD/DVD for a Windows RAID 5 system when it won't boot anymore? It happened to me on a system I did not set up.
Linux has recovery CDs to the hilt - many with support for every flavor of RAID - so you can recover data even when your system won't boot. Under Linux you can't put the boot device on RAID 5 anyway, so the machine will still boot. I thought the same was true of Windows, but this machine had its boot volume on RAID 5.
Note: Hardware RAID is dead, long live RAID!
Never use a motherboard's SATA for RAID, buy a cheap SiI 3132 or Si
Re: (Score:1)
You are mistaken; Windows does not support RAID-5 on the boot partition, or on the partition where the OS lives. If it truly did have RAID-5 with the boot or OS partition on it, then it was done in hardware.
Re: (Score:2, Informative)
... if you could call motherboard RAID hardware, then yes.
As far as I can tell, it's the worst kind of RAID, and it has given software RAID a bad name.
The motherboard doesn't have parity chips; it's just a flag that tells Windows to handle the RAID 5.
This one went bad and not only marked the array as degraded, but Windows would not boot, and the only tool we could find to get access to the data was a DOS boot floppy with the RAID drivers installed - but then it didn't have permission to read the files, and the USB tools
Re: (Score:1, Informative)
Assuming you're on Linux, buy a processor with more cores, and use softraid. Autodetect = painless movement.
Re: (Score:2)
Re: (Score:1)
Normal people don't need faster computers (Score:3, Insightful)
Re:Normal people don't need faster computers (Score:4, Insightful)
Good point. With solid-state drives coming down the pipe, even that bottleneck will be somewhat relieved for what most people do (lots of disk reads, few writes). I write programs to help designers place and route chips. The problem size scales with Moore's Law, so we never have enough CPU power. I'm part of a shrinking population that remains focused on squeezing a bit more power out of their code. I wrote the DataDraw [sourceforge.net] CASE tool to dramatically improve overall place-and-route performance, but few programmers care all that much nowadays. On routing-graph traversal benchmarks, it sped up C code 7X while cutting memory required by 40%. But what's a factor of 7 nowadays?
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
was this professor involved with the design of vista at all?
there is this thing called 'documentation' that you add to your code so other people can understand it.
ignore your instructor. as a user, i very much appreciate whatever gains in efficiency i can get.
Re: (Score:3, Insightful)
Re: (Score:2)
While true, you do want to keep performance in mind when designing your _architecture_. If your program is algorithmically slow, or if every operation requires a virtual function call, then all a profiler will show is time smeared all over the map, because literally everything is slow.
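A tiny C sketch of what the parent means (the function names here are made up for illustration, not from any real codebase): if every element goes through an indirect call, the cost is smeared evenly across the program, whereas a batched interface leaves the hot loop in one place where a profiler can actually see it.

#include <stddef.h>

typedef struct { double x, y; } Point;

/* Per-element callback: one indirect call per point, so the cost is smeared
   across every caller and the profile looks flat. */
void transform_each(Point *pts, size_t n, void (*op)(Point *))
{
    for (size_t i = 0; i < n; i++)
        op(&pts[i]);              /* indirect call inside the hot loop */
}

/* Batched interface: one call per array; the inner loop can be inlined and
   vectorized, and whatever cost remains shows up in one obvious hot spot. */
void scale_all(Point *pts, size_t n, double k)
{
    for (size_t i = 0; i < n; i++) {
        pts[i].x *= k;
        pts[i].y *= k;
    }
}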
Too true.. (Score:2)
I first learned the importance of getting things right. (A payroll program that gets the wrong answers gets the doors torn off the front of the cookie factory. [That was my predecessor's mistake. :-])
Then I learned the importance of getting them to run fast. (I had a twelve-hour window for the calculation that suddenly got chopped to six as the company spread over a wider geographic area. The company bought their competitor. Now I had more impatient people to deal with. [See previous 'front door' problem.])
Squeaked b
Re: (Score:2)
GP is correct, it's highly counterproductive to put 1337 hax into every line of code you write. This is why you write clear, correct code and then run a profiler. Then 1337hax the few lines that eat the most cycles.
It's a question of how you break up code/processing. (Score:2)
O-O code can be optimized by knowing how (and therefore where) to cut up your code.
The code itself doesn't need to be any different, but how and where you cut it up can make an enormous difference in performance.
If you can take advantage of RAM to cache the intermediate results of a seek (find/get) operation, you can get incredible speed out of otherwise 'dead code'.
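A minimal C sketch of that idea, with made-up names (not code from the thread): cache the result of the last find so repeated gets of the same key skip the search entirely.

#include <stddef.h>
#include <string.h>

typedef struct { const char *key; int value; } Entry;

typedef struct {
    Entry  *entries;
    size_t  count;
    Entry  *last_hit;    /* cached result of the previous lookup */
} Table;

int lookup(Table *t, const char *key)
{
    /* Fast path: repeated seeks for the same key return immediately. */
    if (t->last_hit && strcmp(t->last_hit->key, key) == 0)
        return t->last_hit->value;

    for (size_t i = 0; i < t->count; i++) {
        if (strcmp(t->entries[i].key, key) == 0) {
            t->last_hit = &t->entries[i];   /* remember where we found it */
            return t->entries[i].value;
        }
    }
    return -1;   /* not found */
}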
Re: (Score:3, Interesting)
The sad part is that improved runtime speed and code readability can be had at the same time. The reason the DataDraw-based code ran 7x faster was simple: cache performance. C, C++, D, and C# all specify the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply move to a slightly more readable, higher level of coding, and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40%
Re: (Score:2)
Re: (Score:3, Interesting)
Check out the benchmark table at this informative link [sourceforge.net]. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates depend massively on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, the cache line typically gets filled with fields of just one object. A typical inner loop may access two of that object's fields, but rarely three, meaning that the cache is loaded with useless junk. B
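A small C illustration of the layout difference being described (field names invented for the example): with an array of 64-byte structs, a loop that only reads two fields still drags the whole struct through the cache; split the fields into separate arrays and every byte of each cache line is data the loop actually uses.

#include <stddef.h>

/* Array-of-structs: each element is 64 bytes, so every cache miss loads
   one object's entire set of fields, used or not. */
struct NodeAoS {
    int    cost;
    int    flags;
    double coords[7];          /* rarely touched by the loop below */
};

long sum_aos(const struct NodeAoS *nodes, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        if (nodes[i].flags & 1)
            total += nodes[i].cost;
    return total;
}

/* Struct-of-arrays: the same data, one array per field.  A 64-byte cache
   line now holds 16 consecutive flags or costs instead of one node. */
struct NodesSoA {
    int    *cost;
    int    *flags;
    double *coords;            /* 7*n doubles, never touched here */
};

long sum_soa(const struct NodesSoA *nodes, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        if (nodes->flags[i] & 1)
            total += nodes->cost[i];
    return total;
}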
Re: (Score:3, Insightful)
Why don't you write an article about how to go about teaching them? I agree that "so many programmers are batshit stupid!", but what one doesn't understand is that most learning is unconscious, and the fact that you know it better than others means it's highly likely you're interested in it for its own sake. Many programmers don't know where to begin. I really wish everyone complaining about dumb programmers would write articles to teach them the tricks of the trade. If you don't, they won't get passed on.
Re: (Score:2)
Not a bad idea, but where would I publish it? I could post it on my Dumb Idea of the Day [billrocks.org] blog, but no one reads it (which is ok with me). I would certainly be interested in writing an article about coding for cache performance.
Re: (Score:2)
Check it out:
http://accu.org/ [accu.org]
They also have a discussion list. I think it would be a good idea to see if anyone's interested in a "wikibooks" project, i.e. people contribute small articles, and over time the community edits it into something cohesive.
http://en.wikibooks.org/wiki/WB:FB [wikibooks.org]
When dealing with teaching, one should teach from the ground up. I've seen way too many programming books that assume previous knowledge and most are really bad. I like the zero-to-hero mentality, where you take someone knowi
Re: (Score:2)
Another place where it gets interesting is when the objects are more than 64 bytes. In those cases, a simple re-ordering of the fields can double performance.
Consider a common case: doubly linked structs with prev and next pointers at the beginning, followed by a bunch of other data. If your program needs to scan the list for candidate objects for an operation, particularly where only a few of the structs will be operated on in a given pass, and if the fields you check in the scan passes can be packed into the sa
Re: (Score:2)
Yep! If you talk to DSP guys, they do this kind of thing all the time. DataDraw allows me to specify which fields of a class I want kept together in memory; by default, they're kept in arrays of individual properties. I was able to speed up random access of large red-black trees in DataDraw by 50% with this feature, simply because you almost always want both the left and right child pointers, not just one or the other.
Nice to hear from a fellow geek who for whatever reason still keeps an eye on low-leve
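A rough C sketch of the field-ordering trick from the two posts above (struct and field names are made up): keep the link pointers and the one field the scan tests in the same cache line, and push the bulky, rarely-read payload to the end.

#include <stddef.h>

/* "Cold" layout: the field the scan tests sits past 112 bytes of payload,
   so every visited node touches two cache lines (assuming 64-byte-aligned
   nodes). */
struct ItemCold {
    struct ItemCold *prev, *next;
    char             payload[112];   /* rarely read during the scan */
    int              priority;       /* checked on every node */
};

/* Hot/cold split: everything the scan touches fits in the first 24 bytes,
   so the traversal typically costs one cache line per node. */
struct ItemHot {
    struct ItemHot *prev, *next;
    int             priority;
    char            payload[112];
};

const struct ItemHot *find_ready(const struct ItemHot *head, int threshold)
{
    for (const struct ItemHot *p = head; p != NULL; p = p->next)
        if (p->priority >= threshold)
            return p;                /* only a few nodes ever match */
    return NULL;
}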
sometimes, yes, sometimes no (Score:2)
premature optimization is sometimes, as you say, bad. however there is an idea of mature optimization, where you know something needs to be written in such a way as to be fast.
say your task has to run in realtime, and it involves iterating over most of the machine's memory. if it doesn't run fast, you have a real problem.
always choose the correct read/write patterns, the correct architecture, and then make that code as clear as possible...
Re: (Score:2)
I'm fine with some code being very easy to read even at the expense of performance. But you seem to imply that doing so should always be the case. Which is a huge mistake.
Re: (Score:2)
In most applications, maintainability is the more important factor. Even with that, there's a lot of room for improvement. Well thought out code can be efficient and maintainable. In some cases, just cleaning up older code to improve maintainability ends up making it more efficient as well.
Too frequently code re-use is over-emphasized so you get a stack of objects that goes a bit like: (A) does something, (B) un-does about half of that and re-does it differently, (C) does a bit more and derives some informa
Re: (Score:3, Funny)
Soon enough people will have robots in their homes, doing chores. Very fast computers will be needed for that.
Re:Normal people don't need faster computers (Score:5, Insightful)
A surprising number of people that I know - and not just tech-savvy people - do video compression, either for converting camcorder movies into DVDs, creating slideshows, or using DVDshrink. And those are apps where more CPU is always good...
Just wait until HD camcorders are more prevalent, and you have people that want to convert their home movies into H.264 Blu-ray discs...
Re:Normal people don't need faster computers (Score:4, Interesting)
Re: (Score:2)
Why would you want a CPU for video encoding when a decent GPU can do it ten times faster? [pcper.com].
Re: (Score:2)
Wake me up when software that normal folks use has support for this.
Re: (Score:3, Insightful)
Until the next version of Windows is out...
Seriously though. Of course the top-of-the-line, state-of-the-art, bleeding-edge PC's are irrelevant for the general populace when they are released. That doesn't mean that they're irrelevant to the general populace in a year or two.
When the next Windows is released, some fancy new games come out, and websites get even more riddled with Flash, Java, and whatever new tech they come up with to use more resources.
Re: (Score:1)
Re: (Score:2, Funny)
If I go to buy a new computer and I can buy a new model with a super fast processor for $1900, or a refurbished older model for $1300 that is slower, but more than fast enough for my needs, then I'll get the cheaper one and save myself $600. In fact, I did just that 4 months ago and completely love my iMac.
You got an Apple product cheaper? Amazing.
Re: (Score:1)
BTW, you can move your drives from one motherboard to the next so long as the RAID is/was done via an Intel RAID controller. I've moved my complete OS from one motherboard to another with a different chipset with no problems, and that was on a 4-drive RAID-0.
It was from an ICH6R to an ICH8R, I believe. Of course, if you went from an NVIDIA/AMD chipset to an Intel one, then you can't - unless the RAID was done via an add-in card, of course.
Not surprising. (Score:4, Interesting)
Re:Not surprising. (Score:5, Insightful)
Nm (Score:5, Funny)
Newton-metres? You mean Joules?
What could possibly make you confuse N, which is the symbol for newton, with n, which is the prefix for nano?
You're definitely not geeky enough.
Captain Metric to the rescue (Score:3, Informative)
For this reason the SI standard dictates that lowercase unit symbols such as "km" or "nm" stay lowercase, even on a sign that is otherwise written in ALL-CAPS [ltsa.govt.nz].
Re: (Score:2)
Of course. Because capitalisation changes the meaning in some cases, e.g. nm -> Nm
Re:Nm (Score:5, Funny)
It's nautical miles. The chips are gigantic. Marvels of engineering.
Re: (Score:2)
Re: (Score:2)
And yet. (Score:1)
Re:And yet. (Score:4, Funny)
At some point, it will stop getting smaller.
As opposed to the more common problem where it stops getting bigger.
Re: (Score:1, Funny)
Re: (Score:1)
At some point, it will stop getting smaller.
That's the point where you have achieved 'significant shrinkage'.
Chipsets (Score:5, Interesting)
It's great that Intel are working on die shrinks for their processors, but I wish they would do the same for their support chipsets. It's annoying that on most laptops the northbridge for Atom processors uses more power than the processor does.
Re:Chipsets (Score:5, Interesting)
This should be partially alleviated once the i7 architecture is fully adopted - pretty much no more northbridge. That's probably why they're not giving the current chipset technology more aggressive updates.
And who knows, if a better chip interconnect comes around in the next generation (unlikely, but possible), Intel could start putting more and more into the CPU package - things like a Larrabee GPU and southbridge functionality (audio, networking, general I/O). System-on-a-chip is commonplace in embedded systems now. If Intel wants to eat ARM's lunch, they're going to have to adopt some of the same techniques.
Re: (Score:2)
I think you are probably right (Score:2)
The whole separate-northbridge thing is kind of a legacy idea. AMD ditched it some time ago, and now Intel is ditching it. That being the case, there's little point in pushing forward with advances on it, only to deprecate it immediately after. It'll probably take until the next "tick" before it is totally gone, but it should happen soon.
The other thing people have to remember is that Intel has a limited number of the highest-tech fabs. It isn't as though you flip a switch and the fab suddenly is on a smaller
Re: (Score:3, Insightful)
Very true. The problem is that chipsets don't sell computers like processors do. Joe Shopper at WalMart doesn't know what a northbridge is but he has some understanding of what a Core 2 Duo is.
Re:Chipsets (Score:5, Insightful)
That's entirely a marketing issue.
Joe Shopper doesn't know what a Core 2 Duo is any more than he knows what a northbridge is. The only difference between the two is that there are millions of dollars poured into making sure Joe recognizes the term "Core 2 Duo". He still doesn't know a damn thing about it.
Computers are funny from a marketing standpoint. They are purchased by people that don't know anything about them. Sold by people that don't know much about them and supported by people that don't even speak the same language. (often literally).
Even more interesting, they are the only consumer device I know of where there is very little difference between first and third party parts. Obviously the technical specs change, but the average computer buyer wouldn't know the difference if you highlighted it in red.
Selling computers is therefore the perfect example of marketing at work. Your customer doesn't know ANYTHING about the product in question, and so wants the one that he's heard the most about. So the customer buys whatever is best advertised.
Re: (Score:2)
Computers are funny from a marketing standpoint. They are purchased by people that don't know anything about them. Sold by people that don't know much about them and supported by people that don't even speak the same language. (often literally).
Do you really think that is different to most things out there? TVs, Fridges, Cars, Phones.
Re: (Score:1)
Interestingly enough, the primary goal of die shrinks is not better performance, but lower cost. If a given die can be shrunk by a linear factor of k, we can fit roughly k^2 times as many dice on a wafer of the same size. If the smaller chips work just as well as the larger chips, we can then turn around and sell them for exactly the same price. It's like printing money (Step 3: PROFIT!). Of course, there's the expense in R&D and equipment to consider as well (Step 2: ????), but the basic reasoning is sound. If our competi
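To put rough numbers on that reasoning (a back-of-the-envelope illustration, not figures from Intel): a full shrink from 45 nm to 32 nm is a linear factor of k = 45/32 ≈ 1.4, so area per device scales by roughly k^2 ≈ 2 - about twice as many dice per wafer, before accounting for yield and edge effects.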
It's a question of manufacturing capacity (Score:2)
Really, this is just a matter of having limited manufacturing capacity. Every time they create a new manufacturing process, they have to upgrade a factory to use it. This puts the factory out of service for however long it takes to roll out the new tech, and costs billions of dollars in the process. In other words, even Intel doesn't have the resources to upgrade all of their factories at once.
Instead, they take one or two factories running the oldest tech, and upgrade them. Once they are ready, they start
Re: (Score:2)
Actually, memory devices often use the most cutting-edge technology available,
Re: (Score:2)
You're correct, I forgot about Intel being a big player in the SSD market. A quick search shows that their flash memory fabs run on different node sizes though (50 nm, 34 nm coming) so I guess those fabs are outside of their processor rotation.
Point of Diminishing Returns? (Score:3, Interesting)
Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years? All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond the Pentium 3 has not been revolutionary, performance-wise, for a desktop.
Re: (Score:3, Interesting)
Yeah, there's a pretty big wall that's been hit in terms of clock speed, which is why multi-core processors are the direction now instead of ramping up clock speeds.
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2, Informative)
I see we haven't been using Adobe software. Or Windows. Or Crysis. Or Slashdot's CSS 'implementation'.
But if browsing Usenet with Lynx is where you're at, more power to you.
Re: (Score:3, Interesting)
Anything past the P3 may not have been revolutionary, but it's steadily progressed quite nicely.
I have a dual 1.4GHz P3 system, and a 1.6GHz Core Duo. The Core Duo is *much* faster, and that chip is already outdated. Not to mention the fact that it's comparing the fastest P3s made to the lowest of the Core Duo lineup.
People also forget about things that can't be measured in nanometers or gigahertz, like the advances that have greatly lowered leakage current. Without them, something like 85% of the power
Re: (Score:2)
Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.
It has. You've been living under a rock.
Re: (Score:2)
Really? A 3-year-old Core Duo (don't recall the clock speed, but in the 1.5-2 GHz range) is about 10x faster than a P3-733 on CPU-bound (small amount of memory accessed, no disk access) code. That's single-threaded code, so only using one core. 2-3x of that is the clock speed; the rest is the better architecture and process. A 3-5x performance gain at the same clock speed is pretty good, in my book.
You're right that whole-system performance has not kept pace with that, but it never does.
Re: (Score:2)
Re: (Score:2)
Well, instead we're seeing bumps in the number of cores. We went from a single-core processor to dual core - a 100% bump - and now quad core is becoming mainstream on mid- to higher-end desktop systems, which is another 100% bump over dual core. And there are sockets like LGA 775 and AM2 where it's possible to go all the way from a single-core to a quad-core processor without having to change your motherboard.
What about AMD? (Score:1)
Re:What about AMD? (Score:5, Informative)
If Intel is able to shrink its die size every 12 months AMD is in trouble.
For what it's worth "tick-tock" is actually alternating between a new architecture and a process shrink every 12 months. "Q4" in the summary means Q4 2009.
Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years. All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond Pentium 3 has not been revolutionary, performance wise, for a desktop.
I hate to be snarky, but you sound like one of those people who bought the crap about the "Megahertz Myth". Processor clock rate has little to do with performance. I'll agree that the Pentium 4 was underwhelming, but Core was a huge hit and saw huge performance gains, especially the parts released early this year that used the high-k dielectric.
Re: (Score:1)
Re:What about AMD? (Score:5, Interesting)
Re: (Score:2)
Given that I have a tendency to run a few pieces of software that'll peg a CPU to 100% today, going to a dual-core processor was an 'I LOVE THIS!!!' moment.
I went with a dual core for the higher individual core speed, and because games were, on the whole, still not optimized for using multiple cores, so the best you could get was the game on one core and everything else on the second, which STILL wouldn't be strained. Of course, prices come down, performance goes up, software advances - I'd consider a quad today.
Obligatory Shrinkage Comment (Score:1)
This is one case where shrinkage is damn good.
Don't take that out of context.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It's a common term for what would, in normal SI parlance, be a "micrometer", because "micron" predates the adoption of the SI system, and "micrometer", in English at least, has a well-established meaning referring to a particular kind of measuring device.
Intel (Score:3, Funny)
It's all about splitting hairs nowadays
So long to the competition... (Score:5, Insightful)
Intel has always enjoyed much better manufacturing technology than AMD. But Intel made some stupid architectural decisions with the P4.
Once Intel came out with the Core series, the combination of a decent architecture and terrific fab capabilities really started eating away at AMD. This will only continue the rally.
The sad thing is that this will actually be a step back in pricing... it's getting back to where AMD simply cannot touch the higher-end Intel territory, and so Intel is back to enjoying terrific profit margins on those chips.
So long to the competition...Race...relations. (Score:2)
I think AMD's strategy is overclocking, and lots of it. Look at what it's introducing in its latest and upcoming hardware: features that make overclocking easier. Also, I wouldn't count AMD out too soon. AMD is just one design correction away from having perfect hardware for HTPCs, and their IGP is still better than Intel's.
Amazing...Grace. (Score:2)
"Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw dropping that customers can't resist.""
Two things. One it doesn't matter how awesome your hardware is. If the majority can't afford it then it doesn't matter? Second as Microsoft is learning prior success can be a barrier to future growth. How many are going to throw out their Core 2 Duos in order to have the most amazing hardware from Intel?
So can someone summarize the current state? (Score:2)
I found out from my wife that our home server died and won't reboot. AMD Athlon 3200+ running Fedora.
It is almost certainly a hardware problem, and that server has been running 24/7 for years now... time to upgrade.
My hardware philosophy has been to buy big and milk it for a long time. You pay more up front for that power, but the fact that it has power means it doesn't get obsoleted immediately either.
So then, cut through the marketing crap. Assume a desktop PC purchase in the May-ish time frame, to run Li
Re: (Score:2)
Re: (Score:2)
If it's running 24/7 and your old Athlon 3200+ was good enough, then pick any current dual or quad desktop CPU with the lowest energy usage.
Pick a motherboard with an IGP and plenty of SATA2.
Throw in 8 or 16GB RAM and a couple of hdds.
Most importantly, check silentpcreview.com so you know which case to buy and how to silence it.
Re: (Score:2)
Re: (Score:2)
Newegg has an MSI Wind Atom-based barebones box for $139 that looks perfect to me for a home server. I'm with the others here - go low power rather than high power.
nm, not Nm (Score:2)
What's meant here is nanometer, not newton-meter - which, by the way, is equal to a joule.
And now here I am, unable to think of a good pun about a 32 Joule chip...
Jaw Dropping? (Score:2)
Every 24 months, not 12 (Score:3, Informative)
Just to clarify: the tick-tock strategy means that one year gets a new architecture, the next year gets a new manufacturing process, and the cycle repeats. So a new architecture and a new manufacturing process each arrive every 24 months, not every 12, in alternating years.
winning via nm process, not design optimizations (Score:2)
I was looking at the range of low-power CPUs and noticed that Intel's Atom seemed to do OK compared to the other low-power chips, but then noticed that all the other chips were being built on a 65nm process while Intel had the Atom on the 45nm process. Looking at Intel's standard "Core" processors showed that their newest CPUs were also on the 45nm process, but not the majority of them.
This was a few months ago but it made me wonder why all the other low power CPU manufacturers were able to get the power and
Re: (Score:3, Funny)
Intel: I'm a chip company. I make chips, that's all I'm programmed to do.
AC: Were you any good?
Intel: Are you kidding? I was a star. I could make a chip to any size. 30 nm, 32 nm, you name it. 31... But I couldn't go on living once I found out what the chips were for.
AC: What for?
Intel: MacBooks.