Intel Hardware

Intel Moves Up 32nm Production, Cuts 45nm

Vigile writes "Intel recently announced that it was moving up the production of 32nm processors in place of many 45nm CPUs that have been on the company's roadmap for some time. Though spun as good news (and sure to be tough on AMD), the fact is that the current economy is forcing Intel's hand as they are unwilling to invest much more in 45nm technologies that will surely be outdated by the time the market cycles back up and consumers and businesses start buying PCs again. By focusing on 32nm products, like Westmere, the first CPU with integrated graphics, Intel is basically putting a $7 billion bet on a turnaround in the economy for 2010."
This discussion has been archived. No new comments can be posted.

  • by alain94040 ( 785132 ) * on Tuesday February 10, 2009 @06:34PM (#26805959) Homepage

    I used to work for a processor company, and I learned one thing: it's impossible to beat Intel. They just invest so much in technology that even if you come up with a smarter cache algorithm, a better pipeline, or (god forbid) a better instruction set, they'll still crush you.

    That held true for the last 20 years. The only problem today is that no one really cares anymore about CPU speed. 32nm technology will allow Intel to put more cores on a die. They'll get marginal, if any, frequency improvements. We just need to wait for the applications to follow and learn to use 16 cores and more. I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices starting to fall, instead of the same prices for more powerful PCs.

    --
    FairSoftware.net [fairsoftware.net] -- where geeks are their own boss

    • by CarpetShark ( 865376 ) on Tuesday February 10, 2009 @06:47PM (#26806115)

      I know my workload could use 16 cores, but the average consumer PC? Not so sure.

      The average consumer PC uses:

      * word processing, which barely needs it, but can use it when performance is necessary, for background processing like print jobs, grammar checking and speech recog
      * spreadsheets, which lend themselves very well to multithreading
      * games, which could lend themselves well, if engines start doing stuff like per-creature AI and pathfinding (ignoring stuff that's already on the GPU like physics and gfx) in proper threads
      * web browsing. Admittedly, webpages are not the ideal scenario for multicore, but with multiple tabs, and multiple subprograms (Flash, JavaScript, downloads, etc.) all running in threads, this could utilise multicores well too. Presumably future use of more XML etc. will help to push the boundaries there. If we ever get down the road of RDF on the desktop, then multicores will be very useful, in collecting and merging data streams, running subqueries, etc.
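
      For a rough feel of what "running in threads" buys on a multicore desktop, here is a minimal sketch (the parse_page() helper and the task list are hypothetical placeholders, not anything a real browser uses):

        # Toy sketch: spread independent "per-tab" jobs across however many
        # cores the machine has. parse_page() just stands in for real work.
        from concurrent.futures import ProcessPoolExecutor
        import os

        def parse_page(url):
            # Placeholder for layout, script execution, spell checking, etc.
            return sum(ord(c) for c in url) % 1000

        if __name__ == "__main__":
            tabs = ["news.example", "mail.example", "video.example", "docs.example"]
            with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
                results = list(pool.map(parse_page, tabs))
            print(dict(zip(tabs, results)))

      With four tabs and four cores the jobs can run in parallel; on one core the same code simply runs them in turn.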

      • The smaller feature sizes bring power savings as well. So they're taking the server of yesteryear and putting it in your pocket. They're delivering the technology that'll bring the next billion users online because those folks don't have the watts to burn that we do.

        They're also working to solve the whole I/O problem with servers that happens when you get too much processing power in one box.

        In fact, they're pretty well focused on not just learning new things and creating new products, but in delivering

    • Re: (Score:3, Informative)

      by Jurily ( 900488 )

      That held true for the last 20 years. The only problem today is that no one really cares anymore about CPU speed. 32nm technology will allow Intel to put more cores on a die. They'll get marginal, if any, frequency improvements. We just need to wait for the applications to follow and learn to use 16 cores and more. I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices starting to fall, instead of the same prices for more powerful PCs.

      We don't need more cores. Someone should have realized it by now. Raw CPU output isn't what the market needs anymore (even on Gentoo, which is kinda hard to accept).

      We need the same CPU with less power usage.

      • Re: (Score:3, Insightful)

        by von_rick ( 944421 )

        We need the same CPU with less power usage.

        If people are going to stick with web browsing and multimedia entertainment for the rest of their lives, the processors in their present state can serve the purpose just fine. However if more and more people actually take computing seriously, the availability of multiple cores to do parallel computing on your own desktops would be a dream come true for most people involved in computationally intensive research disciplines. If I had the ability to use 8 cores at 2GHz, at all times, I'd have finished my analysis in less than a week. But with no such luxury (back in 2005) I had to queue my process on a shared cluster and wait until morning to see the results.

        • by Jurily ( 900488 ) <jurily AT gmail DOT com> on Tuesday February 10, 2009 @07:32PM (#26806663)

          However if more and more people actually take computing seriously, the availability of multiple cores to do parallel computing on your own desktops would be a dream come true for most people involved in computationally intensive research disciplines. If I had the ability to use 8 cores at 2GHz, at all times, I'd have finished my analysis in less than a week. But with no such luxury (back in 2005) I had to queue my process on a shared cluster and wait until morning to see the results.

          Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?

          I know some people need raw computation, but c'mon. The average boot time is still ~60 seconds on the desktop. Why?

          And it doesn't even matter which OS. Why do we need more calculations to get ready to do something than it took to get someone up there? Seriously.

          Modern software is bloat. Let's do something about that, first.

          • by CastrTroy ( 595695 ) on Tuesday February 10, 2009 @07:36PM (#26806735)
            Landing on the moon was simple Newtonian physics. Not a hard problem to solve at all. If you want something really hard, try cracking RSA. Try protein folding. There are a lot of problems out there that are a lot harder to solve than landing a craft on the moon.
            • by Jurily ( 900488 )

              Landing on the moon was simple Newtonian physics. Not a hard problem to solve at all.

              Yeah, browsing the web should take up at least 10000x that.

              • by Anonymous Coward on Tuesday February 10, 2009 @11:40PM (#26808125)

                The Apollo computers only had to cope with up to a few thousand kilobits per second of telemetry data and the like. Decoding a high definition YouTube stream means converting a few million bits per second of h.264 video into a 720p30 video stream (which is about 884 million bits per second [google.com]).

                Given that h.264 video is enormously more complicated to decode than telemetry data, and that the volume of it is at least several thousand times greater, I would be outright surprised if web browsing required ONLY 10000 times as much CPU power as the Apollo landers.
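
                The 884 Mbit/s figure checks out if you assume 32 bits per decoded pixel, which is my assumption rather than anything stated above:

                  # Back-of-the-envelope check of the decoded 720p30 stream size.
                  # The 32 bits per pixel and the 5 Mbit/s input are assumptions.
                  width, height, fps, bits_per_pixel = 1280, 720, 30, 32
                  decoded = width * height * fps * bits_per_pixel
                  print(decoded)               # 884,736,000 bits/s, i.e. ~884 Mbit/s
                  compressed = 5_000_000       # "a few million bits per second" of h.264
                  print(decoded / compressed)  # expansion factor of roughly 177x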

                • The Apollo computers only had to cope with up to a few thousand kilobits per second of telemetry data and the like. Decoding a high definition YouTube stream means converting a few million bits per second of h.264 video into a 720p30 video stream (which is about 884 million bits per second [google.com]).

                  Given that h.264 video is enormously more complicated to decode than telemetry data, and that the volume of it is at least several thousand times greater, I would be outright surprised if web browsing required ONLY 10000 times as much CPU power as the Apollo landers.

                  But, to be honest, the chipsets are just as likely to come with dedicated video decoding hardware than can handle HD H.264 without breaking a sweat. Take a look at the Atom's Poulsbo chipset [anandtech.com] for example.

            • Sad that this is rated funny rather than insightful...

            • And winning the Stanley Cup is easy - you just have to score the most goals!

              All problems can be boiled down to simple essentials, but figuring out the details is usually pretty hard.

              RSA and protein folding may seem hard now, but once they're solved, and passed through the filters of Nova and New Scientist, boiled down to their most uninformative and simple essentials, people will probably say that cracking RSA was simply applied math and modeling proteins just took the principles of biochemistry and a lot o

              • Re: (Score:3, Insightful)

                by evanbd ( 210358 )
                The physics and math of the navigation are (computationally) easy. The hard part is building the high-performance, high-reliability vehicle. There are many, many hard problems in rocket engineering, but most of the ones associated with the software aspects of guidance, navigation, and control are straightforward. Going to the Moon is hard; no doubt about it. That really says nothing about the computing required, though.
            • That's ridiculous (Score:3, Insightful)

              by tjstork ( 137384 )

              RSA is a problem that is much more simply stated than landing a man on the moon. You only say landing a man on the moon is easy because it was done. It was the culmination of many, many years of research, and it required a lot of risk management and luck. You say that mathematically, landing a rocket on the moon is easier than protein folding, but try a realistic computer model of the effects of fuel spray and burn inside the combustion chamber.

              • try a realistic computer model of the effects of fuel spray and burn inside the combustion chamber.

                Fortunately, the Apollo-era computers didn't have to do that, or we'd never have gotten there.

          • by mephistophyles ( 974697 ) on Tuesday February 10, 2009 @07:48PM (#26806857)
            I wasn't around when they landed someone on the moon so I can't quite comment on that bit, but I can tell you what I (and the rest of my kind) use the extra processing power for:

            Finite Element Analysis (simulating car crashes to make them safer before we crash the dummies in them).
            Multibody Dynamics (Simulation of robot behavior saves a ton of money, we can simulate the different options before we build 10 different robots or spend a year figuring out something by trial and error)
            Computational Fluid Dynamics (designing cars, jets and pretty much anything in between like windmills and how they affect their surroundings and how efficient they are)
            Simulating Complex Systems (designing control schemes for anything from chemical plants, to cruise control, to autopilots)
            Computational Thermodynamics (Working on that tricky global warming thing, or just trying to figure out how to best model and work with various chemicals or proteins)

            These are just the uses (that I know of) where more raw power can help out in Mechanical Engineering. I still have to wait about an hour for certain simulations or computations to run, and they're not even all that complex yet. Making these things run faster (even by a few percent) can save us tons of time in the long run. And time is money...
            • by Jurily ( 900488 )

              These are just the uses (that I know of) where more raw power can help out in Mechanical Engineering.

              I see your point. Raw power is needed when you do things that need raw power.

              But for the average desktop? Why would even watching a video on youtube need a 16-core processor?

              People got along just fine on Pentium II's.

              • Re: (Score:3, Interesting)

                Why would even watching a video on youtube need a 16-core processor?

                You clearly underestimate how much Flash sucks.

                People got along just fine on Pentium II's.

                And they did quite a lot less. Ignoring Flash, those Pentium IIs, I'm guessing, are physically incapable of watching a YouTube video, and are certainly incapable of watching an HD video from, say, Vimeo.

                  • I clearly remember when the Pentium (original 60 MHz version) came out; the big selling point was the capability of watching videos on it. In fact, I've got a CD I picked up back then that had the Beatles movie A Hard Day's Night on it, and it played fine on my old 486.

                  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday February 10, 2009 @11:58PM (#26808293) Journal

                    Again: What quality of movie?

                    I can watch 1920x1080 movies, smoothly, at least 30fps, if not 60. A quick calculation shows that the poor machine would likely be using over half its RAM just to store a single frame at that resolution. I'd be amazed if your 486 could do 640x480 at an acceptable framerate -- note that we had a different measure of "acceptable" back then.

                    Also consider: Even if we disregard Flash, I am guessing talking to the network -- just straight TCP and IP -- is going to be its own kind of difficult. Keep in mind, Ogg Vorbis was named for how it "ogged" the audio, and machines of the time couldn't really do much else -- while decoding audio.

                    Yes, there are hacks we could use to make it work. There are horribly ugly (but efficient) codecs we could use. We could drop JavaScript support, and give up the idea of rich web apps.

                    And yes, there is a lot of waste involved. But it's been said before, and it is worth mentioning -- computers need to be faster now because we are making them do more. Some of it is bloat, and some of it is actual new functionality that would've been impossible ten years ago.

                    I clearly remember when the Pentium (original 60 MHz version) came out; the big selling point was the capability of watching videos on it. In fact, I've got a CD I picked up back then that had the Beatles movie A Hard Day's Night on it, and it played fine on my old 486.

                    At what resolution and frame rate?

                    Back then, it was probably 160x120, 15fps. Which was pretty common for the Intel Indeo codec (IIRC). If you were lucky, it was MPEG2 320x240 at 30fps.

                    The first is a data stream which is proba
                • Re: (Score:3, Funny)

                  Comment removed based on user account deletion
                  • Re: (Score:3, Interesting)

                    But how much of that is the need for raw power VS the problem of really crappy code?

                    There's a lot of each.

                    Every now and then, I run a test of a YouTube (or other) video played in its native Flash player, and in a third-party player like VLC or mplayer.

                    Not only is the mplayer version higher quality (better antialiasing), and more usable (when I move my mouse to another monitor, Flash fullscreen goes away), but it's the difference between using 30-50% CPU for the tiny browser version, and using 1% or less fullscreen.

                    In Flash 10 -- yes, I'll say that again, flash TEN, the latest version -- th

                  • The point is, when I got into computing (and yes, I'm old, dammit!) programmers squeezed every bit of performance they possibly could while using as few resources as possible. Why? Because they didn't have multicores with craploads of RAM to waste. But now I have noticed that software has taken on the SUV model of not caring how bad the resource suckage is, as long as you can add more crap to it. That is why I am hoping that this trend towards Netbooks ends up with programmers looking at performance again.
              • But you can't watch a Flash video on a PII, can you? My 2.2 GHz Mobile Pentium IV on my Thinkpad is a bit slow with YouTube in full-screen. Keep that in mind.
                • by Jurily ( 900488 )

                  But you can't watch a Flash video on a PII, can you?

                  Can you watch any other video? If so, Flash is bloated.

                  'Nuff said.

                  • I'm not saying that Flash is or isn't bloated. I'm just trying to prove that the Internet, for better or for worse, is heading in that direction, and old computers are starting to show their age.
                    • by Jurily ( 900488 )

                      I'm not saying that Flash is or isn't bloated.

                      No, I was. If you can play other videos on a machine without problems but not flash, then flash is slow, not the computer.

                  • by drsmithy ( 35869 )

                    Can you watch any other video? If so, Flash is bloated.

                    Your logic is broken.

          • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Tuesday February 10, 2009 @07:52PM (#26806901) Homepage Journal

            Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?

            Because space travel is mathematically dead simple: you have a couple of low-degree differential equations to solve for a very small data set. A high-school student could probably do it in an afternoon with a slide rule (in fact, I think I recall hearing that (early?) astronauts actually did carry slide rules in case of computer failure). Video codecs (like for YouTube) are much more complex and operate on much larger sets of data.

            • by dfn_deux ( 535506 ) <datsun510&gmail,com> on Tuesday February 10, 2009 @11:10PM (#26807933) Homepage
              I believe that they still have a slide rule as standard issue equipment on NASA space missions. It's hard to argue with the cost associated with adding an additional layer of fault tolerance... If it could, in a pinch, be used to plot a survivable reentry or a similarly life saving task when they sent the first rockets to space it can still serve the same function today. Sort of like the saying, "an elevator can't break, it can only become stairs."
            • Re: (Score:2, Funny)

              by blool ( 798681 )

              space travel is mathematically dead simple

              Welcome to Slashdot, one of the few places where rocket science is considered simple.

            • by tjstork ( 137384 ) <todd.bandrowskyNO@SPAMgmail.com> on Wednesday February 11, 2009 @03:39AM (#26809615) Homepage Journal

              Because space travel is mathematically dead simple

              It's only dead simple if you have a rocket that works. Design one of those? If it were so easy, SpaceX would have people up there by now, and I don't even know if they have their first orbit yet.

            • by Overzeetop ( 214511 ) on Wednesday February 11, 2009 @09:03AM (#26811701) Journal

              Actually, space travel is very complex. The only "simple" part about it is that, for two-body motion and the limits of our ability to control thruster force and duration, there are explicit solutions to the differential equations. The brain power behind the programming was immense, but once coded, the computational power needed is not excessive.

              More to the point, all the pencil and paper math HAD to be done to make the available processors capable of performing the operations. The fact that they had slide rules indicates that the complexity of the brain work was immense, to reduce the solution set to something that could be solved near-real-time on a slide rule. If the same mission were done today, we'd have none of this higher math involved. With the available processor power, it would be a brute force numerical solution. That's what most video codecs are, in essence: a numerical solution to an equation with known boundary conditions. The more compression you want, the less exact the solution is (and hence the compression artifacts).

              Short of computationally intensive activities like video decoding, it shouldn't take much processor power to browse the web. It only does because it's faster (in terms of programmer time) to do things with brute force than to slim them down. It shouldn't require 250-500+ separate requests to open a page, and there shouldn't be 200kB of formatting for a page which contains - maybe - 5kB of text. That's why Skyfire works so fast on cell phones - there's so much crap in HTML pages now, and so many requests, that it's faster to make a VGA snapshot of a page and load that as a damned image than it is to download the actual page.
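
              As a toy illustration of the "brute force numerical solution" point (nothing here is from any actual flight software; the near-circular low orbit and 1-second step are made-up values), a dozen lines of explicit Euler integration will propagate a two-body trajectory that the Apollo-era approach handled with closed-form solutions:

                # Brute-force two-body propagation around Earth with Euler steps.
                # Illustrative only: real guidance code would not do it this crudely.
                MU = 3.986e14                 # Earth's gravitational parameter, m^3/s^2
                x, y = 6_771_000.0, 0.0       # ~400 km altitude
                vx, vy = 0.0, 7_672.0         # roughly circular orbital speed, m/s
                dt = 1.0                      # 1-second steps
                for _ in range(5_545):        # about one 92-minute orbit
                    r3 = (x * x + y * y) ** 1.5
                    ax, ay = -MU * x / r3, -MU * y / r3
                    x, y = x + vx * dt, y + vy * dt
                    vx, vy = vx + ax * dt, vy + ay * dt
                print(x, y)                   # position after ~one orbit; the drift is Euler error

              Cheap cycles make this sort of thing trivial today; the 1960s version had to be reduced by hand to something a slide rule and a tiny guidance computer could keep up with.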

          • by drsmithy ( 35869 )

            Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?

            It's probably more like 200,000 times, and we need it because "browsing the web" involves processing orders of magnitude more data with dramatically lower required response times.

            I know some people need raw computation, but c'mon. The average boot time is still ~60 seconds on the desktop. Why?

            For the same reason it still takes your car a minute or two to warm up in the mor

    • Re: (Score:3, Informative)

      by digitalunity ( 19107 )

      I disagree strongly. Processor speed is still very important - just not for the average consumer. For quite some time now, the majority of consumer applications have been IO and/or GPU bound.

      There is no such thing as a 'fastest useful processor' for some people, primarily in research and academia.

    • by Chabo ( 880571 ) on Tuesday February 10, 2009 @06:54PM (#26806193) Homepage Journal

      Disclaimer: I work for Intel, but have no bearing on company-wide decisions, and I'm not trying to make a marketing pitch. I'm merely making observations based on what I read on public websites like /. and Anandtech.

      That's why I'd like to see prices starting to fall, instead of the same prices for more powerful PCs.

      Prices are falling. Price cuts were just made nearly across the board.

      Plus you can buy a $50 CPU today that's cheaper and more powerful than a CPU from 4 years ago.

      Die shrinks necessarily make CPUs cheaper to make, because more chips can fit onto a wafer. Also, if you take a 65nm chip of a certain speed, and move it to 45nm, then power consumption is reduced. The same will be true moving to 32nm.
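
      As a rough illustration of that wafer economics point (the 100 mm^2 die and 300 mm wafer are hypothetical, and the estimate ignores edge loss, scribe lines and defects):

        # Idealized die-per-wafer estimate for a 45nm -> 32nm shrink.
        # The die size and wafer size are assumptions, not Intel figures.
        import math
        scale = (45 / 32) ** 2                  # ~1.98x density from the shrink
        die_45 = 100.0                          # mm^2, hypothetical design at 45nm
        die_32 = die_45 / scale                 # ~51 mm^2 for the same transistor count
        wafer = math.pi * (300 / 2) ** 2        # 300 mm wafer area in mm^2
        print(round(wafer / die_45))            # ~707 candidate dies at 45nm
        print(round(wafer / die_32))            # ~1398 candidate dies at 32nm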

      • Things are getting really cheap. I just replaced my home motherboard+cpu+ram for $200, and now I have dual core with 2GB of RAM. At work, we just got a quad core with 4x500 GB hard disks and 8 Gigs of ram, a complete system, including case and power supply for $1000. To contrast, I bought a computer 10 years ago, it cost $1800, and only had P2-266, 1x4GB HD, and 64 MB of RAM. Boy are things cheap these days. You can get a state of the art gaming rig for $1500.
        • $1800? That's pretty pricey for those specs in 1999. I got my Celeron 366, 1x4GB HD, 32 MB RAM, 4 MB SiS Graphics Card for a little under $500 in 1999. I do get your point though. Things are cheap these days; you could build a decent gaming rig for that amount.

          • Re: (Score:3, Insightful)

            by MBGMorden ( 803437 )

            Yeah. I remember speccing out my first home-built system. The Socket 5 motherboard cost $175. You can now get motherboards for $30-40. The Cyrix 6x86 chip was $150 (an actual Intel chip cost nearly twice that). You can now get basic CPUs for under $50. The case + power supply was $80. Current price about $35. A fairly small hard drive ran $150. You can get drives for $35 now. RAM was $40 per stick for about the smallest useful size. A 1GB stick of DDR2 will now cost you $12.

            Computers have been g

        • by Eugene ( 6671 )

          20 years ago I had to pay more than that just to get a computer with an 80286 at 10 MHz, 1 MB RAM, a 40 MB HD, and a 5.25" floppy.

          Technology always drives prices lower.

        • In general, you can build a mid-range gaming rig for about $900 ($250 for the MB/CPU/RAM, $150 for the case/PSU, $150 for the Windows license, $150 for a mid-range video card, $200 for drives and misc).

          You could probably shave a few corners and still have a very good rig for low-end gaming for about $700.

          Not sure I'd go much below that price point personally, as you end up with too many low-end components, or things that you'll have to replace constantly.
      • Also, if you take a 65nm chip of a certain speed, and move it to 45nm, then power consumption is reduced. The same will be true moving to 32nm.

        Maybe. Capacitance-related power consumption will fall, but didn't one of the more recent process shrinks actually increase power usage because of unexpectedly high leakage currents? I know there were news articles about some sort of unexpected power issues relating to a process shrink.

    • We just need to wait for the applications to follow and learn to use 16 cores and more

      No, not every application needs to be written to operate on X number of cores; operating systems and virtual machines (Java, .NET, etc.) need to allow the applications to run, regardless. What makes sense: optimizing many, many new (not legacy) applications to suit more cores, when in a few months (Moore's law) more cores will be crammed on a chip? Or perhaps the OS designers and virtual machine architects need to allow their software to act as a hypervisor for both new and old applications to take advantage
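
      One way to read that: if applications just size their worker pools from whatever the machine reports at runtime, the same code rides along as core counts grow. A minimal sketch (busy_work() is a made-up stand-in, not any particular runtime's API):

        # Size the pool from the runtime core count instead of hard-coding "X cores".
        from concurrent.futures import ProcessPoolExecutor
        import os

        def busy_work(n):
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            cores = os.cpu_count() or 1         # same code on 2, 16, or 64 cores
            with ProcessPoolExecutor(max_workers=cores) as pool:
                totals = list(pool.map(busy_work, [200_000] * cores))
            print(cores, sum(totals))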

      • Look up the Inferno OS. Basically someone created their own version of Java/.NET and embedded it into the kernel. Number of cores, processor type, hell, even where on the network it runs doesn't matter.

        While I don't know if Plan 9 will be the next answer, Inferno's ideas are what is really needed. MSFT's Singularity is a more modern version of it.

        My personal idea is that during boot, a built-in virtual machine (maybe FPGA-based so it could be upgraded with new tech) starts. Apps can then be run from ARM, x86, it

      • Re: (Score:3, Funny)

        IANA (I Am Not Awake):

        No, not every application needs to be written to operate on X number of cores, operating systems and virtual machines (Java, .NET, etc.) need to allow the applications to run on multiple cores, regardless of development/other factors.

        ...possibly dynamically updating the software on a per-machine/core# basis to set the number of cores for the software to run on tailored better for that user's processor in a more HAL-like manner..

        There, fixed it for... me.

    • Re: (Score:3, Informative)

      by zippthorne ( 748122 )

      32nm means that the same processor can take half the area on the die. You could use that to get more cores, or you could just use that to get more out of the wafer.

      I think someone noted not too long ago that the price of silicon (in ICs) by area hasn't changed much over the years. But the price per element has sure gone down due to process reductions.

      If you change nothing else, your 32 nm chip will consume less power and cost less than an otherwise nearly identical 45 nm chip.

      • We should be getting close to where you can put the whole system - CPU, RAM, video card - all on one chip. That should slash costs for packaging and interconnects. It should be fast, too, since the system RAM basically becomes all cache.
    • Re: (Score:3, Insightful)

      by ChrisA90278 ( 905188 )

      I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices starting to fall, instead of the same prices for more powerful PCs.

      What will happen is that the "average consumer PC" will do different tasks, not just today's jobs faster. For example, what about replacing a mouse with just your hand? A webcam-like camera watches your hands and fingers. It's multi-touch but without the touch pad. OK, there is one use for 8 of your 16 cores. Maybe the other 8 co

      • by Eivind ( 15695 )

        "start" to fall ?

        Listen, when my uncle got his first PC (he was an early adopter) he paid about $3500 for it, which at the time was more than a month's salary. It was a fairly average PC at the time.

        Today, the most commonly sold PC for the home-consumer market is a $700 laptop, or something along those lines. But in the time between, salaries have approximately doubled. So the reality is that a typical home PC today costs 1/10th of what it did when he got his first PC, approximately 20 years ago. (a 386,

      • Are you seriously suggesting that answering a telephone is a task that is inherently parallel? The CPU is not a big truck. More cores aren't simply "extra power" just lying around.
    • I understand the questioning of the need for CPU speed, fine, but I wouldn't dismiss the potential for power consumption gains. Wouldn't smaller feature sizes also allow Intel to make lower power processors? I'd like to see more notebooks that work longer without having to be tied to a wall outlet.

    • by kf6auf ( 719514 )

      First of all, going from 45nm to 32nm means that every transistor takes up half the space it used to. The choice then is between the same number of transistors per chip resulting in lower per unit cost or twice as many transistors per chip resulting in better performance. As usual, there will be some of both.

      Some people need better single-core performance, some people need more cores, and some people just need lower power consumption. Not everyone needs the same thing, which is why there are different

  • At some point this roller coaster ride has to end. I mean, why not put off development until the NEXT iteration then?

    • Re: (Score:3, Interesting)

      by plague911 ( 1292006 )
      Just a guess of mine, but the fact of the matter is that some semiconductor PhDs out there think that the end of the line is coming for the reduction in device feature size. I believe my professor last term said he figured the end would come around the 22nm mark, not much further. I could be wrong about the exact number (I hated that class). But the point is, once the end of the line is reached, profits hit a brick wall and the whole industry may take a nose dive. Right now every year there is bigger and better
      • You know, the thing is, every time we hear the sky is falling, some new tech happens, and life is extended again.

        BUT, it does seem that the miracles get fewer and farther between and it seems that they are getting more and more expensive as we go on. Yep, at some point it's all going to end, but at the end, will there be a beginning of something else entirely?

        Optical computing? Quantum? Universal Will To Become?

  • We've seen leapfrog attempts lead to delays before. If this means AMD gets 45nm before Intel gets 32nm, doesn't that give AMD a performance window?

    • by Chabo ( 880571 ) on Tuesday February 10, 2009 @06:46PM (#26806107) Homepage Journal

      ... AMD has 45nm. [wikipedia.org]

    • by Jurily ( 900488 ) <jurily AT gmail DOT com> on Tuesday February 10, 2009 @06:58PM (#26806253)

      If this means AMD gets 45nm before Intel gets 32nm, doesn't that give AMD a performance window?

      You mean being only one step behind instead of two?

    • by Sycraft-fu ( 314770 ) on Tuesday February 10, 2009 @07:58PM (#26806951)

      For one thing, Intel has always been ahead of, well, pretty much everyone on fab processes. This isn't saying Intel will skip 45nm; they can't do that, as they already are producing 45nm chips in large quantities. They have a 45nm fab online in Arizona cranking out tons of chips. Their Core 2s were the first to go 45nm, though you can still get 65nm variants. All their new Core i7s are 45nm. So they've been doing it for a while, longer than AMD has (AMD is also 45nm now).

      The headline isn't great because basically what's happening is Intel isn't doing any kind of leapfrog. They are doing two things:

      1) Canceling some planned 45nm products. They'd planned on rolling out more products on their 45nm process. They are now canceling some of those. So they'll be doing fewer 45nm products than originally planned, not none (since they already have some).

      2) Redirecting resources to stepping up the timescale on 32nm. They already have all the technology in place for this. Now it is the implementation phase. That isn't easy or fast. They have to retool fabs, or build new ones, work out all the production problems, as well as design chips for this new process. This is already under way, a product like this is in the design phases for years before it actually hits the market. However they are going to direct more resources to it to try and make it happen faster.

      More or less, they are just trying to shorten the life of 45nm. They want to get 32nm out the door quicker. To do that, they are going to scale back new 45nm offerings.

      Makes sense. Their reasoning is basically that the economy sucks right now, so people are buying less tech. Thus rolling out new products isn't likely to make them a whole lot of money. Also, it isn't like the products they have are crap or anything; they compete quite well. So rather than offer incremental upgrades that people probably aren't that interested in unless they're buying new anyway, they'll just wait. They'll try to have 32nm out the door sooner so that when the economy does recover, their offerings are that much stronger.

      Overall, probably a good idea. Not so many people are buying systems just to upgrade right now, so having something just a bit better isn't a big deal. If someone needs a new system, they'll still buy your stuff; it's still good. Get ready so that when people do want to buy upgrades, you've got killer stuff to offer.

    • by treeves ( 963993 )
      No, because Intel had 45nm before AMD had it.
  • by Anonymous Coward

    Actually they were able to step up some of their fabs faster than expected.

  • Too big to fail (Score:5, Insightful)

    by unlametheweak ( 1102159 ) on Tuesday February 10, 2009 @06:46PM (#26806111)

    Intel is basically putting a $7 billion bet on a turnaround in the economy for 2010."

    And if they lose the bet then they can just ask for a bailout like the financial firms and auto industry did. Because Intel is too big to fail.

    • The market cannot allow Intel to fall. No other company in the world can supply x86 processors with the reliability and volume that Intel does. AMD does not have the processor fabs to meet worldwide demand for x86 products. Even if Intel really screws things up, it still has significant market power.

      • by ErikZ ( 55491 ) *

        Yeah. Because if Intel failed its fabs would dissipate in a puff of smoke.

        No they WOULD NOT.

        Another company would buy them and hire the people that were working there.

      • by BrentH ( 1154987 )
        AMD can certainly supply the material Intel supplies. Sure, Intel has us addicted to x86 with a turnover rate of only a few months, but this certainly can be stretched by a few months to allow AMD to play catch-up. A new CPU every year instead of every six months.
  • by Futurepower(R) ( 558542 ) on Tuesday February 10, 2009 @06:48PM (#26806127) Homepage
    The biggest issue for Intel is that most people already have computers that are fast enough for them.... Or, they don't have the money or desire to buy a computer.

    The 32nm processors, I understand, will reduce the power needed even further, making it sensible for data centers to upgrade.
    • by RajivSLK ( 398494 ) on Tuesday February 10, 2009 @07:18PM (#26806517)

      most people already have computers

      Really? Have an eyeopening look here:

      http://www.economist.com/research/articlesBySubject/displayStory.cfm?story_id=12758865&subjectID=348909&fsrc=nwl [economist.com]

      Computer ownership is really very low worldwide. Even the US has only 76 computers per 100 people. Keep in mind that includes people like myself who, between work and home use, have 4 computers alone.

      Some other socking figures:
      Italy 36 computers per 100 people
      Mexico 13 computers per 100 people
      Spain 26 computers per 100 people
      Japan 67 computers per 100 people
      Russia 12 computers per 100 people

      And the billions of people in China and India don't even make the list.

      Seems to me that there are a lot more computers Intel could be selling in the future. The market is far from saturated.

      • FYI, socking figures are similar to 'punching figures'. They're designed to put you in shock.

    • Re: (Score:3, Interesting)

      by von_rick ( 944421 )

      Great point. People who bought their machines when the processors were at 65-nm won't need to replace them until about 2011. By then, according to Intel's own prediction, we would be in the sub 10-nm range.

      This is from an article from mid 2008: full article [crn.com]

      Intel debuted its 45nm process late last year and has been ramping its Penryn line of 45nm processors steadily throughout this year. The next die shrink milestone will be the 32nm process, set to kick off next year, followed by 14nm a few years after that and then sub-10nm, if all goes according to plan.

  • by pugugly ( 152978 ) on Tuesday February 10, 2009 @06:49PM (#26806137)

    Or at least, that if the economy *doesn't* turn around by 2010, the shitstorm will be so bad at that point that they won't care.

    Pug

    • The shitstorm may be bad for them, but it'll likely be far worse for AMD to begin with. This is perhaps the best time for them to outspend AMD in research.
  • bet (Score:5, Funny)

    by Gogo0 ( 877020 ) on Tuesday February 10, 2009 @06:52PM (#26806169)

    a 7 billion dollar bet? that's peanuts! wake me up when someone makes a 1.5 trillion dollar bet on the economy.

  • by Hadlock ( 143607 ) on Tuesday February 10, 2009 @07:05PM (#26806337) Homepage Journal

    Intel is basically putting a $7 billion bet on a turnaround in the economy

    NEWSFLASH: Intel has been dumping 10 BILLION dollars a year into R&D since at least 1995. Did not RTFA, but if the blurb is to be taken at face value, the reporter obviously did no real research on the topic.

  • by hydertech ( 122031 ) on Tuesday February 10, 2009 @08:01PM (#26807009) Homepage

    Intel announced today that it is investing $7 billion to build new manufacturing facilities in the US to produce these chips.

    The new facilities will be built at existing manufacturing plants in New Mexico, Oregon, and Arizona. Intel is estimating 7,000 new jobs will be created. BizJournals.com [bizjournals.com]

    • by adpowers ( 153922 ) on Tuesday February 10, 2009 @09:27PM (#26807535)

      Yeah, I noticed that this morning when I read about the investment. They closed a bunch of older facilities in Asia, laying off the workers, and are building the new fancy fabs in the US (and creating high paying jobs in the process).

      Of course, the next thing that came to my mind is whether Slashdot would cover that aspect of the story. Sure enough, Slashdot's summary completely disregards that Intel is creating jobs in America. I suspect there are two reasons for this: 1. It hurts Slashdot's agenda if they report about companies insourcing, readers should only know about outsourcing by "the evil corporations". 2. Because Intel is the big bad wolf and we can't report anything good they do.

    • Sure, with the US economy going in the toilet, it's going to be affordable to pay US workers (in US dollars.)

      The simple truth is that companies are currently seeking low power consumption, they're using virtualization and buying servers with lower TDP. 32nm increases yields and reduces power consumption so it can save everyone some money. These businesses are not going to suddenly stop needing an upgrade path because of the economy, although the refresh cycle is sure to slow.

  • by wiredlogic ( 135348 ) on Tuesday February 10, 2009 @08:24PM (#26807277)

    That would be the Cyrix MediaGX circa 1997.
