Intel Moves Up 32nm Production, Cuts 45nm
Vigile writes "Intel recently announced that it was moving up the production of 32nm processors in place of many 45nm CPUs that have been on the company's roadmap for some time. Though spun as good news (and sure to be tough on AMD), the fact is that the current economy is forcing Intel's hand as they are unwilling to invest much more in 45nm technologies that will surely be outdated by the time the market cycles back up and consumers and businesses start buying PCs again. By focusing on 32nm products, like Westmere, the first CPU with integrated graphics, Intel is basically putting a $7 billion bet on a turnaround in the economy for 2010."
Performance Is Overrated (Score:5, Insightful)
I used to work for a processor company. I learned one thing: it's impossible to beat Intel, they just invest so much in technology that even if you come up with a smarter cache algorithm, a better pipeline, or (god forbid) a better instruction set, they'll still crush you.
That was true for the last 20 years. The only problem today is that no one really cares anymore about CPU speed. 32nm technology will allow Intel to put more cores on a die. They'll get marginal, if any, frequency improvements. We just need to wait for the applications to follow and learn to use 16 cores and more. I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices start to fall, instead of the same prices for more powerful PCs.
--
FairSoftware.net [fairsoftware.net] -- where geeks are their own boss
Re:Performance Is Overrated (Score:5, Interesting)
The average consumer PC uses:
* word processing, which barely needs it, but can use it when performance is necessary, for background processing like print jobs, grammar checking and speech recognition
* spreadsheets, which lend themselves very well to multithreading
* games, which could lend themselves well, if engines start doing stuff like per-creature AI and pathfinding (ignoring stuff that's already on the GPU like physics and gfx) in proper threads
* web browsing. Admittedly, webpages are not the ideal scenario for multicore, but with multiple tabs, and multiple subprograms (flash, javascript, downloads, etc.) all running in threads, this could utilise multicores well too. Presumably future use of more XML etc. will help to push the boundaries there. If we ever get down the road of RDF on the desktop, then multicores will be very useful, in collecting and merging data streams, running subqueries, etc.
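A minimal sketch of the parallelism the spreadsheet point describes, assuming independent cells: a process pool recalculates them across however many cores the machine has. The cell "formula" here is an invented stand-in, not real spreadsheet code.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def recalc_cell(x: float) -> float:
    """Invented stand-in for an expensive per-cell formula."""
    total = 0.0
    for i in range(100_000):
        total += (x * i) % 7.0
    return total

if __name__ == "__main__":
    cells = [float(i) for i in range(1_000)]   # independent cell inputs
    with ProcessPoolExecutor() as pool:        # defaults to one worker per core
        results = list(pool.map(recalc_cell, cells, chunksize=50))
    print(f"recalculated {len(results)} cells on {os.cpu_count()} cores")
```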
It's not just about the cores (Score:3, Informative)
The smaller feature sizes bring power savings as well. So they're taking the server of yesteryear and putting it in your pocket. They're delivering the technology that'll bring the next billion users online because those folks don't have the watts to burn that we do.
They're also working to solve the whole I/O problem with servers that happens when you get too much processing power in one box.
In fact, they're pretty well focused on not just learning new things and creating new products, but in delivering
Re: (Score:3, Informative)
That was true for the last 20 years. The only problem today is that no one really cares anymore about CPU speed. 32nm technology will allow Intel to put more cores on a die. They'll get marginal, if any, frequency improvements. We just need to wait for the applications to follow and learn to use 16 cores and more. I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices start to fall, instead of the same prices for more powerful PCs.
We don't need more cores. Someone should have realized it by now. Raw CPU output isn't what the market needs anymore (even on Gentoo, which is kinda hard to accept).
We need the same CPU with less power usage.
Re: (Score:3, Insightful)
We need the same CPU with less power usage.
If people are going to stick with web browsing and multimedia entertainment for the rest of their lives, the processors in their present state can serve the purpose just fine. However if more and more people actually take computing seriously, the availability of multiple cores to do parallel computing on your own desktops would be a dream come true for most people involved in computationally intensive research disciplines. If I had the ability to use 8 cores at 2GHz, at all times, I'd have finished my analysis in less than a week. But with no such luxury (back in 2005) I had to queue my process on a shared cluster and wait until morning to see the results.
Re:Performance Is Overrated (Score:5, Insightful)
However if more and more people actually take computing seriously, the availability of multiple cores to do parallel computing on your own desktops would be a dream come true for most people involved in computationally intensive research disciplines. If I had the ability to use 8 cores at 2GHz, at all times, I'd have finished my analysis in less than a week. But with no such luxury (back in 2005) I had to queue my process on a shared cluster and wait until morning to see the results.
Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?
I know some people need raw computation, but c'mon. The average boot time is still ~60 seconds on the desktop. Why?
And it doesn't even matter which OS. Why do we need more calculations to get ready to do something than it took to get someone up there? Seriously.
Modern software is bloat. Let's do something about that, first.
Re: (Score:2)
Landing on the moon was simple newtonian physics. Not a hard problem to solve at all.
Yeah, browsing the web should take up at least 10000x that.
Re:Performance Is Overrated (Score:5, Insightful)
The Apollo computers only had to cope with up to a few thousand kilobits per second of telemetry data and the like. Decoding a high definition YouTube stream means converting a few million bits per second of h.264 video into a 720p30 video stream (which is about 884 million bits per second [google.com]).
Given that h.264 video is enormously more complicated to decode than telemetry data, and that the volume of it is at least several thousand times greater, I would be outright surprised if web browsing required ONLY 10000 times as much CPU power as the Apollo landers.
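The quoted figure checks out with simple arithmetic, assuming a 32-bit pixel for the decoded frame (which appears to be what the linked calculation uses):

```python
width, height, fps, bits_per_pixel = 1280, 720, 30, 32
decoded = width * height * fps * bits_per_pixel
print(f"decoded 720p30: {decoded / 1e6:.1f} Mbit/s")    # 884.7 Mbit/s

compressed = 3e6   # "a few million bits per second" of h.264, per the comment
print(f"the decoder expands the stream ~{decoded / compressed:.0f}x")
```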
Re: (Score:2)
The Apollo computers only had to cope with up to a few thousand kilobits per second of telemetry data and the like. Decoding a high definition YouTube stream means converting a few million bits per second of h.264 video into a 720p30 video stream (which is about 884 million bits per second [google.com]).
Given that h.264 video is enormously more complicated to decode than telemetry data, and that the volume of it is at least several thousand times greater, I would be outright surprised if web browsing required ONLY 10000 times as much CPU power as the Apollo landers.
But, to be honest, the chipsets are just as likely to come with dedicated video decoding hardware that can handle HD H.264 without breaking a sweat. Take a look at the Atom's Poulsbo chipset [anandtech.com] for example.
Re: (Score:2)
Sad that this is rated funny rather than insightful...
Re: (Score:2)
And winning the Stanley Cup is easy - you just have to score the most goals!
All problems can be boiled down to simple essentials, but figuring out the details is usually pretty hard.
RSA and protein folding may seem hard now, but once they're solved, and passed through the filters of Nova and New Scientist, boiled down to their most uninformative and simple essentials, people will probably say that cracking RSA was simply applied math and modeling proteins just took the principles of biochemistry and a lot o
That's ridiculous (Score:3, Insightful)
RSA is a problem that is much more simply stated than landing a man on the moon. You only say landing a man on the moon is easy because it was done. It was the culmination of many, many years of research, and it required a lot of risk management and luck. You say mathematically that landing a rocket on the moon is easier than protein folding, but try a realistic computer model of the effects of fuel spray and burn inside the combustion chamber.
Re: (Score:2)
try a realistic computer model of the effects of fuel spray and burn inside the combustion chamber.
Fortunately, the Apollo-era computers didn't have to do that, or we'd never have gotten there.
Re:Performance Is Overrated (Score:5, Informative)
Finite Element Analysis (simulating car crashes to make them safer before we crash the dummies in them).
Multibody Dynamics (Simulation of robot behavior saves a ton of money, we can simulate the different options before we build 10 different robots or spend a year figuring out something by trial and error)
Computational Fluid Dynamics (designing cars, jets and pretty much anything in between like windmills and how they affect their surroundings and how efficient they are)
Simulating Complex Systems (designing control schemes for anything from chemical plants, to cruise control, to autopilots)
Computational Thermodynamics (working on that tricky global warming thing, or just trying to figure out how to best model and work with various chemicals or proteins)
These are just the uses (that I know of) where more raw power can help out in Mechanical Engineering. I still have to wait about an hour for certain simulations or computations to run, and they're not even all that complex yet. Even a few percent increase in how fast these things run can save us tons of time in the long run. And time is money...
Re: (Score:2)
These are just the uses (that I know of) where more raw power can help out in Mechanical Engineering.
I see your point. Raw power is needed when you do things that need raw power.
But for the average desktop? Why would even watching a video on youtube need a 16-core processor?
People got along just fine on Pentium IIs.
Re: (Score:3, Interesting)
Why would even watching a video on youtube need a 16-core processor?
You clearly underestimate how much Flash sucks.
People got along just fine on Pentium IIs.
And they did quite a lot less. Ignoring Flash, those Pentium IIs, I'm guessing, are physically incapable of watching a YouTube video, and are certainly incapable of watching an HD video from, say, Vimeo.
Re: (Score:2)
I clearly remember when the Pentium (original 60 MHz version) came out; the big selling point was the capability of watching videos on it. In fact, I've got a CD I picked up back then that had the Beatles movie A Hard Day's Night on it, and it played fine on my old 486.
Re:Performance Is Overrated (Score:4, Informative)
Again: What quality of movie?
I can watch 1920x1080 movies, smoothly, at least 30fps, if not 60. A quick calculation shows that the poor machine would likely be using over half its RAM just to store a single frame at that resolution. I'd be amazed if your 486 could do 640x480 at an acceptable framerate -- note that we had a different measure of "acceptable" back then.
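The RAM claim is easy to sanity-check; the 4-byte pixel and the 16 MB figure for a well-equipped 486 are assumptions, not measurements:

```python
frame = 1920 * 1080 * 4            # one 1080p frame at 32 bits per pixel
ram = 16 * 1024 * 1024             # a generously equipped 486-era machine
print(f"one frame: {frame / 1e6:.1f} MB, "
      f"or {100 * frame / ram:.0f}% of 16 MB")   # ~8.3 MB, about half the RAM
```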
Also consider: Even if we disregard Flash, I am guessing talking to the network -- just straight TCP and IP -- is going to be its own kind of difficult. Keep in mind, Ogg Vorbis was named for how it "ogged" the audio, and machines of the time couldn't really do much else -- while decoding audio.
Yes, there are hacks we could use to make it work. There are horribly ugly (but efficient) codecs we could use. We could drop JavaScript support, and give up the idea of rich web apps.
And yes, there is a lot of waste involved. But it's been said before, and it is worth mentioning -- computers need to be faster now because we are making them do more. Some of it is bloat, and some of it is actual new functionality that would've been impossible ten years ago.
Re: (Score:2)
At what resolution and frame rate?
Back then, it was probably 160x120, 15fps. Which was pretty common for the Intel Indeo codec (IIRC). If you were lucky, it was MPEG2 320x240 at 30fps.
The first is a data stream which is proba
Re: (Score:3, Interesting)
But how much of that is the need for raw power vs. the problem of really crappy code?
There's a lot of each.
Every now and then, I run a test of a YouTube (or other) video played in its native Flash player, and in a third-party player like VLC or mplayer.
Not only is the mplayer version higher quality (better antialiasing), and more usable (when I move my mouse to another monitor, Flash fullscreen goes away), but it's the difference between using 30-50% CPU for the tiny browser version, and using 1% or less fullscreen.
In Flash 10 -- yes, I'll say that again, flash TEN, the latest version -- th
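For anyone who wants to reproduce that comparison, a rough sketch using the third-party psutil package; the PID is whichever player process (browser plugin, mplayer, VLC) you are testing:

```python
import sys
import psutil   # third-party: pip install psutil

def average_cpu(pid: int, samples: int = 10, interval: float = 1.0) -> float:
    """Average CPU percent of one process over samples * interval seconds."""
    proc = psutil.Process(pid)
    return sum(proc.cpu_percent(interval=interval) for _ in range(samples)) / samples

if __name__ == "__main__":
    print(f"average CPU: {average_cpu(int(sys.argv[1])):.1f}%")
```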
Re: (Score:2)
But you can't watch a Flash video on a PII, can you?
Can you watch any other video? If so, Flash is bloated.
'Nuff said.
Re: (Score:2)
I'm not saying that Flash is or isn't bloated.
No, I was. If you can play other videos on a machine without problems but not flash, then flash is slow, not the computer.
Re: (Score:2)
Can you watch any other video? If so, Flash is bloated.
Your logic is broken.
Re:Performance Is Overrated (Score:5, Insightful)
Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?
Because space travel is mathematically dead simple, you have a couple of low-degree differential equations to solve for a very small data set. A high-school student could probably do it in an afternoon with a slide rule (in fact, I think I recall hearing that (early?) astronauts actually did carry slide rules in case of computer failure). Video codecs (like for youtube) are much more complex and operate on much larger sets of data.
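As a flavor of that closed-form math, here is the classic slide-rule-friendly calculation, a Hohmann-transfer burn from low Earth orbit toward lunar distance using the vis-viva equation (idealized two-body numbers, not a mission plan):

```python
import math

MU_EARTH = 3.986e14        # Earth's GM, m^3/s^2
r_leo = 6.371e6 + 200e3    # 200 km parking orbit radius, m
r_moon = 384_400e3         # mean Earth-Moon distance, m

v_circ = math.sqrt(MU_EARTH / r_leo)                 # parking-orbit speed
a_xfer = (r_leo + r_moon) / 2                        # transfer-ellipse semi-major axis
v_peri = math.sqrt(MU_EARTH * (2 / r_leo - 1 / a_xfer))   # vis-viva at perigee
print(f"trans-lunar burn: ~{v_peri - v_circ:.0f} m/s")    # roughly 3.1 km/s
```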
Re: (Score:2, Funny)
space travel is mathematically dead simple
Welcome to Slashdot, one of the few places where rocket science is considered simple.
Except that the rockets are a bitch (Score:4, Insightful)
Because space travel is mathematically dead simple
It's only dead simple if you have a rocket that works. Try designing one of those. If it were so easy, SpaceX would have people up there by now, and I don't even know if they have their first orbit yet.
You haven't read Bruhn (Score:4, Insightful)
Actually, space travel is very complex. The only "simple" part about it is that, for two-body motion and the limits of our ability to control thruster force and duration, there are explicit solutions to the differential equations. The brain power behind the programming is immense, but once coded, the computational power needed is not excessive.
More to the point, all the pencil and paper math HAD to be done to make the available processors capable of performing the operations. The fact that they had slide rules indicates how immense the brain work was: the solution set had to be reduced to something that could be solved in near-real-time on a slide rule. If the same mission were done today, we'd have none of this higher math involved. With the available processor power, it would be a brute force numerical solution. That's what most video codecs are, in essence: a numerical solution to an equation with known boundary conditions. The more compression you want, the less exact the solution is (and hence the compression artifacts).
Short of computationally intensive activities like video decoding, it shouldn't take much processor power to browse the web. It only does because it's faster (in programmer time) to do things with brute force than to slim them down. It shouldn't require 250-500+ separate requests to open a page, and there shouldn't be 200kB of formatting for a page which contains - maybe - 5kB of text. That's why Skyfire works so fast on cell phones - there's so much crap in HTML pages now, and so many requests, that it's faster to make a VGA snapshot of a page and load that as a damned image than it is to download the actual page.
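And the brute-force counterpart the parent describes, no closed form at all, just stepping Newton's law forward in time (a toy two-body integrator, nothing flight-quality):

```python
import math

MU = 3.986e14                      # Earth's GM, m^3/s^2
x, y = 6.571e6, 0.0                # start in a 200 km circular orbit
vx, vy = 0.0, math.sqrt(MU / 6.571e6)
dt = 1.0                           # one-second steps

for _ in range(5_400):             # roughly one orbital period
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -MU * x / r3, -MU * y / r3    # Newtonian gravity
    vx, vy = vx + ax * dt, vy + ay * dt    # crude forward-Euler step
    x, y = x + vx * dt, y + vy * dt

print(f"radius after one orbit: {math.hypot(x, y) / 1e3:.0f} km")  # ~6600 km
```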
Re: (Score:2)
Blah. Do you know how much CPU it took to fucking land someone on the moon? Why does it take 200 times that just to browse the web?
It's probably more like 200,000 times, and we need it because "browsing the web" involves processing orders of magnitude more data with dramatically lower required response times.
I know some people need raw computation, but c'mon. The average boot time is still ~60 seconds on the desktop. Why?
For the same reason it still takes your car a minute or two to warm up in the mor
Re: (Score:3, Informative)
I disagree strongly. Processor speed is still very important - just not for the average consumer. For quite some time now, the majority of consumer applications have been I/O and/or GPU bound.
There is no such thing as a 'fastest useful processor' for some people, primarily in research and academia.
Re:Performance Is Overrated (Score:4, Interesting)
If someone made a CPU with many cores (>25, let's say), then one easy way to use all those cores would be to have each NPC have their own pathfinding thread.
The problem right now in game design is the wide variety of hardware on the market. You still have gamers like me who are still running on single-core machines, and you have people who are running quad-core hyper-thread machines. As a game studio, you have to code for everyone. If you make a thread for each NPC now, then the task switching alone would choke the CPU for most games.
You can read about Valve's difficulties making the Source engine multi-threaded in their paper "Dragged Kicking and Screaming: Source Multicore". http://valvesoftware.com/publications.html [valvesoftware.com]
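A hedged sketch of the middle ground between those extremes: per-NPC pathfinding jobs on a small fixed pool, so the work scales with cores without spawning one OS thread per NPC (all names invented; in CPython the GIL would limit real speedup, so the structure rather than the timing is the point):

```python
from concurrent.futures import ThreadPoolExecutor

def find_path(npc_id: int, goal: tuple) -> list:
    """Invented stand-in for A* or similar; returns a list of waypoints."""
    return [(0, 0), goal]

npcs = [(i, (i % 10, i // 10)) for i in range(500)]   # 500 NPCs, toy goals

# Eight workers serve 500 NPCs: on a single-core machine this degrades
# gracefully, instead of 500 threads thrashing the scheduler.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {npc_id: pool.submit(find_path, npc_id, goal) for npc_id, goal in npcs}
    paths = {npc_id: f.result() for npc_id, f in futures.items()}
print(f"computed {len(paths)} paths")
```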
Re:Performance Is Overrated (Score:5, Insightful)
Disclaimer: I work for Intel, but have no bearing on company-wide decisions, and I'm not trying to make a marketing pitch. I'm merely making observations based on what I read on public websites like /. and Anandtech.
That's why I'd like to see prices start to fall, instead of the same prices for more powerful PCs.
Prices are falling. Price cuts were just made nearly across the board.
Plus you can buy a $50 CPU today that's cheaper and more powerful than a CPU from 4 years ago.
Die shrinks necessarily make CPUs cheaper to make, because more chips can fit onto a wafer. Also, if you take a 65nm chip of a certain speed, and move it to 45nm, then power consumption is reduced. The same will be true moving to 32nm.
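Those two claims in rough numbers (ideal linear scaling, which real designs never quite achieve):

```python
area_scale = (32 / 45) ** 2
print(f"32nm die area vs 45nm: {area_scale:.2f}x")  # ~0.51x, so ~2x chips per wafer

# Dynamic power goes as C * V^2 * f; to first order capacitance tracks
# feature size, so the same design at the same V and f draws less power.
print(f"first-order dynamic power at fixed V and f: {32 / 45:.2f}x")
```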
Re: (Score:2)
$1800? That's pretty pricey for those specs in 1999. I got my Celeron 366, 1x4GB HD, 32 MB RAM, 4 MB SiS Graphics Card for a little under $500 in 1999. I do get your point though. Things are cheap these days; you could build a decent gaming rig for that amount.
Re: (Score:3, Insightful)
Yeah. I remember speccing out my first home-built system. The Socket 5 motherboard cost $175. You can now get motherboards for $30-40. The Cyrix 6x86 chip was $150 (an actual Intel chip cost nearly twice that). You can now get basic CPUs for under $50. The case + power supply was $80. Current price about $35. A fairly small hard drive ran $150. You can get drives for $35 now. RAM was $40 per stick for about the smallest useful size. A 1GB stick of DDR2 will now cost you $12.
Computers have been g
Re: (Score:2)
20 years ago I had to pay more than that just to get a computer with an 80286 at 10MHz, 1MB RAM, 40MB HD, and a 5.25" floppy...
Technology always drives prices lower...
Re: (Score:2)
You could probably shave a few corners and still have a very good rig for low-end gaming for about $700.
Not sure I'd go much below that price point personally, as you end up with too many low-end components, or things that you'll have to replace constantly.
Re: (Score:2)
Also, if you take a 65nm chip of a certain speed, and move it to 45nm, then power consumption is reduced. The same will be true moving to 32nm.
Maybe. Capacitance-related power consumption will fall, but didn't one of the more recent process shrinks actually increase power usage because of unexpectedly high leakage currents? I know there were news articles about some sort of unexpected power issues relating to a process shrink.
Re:Performance Is Overrated (Score:5, Informative)
Additional disclaimer: I'm not a CPU engineer, and this is still based on things I read on public websites.
I can't find the article, but Anandtech explained this well. Apparently the high-k plus metal gate process used in Intel's 45nm and smaller chips makes for incredibly low leakage currents.
I did, however, find a graph that shows total system power consumption moving from 65nm (Conroe) to 45nm (Penryn), at the same clock speed: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3137&p=6 [anandtech.com]
Re: (Score:2)
We just need to wait for the applications to follow and learn to use 16 cores and more
No, not every application needs to be written to operate on X number of cores; operating systems and virtual machines (Java, .NET, etc.) need to allow the applications to run regardless. What sense does it make to optimize many, many new (not legacy) applications to suit more cores, when in a few months (Moore's law) more cores will be crammed on a chip? Or, perhaps the OS designers and virtual machine architects need to allow their software to act as a hypervisor to both new and old applications to take advantage
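A small sketch of that point, assuming nothing Intel- or OS-specific: express the work as tasks and let the runtime size the pool, so no core count gets baked into the application:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def work(n: int) -> int:
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # max_workers=None lets the executor default to the machine's CPU
    # count, so the same program spreads over 2 cores today and 16 later.
    with ProcessPoolExecutor(max_workers=None) as pool:
        print(sum(pool.map(work, [200_000] * 64)))
    print(f"(ran with {os.cpu_count()} workers)")
```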
Re: (Score:2)
Look up the Inferno OS. Basically someone created their own version of Java/.NET and embedded it into the kernel. The number of cores, the processor type, hell, even where on the network it runs doesn't matter.
While I don't know if Plan 9 will be the next answer, Inferno's ideas are what is really needed. MSFT's Singularity is a more modern version of it.
My personal idea is that during boot, a built-in virtual machine (maybe FPGA-based so it could be upgraded with new tech) starts. Apps can then be run from ARM, x86, it
Re: (Score:3, Funny)
No, not every application needs to be written to operate on X number of cores, operating systems and virtual machines (Java, .NET, etc.) need to allow the applications to run on multiple cores, regardless of development/other factors.
...possibly dynamically updating the software on a per-machine/core# basis to set the number of cores for the software to run on tailored better for that user's processor in a more HAL-like manner..
There, fixed it for... me.
Re: (Score:3, Informative)
32nm means that the same processor can take half the area on the die. You could use that to get more cores, or you could just use that to get more out of the wafer.
I think someone noted not too long ago that the price of silicon (in ICs) by area hasn't changed much over the years. But the price per element has sure gone down due to process reductions.
If you change nothing else, your 32 nm chip will consume less power and cost less than an otherwise nearly identical 45 nm chip.
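A back-of-the-envelope dies-per-wafer estimate for that point, using a standard approximation that ignores yield, scribe lines, and edge exclusion; the die sizes are made-up round numbers, not real products:

```python
import math

def dies_per_wafer(wafer_mm: float, die_mm2: float) -> int:
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    r = wafer_mm / 2.0
    return int(math.pi * r * r / die_mm2
               - math.pi * wafer_mm / math.sqrt(2.0 * die_mm2))

print(dies_per_wafer(300, 100))   # a ~100 mm^2 die on a 300 mm wafer: ~640
print(dies_per_wafer(300, 50))    # the same design after a full-node shrink: ~1300
```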
Re: (Score:3, Insightful)
I know my workload could use 16 cores, but the average consumer PC? Not so sure. That's why I'd like to see prices start to fall, instead of the same prices for more powerful PCs.
What will happen is that the "average consumer PC" will do different tasks, not just today's jobs faster. For example, what about replacing a mouse with just your hand? A webcam-like camera watches your hands and fingers. It's multi-touch but without the touch pad. OK, there is one use for 8 of your 16 cores. Maybe the other 8 co
Re: (Score:2)
"start" to fall ?
Listen, when my uncle got his first PC (he was an early adopter) he paid about $3500 for it, which at the time was more than a month's salary. It was a fairly average PC at the time.
Today, the most commonly sold PC for the home-consumer market is a $700 laptop, or something along those lines. But in the time between, salaries have approximately doubled. So, the reality is that a typical home PC today costs 1/10th of what it did when he got his first PC, approximately 20 years ago (a 386,
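That comparison in explicit numbers; the $3500 PC, $700 laptop, and doubled salaries are from the comment, and the monthly salary is an assumption chosen to match "more than a month's salary":

```python
pc_then, salary_then = 3500, 3000           # early-adopter PC vs monthly salary
pc_now, salary_now = 700, 2 * salary_then   # "salaries have approximately doubled"

months_then = pc_then / salary_then         # ~1.2 months of pay
months_now = pc_now / salary_now            # ~0.12 months of pay
print(f"then {months_then:.2f} months, now {months_now:.2f} months, "
      f"ratio ~1/{months_then / months_now:.0f}")    # ~1/10
```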
Re: (Score:2)
I understand the questioning of the need for CPU speed, fine, but I wouldn't dismiss the potential for power consumption gains. Wouldn't smaller feature sizes also allow Intel to make lower power processors? I'd like to see more notebooks that work longer without having to be tied to a wall outlet.
Re: (Score:2)
First of all, going from 45nm to 32nm means that every transistor takes up half the space it used to. The choice then is between the same number of transistors per chip resulting in lower per unit cost or twice as many transistors per chip resulting in better performance. As usual, there will be some of both.
Some people need better single-core performance, some people need more cores, and some people just need lower power consumption. Not everyone needs the same thing, which is why there are different
Why wait for 22nm? (Score:2)
At some point this roller coaster ride has to end. I mean, why not put off development until the NEXT iteration then?
Re: (Score:2)
You know, the thing is, every time we hear the sky is falling, some new tech happens, and life is extended again.
BUT, it does seem that the miracles get fewer and farther between and it seems that they are getting more and more expensive as we go on. Yep, at some point it's all going to end, but at the end, will there be a beginning of something else entirely?
Optical computing? Quantum? Universal Will To Become?
A problem for AMD? (Score:2)
We've seen leapfrog attempts lead to delays before. If this means AMD gets 45nm before Intel gets 32nm, doesn't that give AMD a performance window?
Re:A problem for AMD? (Score:4, Informative)
... AMD has 45nm. [wikipedia.org]
Re:A problem for AMD? (Score:5, Insightful)
If this means AMD gets 45nm before Intel gets 32nm, doesn't that give AMD a performance window?
You mean being only one step behind instead of two?
This isn't a leapfrog attempt (Score:5, Interesting)
For one thing, Intel has always been ahead of, well, pretty much everyone on fab processes. This isn't saying Intel will skip 45nm; they can't do that, as they already are producing 45nm chips in large quantities. They have a 45nm fab online in Arizona cranking out tons of chips. Their Core 2s were the first to go 45nm, though you can still get 65nm variants. All their new Core i7s are 45nm. So they've been doing it for a while, longer than AMD has (AMD is also 45nm now).
The headline isn't great because basically what's happening is Intel isn't doing any kind of leapfrog. They are doing two things:
1) Canceling some planned 45nm products. They'd planned on rolling out more products on their 45nm process. They are now canceling some of those. So they'll be doing fewer 45nm products than originally planned, not none (since they already have some).
2) Redirecting resources to stepping up the timescale on 32nm. They already have all the technology in place for this. Now it is the implementation phase. That isn't easy or fast. They have to retool fabs, or build new ones, work out all the production problems, as well as design chips for this new process. This is already under way; a product like this is in the design phase for years before it actually hits the market. However, they are going to direct more resources to it to try and make it happen faster.
More or less, they are just trying to shorten the life of 45nm. They want to get 32nm out the door quicker. To do that, they are going to scale back new 45nm offerings.
Makes sense. Their reasoning is basically that the economy sucks right now, so people are buying less tech. Thus rolling out new products isn't likely to make them a whole lot of money. Also, it isn't like the products they have are crap or anything; they compete quite well. So, rather than offer incremental upgrades that people probably aren't that interested in unless they are buying new, they'll just wait. They'll try to have 32nm out the door sooner so that when the economy does recover, their offerings are that much stronger.
Overall, probably a good idea. Not many people are buying systems just to upgrade right now, so having something just a bit better isn't a big deal. If someone needs a new system, they'll still buy your stuff; it's still good. Get ready so that when people do want to buy upgrades, you've got killer stuff to offer.
Nice economic conspiracy (Score:2, Insightful)
Actually, they were able to step up some of their fabs faster than expected.
Too big to fail (Score:5, Insightful)
Intel is basically putting a $7 billion bet on a turnaround in the economy for 2010."
And if they lose the bet then they can just ask for a bailout like the financial firms and auto industry did. Because Intel is too big to fail.
Re: (Score:2)
The market cannot allow Intel to fall. No other company in the world can supply x86 processors with the reliability and volume that Intel does. AMD does not have the processor fabs to meet worldwide demand for x86 products. Even if Intel really screws things up, it still has significant market power.
Re: (Score:2)
Yeah. Because if Intel failed, its fabs would dissipate in a puff of smoke.
No they WOULD NOT.
Another company would buy them and hire the people that were working there.
Re: (Score:2)
Hey, I bet that example works for banks and auto plants too!
The 32nm processors use less power. (Score:4, Insightful)
The 32nm processors, I understand, will reduce the power needed even further, making it sensible for data centers to upgrade.
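Illustrative payback math for that upgrade argument; every number below is an assumption for the sake of the example, not an Intel or data-center figure:

```python
watts_saved = 40           # per server, from a lower-power part
servers = 1_000
dollars_per_kwh = 0.10
pue = 2.0                  # a watt saved in the rack also saves cooling overhead

kwh_per_year = watts_saved * servers * 24 * 365 / 1000
print(f"~${kwh_per_year * dollars_per_kwh * pue:,.0f} saved per year")  # ~$70,000
```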
Re:The 32nm processors use less power. (Score:5, Informative)
most people already have computers
Really? Have an eye-opening look here:
http://www.economist.com/research/articlesBySubject/displayStory.cfm?story_id=12758865&subjectID=348909&fsrc=nwl [economist.com]
Computer ownership is really very low worldwide. Even the US has only 76 computers per 100 people. Keep in mind that includes people like myself who, between work and home use, have 4 computers alone.
Some other socking figures:
Italy 36 computers per 100 people
Mexico 13 computers per 100 people
Spain 26 computers per 100 people
Japan 67 computers per 100 people
Russia 12 computers per 100 people
And the billions of people in China and India don't even make the list.
Seems to me that there are a lot more computers Intel could be selling in the future. The market is far from saturated.
Re: (Score:2)
FYI, socking figures are similar to 'punching figures'. They're designed to put you in shock.
Re: (Score:3, Interesting)
Great point. People who bought their machines when the processors were at 65nm won't need to replace them until about 2011. By then, according to Intel's own prediction, we would be in the sub-10nm range.
This is from an article from mid-2008: full article [crn.com]
Intel debuted its 45nm process late last year and has been ramping its Penryn line of 45nm processors steadily throughout this year. The next die shrink milestone will be the 32nm process, set to kick off next year, followed by 14nm a few years after that and then sub-10nm, if all goes according to plan.
Alternatively (Score:4, Funny)
Or at least, if the economy *doesn't* turn around by 2010, the shitstorm will be so bad at that point that they won't care.
Pug
bet (Score:5, Funny)
A 7 billion dollar bet? That's peanuts! Wake me up when someone makes a 1.5 trillion dollar bet on the economy.
Intel's investment strategy (Score:5, Insightful)
NEWSFLASH: Intel has been dumping 10 BILLION dollars a year into R&D since at least 1995. Did not RTFA, but if the blurb is to be taken at face value, the reporter obviously did no real research on the topic.
Re:Intel's investment strategy (Score:5, Insightful)
Intel had a Fourth-Quarter Revenue of $10.7 Billion [intel.com], so it isn't exactly an insignificant amount, but if it were to completely disappear it wouldn't be a catastrophic problem.
Intel plans US Plants to Manufacture 32nm Chips (Score:4, Informative)
Intel announced today that it was investing $7 billion to build new manufacturing facilities in the US for these chips.
The new facilities will be built at existing manufacturing plants in New Mexico, Oregon, and Arizona. Intel is estimating 7,000 new jobs will be created. BizJournals.com [bizjournals.com]
Re:Intel plans US Plants to Manufacture 32nm Chips (Score:5, Interesting)
Yeah, I noticed that this morning when I read about the investment. They closed a bunch of older facilities in Asia, laying off the workers, and are building the new fancy fabs in the US (and creating high-paying jobs in the process).
Of course, the next thing that came to my mind is whether Slashdot would cover that aspect of the story. Sure enough, Slashdot's summary completely disregards that Intel is creating jobs in America. I suspect there are two reasons for this: 1. It hurts Slashdot's agenda if they report about companies insourcing; readers should only know about outsourcing by "the evil corporations". 2. Because Intel is the big bad wolf and we can't report anything good they do.
Re: (Score:2)
Sure, with the US economy going in the toilet, it's going to be affordable to pay US workers (in US dollars).
The simple truth is that companies are currently seeking low power consumption: they're using virtualization and buying servers with lower TDP. 32nm increases yields and reduces power consumption, so it can save everyone some money. These businesses are not going to suddenly stop needing an upgrade path because of the economy, although the refresh cycle is sure to slow.
First CPU with integrated graphics (Score:3, Informative)
That would be the Cyrix MediaGX circa 1997.
Re: (Score:2)
Despite the doomsayers, counting on the economy turning around by 2010 is a pretty safe bet. It's already being demonstrated that the housing bubble burst around September was not nearly as bad as the media/politicians made it out to be.
The housing market near me: four years ago, a 3-bedroom, 2.5-bath house was $600,000. Last September, the same place: $240,000. Last week, the same place: $230,000.
The prices were overvalued four years ago. The only thing is that people who bought four years ago are still in the hole. They still need to pay down as fast as they can before they sell, or they'll still owe after selling. Many people are totally screwed. I kind of wish the housing stimulus/fix bill would give some back to single home owners whose homes dropped in value (or
Re:Safe Bet (Score:5, Insightful)
The prices were overvalued four years ago. The only thing is that people who bought four years ago are still in the hole. They still need to pay down as fast as they can before they sell, or they'll still owe after selling.
What about those of us who made good decisions and didn't buy a house which was tremendously overpriced? Why is it our responsibility to bail out the greedy and the stupid? Enough is enough. Without consequences, this crap will continue forever, in all industries. You'll have to excuse those of us who live within our means and don't buy overpriced crap if we're more than a little pissed at having to carry all the dead weight.
Re:Safe Bet (Score:5, Insightful)
That's the problem with being intelligent: you'll always be in the minority and thus always at the mercy of the tyranny of the masses.
Re:Safe Bet (Score:5, Interesting)
FYI, poor people don't disappear when you stop looking at them.
Having large amounts of poverty in the nation will breed crime, reduce sales, cause layoffs, and generally decrease the quality of life for those of us who planned ahead.
Sometimes it sucks to be one of the responsible ones. If you didn't learn that throughout grade school and college, then I don't know what more to tell you.
Being responsible != being doormat (Score:3, Insightful)
If you didn't learn that throughout grade school and college, then I don't know what more to tell you.
Nothing the R's did will help the situation. It was all just a final golden hand job from the government to the bankers.
Nothing the D's will do will help the situation. It is all just a final golden hand job from the government to the usual dependents.
Between them the currency is fucked.
Europe is no safe bet, neither is Asia.
Arguing over blame is pointless. They are mostly long dead anyhow. Those
Re: (Score:2)
>>Nothing the R's did will help the situation. It was all just a final golden hand job from the government to the bankers.
Yep.
>>Nothing the D's will do will help the situation. It is all just a final golden hand job from the government to the usual dependents.
Yep.
It was depressing enough that I "threw my vote away" this year and voted Libertarian. Even though I don't agree with them on the drug legalization issue (which is half their platform), they seem like the only party in America that actua
Re:Safe Bet (Score:5, Insightful)
The home owners just mail the bank their keys.
In most states the bank has no recourse beyond the value of the house. In the states where they do have recourse, the leftover debt can be discharged in bankruptcy.
Why should we GIVE real estate speculators back their losses? ALL real estate buyers in 2004 were speculators. Anybody who is buying into a market that is 'evaporating up' (jargon for maintaining no inventory with rising prices) is speculating.
Would they have given us a share of their profit if things had turned out differently? (Not even taxes; capital gains are sheltered if you live there.)
They made a bet, they lost. They can already dump most of the loss onto the bank. Screw them. They bid real estate up to insane prices. They are not without fault.
Any fix like you suggest will only make things worse in the long run. Foolish investors should lose money or there is no incentive to invest wisely.
Should we make the Enron investors whole too? Madoff? Netscape? Tulip Bulbs?
This is the real estate buying opportunity of a lifetime.
We shouldn't have bailed out the banks ether.
We shouldn't call a trillion dollars of pork a stimulus. If Obama is correct and stimulus == spending, then we could just print money, buy the cellars of France dry, have a party, and voila, the problem is solved. Not gonna happen: spending has both stimulative and depressive effects. The money has to come from somewhere. Newly printed money's value is extracted from the rest of the money in circulation. In my simplistic example, France's wine industry would see the stimulation while the rest of the US economy would see the depressive effect.
Too bad the vast majority of the leeches stuck on the government tit don't produce anything like the good wine.
Re: (Score:3, Interesting)
"We shouldn't call a Trillion dollars of pork a stimulus"
A whole hell of a lot of the stimulus package is tax cuts. Over 250 billion dollars worth. Another $350 billion is going to education, healthcare (like medicaid), and food stamps. You can't call any of that pork. That money isn't going to special projects in congresspeople's districts. All of it goes to the states, or relieves the tax burden on individuals and employers.
There are also things like highway maintenance, energy investment, and some tel
Re: (Score:2)
Yeah, if the government starts to take all the money from all the people who made money from holding or flipping real estate, then sure, it will have money to give to people who invested badly. But the former is not going to happen. Of course, something even less fair did happen, as you mention: The banks who made terrible investments did get paid off. I don't think that was right, but compounding wrongs won't make it right.
I should mention that I have some sympathies for Scandinavian-style democratic socia
Nope. Bad bet. (Score:4, Insightful)
Despite the doomsayers, counting on the economy turning around by 2010 is a pretty safe bet.
Nope. Very bad bet.
If it were just a housing bubble it would have been a couple years of recession and we'd be coming out of it about then. The people and institutions who wrote bad mortgages and the people who bought houses too high would be hurt or bankrupted, the housing prices would drop to something sane, construction would slow (or stop for a while) until the unsold inventory and foreclosures had been sold off (or destroyed by neglect or arson for insurance) then pick up, and the capital now tied up in housing construction would be moved (again at a reduced price) to other productive uses. We're seeing a bit of that now.
This time they "securitized" the bum mortgages and "bought insurance" - "credit default swaps" - to the tune of MORE than the Gross World Product, in order to get multi-A ratings on the paper backed by baskets of subprime mortgages. When the housing prices started down a bunch of people defaulted all at once. So those who "wrote the insurance" had to dump a whole bunch of commodities on the market (depressing the prices further) to raise funds to "pay off the insurance". Thus when "Fannie Mae and Freddie Mac exploded" there was a lot of collateral damage in other markets. But that also would have sorted itself out after a couple years.
Unfortunately, the governments of the world, especially that of the United States, decided to try to "fix the problem". And now they're replicating EXACTLY the class of mistakes that turned a similar recession into the Great Depression - but more extremely, more rapidly, and without the safety net of the gold standard. The result, IMHO, is that we're probably in for a depression that will make the '30s look mild and short. And hyperinflation seems far more likely than not.
Thus my sigline.
As I see it, too much has been done ALREADY for a proper recovery to get started around 2010. (For starters, we're only about halfway through the underlying housing market collapse: The subprimes are largely crunched. But the teaser rates on a lot of other mortgages are expiring and even the government's billions of unbacked paper can't push the interest rate down far enough to save them - just to stretch out the agony.)
If Intel is betting the farm on a hi-tech recovery in 2010, somebody else will probably own the farm in 2011.
Re: (Score:3, Insightful)
The 90's recession should have been much worse, enough to pull the debt-to-income ratio back into line. It would have sucked; it may have been nearly as bad as the Great Depression. Instead, since then almost every western country has been running its economy on credit cards and home loans, leading to stupefyingly ludicrous, bat-shit insane levels of debt. And when they ran out of rational borrowers, they started lending out money to anyone with a pulse with no credit checks and invented all those stupid way
Re: (Score:2)
This time they "securitized" the bum mortgages and "bought insurance" - "credit default swaps" - to the tune of MORE than the Gross World Product
Gross World Product? Try net world wealth. (How did they calculate that?) Granted, that's notional value on those derivatives, so they can't all pay off; many are conflicting 'bets'. We can let that unwind by simply letting AIG collapse and flipping off all their counterparties. Fools put down million-dollar bets with 'fly-by-night bookies' and got burned. They