Where's My 10 GHz PC?
An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual-processor box -- like it's still 2001...oh please! Dr. Dobb's says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
Asymptotic (Score:4, Interesting)
And where is my Jetson's car! (Score:2, Interesting)
dual cpu systems (Score:4, Interesting)
This time around I also sprung for a hardware RAID card and set up a RAID 10 array. That has helped quite a bit with system responsiveness.
I've also turned off as much eye candy as possible. After a couple of days it's really not missed, and things are much snappier.
Yeah, it would be great if I could run out and get some 10 GHz chips to fry a few eggs on, but I think my dual MP2200s still have a bit of life in them.
I've always wondered (Score:5, Interesting)
Adding Ghz is probably not the best solution (Score:4, Interesting)
Ramping up clock speeds is hitting some serious limitations as far as increasing the work done by a machine is concerned. There are lots of ways to get work done faster. They are just harder to market without some good, popular, and independent benchmarking standards. At some point engine manufacturers realized that increasing the cubic centimeters of displacement in an engine was not the best way to make it faster or more powerful. Now most car reviews include horsepower. Clock speed is analogous to CCs.
Abstract it away... (Score:3, Interesting)
need for speed? (Score:2, Interesting)
Actually this is sort of like competition (Score:3, Interesting)
Re:Engineering within limits brings great results (Score:3, Interesting)
The limits are high enough now to not care. Back in the old days the limits were low enough that it did make a difference...
Not only that, but the skills that used to exist in the old days are disappearing... "don't need to know that stuff"...
Re:dual cpu systems (Score:2, Interesting)
Re:Heat is the problem (Score:5, Interesting)
So, you think that using multiple iterations of an inherently power-hungry technology will somehow solve the power problem? Certainly we could back off clock speeds with multi-processing and reduce heat considerably, but people always want the cutting edge, so the demand to "crank it up" would still be a profitable venture, thus pressuring the price of the lower-end stuff.
Look at page 8 [intel.com]. Processors are approaching the heat density of a nuclear reactor. Silicon is dead. We'll need something else if we want more clock cycles (or perhaps a new computing paradigm... something "non-von Neumann").
GaAs and Relational Calculus (Score:5, Interesting)
Whenever the government "picks winners" rather than letting nature pick winners, the technologists and therefore technology loses.
(Now that Cray is dead, according to the supercomputing FAQ, "The CCC intellectual property was purchased for a mere $250 thousand by Dasu, LLC - a corporation set up and (AFAIK) wholly owned by Mr. Hub Finkelstein, a Texas oilman. He's owned this stuff for five years and hasn't done anything with it.")
Secondly, as I've discussed before, both operating system [slashdot.org] and database [slashdot.org] programming are awaiting the development of relations, most likely via the predicate calculus, as a foundation for software. Both are essentially parallel-processing foundations for software.
This feeds into quantum computing quite nicely as well, as relations are not just inherently parallel, but are parallel in such a way that they precisely model quantum software [boundaryinstitute.org].
Instead of asking about where it is (Score:2, Interesting)
1) DNA/Molecular computers
2) Atomic switches
http://www.physicsweb.org/articles/news
3) Betacomputation (switches made from neutrons and protons that can be switched on/off by adding/removing electrons bound inside the hadronic structure)
This makes for good power supply too http://www.betavoltaic.com/
4) Positron/electron photon exchange
(Yes Virginia, antimatter/matter changes the phase of absorbed photons)
5) Integrated silicon/optic chips
6) Black holes (See Sci Am Dec 2004)
Also, for all of you aspiring scientists out there: do yourself a favor and join the present by reading about nonlinear/nonunitary mechanics.
http://www.i-b-r.org/ir00018.htm
Apple CPUs catching up....? WTF? (Score:4, Interesting)
Who would'a ever thought to see that happen?
Concurrent Applications are not The Answer (Score:5, Interesting)
Remember back when users had to wait in line in front of a terminal to run their punchcards through the mainframe? Back then, human time was cheap and computer time expensive. Nowadays the user's time is paramount.
Multithreaded programming breaks this rule: it is hard to do, because humans just don't think that way very well. Doing it so that an arbitrary program (i.e. not a ray tracer) sees consistent performance gains in a multi-CPU environment is almost PhD-level hard. Making single-threaded software is already a major undertaking, and anyone who thinks that, in general, they should start designing all their programs as fundamentally concurrent is going to fall behind the competition on other fronts (security, features, etc.).
Instead, I believe the only way concurrent programming will play a major role for the majority of software is at the compiler and OS levels: compiler and OS designers are going to have to do their utmost to transform single-threaded software to perform optimally in a multi-CPU environment. These folks will have to take up the slack that slow CPU clock speeds are creating in software performance. Concurrent programming at the application level is only going to play a minor role in this, in my opinion.
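To make the "humans don't think that way" point concrete, here's a minimal sketch (my own toy example, not from the parent post): two threads bumping a shared counter. The unlocked version runs fine most of the time, but its read-modify-write steps can interleave and silently drop updates; only the locked version is actually correct.

```python
# Minimal sketch of why shared-state threading is error-prone.
# Two threads each increment a shared counter n times. Without a
# lock, "counter += 1" is a read-modify-write that can interleave
# with the other thread's, silently losing updates. With a lock,
# the result is always exactly 2 * n.
import threading

def run(use_lock: bool, n: int = 100_000) -> int:
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1  # racy: not atomic, may lose updates

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(use_lock=True))   # always 200000
print(run(use_lock=False))  # may be less than 200000 -- and you won't know when
```

The nasty part is exactly what the parent describes: the buggy version passes most test runs, so the failure only shows up under load, on someone else's machine.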
GaAs??? GaAs is material of the future... (Score:5, Interesting)
Superconductors are the way to go for the highest speeds and most concentrated processing power, due to extremely small power dissipation and extremely high clock frequencies (60 GHz for logic is relatively easy right now). The problem is that after someone invests $3B in a modern semiconductor fab, they do NOT want to build a $30M top-of-the-line superconductor fab to compete with it. IBM would be a good candidate for this, but they got burned on a superconductor computer project back in the '80s and would not touch it with a 10-foot pole now, though both the logic and the fab technology have changed dramatically since then.
Disclosure: on my day job I do design III-V chips, and I used to design superconductor chips up until recently, now trying to push that technology forward is more of a night job for me...
Paul B.
Re:Moore's Law isn't Speed Doubling, it's Transist (Score:2, Interesting)
Photorealistic (or at least much better than the current high-end) rendering in real time; database apps that do a whole lot of number crunching; plenty of large projects that take 20 minutes to compile on a 3.06 P4 -- CPU speed is the bottleneck on all of these.
A 10 GHz CPU would probably bring with it a 2 GHz+ bus and RAM.
You can never have too fast a CPU or GPU, too much RAM or too much HDD space.
A multicore CPU is great, but no substitute for raw speed. It's like comparing a bullet train to a fleet of Ford Escorts. The cars can move the same group of 1000 people, but the train does it so much faster and more efficiently.
Re:Heat is the problem (Score:3, Interesting)
Here comes the hertz gang again :\ (Score:3, Interesting)
1a)486-25SX
1b)486-25DX
2a)PIII - 450
2b)G4 - 450
3a)G3 - 300
3b)Playstation 2 - 300
Moral of the story : there are far, far more important performance measurements than clock frequency. If you think otherwise, you might as well slap a VTEC sticker on your case.
P.S. As others have pointed out, Moore's law has nothing to do with CPU frequency.
Re:Engineering within limits brings great results (Score:5, Interesting)
The trouble is that this assumption is wrong. Computers would in theory be fast enough that you wouldn't have to care about optimization all over the place, but a lot of bad programming doesn't result in just a linear decrease in speed. If I use a linear lookup instead of a hash table, speed goes down quite a bit more than the speed of the CPU increases over time.
Simple example: Gedit, an extremely basic text editor, takes 4-5 seconds to load on a 1 GHz Athlon, while MS-DOS Edit on a 386 started in a fraction of a second. From a feature point of view both do basically the same thing. Gedit certainly has more advanced rendering and a GUI, and isn't a text-mode application like MS-DOS Edit, but shouldn't it be possible, with today's CPUs being quite a bit faster than back then, to have an application with better-than-text-mode rendering that is still at least as fast as back then?
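The hash-table point is easy to demonstrate. A quick illustrative sketch (sizes picked arbitrarily, not from the parent): n membership tests against a list scan every element, roughly n² work in total, which no plausible clock-speed bump buys back; a set (hash table) does the same lookups in roughly constant time each.

```python
# O(n) linear lookup vs O(1) hash lookup, n lookups of each.
# The list version does ~n^2 element comparisons in total; the
# set version does ~n hash probes. Same answers, wildly different cost.
import time

n = 20_000
items = list(range(n))
as_set = set(items)

t0 = time.perf_counter()
hits = sum(1 for i in range(n) if i in items)    # linear scans
t1 = time.perf_counter()
hits2 = sum(1 for i in range(n) if i in as_set)  # hash lookups
t2 = time.perf_counter()

assert hits == hits2 == n  # identical results either way
print(f"list: {t1 - t0:.3f}s  set: {t2 - t1:.3f}s")
```

Exact timings vary by machine, but the gap grows quadratically with n, which is exactly why no amount of GHz rescues the sloppy version.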
Intel is to blame for this absurdity (Score:5, Interesting)
"First, by switching to the Pentium 4 architecture, Intel can drastically boost the clock speed. The old server Xeon topped out at 1.4GHz. The new one debuts at 1.8GHz, 2GHz and 2.2GHz, and will eventually pass 10GHz, she said."
http://news.com.com/2100-1001-843879.html [com.com]
I can't find the exact quote and article, but another Intel exec/rep stated that this goal would be achieved by 2006.
Well, it's 2005, the P4 has topped out at 3.6 GHz and has been discontinued because Intel has determined that the P4 architecture is stretched to the limit.
Bottom line is that we should be expecting a 10 GHz processor soon, because Intel brazenly stated that they would produce one. Whenever they make these statements the AP drools over the story, stock prices jump, and I'm sure investors get excited.
Instead, their next-gen processor is a 2 GHz Pentium M (Dothan). Intel should be ashamed of themselves for lying to the public and should be investigated for inflating their stock value through fictional claims about their processor technology.
Embedded Systems (Score:5, Interesting)
If you crave the challenge of making tight, efficient code, sometimes with very little under you but the bare chip itself, then embedded systems might be the place for you.
cue the grumpy old man voice: "Why back in my day, we didn't have 64-bit multi-core chips with gigabytes of memory to waste, no sir, we had to write in assembly code for 8-bit processors, and WE LOVED IT!"
-paul
Re:Heat is the problem (Score:3, Interesting)
Yeah, the flow of electrons in a wire is extremely slow [amasci.com], but the work is really done by the electric field: as one electron is pushed into the wire, it "pushes" the sea of electrons forward, so that an electron at the other end of the wire is shifted forward. This "shift" propagates at pretty close to c [madsci.org].
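For the curious, the standard back-of-the-envelope check on that claim: drift velocity is v = I / (n·A·q). Plugging in round assumed values (1 A through a 1 mm² copper wire; free-electron density for copper is roughly 8.5×10²⁸ per m³) gives a drift speed well under a millimeter per second, even though the field propagates near c.

```python
# Electron drift velocity v = I / (n * A * q) for a copper wire.
# Values below are round textbook-style assumptions, not measurements.
I = 1.0          # current, amperes
A = 1e-6         # cross-section, m^2 (a 1 mm^2 wire)
n = 8.5e28       # free electrons per m^3 in copper (approximate)
q = 1.602e-19    # electron charge, coulombs

v_drift = I / (n * A * q)  # meters per second
print(f"drift velocity ~ {v_drift * 1000:.3f} mm/s")
```

That works out to a few hundredths of a millimeter per second -- the individual electrons really are crawling.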
Re:Abstract it away... (Score:3, Interesting)
In a word, no. At least not with current languages. There's a reason we don't do this already, after all. Provably correct concurrency is very hard to generate, and almost impossible with pure machine code -- you either end up with deadlocks and race conditions, or very poor performance because you serialize too much stuff, or incorrect results because data is transparently copied instead of shared. Etc. There do exist languages designed to accommodate and encourage both implicit and explicit concurrency, like Erlang, and I think we'll see more of them in the future, but it's not going to happen by simply ignoring the problem.
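For a flavor of what the Erlang-ish style looks like, here's a toy share-nothing sketch in Python (queues standing in for Erlang mailboxes; the real Erlang model is much richer): the worker owns its state outright and communicates only by messages, so there's simply no shared mutable data to race on.

```python
# Share-nothing concurrency sketch: the worker keeps its running
# total as private local state and talks to the outside world only
# through queues, roughly the message-passing style Erlang encourages.
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue):
    total = 0            # private state: no other thread can touch it
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: report result and shut down
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for i in range(1, 101):  # send the numbers 1..100 as messages
    inbox.put(i)
inbox.put(None)
t.join()

result = outbox.get()
print(result)  # 5050: sum of 1..100, no locks anywhere
```

No locks, no races, by construction -- the cost being that every interaction goes through a message, which is exactly the trade-off these languages make explicit.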
Re:Engineering within limits brings great results (Score:1, Interesting)
Actually most programmers value efficiency very highly (if they are actual programmers and not posers). So do the shops.
The issue you are referring to manifests when the client makes unrealistic and unreasonable demands to have software released before it is ready, because an uninformed manager was forced into a poorly researched estimate. All because the RFP had to be out before the other guy's so the competition wouldn't get the job. Proper R&D for the estimate is rarely possible. That estimate is basically a contract.
Economics makes them value efficiency and graceful code far less. The minute you get something to work, in the real world, is usually the final state of the code. In reality you should then start looking at ways to make it work as efficiently as possible and go through a peer review process. The response I always get when I suggest this is, "I have to get this into QA by tomorrow, we don't have time for this luxury". It's always do or die and 2 months behind schedule.
They'd rather get the code out by an unrealistic deadline, half-assed, and then patch it into crap oblivion, than do it right the first time. I see it daily. Trust me, it's the business people, not the programmers and engineers.
Of course, if we did things correctly it would take too long. It isn't poor programming; most code is simply in an alpha state because of time and money constraints. If it does the job as it needs to be done right now, this minute, it's good enough in the mind of the business man. Forget extensibility and efficiency.
Re:A Good Thing? (Score:2, Interesting)
Microsoft's problem is memory usage: every unnecessary byte is processed at least once, and that is wasted CPU time. Of course, reducing memory usage with Huffman crunching won't make anything faster, but the relation is clear: inefficient memory usage is a result of code bloat, non-streamlined data structures, too many protocols/technologies, and so on.
Re:You obviously haven't studied chip design (Score:3, Interesting)
Perhaps that's why I am able to come up with a novel idea? Because nobody told me it's impossible, it just might work. But, of course, I welcome constructive criticism.
> How do you deposit another fresh layer of
> uncorrupted substrate on top of a processed layer?
With this [64.233.161.104] technology it is already possible to do exactly that. It just needs a bigger nozzle.
> Chemical vapor deposition? It's not as easy as it sounds.
Neither was putting the man on the moon. But we did it anyway. Sure there will be engineering challenges here, but I see no theoretical problems with using CVD for this.
> What about thermal expansion/contraction?
Thermal effects in the sphere are no different from the ones in a flat plate. Also, there have been recent advances in painting transistors on flexible substrates, which could help on the surface layers.
> thermal effects on timing
How will they be any different from the ones in a flat CPU? Besides, you need to remember that with the clock in the center, timing is going to be far easier to implement.
> IR drop of a sandwich layer of
> substrate-oxide-metal-oxide-substrate-oxide-metal
Perhaps you could explain this problem to those of us who don't understand the reference?
> How do you analyze process defects on the lower layers?
Just as you analyze process defects on flat CPUs: by testing them. I don't think chip manufacturers actually look at each chip under the microscope to see if something went wrong.
> If you want to do 3D, just make alot of chips and stack them together.
I don't see how that helps with anything. If you have flat chips anyway, why not just spread them out?
Re:Asymptotic (Score:3, Interesting)
And until someone comes up with another must-have reason (a "killer app"), the demand for higher speeds simply isn't there. Somewhere around 200-500 MHz, machines simply got "fast enough" -- I remember the bad old days before then, when everyone I knew got a new machine every couple of years (or even every year), and it actually helped with your everyday word processing, music listening, web surfing, spreadsheets, etc. But the last thing I needed a faster CPU for was DVD playback, and that hasn't been a problem for years.
Seriously, I'm a full-time programmer who does real-time music visualization as a major hobby, I'm enough of a geek to have run Linux exclusively for (literally) over a decade now on my desktop, and even I don't see a reason to upgrade my machine's CPU. For the majority of the public, the ever-faster CPU craze has been replaced by other needs: lower power consumption, wireless, better peripherals/displays, handheld/music devices, etc.
I went from a 4 MHz 8088 -> 20 MHz 386 -> 66 MHz 486 -> 200 MHz PPro. And you know what? I don't remember how fast the machines I've had since then are -- my current one is a 1.3 or 1.4 GHz P4, I honestly don't know -- because that's when I stopped caring. It just doesn't matter any more.
I think 1997 was the last time I bought a machine where I gave much thought at all to CPU speed. I haven't bought a new desktop machine in 4 years, and I don't foresee getting one in the next couple--but I have gotten handhelds, mp3 devices, etc. Indeed, the only reason I bought the last one was that my old one was very noisy, so I built a silent PC.
Re:Hertz don't put you in no drivers seat (Score:3, Interesting)
But they didn't catch on for one simple reason: motherboards are a commodity. The pressure on price is enormous, so the only way you can turn a profit making them is to make a lot of identical motherboards, so you don't spend a lot of money on multiple assembly lines, or on retooling the lines you have between runs. So your cheap motherboards are a one-size-fits-all design -- and that means legacy ports.
Re:Lying??? (Score:3, Interesting)
But, I can assure you (I am part of the industry) that back then, the technology roadmap outlook was drastically different than today. It was impossible back then to understand the massive leakage issues at those speeds in 65nm and beyond since at the time, the warning signs were not unusual (i.e. they were overcome many times before on larger geometries). And believe me, the entire industry was practically blindsided by this. I think Intel was hit hardest simply because they were among the first to get there and were therefore aggressive on its adoption.
To say they were lying, and to hold them to some kind of liability for their confidence in 65nm, would stifle future growth of the entire technology industry.
To single them out among all others who did the same would be unfair.
To even try to assign a dollar amount to this would be absurd. The entire industry took a beating at the same time. How much of Intel's stock plunge can be attributed to the failure of 65nm and frequency scaling promises? Is Intel not free to achieve these performance gains through other means such as core parallelism, memory architecture, higher levels of integration, and i/o architecture? Does this mitigate these dollar amounts?
The only stupid question (Score:2, Interesting)
Could we do a multi-processor system that splits up tasks according to their horsepower needs? The OS sends each task to the processor that can handle it, and no more. Multilevel, trickle-up CPU power. Say, for the sake of argument, set up arbitrary CPU usage levels:
Relegate the slow, stupid stuff (Notepad, some sniffers...) to a lesser processor. Look at your Task Manager processes (or equivalents) for stuff you could be running on an 80286 and shove it to the CPU slums. This level keeps the lights on and controls the heat. It's the oil and the water pump on your Ferrari.
This level runs the OS functions themselves, if it can: file transfer, TCP/IP, simple multimedia like MP3 & CD, virus and spyware detection... It replaces the Pentium ][ in your kid's room running Limewire. It's the stereo on your Ferrari.
The GUI, the heavy multimedia, like video, CD burning, the bloated web rendering... this level makes the UI responsive. It's the suspension on the Ferrari
The Engine of your Ferrari! The monster throughput... The Doom VII, the Celestia: Andromeda, SETI:Alpha 6, Climate Prediction, and of course the rendering of the perfect sig-other on Maya. When yer not using this part, shut it down, despin the fan and listen to the sound of silence.
The offshoot of this is that there can be a Level 5, if one were to plug into a cluster. From there, you can plug into BIGGER clusters until you either reach Blue Gene/X or you ARE part of the cluster.
Why not?
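The trickle-up idea above can be sketched as a toy dispatcher that tags each task with a demand level and routes it to the weakest tier that can still handle it. All the names, levels, and tier labels here are made up purely for illustration:

```python
# Toy "trickle-up" dispatcher: route each task to the cheapest CPU
# tier whose capacity meets the task's demand level (1 = trivial,
# 4 = monster throughput). Names and levels are illustrative only.
TIERS = {1: "286-class", 2: "PII-class", 3: "GUI/media", 4: "engine"}

def dispatch(tasks: dict[str, int]) -> dict[str, str]:
    # For each task, pick the lowest tier with capacity >= demand.
    return {name: TIERS[min(t for t in TIERS if t >= demand)]
            for name, demand in tasks.items()}

assignments = dispatch({
    "notepad": 1,
    "mp3_playback": 2,
    "cd_burning": 3,
    "doom_vii": 4,
})
print(assignments)
```

A real OS scheduler on a heterogeneous system has to handle migration, priority inversion, and power states too, but the routing rule itself really is this simple.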
Re:GaAs??? GaAs is material of the future... (Score:3, Interesting)
I worked at Cray Computer, so I know something about this... The Cray-3 (also GaAs) was working OK when the bankruptcy hit, and the Cray-4 would probably have worked.
The failure of Cray Computer was due to competition and missing market windows, not due to the choice of technology per se. (Admittedly the late deliveries were due to difficulties with getting the processors working, but much of that was due to aggressive circuit-board designs that led to problems with open contacts, and the difficulty of repairing them).
btw, the 10th anniversary of the CCC bankruptcy is coming up on March 24th. My, how the years fly by sometimes.
Re:Hardware resources and software design (Score:3, Interesting)
Quality does not necessarily mean optimised code. For many customers it is more important to get code that works, doesn't crash, and gets there yesterday. And if it's slow they'll either wait for an update or for their next 10 GHz PC!
Efficient code is a part of quality, but how important it is depends greatly on the customer. For video games and email servers, very important; for kernels... well, we'd all rather have a secure kernel that doesn't crash than a dodgy uber-efficient one. At least... I would, anyway.
Like all things it depends on what you want and what the programmer can realistically deliver.