Where's My 10 GHz PC?
An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual-processor box -- like it's still 2001...oh please! Dr. Dobb's says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
Asymptotic (Score:4, Interesting)
Re:Asymptotic (Score:4, Insightful)
Re:Asymptotic (Score:3, Funny)
Re:Asymptotic (Score:5, Funny)
Re:Asymptotic (Score:5, Funny)
Re:Asymptotic (Score:4, Funny)
Re:Asymptotic (Score:4, Funny)
Probably because it has nothing to do with Communism, old people, Beowulf clusters or setting up bombs.
Re:Asymptotic (Score:5, Funny)
Re:Asymptotic (Score:3, Funny)
That humor isn't so far off (Score:3, Insightful)
I'd like to note that the average 3 GHz PC can do MORE than the equivalent of a 5 MHz 8086 scaled to 10 GHz. Don't forget that it's not just your CPU doing math nowadays; there's also that fancy $400 supercomputer-rivaling video card you've got.
Re:Asymptotic (Score:5, Funny)
Re:Asymptotic (Score:5, Funny)
Re:Asymptotic (Score:4, Funny)
Re:Asymptotic (Score:5, Funny)
But then... (Score:4, Funny)
For God's sake, please stop the business-speak!
But then how are we supposed to leverage our synergies going forward to create a win-win situation? You are generating negative ROI in this incumbent conversation, and have become a cromulent addition to the team. You will be capsized^W rightsized immediately.
Re:Asymptotic (Score:4, Funny)
Re:Asymptotic (Score:3, Funny)
Re:Asymptotic (Score:3, Interesting)
And until someone comes up with another must-have reason (a "killer app"), the demand for higher speeds simply isn't there. Somewhere aroun
Re:Asymptotic (Score:4, Informative)
Hard drives, however? Some of the areal densities being achieved in R&D labs are significantly higher than what we have now and will allow for plenty of capacity growth if they can be mass-produced cheaply enough. Sure, we're approaching a point where it's not going to be viable to go any further, but we're not going to arrive there for a while yet. There is also the option of making the platters sit closer together so you can fit more of them into a drive, of course. If you really want or need >1TB on a single spindle, I think you'll need to wait just a few more years.
Re:Asymptotic (Score:5, Insightful)
Remember when 9600 baud was close to the limit of copper? Then 33.6k. Then they changed how the pair was used and made 128K ISDN. Then they changed it again, and we're getting 7-10 Mbps DSL... sometimes even faster, depending on the line.
I find it hard to say that we're close to the limits of any technology in the computer/telecom field. Someone always seems to find a new way around it.
Re:Asymptotic (Score:3, Informative)
Perhaps not, but things are getting really dicey WRT silicon processes. The latest process shrink to 90nm really hurt, and required bunches of tricks to make it work. Specifically, thermal dissipation is a big problem: as you shrink chips, they get hotter and require more idle power to make them work. This increases the total thermal power you've got
Re:Asymptotic (Score:3, Insightful)
My other reasons are a little more subjective, but are largely to do with the fact that both
Re:Asymptotic (Score:5, Insightful)
The lack of breakthrough will be due to something entirely different.
So far we have been exploiting the fruits of fundamental materials science, physics and chemistry research done in the '60s (if not earlier), the '70s, and to a small extent the '80s. There has been nothing fundamentally new done in the '90s. A lot of nice engineering - yes. A lot of clever manufacturing techniques, silicon-on-insulator being a prime example - yes. But nothing as far as the underlying science is concerned.
This is not just the semiconductor industry. The situation is the same across the board. The charitable foundations and the state, which used to be the prime sources of fundamental research funding, now require a project plan and a date when the supposed product will deliver a result (thinly disguised words for profit). They also do not invest in projects longer than 3 years.
As a result no one looks at things that may bring a breakthrough, and there shall be no breakthroughs until this situation changes.
Re:Asymptotic (Score:4, Insightful)
I might also throw in the possibility that, since the end of the Cold War, there has been very little incentive for governments, etc., to back fundamental research that might (a decade later) lead to radically new technologies. Governments like the status quo; they like the future to be predictable. Fundamental research (except perhaps in really esoteric areas like cosmology, or areas with practical benefits for them like medicine) scares the willies out of the people in power -- it might upset their apple cart.
Re:Asymptotic (Score:4, Insightful)
The government pumped over a half billion a year into the Human Genome Project, and spent $1.6 billion on nanotechnology last year. The government is still willing to spend money on basic research, but I doubt they are willing to create a whole new agency such as NASA. They would rather have private companies do the work (even if federally funded) than create a new class of federal employees.
I also think you are assuming malice on the part of the government, when instead you should be assuming stupidity. And, since it is a democracy, you don't have to look far to find the root of that stupidity.
Re:Asymptotic (Score:5, Informative)
That was never the limit of copper. It was the limit of voiceband phone lines, which have artificially constrained bandwidth. Since voiceband is now transmitted digitally at 64 kbps, that's the hard theoretical limit, and 56K analog modems are already asymptotically close to that.
If you hook different equipment to the phone wires without the self-imposed bandwidth filters, then it's easy to get higher bandwidth. Ethernet and its predecessors have been pushing megabits or more over twisted pair for decades.
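For the curious, the relevant math is Shannon's channel capacity, C = B log2(1 + S/N). A quick sketch with typical textbook voiceband figures (the 3100 Hz bandwidth and 35 dB SNR below are assumed illustrative values, not measurements):

    // Shannon capacity of a voiceband phone line: C = B * log2(1 + S/N).
    // Assumed figures: ~3100 Hz usable bandwidth, ~35 dB SNR (typical
    // textbook values for an analog local loop).
    #include <cmath>
    #include <cstdio>

    int main() {
        double bandwidth_hz = 3100.0;  // ~300-3400 Hz voiceband
        double snr_db       = 35.0;
        double snr          = std::pow(10.0, snr_db / 10.0);
        double capacity     = bandwidth_hz * std::log2(1.0 + snr);
        std::printf("Shannon limit: ~%.0f bps\n", capacity);  // ~36 kbps
        return 0;
    }

Which is why 33.6k over a true analog path was already scraping the ceiling, and 56k only works because one end of the call is digital.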
Actually, 56k is the hard limit (Score:5, Informative)
That's also why IDSL is 144k. The total bandwidth of an ISDN line is 144k, but 16k is used for circuit-switching data. DSL is point-to-point, so that's unnecessary and the D channel's bandwidth can be used for data.
So 56k is as good as it will ever get for single analogue modems. I suppose, in theory, this could change in the future, but I find that rather unlikely given that any new technology is likely to be digital end to end.
Re:Asymptotic (Score:3, Informative)
Just to expand a bit on this. Not much - I'm going to grossly oversimplify this. Each "baud" is merely a change in signal. However, it is an analog change, not a digital one. These signals do not need to be either "0" or "1"; they can be "2", "3", "4", etc. (there is a limit here, too, I'm sure). 33.6k is merely 3.5 times 9.6k, so we have amplitudes of 0 through 3 (4 discrete values, one of every two signals has an extra parity bit). Using 6 amplitudes (0-5), we get 57.6k, or, minus the parity, 56k.
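The standard way to state that relationship (as opposed to the parity-based accounting above) is bit rate = symbol rate x log2(levels). A minimal sketch; 3429 is a real V.34 symbol rate, but the level counts are illustrative, not actual modem constellations:

    // Bit rate = baud (symbol) rate * bits per symbol, where a symbol
    // with N distinguishable levels carries log2(N) bits.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double baud = 3429.0;  // a real V.34 symbol rate
        const int levels_list[] = {2, 4, 16, 64, 256};  // illustrative only
        for (int levels : levels_list) {
            double bps = baud * std::log2(static_cast<double>(levels));
            std::printf("%3d levels -> %6.0f bps\n", levels, bps);
        }
        return 0;
    }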
GaAs??? GaAs is material of the future... (Score:5, Interesting)
Superconductors are the way to go for the highest speeds/most concentrated processing power, due to extremely small power dissipation and extremely high clock frequencies (60 GHz for logic is relatively easy right now), but the problem is that after someone invests $3B in a modern semiconductor fab, they do NOT want to build a $30M top-of-the-line superconductor fab to compete with it. IBM would be a good candidate for this, but they got burned on a superconductor computer project back in the 80s and would not touch it with a 10-foot pole now, though both the logic and the fab have changed dramatically since then.
Disclosure: on my day job I do design III-V chips, and I used to design superconductor chips up until recently, now trying to push that technology forward is more of a night job for me...
Paul B.
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
I'd think the more likely reasons would have to do, for starters, with consumers not wanting or being able to afford a computer
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
I haven't been in the superconductor field for ten years now... what's the technology being used for the switches/logic gates?
As for GaAs, it's alive and well in the world of RF (analog) amplifiers going up to 100 GHz - I think the current technology uses a 6" wafer. (see, for example, WIN Semiconductor [winfoundry.com])
Re:GaAs??? GaAs is material of the future... (Score:3, Informative)
Hmm, I am wondering what kind of logic you were using 10 years ago!
Yes, it is SFQ/RSFQ (Single Flux Quantum) logic, counting individual magnetic flux quanta, but no, it has nothing to do with now over-
Re:GaAs??? GaAs is material of the future... (Score:4, Informative)
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
Re:Asymptotic (Score:3, Funny)
Heh, reminds me of those reports of possible antigravity effects with spinning superconductor magnets. Do you suppose if you manage to write the right bit pattern to every sector on the drive you could get it to lift off?
Re:Asymptotic (Score:3, Insightful)
Also, the linear speed might be too high to read without interleaving (which pretty much negates the advantage of the higher speed)
Some quick calculations:
Assuming that a 3.5" drive has 2.75" platters, the circumference would be 8.64", giving an edge speed of 129,590 in/min at 15,000 RPM, which equals 122.7 MPH.
If we assume
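A quick sanity check of the arithmetic above (same assumed 2.75" platter diameter and 15,000 RPM):

    // Rim speed of a 2.75"-diameter platter spinning at 15,000 RPM.
    #include <cstdio>

    int main() {
        const double pi          = 3.14159265358979;
        const double diameter_in = 2.75;
        const double rpm         = 15000.0;
        double circumference_in  = pi * diameter_in;        // ~8.64 in
        double in_per_min        = circumference_in * rpm;  // ~129,590 in/min
        double mph               = in_per_min * 60.0 / 63360.0;  // 63,360 in/mile
        std::printf("%.0f in/min = %.1f MPH\n", in_per_min, mph);
        return 0;
    }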
Re:Hertz don't put you in no drivers seat (Score:3, Interesting)
I think designers had the same idea a couple of years ago: you saw a lot of "legacy-free" systems with an emphasis on cheapness and tiny form factor.
But they didn't catch on for one simple reason: motherboards are a commodity. The pressure on price is enormous, so
Heat is the problem (Score:4, Insightful)
Re:Heat is the problem (Score:5, Insightful)
Re:Heat is the problem (Score:3, Informative)
1) 75% idle time is nonsense. Where did you get that number? With SPECfp on an Athlon or P4 it's more like 20-30% idle. Just look at how SPEC scores scale with frequency to figure out the memory-idle time.
2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrons flowing along a wire can already achieve close to 80% of the speed of light. To reduce latency, what
Re:Heat is the problem (Score:3, Interesting)
Re:Heat is the problem (Score:3, Interesting)
Yeah, the flow of electrons in a wire is extremely slow [amasci.com], but the work is really done by the electric field: as one electron is pushed into the wire, it "pushes" the sea of electrons forward, so that an electron at the other end of the wire is shifted forward. This "shift" propagates pretty close to c [madsci.org]. I
Re:Heat is the problem (Score:5, Interesting)
So, you think that using multiple iterations of an inherently power-hungry technology will somehow solve the power problem? Certainly we could back off clock speeds with multi-processing and reduce heat considerably, but people always want the cutting edge, so the demand to "crank it up" would still be a profitable venture, thus pressuring the price of the lower-end stuff.
Look at page 8 [intel.com]. Processors are approaching the heat density of a nuclear reactor. Silicon is dead. We'll need something else if we want more clock cycles (or perhaps a new computing paradigm... something "non-von Neumann").
Don't complain. (Score:5, Funny)
Well Moore's Law is not a law... (Score:4, Informative)
Re:Well Moore's Law is not a law... (Score:5, Informative)
From webopedia [webopedia.com]
(môrz lâ) (n.) The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future.
Leave Moore's law out of this, please (Score:5, Informative)
Re:Well Moore's Law is not a law... (Score:5, Insightful)
But now even your cheapest PC covers most users' needs. So the CPU designers will continue to innovate, but they will find that people will be able to keep their PCs and other electronics longer. Fundamentally, the CPU business will start losing steam and slow down. When people don't need to get new machines, they won't. The perceived premium for the high-end products is getting less and less.
Engineering within limits brings great results (Score:5, Insightful)
single bit of speed and capability out of the machines they had. When computer engineers, faced with limits, still made magic happen.
I hope this ushers that habit back into the profession. We have a lot of great technology right now; let's find a better way to use it and make it more ubiquitous.
Re:Engineering within limits brings great results (Score:3, Funny)
Viva la VM-Ware
Re:Engineering within limits brings great results (Score:3, Interesting)
The limits are high enough now not to care. Back in the old days the limits were low enough that it did make a difference...
Not only that, but the skills that used to exist in the older days are disappearing... "don't need to know that stuff"...
Re:Engineering within limits brings great results (Score:5, Interesting)
The trouble is that this assumption is wrong. The computers would in theory be fast enough that you needn't care about optimization all over the place; the trouble is that a lot of bad programming doesn't result in just a linear decrease in speed. If I use a linear lookup instead of a hash table, speed goes down quite a bit more than CPU speed goes up over time.
Simple example: Gedit, an extremely basic text editor, takes 4-5 seconds to load on a 1 GHz Athlon, while MS-DOS edit started in a fraction of a second on a 386. From a feature point of view both do basically the same thing. Gedit for sure has more advanced rendering and a GUI and isn't a text-mode application like MS-DOS edit, but shouldn't it be possible, with today's much faster CPUs, to have an application that renders better than text mode yet still starts at least as fast as back then?
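To make that point concrete, here's a sketch of why bad asymptotics eat hardware gains (the sizes and container choices are illustrative):

    // Linear scan is O(n) per lookup; a hash table is ~O(1) per lookup.
    // With n = 1,000,000 keys and 1,000 lookups, the vector scan does on
    // the order of 10^9 comparisons while the hash map does ~10^3 probes.
    // No plausible clock-speed growth keeps up with that ratio as n grows.
    #include <algorithm>
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<int> vec;
        std::unordered_map<int, int> map;
        for (int i = 0; i < n; ++i) { vec.push_back(i); map[i] = i; }

        long hits = 0;
        for (int q = 0; q < 1000; ++q) {
            int key = (q * 997) % n;
            // O(n) per query:
            if (std::find(vec.begin(), vec.end(), key) != vec.end()) ++hits;
            // ~O(1) per query:
            if (map.count(key)) ++hits;
        }
        std::printf("hits: %ld\n", hits);
        return 0;
    }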
Embedded Systems (Score:5, Interesting)
If you crave the challenge of making tight, efficient code, sometimes with very little under you but the bare chip itself, then embedded systems might be the place for you.
cue the grumpy old man voice: "Why back in my day, we didn't have 64-bit multi-core chips with gigabytes of memory to waste, no sir, we had to write in assembly code for 8-bit processors, and WE LOVED IT!"
-paul
Re:Engineering within limits brings great results (Score:5, Informative)
It seems that we need to review
The Story of Mel.
I'll post it here from several places,
So that the good people of Slashdot
(and the other people of Slashdot)
Don't wipe out a single server (yeah, right!)
http://www.cs.utah.edu/~elb/folklore/mel.html [utah.edu]
http://www.wizzy.com/andyr/Mel.html [wizzy.com]
http://www.science.uva.nl/~mes/jargon/t/thestoryo
http://www.outpost9.com/reference/jargon/jargon_4
and, of course, many other places.
Re:Engineering within limits brings great results (Score:5, Insightful)
Please, captain... (Score:3, Funny)
Least of your worries (Score:5, Funny)
Two birds, one stone (Score:5, Funny)
Judging from these pictures of the Intel retail boxed heatsink [impress.co.jp] for the Pentium 4 560J (3.6 GHz), by the time we get 10 GHz PCs, the hovercar problem will take care of itself.
Hardware resources and software design (Score:3, Insightful)
Re:Hardware resources and software design (Score:3)
I would also observe that programming can be a lot of fun.
Re:Hardware resources and software design (Score:4, Insightful)
What about knowing how to use the libraries that have these functions built in, such as the STL? You might not be 100% as efficient with the libraries, but you can be sure that those libraries are tested and optimized; if you write these functions yourself, they might be buggy and will most likely be slower than what comes with the compiler.
Re:Hardware resources and software design (Score:4, Insightful)
Hogwash! Write first, optimize later... or in the real world: write first, optimize if the customer complains. Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library, written by someone who studied sorting algorithms? Close to zero.
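For instance, a sketch of just leaning on the library (std::sort is typically an introsort, written and tuned by people who do study this stuff; the Order struct below is only an example):

    // Prefer the tested, tuned library sort over a hand-written one.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Order { int id; double amount; };

    int main() {
        std::vector<Order> orders = {{3, 9.99}, {1, 25.00}, {2, 5.50}};
        // One line, O(n log n), already debugged by someone else:
        std::sort(orders.begin(), orders.end(),
                  [](const Order& a, const Order& b) { return a.amount < b.amount; });
        for (const Order& o : orders)
            std::printf("#%d: %.2f\n", o.id, o.amount);
        return 0;
    }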
Re:Hardware resources and software design (Score:4, Insightful)
Maybe so, but it can (and should) be done in specific cases. For example, I maintain a library of binary tree functions, and I do use them frequently. They are well tested and perform beautifully. However, a project I completed recently required a large amount of data to be traversed in a specific manner, so we designed and built our own BTA--specifically optimized for the task.
As you know, poor design decisions will bubble up through the code and bite you in the end... and your project will suffer for it.
Re:Hardware resources and software design (Score:4, Informative)
Suppose you need that first sale of your system to a customer, and when they demo your software, it's so slow that they dismiss it and buy the competitor's product. You don't get a second chance. This actually happened at a company I know of. The company pretty much went tits up because the architect neglected performance.
Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms?
I don't necessarily need to write the sort algorithm, but I do need to be concerned with the effect of the various algorithms on my system and select the correct one accordingly.
Again, the company that failed used a standard library for some functionality in the product instead of rolling their own, and this had disastrous results. After the customer complained about performance, they found that they'd need to completely redesign a significant portion of the product to correct the problem. It wasn't a two or three day fix; it would have taken 1-2 months. Try eating that cost when you're a small company.
A Good Thing? (Score:5, Insightful)
Consider:
We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at it until it runs acceptably (Microsoft, it's you I'm looking at!)
Your (and your business's) desktop machine might _not_ become obsolete in no more than 2 years, and might continue in useful service as something more sensible than a whole PC doing the job of a router...
Processor designers might spend more time (I know they already spend some) on innovating, rather than on solving the problems that come with just ramping up clock speeds.
Cooling/Quietening technology might have a snowball's chance in hell of catching up with heat output?
(and the wild dreaming one)
Games writers might remember about gameplay, rather than better coloured lighting...
Re:A Good Thing? (Score:5, Insightful)
These both relate to a trend in the market that I believe we're seeing. Consumers are finding that their "old" computers from 2 years ago are still doing their jobs. When I have a 2 GHz Dell that I use for web surfing, word processing, and e-mail, there's no benefit to upgrading to the newest 3.4 GHz Dell. Though there's a hefty speed bump in there, most users will never know the difference.
Therefore, developers/manufacturers are being forced to focus on things like usability and features. They're making their products smaller and more efficient, easier to use, and making them fit transparently into the user's life better. They're focusing on the whole "convergence" idea.
Instead of people spending money on RAM upgrades, the money is going to smaller/lighter/better digital cameras, iPods, and home theater technology. In short, instead of seeing the same box being rolled out every year with better stats, we're seeing new boxes coming out every year with pretty much the same stats but better design -- boxes that are actually more useful than last year's model, and not just faster.
I, for one, hope the trend continues.
Thanks to AMD, no (Score:3, Insightful)
Dude, that is what Intel was doing until AMD came along and forced them to get into this "keeping up with the Joneses" routine.
I can't decide whether to put a smiley face on this or not. I was being sarcastic, but for all we know it might be partially true!
Re:A Good Thing? (Score:3, Insightful)
>> We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at it until it runs acceptably (Microsoft, it's you I'm looking at!)
Efficient coding is only useful if there is a return on your investment for efficiency. Exponentially increasing hardware capability over time at the same cost point makes this tradeoff obvious. The article is saying the hardware capability will still increase, but the programme
dual cpu systems (Score:4, Interesting)
This time around I also sprung for a hardware RAID card and set up a RAID 10 array. That has helped quite a bit with system responsiveness.
I've also turned off as much eye candy as possible. After a couple of days it's really not missed, and things are much snappier.
Yeah, it would be great if I could run out and get some 10 GHz chips to fry a few eggs on, but I think my dual MP2200's still have a bit of life in them.
In the backseat of my... (Score:3, Funny)
Where else would it be?
I've always wondered (Score:5, Interesting)
Re:I've always wondered (Score:5, Informative)
This is why your CPU runs at a faster speed than your L2 cache (which is bigger), which runs at a faster speed than your main memory (which is bigger), which runs at a faster speed than memory in the adjacent NUMA-node (which is bigger), which runs faster than the network (which is bigger),...
Note that I'm talking about latency/clock-rate here; you can get arbitrarily high bandwidth in a big system, but there are times when you have to have low latency and there's no substitute for smallness then; light just isn't that fast!
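The numbers, for anyone who wants them (vacuum c; real on-chip signals are slower still):

    // How far light travels, in vacuum, during one clock cycle.
    #include <cstdio>

    int main() {
        const double c = 2.998e8;  // m/s
        const double ghz[] = {1.0, 2.0, 3.0, 10.0};
        for (double f : ghz) {
            double cm = c / (f * 1e9) * 100.0;
            std::printf("%5.1f GHz -> %5.2f cm per cycle\n", f, cm);
        }
        return 0;
    }

At 10 GHz a signal gets about 3 cm per tick under the best possible conditions, which is why "smallness" wins.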
Re:You obviously haven't studied chip design (Score:3, Interesting)
Perhaps that's why I am able to come up with a novel idea? Because nobody told me it's impossible, it just might work. But, of course, I welcome constructive criticism.
> How do you deposit another fresh layer of
> uncorrupted substrate on top of a processed layer?
With this [64.233.161.104] technology it is already possible to do exactly that. It just needs a bigger nozzle.
> Chemical vapor deposition? It's not as easy as it sounds.
Neither was putting the man on the
Re:I've always wondered (Score:5, Informative)
Another problem, of course, is heat: if your 1 cm^2 CPU outputs 100 W of heat, a 10 cm^2 CPU at the same power density is going to dump 1000 W. That's a hell of a lot of heat.
A third problem is reliability. Yields are bad enough with the current core sizes, tripling the core sizes will drop yield even further.
And a fourth problem is what exactly to *do* with the extra space.
Re:I've always wondered (Score:4, Informative)
Light speed is a big issue, but so are stray capacitance and inductance. A capacitor tends to short out a high-frequency signal, and it takes very little capacitance to look like a dead short to a 10 GHz signal. Similarly, the stray inductance of a straight piece of wire has a high reactance at 10 GHz. That's why they run the processor at high speed internally, but have to slow down the signal before sending it out to the real world. If they sent it out over an optical fiber, things would work much better.
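Rough numbers using the standard reactance formulas; the 1 pF and 10 nH stray values below are just illustrative guesses, not measurements of any real package:

    // Xc = 1/(2*pi*f*C), Xl = 2*pi*f*L: why tiny strays dominate at 10 GHz.
    #include <cstdio>

    int main() {
        const double pi = 3.14159265358979;
        const double f  = 10e9;   // 10 GHz
        const double C  = 1e-12;  // 1 pF of stray capacitance (illustrative)
        const double L  = 10e-9;  // 10 nH of stray inductance (illustrative)
        double xc = 1.0 / (2.0 * pi * f * C);  // ~15.9 ohms: nearly a short
        double xl = 2.0 * pi * f * L;          // ~628 ohms: a real roadblock
        std::printf("Xc = %.1f ohm, Xl = %.1f ohm\n", xc, xl);
        return 0;
    }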
And I don't even know if electricity travels at true lightspeed or at something below that.
Under ideal conditions, electric signals can travel at light speed. In real circuits, it is more like
--Tacky the BSEE
Re:Should always specify North or South. (Score:4, Insightful)
That's not true at all. At a mere 2 GHz, light can only travel 15 cm (6 in) through free space in one cycle -- hardly a long distance. Add in modulation and switching delays, and you really can't ignore board-level latency even with optical interconnects. On the other hand, even on-chip communication takes multiple clock cycles these days, so maybe it wouldn't be that much worse..?
Adding Ghz is probably not the best solution (Score:4, Interesting)
Ramping up clock speeds is hitting some serious limitations as far as increasing the work done by a machine is concerned. There are lots of ways to get work done faster. They are just harder to market without some good, popular, and independent benchmarking standards. At some point engine manufacturers realized that increasing the cubic centimeters of displacement in an engine was not the best way to make it faster or more powerful. Now most car reviews include horsepower. Clock speed is analogous to CCs.
Get over it (Score:3, Insightful)
Re:Get over it (Score:4, Insightful)
My Athlon64 3200, which isn't top-of-the-line but is pretty close, still takes quite a bit of time to convert a DVD to DivX. It takes a few minutes (because I/O needs to get faster) to copy large volumes of files. Photoshop filters on huge, detailed files can take a few minutes to run. Machines only slightly slower choke on playback of HDTV; I can't imagine how long it takes to encode.
When I can do all those things instantly, do accurate global weather predictions in realtime, and have my true-to-life recreation of the Voyager doctor realize his sentience, THEN computers will be fast enough. Until the next killer app comes along, of course.
Abstract it away... (Score:3, Interesting)
Re:Abstract it away... (Score:3, Interesting)
In a word, no. At least not with current languages. There's a reason we don't do this already, after all. Provably correct concurrency is very hard to generate, and almost impossible with pure machine code - you either end up with deadlocks and race conditions, or very poor performance because you serialize too much stuff. Or incorrect results because data is transparently copied instead of shared.
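A minimal sketch of the hazard (std::thread/std::mutex here, but any threading API shows the same thing):

    // Two threads increment a shared counter. Without the mutex the result
    // is nondeterministic (a classic data race); with it, the program is
    // correct but the increments are serialized -- exactly the
    // correctness-vs-throughput bind described above.
    #include <cstdio>
    #include <mutex>
    #include <thread>

    int counter = 0;
    std::mutex m;

    void work() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // remove this: race
            ++counter;
        }
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join(); t2.join();
        std::printf("counter = %d (expected 200000)\n", counter);
        return 0;
    }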
bring on the diamond wafers (Score:3, Informative)
The main problem: our largest producer (Intel) said they would not stop utilizing silicon until they made more money from it... We know that the industry likes to stagger upgrades. Instead of giving us the latest and greatest, they give us everything in between in nice "slow" steps so we spend more money. Personally, I wouldn't mind seeing jumps of 1 GHz at a time: this year 2.0 GHz, next year 3.0, the following year 4.0, etc., and then eventually increasing the step to 5 GHz at a time.
Actually this is sort of like competition (Score:3, Interesting)
where is it? (Score:3, Funny)
Santa was unable to deliver your 10 GHz system this year for the following reasons:
1) Santa's Flying Car has not arrived
2) Santa could not use his sleigh because it failed the new FCC safety requirements for suborbital ships (something about flaming reindeer poo falling from the sky).
3) The OS for the new 10 GHz computer is Duke Nukem Forever, which isn't currently available - maybe next year, or next decade.
Yeah (Score:3, Funny)
Your 10 GHz is waiting.. (Score:3, Funny)
Free shipping if you act in the next 24 hours..
But wait.. there's more..
Moore's Law isn't Speed Doubling, it's Transistors (Score:4, Insightful)
Intel has just caved on the speed doubling in particular, by knocking the clock speed off their product designations, mainly because the Pentium M chips were running significantly faster than the same-speed P4's. AMD's Athlons have been 'fudging' their numbers by having the product number match not their clock speed, but that of the roughly equivalent P4 chip.
Meanwhile, cache sizes are up, instruction pipes are up, hyperthreading has been here a while, multi-core chips are coming down the pike... we're still getting speed gains, just not in raw clocks.
At the same time, the Amiga philosophy of offloading work to other processors has proven true, with more transistors on high-end graphics processors than on the CPUs!
I hate to say it, but what do you think you need 10 GHz for anyway? Unless you've got a REALLY fat pipe, there's a limit on how much pr0n you can process.
The high-end machines do make good foot-warmers in cold climes.
What I need 10 GHz for (Score:4, Informative)
Authoring a DVD in less than 4 hours from the DV-AVI source?
My own CGI production in my lifetime?
GaAs and Relational Calculus (Score:5, Interesting)
Whenever the government "picks winners" rather than letting nature pick winners, the technologists, and therefore the technology, lose.
(Now that Cray is dead, according to the supercomputing FAQ, "The CCC intellectual property was purchased for a mere $250 thousand by Dasu, LLC - a corporation set up and (AFAIK) wholly owned by Mr. Hub Finkelstein, a Texas oilman. He's owned this stuff for five years and hasn't done anything with it.")
Secondly, as I've discussed before, both operating system [slashdot.org] and database [slashdot.org] programming are awaiting the development of relations, most likely via the predicate calculus, as a foundation for software. Both are essentially parallel-processing foundations for software.
This feeds into quantum computing quite nicely as well, as relations are not just inherently parallel, but are parallel in such a way that they precisely model quantum software [boundaryinstitute.org].
Concurrency ... again ... (Score:3, Insightful)
...as we've been saying for, oh, at least the last 20 years, which is about the time I was writing up my Ph.D. thesis on concurrent languages and hardware.
As far as I can see (being slightly out of the language/computer design area these days), concurrent machines and languages aren't taking off for the same reasons they didn't take off in the 1980s:
There's more than a handful of generalisations there, but in short: Moore's Law means that nobody is going to buy a highly concurrent computer when consumer PCs are still getting faster, and the people who really need high parallelism (modellers and the like) have their own special-purpose toys to work with.
Apple CPUs catching up....? WTF? (Score:4, Interesting)
Who would'a ever thought to see that happen?
Concurrent Applications are not The Answer (Score:5, Interesting)
Remember back when users had to wait in line in front of a terminal to run their punchcards through the mainframe? Back then, human time was cheap and computer time expensive. Nowadays the user's time is paramount.
Multithreaded programming breaks this law: it is hard to do. Humans just don't think that way very well. To do it in a way that an arbitrary program (i.e. not a ray tracer) sees consistent performance gains in a multi-CPU environment is almost PhD-level hard. Making single-threaded software work is already a major undertaking, and anyone thinking that, in general, they should start designing all their programs as fundamentally concurrent is going to fall behind the competition on other factors (security, features, etc.).
Instead, I believe the only way concurrent programming is going to play a major role for the majority of software is at the compiler and OS levels: the OS and compiler designers are going to have to do their utmost to transform single-threaded software to perform optimally in a multi-CPU environment. These folks will have to take up the slack that stalled CPU clock speeds are creating. Concurrent programming at the application level will play only a minor role in this, in my opinion.
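One existing, modest version of this is OpenMP: the programmer writes an ordinary serial loop, and a pragma lets the compiler and runtime spread the iterations across CPUs. A sketch (build with g++ -fopenmp; without that flag the pragma is harmlessly ignored):

    // The loop is written single-threaded; the pragma asks the compiler
    // and runtime to split iterations across CPUs. Build: g++ -fopenmp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0), out(n);

        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            out[i] = a[i] * b[i] + 0.5;

        std::printf("out[0] = %.1f\n", out[0]);  // 2.5
        return 0;
    }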
Longhorn Screwed? (Score:4, Informative)
Another Flawed Law. (Score:3, Insightful)
"Andy giveth, and Bill taketh away."
That's only half right, because you don't have to let Bill take away. KDE3 runs well on a 233 MHz PII with 64 MB of RAM, almost a whole order of magnitude less hardware than it takes to make XP happy. The picture is more drastic when you consider the virus load most XP setups must endure. You need a 2 GHz processor just to keep running while your computer serves out spam and kiddie porn.
The changes Dr. Dobb's so wants are already happening in free software.
Here comes the hertz gang again :\ (Score:3, Interesting)
1a)486-25SX
1b)486-25DX
2a)PIII - 450
2b)G4 - 450
3a)G3 - 300
3b)Playstation 2 - 300
Moral of the story : there are far, far more important performance measurements than clock frequency. If you think otherwise, you might as well slap a VTEC sticker on your case.
P.S. As others have pointed out, Moore's law has nothing to do with CPU frequency.
Intel is to blame for this absurdity (Score:5, Interesting)
"First, by switching to the Pentium 4 architecture, Intel can drastically boost the clock speed. The old server Xeon topped out at 1.4GHz. The new one debuts at 1.8GHz, 2GHz and 2.2GHz, and will eventually pass 10GHz, she said."
http://news.com.com/2100-1001-843879.html [com.com]
I can't find the exact quote and article, but another Intel exec/rep stated that this goal would be achieved by 2006.
Well, it's 2005, the P4 has topped out at 3.6 GHz and has been discontinued because Intel has determined that the P4 architecture is stretched to the limit.
Bottom line is that we should be expecting a 10 GHz processor soon, because Intel brazenly stated that they would produce one. Whenever they make these statements, the AP drools over the story, stock prices jump, and I'm sure investors get excited.
Instead, their next-gen processor is a 2 GHz Pentium M (Dothan). Intel should be ashamed of themselves for lying to the public and should be investigated for inflating their stock value through fictional claims about their processor technology.
Re:Lying??? (Score:3, Insightful)
Re:Lying??? (Score:3, Interesting)
But, I can assure you (I am part of the industry) that back then, the technology roadmap outlook was drastically
Re:We need a faster bus (Score:5, Funny)
Re:need for speed? (Score:4, Insightful)
As long as there are games and a large number of computer users who want to play them, there will be a need for faster CPUs. While on the graphics side the main work is already done by the GPU, the physics and AI are still done by the CPU. And as opposed to graphics, where games are already quite advanced, AI and physics still tend to be rather primitive in games and will surely need a lot more CPU.