CPU Wars 177
msolnik writes: "Whether you say "0.13-micron" as most of us do, or "130-nanometer" as PR flacks prefer, the phrase is weighing heavily on both Intel's and AMD's minds. Indeed, each company's timeline in reaching that mark may determine who calls the CPU shots in 2002. Read more here at Hardware Central." Other submitters noted that AMD and Motorola have both updated their development roadmaps.
Intel 4004 anno 1971 (Score:5, Informative)
This was the news of 1971
Re:Intel 4004 anno 1971 (Score:1)
Re:Intel 4004 anno 1971 (Score:2, Informative)
Re:Intel 4004 anno 1971 (Score:1)
At least things are less dramatic for the CPU clock: from the 4004's roughly 740 kHz to two GHz, it's barely a generation behind by comparison.
Re:Intel 4004 anno 1971 (Score:1)
Moore's Law - (moo-urhz lah) n. 1. Famous statement by Intel founder Gordon Moore. Moore predicted that the number of transistors per integrated circuit would double every 18 months. See here. [intel.com]
Moore's law says nothing about the number of transistors per unit area doubling every 18 months. Only transistors per IC; and a Pentium IV is a hell of a lot bigger than a 4004.
Re:Intel 4004 anno 1971 (Score:1)
If you really want to count by transistor, we started from 2,300 transistors on the 4004. We should thus expect about 2.4 billion transistors in the latest generation, but instead we're stuck with a lousy 50 million last time I checked. That's roughly 48 times too few, or about five and a half Moore's Law generations: more than eight fucking years, even worse than I told you. Thanks for helping me make my point.
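Out of curiosity, the doubling arithmetic above can be sanity-checked in a few lines (my own back-of-the-envelope sketch: a 1971 baseline of 2,300 transistors, 18-month doublings, a 2001 endpoint, and the post's "50 million" figure):

```python
# Back-of-the-envelope Moore's Law check.  The baseline (4004, 1971) and the
# 50-million figure come from the post above; the rest is arithmetic.
from math import log2

baseline = 2_300            # transistors on the Intel 4004
doublings = 30 / 1.5        # 1971 -> 2001 at one doubling per 18 months
expected = baseline * 2 ** doublings
actual = 50_000_000         # "a lousy 50 million last time I checked"

ratio = expected / actual
print(f"expected ~{expected:.2e}, shortfall ~{ratio:.0f}x, "
      f"~{log2(ratio):.1f} generations (~{1.5 * log2(ratio):.1f} years) behind")
```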
Re:Intel 4004 anno 1971 (Score:1)
Re:Intel 4004 anno 1971 (Score:2)
Your error: "#@$@#$ @#$@#!4 fuc@#@# years!!!"
My error: "oops... sorry"
One thing's for sure. (Score:1)
No way, I'm superstitious. (Score:5, Funny)
They should switch to Angstroms.
Oh wait a minute, my calculator tells me that 0.13 Microns equals 666 Angstroms. Holy Ess, The end is Nigh.
Mac Hype (Score:2, Insightful)
Re:Mac Hype (Score:3, Informative)
Bloody hell, they really are hyping the G5, and they haven't got any confirmation of what technologies it will use; they simply assume that Motorola's latest chip will be the basis. How much would you have to pay for a Mac for them to make returns on their production process?
Well, actually... a G5 Power Mac will cost almost the same as the current G4 machines.
The PPC8500 is a 64-bit processor which is 100% backwards compatible.
I've seen some preliminary SPECfp and SPECint figures, and if those are correct, a PPC8500 running at 1.6 GHz is equal to a P4 running at 3 GHz.
It is twice as fast as an Itanium running at 800 MHz, and uses only 15 watts peak.
Compare that with around 60 watts for a P4 running at 2 GHz.
The difference with this chip is that most of the design work was done by Apple itself.
This chip uses 0.13-micron technology and SOI.
Don't forget that the PowerPC chip is based on IBM's POWER architecture.
ehhh..looks cool (Score:1)
Motorola tends to remain very vague about new PPC products in public. They wait until the chips are actually in other vendors' devices before they start to really talk about them. Motorola does this so other companies, like anal Apple, can have first dibs on telling the public about the new toys they are going to ship.
If you hunt, you actually can find interesting info on the G5. Variations of the "G5" have already begun to ship within certain routers, and as usual, a lot of Apple's hardware beta testers have been breaking their NDAs and telling sites like The Register and MOSR what's in the beige test boxes.
Who knows what this thing will really be like. But we know for sure now that it is 64/32-bit.
Yeah, this is very little info, but I do believe this is going to be quite sick. Obviously the G4 was a dud. It happens... hell, it did happen. Moto had an awful time trying to get the stupid CPU off the damn die, the thing didn't scale for beans, and it was seemingly aimed at bumping heads with the last generation of CPUs. However, now that Apple has stepped in, the CPU seems to work well if you believe the rumors, Moto seems to have a lot of buyers and potential buyers, and they can actually produce and scale this next-gen chip (thank god), yada yada yada. Moto and Apple have stepped back and collected their thoughts for a looooong time now. Apple/Moto practically skipped a generation (or half generation) of CPUs and motherboards. It makes sense that they would come out with a product that is going to bump heads with Hammer and Itanium. They have had more development time since it didn't make sense to try and save the G4.
god that was a big mac geek post...sorry
Nanometers ahoy! (Score:3, Interesting)
Industry question (Score:1)
Nothing in that article tells me whether what they are doing (constructing really fast chips) is really that hard - in a scientific sense. Is it simply an engineering challenge? What spin-off technologies are likely to result? What's going to come 'next' from all this, apart from more chips?
Re:Industry question (Score:1)
Photolithography is how ICs are made. The process is somewhat similar to silk-screening. Masks of the various layers of the chip are made. A light-sensitive chemical (photoresist) is deposited on the surface of the wafer. Light is shone through one of the masks and focused with lenses onto the chip. The chemical reacts to light exposure, so the portions of the resist that were exposed through the mask are now different from the dark, masked sections. Depending on the process, either the exposed or the unexposed resist can then be etched away with acid, leaving the other regions intact. This is done repeatedly to lay out components and interconnections on chips.
The hard part in reducing feature sizes is that the wavelength of the light being used becomes a limiting factor. Decreasing the wavelength toward X-ray scales can do funny things to previously effective techniques of masking and focusing, due to refraction and other effects. These are the areas currently under research by chip makers, using techniques like X-ray and electron-beam lithography to allow further decreases in feature size.
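As a rough numerical illustration of that wavelength limit (the formula is the standard Rayleigh criterion; the k1 and NA values below are typical textbook assumptions, not figures from this thread):

```python
# Rayleigh criterion: smallest printable feature (critical dimension)
#   CD = k1 * wavelength / NA
# where k1 is a process-dependent factor and NA is the numerical aperture
# of the projection optics.
def critical_dimension_nm(wavelength_nm, numerical_aperture, k1=0.4):
    return k1 * wavelength_nm / numerical_aperture

# A 193 nm ArF excimer source with an assumed NA of 0.6 lands right around
# the 0.13-micron node the article discusses.
print(f"~{critical_dimension_nm(193, 0.6):.0f} nm")  # ~129 nm
```

Shorter wavelengths or higher NA shrink the critical dimension, which is exactly why the X-ray and e-beam approaches mentioned above are being researched.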
Next year (Score:4, Funny)
Next year looks like the best time ever to buy a new performance PC.
Well, duh. Just exactly like every year since they were invented. And just like every computer magazine pundit has said since day one.
The Gardener
Re:Next year (Score:2)
--Blair
Smaller die == less heat? (Score:2, Interesting)
Ok, I admit it. I'm confused. I thought a smaller die size increased heat. Less surface area to radiate from.
Gotta love the last line:
Next year is always the best time to buy a new PC.
Re:Smaller die == less heat? (Score:1, Informative)
Smaller circuits use less power and generate less heat.
Re:Smaller die == less heat? (Score:1)
Re:Smaller die == less heat? (Score:2)
You're confusing temperature with heat. : )
Both points of view are actually right. In the ideal world, having smaller transistors lets them operate at a lower voltage, so you use less power, and generate less heat. And if the die is smaller because of a decrease in transistors, you still use less power.
In the real world, though, you don't see Intel shrinking their dies and then leaving them at 650 MHz. When they shrink the manufacturing process, they also increase the frequency, offsetting any decrease in power usage. And, over the long run, they also increase the NUMBER of transistors, making it use even more power. So while the textbooks say that the new chips will use less power, they're likely to use MORE power, especially when they've had time to ramp them up to the higher clock speeds.
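steve's tradeoff can be sketched numerically with the first-order dynamic-power formula P = C * V^2 * f (every number below is invented purely for illustration; none are Intel specs):

```python
# First-order dynamic CMOS power: P = C * V^2 * f.
# A shrink lowers switched capacitance C and supply voltage V, but the
# vendor also ramps the clock f, so total power can still rise.
def dynamic_power_w(c_farads, v_volts, f_hz):
    return c_farads * v_volts ** 2 * f_hz

old = dynamic_power_w(1e-9, 1.75, 650e6)   # old process left at 650 MHz
new = dynamic_power_w(0.7e-9, 1.5, 2e9)    # shrunk (lower C and V), clocked up

print(f"old ~{old:.2f} W, new ~{new:.2f} W")  # the shrunk part draws more
```

The textbook saving from lower C and V is real, but it is swamped by the frequency ramp, which is the point the post makes.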
steve
Re:Smaller die == less heat? (Score:1)
Pat
get your terms correct (Score:1)
The term micron has been deprecated for over 20 years. The correct term for a millionth of a meter is the micrometer, symbol µm.
Re:get your terms correct (Score:1)
Re:get your terms correct (Score:1)
Now, Angstrom is definitely on its way out... unless it isn't.
Re:get your terms correct (Score:1)
I knew I should have previewed...
Re:get your terms correct (Score:3, Insightful)
No, but we do use the micrometre. The same way we use microfarads, microseconds and microvolts. I guess in the US you still use microns, but then you still use feet, inches, pounds and ounces, too. You have a perfectly good system of SI units, so why not use them? At least micron is just another name for a valid SI unit. Unlike Angstroms, which are just an abomination against nature (they should have just used nm or pm as appropriate).
My Packard Bell P75 (Score:2, Insightful)
Go ahead. Laugh. If you told me you actually paid money for a PB, I'd laugh, too.
PB actually used good motherboards in their systems. It was the components that sucked.
Anyway, to this day, I *still* have and use my PB computer. Yes, it went from a P75 -> P133 -> P200 MMX, and went from 8MB -> 32MB -> 64MB -> 128MB, and the hard drive went from 1GB -> 4GB -> 20GB, but it's still in use.
Admittedly, I've bought other computers since and I no longer use it as my main machine, but I *could* if I wanted to. I only bought faster machines because I wanted to, not because I needed to.
It runs Win98 like a charm and runs Linux even better. It has always been stable and still is, 6 years later.
If people would cater to their needs instead of their wants, the CPU industry would either wither, or they would start offering REAL improvements. These 100MHz increases are BS.
They need to start with a minimum 1GHz jump and better internal architecture. Everything else is just them going wallet fishing.
Knunov
Re:My Packard Bell P75 (Score:1)
instead of 2 minutes on my Athlon.
Yes, I'm impatient
Re:My Packard Bell P75 (Score:1)
Re:My Packard Bell P75 (Score:1)
Got to love those Macs... quite upgradable... who'd have known? And a PCI slot to spare.
It didn't help VIA/Cyrix. (Score:1)
While we'd like a 0.13-micron chip (that's faster than 700 MHz), a lot of people don't know what a micron is, and they're the ones buying P4s.
I noticed Hammer has been moved back (Score:1)
Does this indicate unanticipated troubles with x86-64?
CPU speed Nuts... (Score:2)
Re:CPU speed Nuts... (Score:1)
Re:CPU speed Nuts... (Score:2)
If you have the speed you might as well use it.
Also, you may have lowered the stability of your machine by slowing it down that much. Certain parts of the logic need to 'refresh' to maintain their state, and when the designer assumes that the minimum speed a CPU will be sold at is 1-point-something GHz, they might not make sure the charge sticks around long enough to work at less than half of the intended clock speed. But you're so smart...
Re:CPU speed Nuts... (Score:2)
Actually, I underclocked my machine because I was trying to ensure stability (not that I had any instability anyway, but it never hurts). I am told that a lot of servers are underclocked for the same purpose. I also had the idea that my CPU would run cooler, reducing any chance of overheating and at the same time saving electricity, since I keep my main machine on 24/7. As for instability, I have had none of that and I would not expect to. That is usually an OVERCLOCKER problem (which I believe AMD and Intel effectively do, as indicated by the need for huge heat sinks and fans).
Re:CPU speed Nuts... (Score:2)
Re:CPU speed Nuts... (Score:1)
Ooh, special. OBVIOUSLY Windows doesn't need more than that... XP is indistinguishable between my roomie's T-Bird 1.4 and my T-Bird 700. However... fire up some UT. His framerates are always over 60 (as in, smooth), whereas mine drop down to about 45 sometimes. Same video card. So, why don't you do some tests like that? While you're at it, find a high-polygon demo like the one in 3DMark2001, and compare the smoothness. YOU NEED a fast cpu to come even close to smoothness.
Regarding clock throttling... the K6-2+ and K6-III+ can do it, and hopefully standard modern processors will too. However, since they don't, why not let your CPU do something productive [stanford.edu] with those idle cycles?
However, for people who REALLY NEED more power (all of the time) *couph* *couph*... SMP looks to be the far better alternative than these monster single cpu solutions.
SMP requires multi-threaded apps for any benefit.
and... it is MHz, not mghz. cough is spelled with a g.
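The SMP caveat a couple of posts up ("SMP requires multi-threaded apps for any benefit") can be sketched like this (a toy Python illustration of my own, not anyone's benchmark):

```python
# A single sequential loop occupies one CPU no matter how many are installed;
# only work split across threads/processes can use the second chip.
from multiprocessing import Pool

def busy(n):
    # stand-in for a CPU-bound task
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [200_000] * 4
    serial = [busy(n) for n in chunks]   # runs on one core, always
    with Pool(processes=2) as pool:      # two workers can occupy two cores
        parallel = pool.map(busy, chunks)
    assert serial == parallel            # same answers; SMP only changes speed
```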
Re:CPU speed Nuts... (Score:1)
Oh wait, my 3D Studio MAX rendering took WAY too long. It is noticeably faster at a higher clock speed.
Re:CPU speed Nuts... (Score:1)
"His framerates are always over 60 (as in, smooth), whereas mine drop down to about 45 sometimes. [snip] YOU NEED a fast cpu to come even close to smoothness."
News flash: PAL framerate is 25 fps. Even the blazingly fast NTSC framerate is only 30 fps. You're claiming 45 isn't smooth, yet I've never heard anyone complain about the smoothness of television....
Re:CPU speed Nuts... (Score:1)
30FPS in video games isn't smooth for me.
Re:CPU speed Nuts... (Score:1)
war (Score:2, Insightful)
But reading the article, I find that they again go after the GHz number:
Of these, the P4 Northwood could be the most compelling CPU release of 2002
Their reasoning: the P4 will be up to the 4 GHz barrier in a few months. The Athlon is planning to make some jumps as well, which makes this sound to me like the article is written by someone leaning towards the users who love big GHz numbers, not real speed.
What makes this even funnier is the fact that most users could buy a 1 GHz machine today and still play the latest games and do everything else 2 or 3 years from now.
my 2 cents plus 2 more
CPU is not problem anymore (Score:3, Insightful)
It's funny: XP Pro runs exactly the same on my PII 400 with 384 MB of RAM as it does on my dual PIII 1 GHz with a gig of RAM. The 400 actually boots faster! So what does processor speed do for you in everyday apps? Everyone here knows exactly what I am saying. I am just complaining because we always hear about the new processor that is supposed to be so great coming out next year or whatever. WHEN AM I GOING TO SEE A SOLID-STATE HARD DRIVE? Sure, Serial ATA is coming up, but the transfer rate on that is only starting at 166 MB/s. OK. Show me a hard drive that actually needs anything better than ATA/100 first.
The bottleneck in every modern computer is still the HD and the bus; we should fix those first and then jack up the MHz.
Re:CPU is not problem anymore (Score:1)
I do a lot of compiling, and write numerical analysis software. Both my athlons get plenty of use. But you make a good point-- most people just don't need it.
Anyone notice that you pretty much have the same hard drive as you did with your Pentium I 120? The size has increased, but if you go IDE it is still 7200 RPM and the data transfer rate isn't any faster.
The drive in my Pentium 133 was not 7200 rpm. Also, IDE technology has improved since the Pentium I days. And the new drives are much quieter. For the same noise and cost as an old IDE drive, one could get a SCSI disk at 15k RPM nowadays.
Re:CPU is not problem anymore (Score:1)
Re:CPU is not problem anymore (Score:1)
However, I definitely agree with you about improving hard drive / memory
Re:CPU is not problem anymore (Score:1)
Maybe if you dumbass consumers weren't busy jerking off to the latest IDE spec, you'd have noticed 2 or 3 alternative technologies that are quite a bit faster.
Re:CPU is not problem anymore (Score:2)
Actually, these MHz wars benefit me in a very nice way. I'm still using a PIII-650 at home, but my servers have much more substantial hardware - dual CPU's are the *smallest* machines in the stack. And these MHz wars have made desktop machines that can best high-end servers of only two years ago.
Two years ago, I spent $4,000 on a chassis and motherboard that would use quad Xeons. Add in $2800 for the processors, and that's a lot of money. Today, I can spend $200 on a dual Athlon motherboard and $500 on two chips, and have a machine that will either rival the quad Xeon or beat it in almost any situation.
I remember when I was amazed that opening up an 800x600 JPEG took less than three seconds on a new machine. The funny thing is, at the time, I didn't mind waiting three seconds, and really hadn't even noticed that it was a wait. But once I'd used a faster chip, going back to the three-second wait really grated. Even though you don't *need* a faster processor, chances are that the next time you upgrade, you'll start noticing little things like that, and say "Wow... this is nice."
As an interesting side note, if you're looking for longevity out of a computer, go dual CPUs. I had a dual Pentium-133 w/ 64 megs sitting around that I bought for $40. For fun, I put NT4 on it, and in nearly every situation it was as responsive as a P3-650 with Windows 98, or more so. Yes, computationally bound processes took a while, but in sheer responsiveness it really impressed me. I think that a dual 1.6 GHz Athlon would have a tremendously long usable life span.
steve
AMD Link (Score:1)
Stop the train! (Score:3, Interesting)
Like New Optical DSPs With Tera-ops Performanc [slashdot.org]
Or Intel Cites Breakthrough In Transistor Design [slashdot.org]
Perhaps Clockless Chips [slashdot.org]
Not forgetting Intel Promises A Cool Billion (Transistors) [slashdot.org]
Notwithstanding Intel Claims Smallest, Fastest Transistor [slashdot.org]
But who could forget Intel Claims 10Ghz Transistor [slashdot.org]
Which looks a lot like Intel Says 10GHz By 2005 [slashdot.org]
But is just as vapor as Intel Creates 30-Nanometer Transistors [slashdot.org]
or my personal favorite: Intel Goes for Display Encryption [slashdot.org]
How can they get any work done when they're too busy telling us what they predict in a bajillion years?
Your personal favorite (Score:2)
BTW, it's not vapor. Apparently, a ten-thousand-buck 42-inch rear-projection TV from JVC actually is using the piece of digital control crap.
Could faster processors lead to better programs? (Score:4, Interesting)
Faster processors and more memory would make higher-level languages such as Lisp or Python viable for applications (browsers, desktop environments, etc.), which in turn would result in fewer bugs and increased stability when applied correctly. The current state of software makes me sick. I don't blame it on C per se, but on programmers using the wrong tool for the job.
Writing in such a higher-level language would probably even increase portability (which C can't deliver by a long shot), as you would program at a higher abstraction level. No need for autoconf/automake or ugly defines scattered throughout the code, making maintenance more difficult.
I hope that more coders switch to some better suited language than C/C++ for application development. I've switched to Lisp myself.
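A toy illustration of the bug class at stake here (my own example, not the parent's code): in a bounds-checked language, the classic C buffer overflow turns into a loud, catchable error instead of silent memory corruption.

```python
# An off-by-one write past the end of a buffer.  In C this silently
# scribbles over adjacent memory (the classic buffer overflow); here it
# raises IndexError immediately.
buf = [0] * 8

try:
    buf[8] = 0x41          # one past the last valid index (7)
except IndexError as exc:
    print("caught:", exc)  # the mistake is caught, not exploited
```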
Please don't take it personally... (Score:1, Insightful)
Re:Please don't take it personally... (Score:3, Insightful)
You're in a position to write a thesis involving AI, so I assume that you already have a bachelor's in CompSci.
Did you learn nothing from it?
I mean, do you honestly believe that you can increase the speed of your programs enough by abandoning an easier-to-maintain language to make it worthwhile? What happens when the next generation of compilers comes around that's faster than your hand-tooled assembler, and you have to re-write your code yet again to squeeze out those extra cycles? What if your code gets executed on a modern processor with deep pipelining, advanced branch prediction, and out-of-order execution? Are you that confident that your manual re-write will take full advantage of the hardware it's running on, moreso than a computer-optimized version?
I'm sorry, but unless your AI consists of a very few tightly-rolled loops that you can super-optimize, I just can't see the benefit of throwing away 30+ years of compiler design experience for a theoretical gain that may or may not appear.
Re:Please don't take it personally... (Score:2)
The issue that I see is that even a 40% performance increase is still well below what I would consider to be the threshold of useful optimization. I mean, that's less than one year's hardware progress. Sure, it may be tempting now, but wouldn't it be better to leave your research in a form that's more accessible to a large audience? Granted, C may not be the lingua franca of AI research, but I'd be willing to bet money that more AI researchers can read C than AMD assembler. Wouldn't you at least like to have the option to have your code peer-reviewed by researchers who don't know 3dnow?
I'm not the moron you are thinking of.
If you're implementing ANNs, then I'll give you the benefit of the doubt and assume that you're not.
Premature optimization (Score:2)
My code deals with building massive 3d arrays containing tens of millions of cells and manipulating them. Obviously, the inner loops of the manipulation would be the bottleneck.
So I ran my trusty profiler.... And found out that 90% of my time was being spent READING THE DATA IN.
It took two lines of code to make that three times faster, making my program 2.5x faster.
Interesting... Then, a couple of weeks later, I took a large deployed system with an active developer community, www.squeak.org, ran that through a profiler, and found how changing one line of code led to a 4% speedup in the core interpreter, and led to other simple changes that were just as valuable. I also ran the benchmarking in the interpreter, and sped up syntax highlighting by 40%.
If I was doing something like what you were, I'd probably go all-out at using a more dynamic language (Smalltalk) for the extreme flexibility.
Only devolve into C/assembly for the critical parts.
Many times, the bottlenecks aren't where you think or might predict they are. Why spend weeks guessing incorrectly and optimizing code that won't make you any faster, when the profiler will tell you exactly which magic bits to re-examine?
It can also find O(n^2) artifacts and all the rest.
If your code is currently running, run it through gprof and see where the CPU time is really going.
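The post names gprof for C programs; the same profile-first workflow in Python (a sketch with made-up stand-in functions, not the poster's actual code) looks like:

```python
# Profile first, optimize second: run the program under cProfile, then sort
# by cumulative time to see where it really goes (often the I/O, per above).
import cProfile
import io
import pstats

def read_data():
    # stand-in for the input phase that dominated the poster's runtime
    return [str(i) for i in range(200_000)]

def crunch(data):
    # stand-in for the "real" inner-loop work
    return sum(len(s) for s in data)

profiler = cProfile.Profile()
profiler.enable()
crunch(read_data())
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # the top entries name the functions worth optimizing
```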
Re:Please don't take it personally... (Score:2)
but I'm sick of this "C for kernel, bloatlang for everything else" BS. I'm writing an ALife sim of language evolution (for my thesis) in C, and I'm thinking about writing the agent AI code in *asm*, because compiler-generated code won't be fast enough.
I assume you registered at the university instead of attending as an "Anonymous Coward"--otherwise, the diploma isn't going to do you much good.
-- MarkusQ
Re:Could faster processors lead to better programs (Score:1)
Re:Could faster processors lead to better programs (Score:1)
Further, I was not claiming that C prohibits laziness. It's just that laziness can produce disastrous results (buffer overflows are a good example). Laziness when writing code is _generally_ bad and should be tackled more.
You raise some valid points, but I'd prefer a slightly slower but more stable and bug-free program over a slightly faster, unstable and probably insecure one.
Re:Could faster processors lead to better programs (Score:2)
It could be argued that a language that lets you express yourself tersely is less prone to laziness problems. If there aren't a million t's to cross and i's to dot, then there's no way to have a million uncrossed t's and undotted i's laying around at the end of the day.
Re:Could faster processors lead to better programs (Score:2)
It's especially good when it keeps them from writing code in the first place
Re:Could faster processors lead to better programs (Score:2)
screw ease of programming and code readability. I want someone who is smart enough to figure it out to be coding, not some high level coder wanna-be.
Already happened... (Score:2)
The Pentium 4 is now just a puppy with big feet. (Score:3, Insightful)
I consider the Northwood to be the "real" Pentium 4, just as other second-generation products like the 100MHz Pentium and "Coppermine" Pentium III have proven to be the "real" versions of Intel processors in the past.
I agree with this. The Pentium 4s we see today are just puppies with very big feet. They will grow up and become something much more impressive.
Will this help Redhat Bloatware? (Score:2)
I'm currently awaiting my first new PC in a long time: Soyo Dragon+ mobo, AMD Athlon XP 1600+ with 512 MB DDR RAM, ATA/100 WD 100 GB disk (yes, /me likes SCSI, but likes $$$ more), generic DVD-ROM, and Netstream2000 H/W MPEG-2 board. In preparation for its arrival I downloaded a copy of Red Hat Linux 7.2 with the intention of installing it on an old spare 1.5 GB drive I had free in my old, ailing PC (Intel P200, 80 MB RAM), just to give it a whirl.
Well, things were really tight with the small drive, and my on-board IDE controllers were acting flaky anyway, so I ended up getting a "spare" 20 GB ATA/100 Maxtor drive and a Maxtor (re-labeled Promise) ATA/100 PCI controller. The 20 GB Maxtor was now UDMA5 hde on ide2.
The point of all this history is to illustrate that I now have a "soon to be spare" computer where the limiting factor is CPU and to a lesser extent RAM. I go ahead an install RH Linux 7.2 on the new drive.
After a bit of farkling around with kernel boot options (ide2=d000,c802 is your friend!) I boot into RH Linux 7.2, in all its X 4.0.1 glory.
I'm fairly sure that the new box would make the speed difference between RH 6.2 and 7.2 imperceptible, but the experience left me wondering about the extent of bloat in RH Linux releases, not that I'd want to run anything significant on the P200 anymore, but I might want to use it as some type of low-duty server, with an up-to-date kernel. In a nutshell, what got slower?
No doubt, the new machine will be welcome.
Re:Will this help Redhat Bloatware? (Score:1)
I've been using Linux since '95 and I am amazed at the fact that it runs so slowly now. I'm sure there are things I could do to speed it up. But that won't change the fact that Win2k smokes it on lesser hardware. Woe is me. I wish things were different.
Re:Will this help Redhat Bloatware? (Score:2)
In my case, unless I'm doing some long-term processing, the key isn't "fastest", but rather fast enough. I wouldn't spend much $$$ to get a kernel build down to 30 seconds from a minute, for example -- a minute is fine for me for the few times that I build a kernel. 30 minutes, of course, is annoying.
Re:Will this help Redhat Bloatware? (Score:2, Insightful)
Everyone who wants to see Linux succeed on the desktop (including myself) needs to recognize that all those bad words people hurl at MS won't change the fact that Linux + XF4.0 runs significantly slower on the same hardware.
A lot of the advantages of Linux on the desktop start to disappear when you realize that it takes a lot of power to run it. It's not agonizingly slow on my computer, but it's pretty frustrating. Especially when Win2k just hums along on a slower disk with an "inferior" interface.
Re:Will this help Redhat Bloatware? (Score:2)
True enough. I remember the days, probably up to RH 6.2, when [GNU/]Linux distros were generally snappier than bloated Microsoft offerings, even as the user productivity apps were less mature. It would be a sad day indeed when standard GUI and productivity apps available under a [GNU/]Linux distro were slower just to get more features "out there". Stick to the tried, true, and efficient until the polished can compete with its peers on performance.
CPU != today's bottleneck (Score:1)
It's a pity; whilst there is nothing wrong with spending your time compiling that new Linux kernel every three days or so, it is downright stupid to scrap a 1400 MHz CPU for, say, an 1800 MHz CPU. The gain in work efficiency is minimal relative to the cost in this example.
I have asked myself the question: what advantage will a new CPU give me? Will it make that windowed OS which I love so much boot faster? Will it make my email download faster? As funny as it sounds, that's what Intel is advertising their P4 chips with in my country.
When I now look at how I could possibly speed up this already incredibly fast FreeBSD toy of mine, in terms of effective result, which steps do I need to take? First off, I need to get rid of this old and awkward IDE hard disk. Preferably I'd put in a SCSI RAID, with lots of cache on those disks. That would probably give me a serious advantage, probably the biggest I could achieve this easily; though it would be redundant, because my X starts in less than two seconds (with Enlightenment and GNOME) the second time I start it anyway.
Cyrix and Transmeta (Score:2, Informative)
Did the 64-bit AMD technology slip? (Score:3, Informative)
That's too late. They need it sooner to compete with the Inanium.
Intel ads on slashdot? (Score:2)
--Blair
Re:Fast CPUs might be bad. (Score:1, Interesting)
It makes scientific research incredibly simpler/cheaper, and that is worthwhile on its own. If you've got some patience, a 486 running Windows 98SE and Office '97 is still fully functional; I use one as my backup desktop system.
No... (Score:1)
No, because some of us get laid.
Re:Fast CPUs might be bad. (Score:5, Insightful)
You're only naming two games, both using the same engine, that are now approximately five years old. These days all games are trying to be as immersive as possible, using 3D graphics and sound, enhanced with special FX, and playing against an army of bots trying to mimic our behaviour. They are already using dedicated coprocessors (called GPUs these days).
GUIs have evolved from crappy crammed black-and-white boxes with hourglasses to 24-bit 1280x1024 alpha-blending, anti-aliasing, semi-intelligent "interfaces". This all takes memory, memory bandwidth and CPU cycles.
I find myself amazed, even as a software developer, that these days I can take pictures with my digital camera and send them to my mom using e-mail. I predicted this could be done a long time ago, but now that I'm doing it I have to stop at moments and find myself simply stunned by the world we live in. We're ordering pizzas from our PCs using broadband network connections. My audio software (Propellerhead's Reason) can emulate a jam-packed rack of synths and samplers, and the sound is generated in realtime. I don't have a digital camcorder, but if I owned one I'd spend my nights making my own movies. Picture this 10 years ago.
If you think OO is what makes software bloatware then you don't understand OO, in my opinion. OO is one of the ways to achieve true code reuse, which is what we're all striving for because we are all lazy asses. Code reuse means you get a lot more done in less time, and if done right it should take less space all at the same time.
What really makes software 'bloatware' is the addition of functionality beyond what is needed by the majority of users. But then again the markets have widened, and software has become one of the biggest businesses in the world today. More users want to find software useful, and software vendors respond with more and more features, which will always look like bloatware in the eyes of a few geeks who like to hack together their own kernel and run it on your average pocket 'PC'.
Sure, games were fun 20 years ago just as they are fun today. I like to play Tetris myself a lot, but if you really think about it, now as back then, only 5% of all games are classics and 95% are crap. We're all just spoiled now, and the only reason we'll play Pong is because it makes us feel nostalgic.
In 10 years you'll say that you don't need the latest AMD XP 22000+ (16Ghz nominal) with 512GB of battery-backed-RAM and a semi-optical harddisc of 600TB
I say, keep 'm coming.
Dave
Personally I'd never go back to the days where i had to wait
Re:Fast CPUs might be bad. (Score:1)
I *was* doing it 5 years ago, and it took way less hardware than you might expect. I was watching TV on my less-than-state-of-the-art 386SX with 4 megabytes of RAM, using a video capture card that was 15 years old at the time, and a VCR. I took pictures using a camcorder which gave sharper images than digital cameras were able to dream of until recently.
I wrote MIDIs using some anonymous shareware program and made them sound great using WinGroove, in realtime.
The best part of all this is that my computer was 6 years old at the time. I could have easily done any of these things when I first bought the computer, running Windows 3.1 in 1990. The video capture card existed, the internet existed, the sound card existed, and it would all run on a 386. Perhaps expensive for the time, but considering how many tens of thousands of dollars many have spent on upgrading to get these abilities over the years, it would have been money well spent.
In 10 years you'll say that you don't need the latest AMD XP 22000+ (16GHz nominal) with 512GB of battery-backed RAM and a semi-optical hard disc of 600TB
The only time I haven't said this was when my PC wasn't capable of doing what I told it to do. The 8088 was too slow for my needs. The 286 was close. The 386 was a rocket. It's all just extra layers of junk from there.
Believe it or not, for most people's needs, a surprisingly old computer will do the job.
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:1)
Besides, waiting for the huge slashdot pages to *load* will take pretty long on my dialup connection.
Re:Fast CPUs might be bad. (Score:3, Interesting)
I think it would be interesting to see the effect of CPU power on software pricing. With faster CPUs software might be less optimised thus costing less programmer time. It's just a thought...
Rule: Fast CPUs make life good (Score:1)
So.. short version: Leave Santa's factory alone! I want neat toys, and 2 GHz processors are definitely on the list!
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:1)
I've pointed people at PII machines in the 400-700MHz range. These can be picked up for about 250-300UKP and do everything most people need. If they need a faster machine in a couple of years they can buy a 1.6GHz machine for about the same money most likely.
The alternative that PC World is putting forward costs significantly more, for minimal noticeable additional performance. 750-900UKP is the price point the retailers seem to have picked for a Xmas PC for the family. For this you get:
-- AMD 1600XP Processor
-- 256Mb DDR Ram
-- 80Gb Hard Drive
-- DVD & CD Rewriter
-- 64Mb GeForce3 Ti 200 Graphics
-- 56K Modem
-- Windows XP Home Edition
-- Lotus Smartsuite
-- 17" Monitor (15.7" viewable screen)
Barring the CD writer, all you are getting for your additional 500UKP is a greater excess of speed, hard drive space, etc...
For the average family PC you really don't need the speed.
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:2)
Yes, yes, yes, yes, yes!
For those of you who have a nearby surplus store, go there.
Sample upgrades: I saw a $30 P166MMX system that happened to have an Asus TX97 motherboard. A free upgrade to the beta 0112 BIOS from the 'net. A $30 K6-2+-450 laptop chip, or a K6-3-333 are drop-in replacements for the P166MMX, and offer performance comparable to a PII in the same speed range. Such a system is a great place to toss that stick of 64M PC100 SDRAM you're not using anymore, as well as that 8.4G hard drive you just replaced.
I did that upgrade for my own box and it's capable of doing all-software DVD on a cheap-azz 4M ATI TV-out card from 1996 with no DVD hardware support.
Want monitors? Surplus stores rule. I was in one yesterday and picked up a 19" Sony true-flat CRT for $120. (Pricewatch: $400-500). They had 17" Sony flat-CRTs for $70 (Pricewatch: ~$300). There were also several 21" monitors (Viewsonic P815, Pricewatch $700 new, $325 refurb) for ~$200.
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:1)
Re:Fast CPUs might be bad. (Score:1)
The whole tech thing above a gigahertz is just posturing and such, especially if it's for games. The only things I can think of that would need as much processing power as we have would be either a server or some scientific problem, but even then, the Apollo missions landed on the moon with technology so feeble compared to today's that I'm not entirely sure of that.
I'm not complaining about this whole chip war driving down prices on the CPUs I actually want to buy, though.
Re:Fast CPUs might be bad. (Score:1)
it's cooler (Score:2)
Re:it's cooler (Score:2)
Re:Fast CPUs might be bad. (Score:2)
MMORPGs are more dependent on *bandwidth* than anything else. You're just talking about the 3D side of things.
Look at a few newer, less epic games, such as Giants: Citizen Kabuto from early last year; people couldn't run that with all the widgets and gizmos cranked up to the maximum level. I could barely even play it on my PIII 500, and that was almost a year ago.
That's because 90% of the time in games like that is spent inside the 3D driver. Switch the game to pre-assembled display lists (i.e. "use the transformation capabilities of the card") and you'll get a 10x speedup on the same machine. The trouble is that game developers can't assume such a card, as there are lots of entry-level machines shipping with bare-bones 3D capabilities.
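The speedup described here comes from batching: instead of pushing every vertex through a separate driver call each frame, the geometry is pre-assembled once and replayed with a single call. Here's a rough toy model of that idea in Python (no real 3D API involved; the function names and the "transform" are made up just to count calls):

```python
# Toy model of per-vertex "immediate mode" vs. a pre-assembled
# display list. Not a real 3D driver; it just counts call overhead.

driver_calls = 0

def send_vertex(v):
    # Each call crosses into the "driver" -- this is the overhead.
    global driver_calls
    driver_calls += 1
    return (v[0] * 2, v[1] * 2)          # stand-in "transform"

def draw_immediate(vertices):
    return [send_vertex(v) for v in vertices]   # one call per vertex

def compile_display_list(vertices):
    # Pre-assemble once, up front, outside the per-frame loop.
    return [(v[0] * 2, v[1] * 2) for v in vertices]

def draw_list(display_list):
    global driver_calls
    driver_calls += 1                    # one call replays everything
    return list(display_list)

mesh = [(x, x + 1) for x in range(1000)]

immediate = draw_immediate(mesh)         # 1000 driver calls
dl = compile_display_list(mesh)
batched = draw_list(dl)                  # 1 driver call per frame

assert immediate == batched              # same picture, far fewer calls
```

The rendered result is identical either way; the list version just amortises the call overhead, which is where the claimed 10x on call-bound games would come from.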
Re:Half a mil? (Score:4, Funny)
You get that dividing a "mile" (1609 m) by "e" (2.7183).
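For the curious, the joke's arithmetic does check out, give or take rounding:

```python
# Quick check of the joke's arithmetic: a mile divided by e.
mile_m = 1609        # metres in a mile (rounded)
e = 2.7183           # Euler's number (rounded)
print(mile_m / e)    # roughly 592
```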
Re:Half a mil? (Score:1)
This is one of those times I really wish I wasn't out of moderator points...
Re:it's a bird, it's a plane... (Score:1)
BTW, most of us (the rest of the world) prefer nanometre. Personally I prefer the attoparsec [tuxedo.org].
Re:Ripoff (Score:1, Funny)