Hardware

CPU Wars 177

msolnik writes: "Whether you say "0.13-micron" as most of us do, or "130-nanometer" as PR flacks prefer, the phrase is weighing heavily on both Intel's and AMD's minds. Indeed, each company's timeline in reaching that mark may determine who calls the CPU shots in 2002. Read more here at Hardware Central." Other submitters noted that AMD and Motorola have both updated their development roadmaps.
  • Intel 4004 anno 1971 (Score:5, Informative)

    by Hougaard ( 163563 ) on Monday December 03, 2001 @06:52AM (#2646961) Homepage Journal
    The 4004 consists of 2,300 transistors based on 10 micron technology, fitting on a 12 mm² area. The microprocessor has 46 instructions. The 4040 is an enhanced version of the 4004, adding 14 instructions, a larger stack (8 levels) and 8K of program space. It can address 640 bytes. Documentation is written by Adam Osborne. The chip is introduced to the public in Las Vegas by Wayne Pickette. The sales price will be US$200 per piece.


    This was the news of 1971

  • No matter who hits 0.13 first, WE win.
    • by snatchitup ( 466222 ) on Monday December 03, 2001 @08:53AM (#2647293) Homepage Journal
      No friggin' way I'll ever own a 0.13 micron chip. I'm just too superstitious. I've got enough to worry about with my data and JPEGs to trust them to an unlucky number. It's worse than a hat on the bed!

      They should switch to Angstroms.

      Oh wait a minute, my calculator tells me that 0.13 Microns equals 666 Angstroms. Holy Ess, The end is Nigh.
  • Mac Hype (Score:2, Insightful)

    by BiggyP ( 466507 )
    Bloody hell, they really are hyping the G5, and they haven't got any confirmation of what technologies it will use; they simply assume that Motorola's latest chip will be the basis. How much would you have to pay for a Mac for them to make returns on their production process?
    • Re:Mac Hype (Score:3, Informative)

      by Spruitje ( 15331 )

      Bloody hell, they really are hyping the G5, and they haven't got any confirmation of what technologies it will use; they simply assume that Motorola's latest chip will be the basis. How much would you have to pay for a Mac for them to make returns on their production process?


      Well, actually... a G5 Power Mac will cost almost the same as the current G4 machines.
      The PPC8500 is a 64-bit processor which is 100% backwards compatible.
      I've seen some preliminary SPECfp and SPECint figures, and if those are correct a PPC8500 running at 1.6 GHz is equal to a P4 running at 3 GHz.
      It is twice as fast as an Itanium running at 800 MHz, and uses only 15 watts peak.
      Compare that with around 60 watts for a P4 running at 2 GHz.
      The difference with this chip is that most of the design work was done by Apple itself.
      This chip uses 0.13 micron technology and SOI.
      So actually the die size is almost equal to that of the G4, which means that it will cost almost the same.
      Don't forget that the PowerPC chip is based on IBM's POWER architecture.
    • I hate to be the Mac geek around here... but whatever.

      Motorola tends to remain very vague about new PPC products to the public. They wait until the products are actually in other vendors' devices before they start to really talk about them. Motorola does this so other companies, like anal Apple, can have first dibs on telling the public about the new toys they are going to ship.

      If you hunt, you actually can find interesting info on the G5. Variations of the "G5" have already begun to ship within certain routers, and as usual, a lot of Apple's hardware beta testers have been breaking their NDAs and telling sites like The Register and MOSR what's in the beige test boxes.

      Who knows what this thing will really be like. But we know for sure now that it is 64/32-bit, 0.13 micron, uses Moto's new SOI technology that came out a while ago, and was developed with a lot of "help" from Apple this time.

      Yeah, this is very little info, but I do believe this thing is going to be quite sick. Obviously the G4 was a dud. It happens... hell, it did happen. Moto had an awful time trying to get the stupid CPU off the die, the thing didn't scale for beans, and it was seemingly aimed at bumping heads with the last generation of CPUs. However, now Apple has stepped in, the CPU seems to work well if you believe the rumors, Moto seems to have a lot of buyers and potential buyers, and they can actually produce and scale this next-gen chip (thank god), yada yada yada. Moto and Apple have stepped back and collected their thoughts for a looooong time now. Apple/Moto practically skipped a generation (or half generation) of CPUs and motherboards. It makes sense that they would just come out with a product that is going to bump heads with Hammer and Itanium. They have had more development time, since it didn't make sense to try and save the G4.

      God, that was a big Mac geek post... sorry ;)
  • Nanometers ahoy! (Score:3, Interesting)

    by Visoblast ( 15851 ) on Monday December 03, 2001 @07:01AM (#2646982) Homepage
    Before we get to 0.09 microns, let's start using nanometers to get rid of those leading decimals. Plus, unlike the micron, the nanometer is an accepted SI unit (see http://physics.nist.gov/cuu/Units/prefixes.html). Strange that the PR people should use it first -- could this be a sign of the Apocalypse?
  • I know nothing about the chip manufacturing industry so I'll put my newbie propeller-cap on for a moment.

    Nothing in that article tells me whether what they are doing (constructing really fast chips) is really that hard - in a scientific sense. Is it simply an engineering challenge? What spin-off technologies are likely to result? What's going to come 'next' from all this, apart from more chips?
    • The challenge of producing chips with smaller and smaller feature sizes is the difficulty in using photolithographic techniques effectively.

      Photolithography is how ICs are made. The process is kind of similar to silk-screening. Masks of the various layers of the chip are made. Chemicals are deposited on the surface of the chip. Light is shone through one of the masks and focused with lenses onto the chip. The chemicals react to exposure to light, so the portions of the chemical layer on the chip that were exposed to light through the mask are now different from the dark, masked sections. Depending on the process, either the exposed or the unexposed chemicals can then be etched away with acid, leaving the other regions intact. This is done repeatedly to lay out components and interconnections on the chip.

      The hard part in reducing feature sizes is that the wavelength of the light being used becomes a limiting factor in size reduction. Decreasing the wavelength toward x-ray scales can do funny things to previously effective techniques of masking and focusing, due to refraction and other effects. These are the areas currently under research by chip makers, who are using techniques like x-ray and electron-beam lithography to allow further decreases in feature size.
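
      To put rough numbers on that wavelength limit, here's a back-of-the-envelope sketch in C of the usual Rayleigh resolution estimate; the k1 factor and numerical aperture below are illustrative assumptions, not any particular fab's figures:

          #include <stdio.h>

          /* Rayleigh criterion for the smallest printable feature:
           *   CD = k1 * lambda / NA
           * lambda = exposure wavelength, NA = numerical aperture of the
           * projection optics, k1 = process-dependent factor.
           * The k1 and NA values here are made up for illustration. */
          int main(void)
          {
              const double k1 = 0.4;   /* assumed process factor */
              const double na = 0.7;   /* assumed numerical aperture */
              const double lambda_nm[] = { 365.0, 248.0, 193.0, 157.0 };
              const char  *source[]    = { "i-line", "KrF", "ArF", "F2" };

              for (int i = 0; i < 4; i++) {
                  double cd = k1 * lambda_nm[i] / na;
                  printf("%-6s %5.0f nm light -> ~%3.0f nm minimum feature\n",
                         source[i], lambda_nm[i], cd);
              }
              return 0;
          }

      With those assumed numbers, 248 nm (KrF) light lands in the neighborhood of the 0.13 micron mark, which is roughly why going much below it pushes fabs toward shorter wavelengths and the exotic techniques mentioned above.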

  • Next year (Score:4, Funny)

    by The Gardener ( 519078 ) on Monday December 03, 2001 @07:05AM (#2646987) Homepage

    Next year looks like the best time ever to buy a new performance PC.

    Well, duh. Just like every year since they were invented. And just like every computer magazine pundit has said since day one.

    The Gardener

  • by tnak ( 163802 )

    Ok, I admit it. I'm confused. I thought a smaller die size increased heat. Less surface area to radiate from.

    Gotta love the last line:

    Next year looks like the best time ever to buy a new performance PC.


    Next year is always the best time to buy a new PC.

    • by Anonymous Coward
      Smaller die = smaller circuits
      Smaller circuits use less power and generate less heat.
    • What they say (about less power), plus the fact that there is less "friction" in the CPU for the power it draws. Less friction = less heat as well. (That's why they need to get to a smaller die size for more MHz.)
    • Ok, I admit it. I'm confused. I thought a smaller die size increased heat. Less surface area to radiate from.

      You're confusing temperature with heat. : )

      Both points of view are actually right. In the ideal world, having smaller transistors lets them operate at a lower voltage, so you use less power, and generate less heat. And if the die is smaller because of a decrease in transistors, you still use less power.

      In the real world, though, you don't see Intel shrinking their dies and then leaving them at 650 MHz. When they shrink the manufacturing process, they also increase the frequency, offsetting any decrease in power usage. And, over the long run, they also increase the NUMBER of transistors, making it use even more power. So while the textbooks say that the new chips will use less power, they're likely to use MORE power, especially when they've had time to ramp them up to the higher clock speeds.

      steve
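
      The textbook relationship behind both points is roughly P ~ C * V^2 * f: switched capacitance, times supply voltage squared, times clock frequency. A toy sketch in C with made-up numbers (only the ratios mean anything) shows the offsetting effects described above:

          #include <stdio.h>

          /* Toy model of CMOS dynamic power: P ~ C * V^2 * f.
           * The capacitance/voltage/frequency figures are invented;
           * only the relative comparison is meaningful. */
          static double watts(double cap_nF, double volts, double freq_MHz)
          {
              /* nF * V^2 * MHz gives milliwatts, so divide by 1000 for watts */
              return cap_nF * volts * volts * freq_MHz / 1000.0;
          }

          int main(void)
          {
              double p_old    = watts(10.0, 1.75, 2000.0); /* original part at 2 GHz       */
              double p_shrunk = watts( 7.0, 1.50, 2000.0); /* die shrink, same clock       */
              double p_ramped = watts(10.5, 1.50, 3000.0); /* shrink + more transistors    */
                                                           /* + higher clock               */

              printf("old process, 2 GHz          : %5.1f W\n", p_old);
              printf("die shrink, still 2 GHz     : %5.1f W\n", p_shrunk);
              printf("die shrink, ramped to 3 GHz : %5.1f W\n", p_ramped);
              return 0;
          }

      The middle line is the textbook saving; the last line is what tends to actually ship.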
  • 0.13-micron...

    The term micron has been deprecated for over 20 years. The correct term for a millionth of a meter is micrometer, symbol µ.
    • Sorry, the symbol's µm.
    • Except that no one that I know ever uses micron. The only time that I ever had a problem with the micron/micrometer bit is when I wrote a scientific paper that had to be reviewed by NIST (National Institute of Standards and Technology).

      Now, Angstrom is definitely on its way out... unless it isn't.
      • Whoops-- I meant that no one ever uses MICROMETER.

        I knew I should have previewed...
        • by Tet ( 2721 )
          I meant that no one ever uses MICROMETER

          No, but we do use the micrometre. The same way we use microfarads, microseconds and microvolts. I guess in the US you still use microns, but then you still use feet, inches, pounds and ounces, too. You have a perfectly good system of SI units, so why not use them? At least micron is just another name for a valid SI unit. Unlike Angstroms, which are just an abomination against nature (they should have just used nm or pm as appropriate).

  • by Knunov ( 158076 )
    I bought my first computer in 1995. It was a Packard Bell P75.

    Go ahead. Laugh. If you told me you actually paid money for a PB, I'd laugh, too.

    PB actually used good motherboards in their systems. It was the components that sucked.

    Anyway, to this day, I *still* have and use my PB computer. Yes, it went from a P75 -> P133 -> P200 MMX, the memory went from 8MB -> 32MB -> 64MB -> 128MB, and the hard drive went from 1GB -> 4GB -> 20GB, but it's still in use.

    Admittedly, I've bought other computers since and I no longer use it as my main machine, but I *could* if I wanted to. I only bought faster machines because I wanted to, not because I needed to.

    It runs Win98 like a charm and runs Linux even better. It has always been stable and still is, 6 years later.

    If people would cater to their needs instead of their wants, the CPU industry would either wither, or they would start offering REAL improvements. These 100MHz increases are BS.

    They need to start with a minimum 1GHz jump and better internal architecture. Everything else is just them going wallet fishing.

    Knunov
    • Yes, I can still edit my project's source in vim as I could with my old P133, but I dislike waiting (about) 12 minutes to compile the affected subtree
      instead of 2 minutes on my Athlon.

      Yes, I'm impatient ;)
    • I am not going to laugh. I bought a PB 75 too. My niece is using it today. Never a problem with the thing. I now own a Mac. OS X rocks! :)
    • Well... I have a Power Mac 7500. Originally a 100 MHz computer, but now it has a 400 MHz G4 processor, a USB/FireWire card, twin 9 GB SCSI drives, two graphics adapters and a 10/100 Ethernet adapter (and of course the standard 10 Mbps internal one). And... 512 MB RAM. It runs Mac OS X just perfectly, and Linux, and BeOS and Mac OS 9. Bought in the fall of 1995.

      Got to love those Macs.. quite upgradable.. who'd known? And a PCI-slot to spare..
  • While 0.13 micron/130 nanometer will help the heat on these chips, VIA has had 0.13 micron chips for about 6 months, and their sales aren't too great :-)
    While we'd like a 0.13 micron chip (that's faster than 700 MHz), a lot of people don't know what a micron is, and they're the ones buying P4s.
  • AMD's previous roadmap had the ClawHammer's debut in 2H 2002, while the new one has it straddling 2H 2002 - 1H 2003, with the multiprocessor versions definitely not coming out until 2003.

    Does this indicate unanticipated troubles with x86-64?

  • I just don't get the desire for machines faster than 600 mghz. The CPU is going at least twice as fast as any other component on a PC machine. What I did recently was buy a DDR motherboard to get ram that ran at 133 mghz (advertised at 266 mghz) and so I got an AMD 1.4 gigahertz cpu with it. One of the nice features of the motherboard was the ability to change the clock rate of cpu and bus. I LOWERED the clock rate of my CPU to 800 mghz and my machine is as responsive as I would ever want it to be.

    When I hear that Intel is charging twice the price for their 2.0 gigahertz CPU as for their 1.8 ghz, and people go out in droves to buy the 2 ghz anyway, it boggles me! Most of them don't need the speed of either CPU AND people are willing to pay 100% more money for a measly 10% performance boost.

    Ten years ago, most PC's came with a "turbo button" on the case, with the idea that only when you really had to use the cycles would you press turbo and the CPU would go twice as fast. Back then, the button was pointless because when computers were going at 66 mgz, processors would regularly be very busy. But today the Turbo Button would actually be a nice feature. When doing word-processing or surfing the web, have the machine go slow, but then when playing quake 18 (Revenge of the killer CPU), press the turbo button so the bloatware can look sweet. However, for people who REALLY NEED more power (all of the time) *couph* *couph*... SMP looks to be the far better alternative than these monster single cpu solutions.
    • The turbo button defaulted to ON. You could switch it off if you wanted to play an old, badly designed game which would otherwise be unplayably fast ;-)
    • Let me guess: you slowed the chip down to 800 MHz just so you could tell everyone that you don't need such a fast machine, and that they don't need it either. You're just like one of those obnoxious people who feel the need to tell everybody that they don't have a TV, like that makes them better than you or something.

      If you have the speed you might as well use it.

      Also, you may have lowered the stability of your machine by slowing it down that much. Certain parts of the logic need to 'refresh' to maintain their state, and when the designer assumes that the minimum speed a CPU will be sold at is 1-something GHz, they might not make sure the charge sticks around long enough to work at less than half of the intended clock speed. But you're so smart...
      • Also, you may have lowered the stability of your machine by slowing it down that much. Certain parts of the logic need to 'refresh' to maintain their state, and when the designer assumes that the minimum speed a CPU will be sold at is 1-something GHz, they might not make sure the charge sticks around long enough to work at less than half of the intended clock speed.

        Actually, I underclocked my machine because I was trying to ensure stability (not that I had any problems anyway, but it never hurts). I am told that a lot of servers are underclocked for the same purpose. I also had the idea that my CPU would run cooler, thus reducing any chance of overheating and at the same time saving electricity, since I keep my main machine on 24/7. As for instability, I have had none of that and I would not expect to. That is usually an OVERCLOCKER problem (which I believe AMD and Intel do, as indicated by the need for huge heat sinks and fans).

        • Underclocking may increase stability in small doses. 10% is probably fine, but I have worked with CPUs in the past that don't work as documented when underclocked more than 25%. Don't trust your data to an assumption. Manufacturers publish specs for a reason.
    • I LOWERED the clock rate of my CPU to 800 mghz and my machine is as responsive as I would ever want it to be.

      ooh, special. OBVIOUSLY windows doesn't need more than that... XP is indistinguishable between my roomie's tbird 1.4 and my tbird 700. However... fire up some UT. His framerates are always over 60 (as in, smooth), whereas mine drop down to about 45 sometimes. Same video card. So, why don't you do some tests like that? While you're at it, find a high polygon demo like the one in 3dmark2k1, and compare the smoothness. YOU NEED a fast cpu to come even close to smoothness.

      Regarding clock throttling... the K6-2+ and K6-3+ can do it, and hopefully standard modern processors will too. However, since they don't, why not let your CPU do something productive [stanford.edu] with those idle cycles?

      However, for people who REALLY NEED more power (all of the time) *couph* *couph*... SMP looks to be the far better alternative than these monster single cpu solutions.

      SMP requires multi-threaded apps for any benefit.

      and... it is MHz, not mghz. cough is spelled with a g.
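
      To make the "multi-threaded" caveat concrete, here's a minimal sketch using POSIX threads (compile with -lpthread); the workload and the split are arbitrary, the point is just that a second CPU only helps once the program explicitly divides its work into threads:

          #include <pthread.h>
          #include <stdio.h>

          #define N 100000000UL

          struct chunk { unsigned long start, end; double sum; };

          /* CPU-bound busywork over [start, end) */
          static void *worker(void *arg)
          {
              struct chunk *c = arg;
              double s = 0.0;
              for (unsigned long i = c->start; i < c->end; i++)
                  s += 1.0 / (double)(i + 1);
              c->sum = s;
              return NULL;
          }

          int main(void)
          {
              pthread_t t1, t2;
              struct chunk a = { 0,     N / 2, 0.0 };
              struct chunk b = { N / 2, N,     0.0 };

              /* On an SMP box each thread can land on its own CPU;
               * a single-threaded version of this loop would leave
               * the second CPU idle. */
              pthread_create(&t1, NULL, worker, &a);
              pthread_create(&t2, NULL, worker, &b);
              pthread_join(t1, NULL);
              pthread_join(t2, NULL);

              printf("sum = %f\n", a.sum + b.sum);
              return 0;
          }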
      • Oh, last night I was playing around with fans... and incredibly, everything is responsive underclocked to 500MHz, running fanless :-)

        oh wait, my 3d studio max rendering took WAY too long. it is noticeably faster at a higher clock speed.
      • "His framerates are always over 60 (as in, smooth), whereas mine drop down to about 45 sometimes. [snip] YOU NEED a fast cpu to come even close to smoothness."

        News flash: PAL framerate is 25 fps. Even the blazingly fast NTSC framerate is only 30 fps. You're claiming 45 isn't smooth, yet I've never heard anyone complain about the smoothness of television....

        • Yeah, but nobody said the two interlaced fields had to be from the same moment in time, either. IOW the effective framerate for PAL can be 50 FPS, and 60 FPS for NTSC.

          30FPS in video games isn't smooth for me.
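
          For reference, the per-frame time budget at the rates being argued about works out like this (plain arithmetic, nothing vendor-specific):

              #include <stdio.h>

              int main(void)
              {
                  /* PAL, NTSC, the "choppy" case, the "smooth" case */
                  const double fps[] = { 25.0, 30.0, 45.0, 60.0 };

                  for (int i = 0; i < 4; i++)
                      printf("%4.0f fps -> %5.1f ms per frame\n",
                             fps[i], 1000.0 / fps[i]);
                  return 0;
              }

          At 45 fps each frame gets about 22 ms; at 60 fps, about 17 ms.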
    • For most applications and uses you are perfectly correct that most people don't need more than 600 MHz or so. However, for the gamers out there the increased CPU speed does help; just check out benchmarks someplace like sharkyextreme.com. The same system with a faster CPU gets higher FPS, though of course the FPS it is getting are well beyond anything that matters. Scientific modeling and high-end graphics work are really where these CPUs come in handy. Even though they are running at those immense speeds, remember that some x86 instructions take 4 cycles to complete, and some might take more (don't have my reference handy); the inefficiency of x86 helps even out the fact that the CPU runs faster than everything else, I guess...
  • war (Score:2, Insightful)

    by VEGETA_GT ( 255721 )
    Well, to be honest, I have been watching this since the first Athlons came out and proved that the Pentium was not all that. So far AMD has been able to beat Intel at every turn. The 2 GHz P4 is still slower than AMD's Athlon 1900+ (1.6 GHz). It's not the speed, it's what you can do per clock cycle, and AMD's chips do a lot more.

    But reading the article, I find that they again go after the GHz number:

    Of these, the P4 Northwood could be the most compelling CPU release of 2002

    Their reasoning: the P4 will be pushing the 4 GHz barrier in a few months. The Athlon is planning to make some jumps as well, which makes this sound to me like the article was written by someone leaning towards the users who love big GHz numbers and not real speed.

    What makes this even funnier is the fact that most users could buy a 1 GHz chip and still play the latest games and do everything else for the next 2 or 3 years.

    my 2 cents plus 2 more
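
    The "work per clock cycle" point is just arithmetic: effective throughput is roughly clock speed times instructions per clock (IPC). A quick sketch in C (the IPC numbers are invented for illustration, not measured figures for either chip):

        #include <stdio.h>

        int main(void)
        {
            /* throughput ~ clock (GHz) * instructions per clock (IPC);
             * the IPC values below are made up purely to illustrate. */
            double chip_a_ghz = 2.0, chip_a_ipc = 0.9;
            double chip_b_ghz = 1.6, chip_b_ipc = 1.2;

            printf("chip A: %.1f GHz * %.1f IPC = %.2f billion instructions/s\n",
                   chip_a_ghz, chip_a_ipc, chip_a_ghz * chip_a_ipc);
            printf("chip B: %.1f GHz * %.1f IPC = %.2f billion instructions/s\n",
                   chip_b_ghz, chip_b_ipc, chip_b_ghz * chip_b_ipc);
            return 0;
        }

    With those made-up numbers, the lower-clocked chip comes out ahead, which is the whole argument in miniature.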
  • by El_Nofx ( 514455 ) on Monday December 03, 2001 @09:44AM (#2647526)
    Unless you are ripping Divx movies left and right or a Seti@home freak you don't need a faster cpu, It will do nothing for you. Anyone notice that you pretty much have the same Harddrive as you did with your pentium 1 120, the size has increased but if you go IDE it is still 7200rpm and the data transfer rate isn't any faster.
    It is funny: XP Pro runs exactly the same on my PII 400 with 384 MB of RAM as it does on my dual PIII 1 GHz machine with a gig of RAM. The 400 actually boots faster! So what does processor speed do for you in everyday apps? Everyone here knows exactly what I am saying. I am just complaining because we always hear about the new processor that is supposed to be so great that is coming out next year or whatever. WHEN AM I GOING TO SEE A SOLID-STATE HARD DRIVE? Sure, Serial ATA is coming up, but the transfer rate on that is only starting at 166MB/s. OK, show me a hard drive that actually needs anything better than ATA/100 first.
    The bottleneck in every modern computer is still the HD and the bus; we should fix those first and then jack up the MHz.
    • Unless you are ripping Divx movies left and right or a Seti@home freak you don't need a faster cpu, It will do nothing for you.

      I do a lot of compiling, and write numerical analysis software. Both my athlons get plenty of use. But you make a good point-- most people just don't need it.

      Anyone notice that you pretty much have the same Harddrive as you did with your pentium 1 120, the size has increased but if you go IDE it is still 7200rpm and the data transfer rate isn't any faster.

      The drive in my Pentium 133 was not 7200 rpm. Also, IDE technology has improved since the Pentium I days. And the new drives are much quieter. For the same noise and cost as an old IDE drive, one could get a SCSI disk at 15k RPM nowadays.

    • Ever compiled a Linux kernel? Done 3D rendering? I do some 3D stuff (just for entertainment, pretty crappy stuff [cmu.edu]) but the rendering is much faster on faster computers.
    • Ever done any 3d work? I do some pretty crappy stuff [cmu.edu] but faster processors = faster render time. Same with compiling stuff.

      However, I definitely agree with you about improving hard drive / memory
    • Actually, I'm moving up to Fibre Channel with software RAID 5. I fully expect to be able to get constant data transfer of over 100 MB/s. I'm afraid I won't be using XP though.

      Maybe if you dumbass consumers weren't busy jerking off to the latest IDE spec, you'd have noticed 2 or 3 alternative technologies that are quite a bit faster.
    • Unless you are ripping Divx movies left and right or a Seti@home freak you don't need a faster cpu

      Actually, these MHz wars benefit me in a very nice way. I'm still using a PIII-650 at home, but my servers have much more substantial hardware - dual CPU's are the *smallest* machines in the stack. And these MHz wars have made desktop machines that can best high-end servers of only two years ago.

      Two years ago, I spent $4,000 on a chassis and motherboard that would use quad Xeons. Add in $2800 for the processors, and that's a lot of money. Today, I can spend $200 on a dual Athlon motherboard and $500 on two chips, and have a machine that will either rival the quad Xeon or beat it in almost any situation.

      I remember when I was amazed that opening up an 800x600 JPEG took less than three seconds on a new machine. The funny thing is, at the time, I didn't mind waiting three seconds, and really hadn't even noticed that it was a wait. But once I'd used a faster chip, going back to the three-second wait really cramped my style. Even though you don't *need* a faster processor, chances are that the next time you upgrade, you'll start noticing little things like that, and say "Wow... this is nice."

      As an interesting side note, if you're looking for longevity out of a computer, go dual CPUs. I had a dual Pentium 133 w/ 64 megs sitting around that I bought for $40. For fun, I put NT4 on it, and in nearly every situation it was about as responsive as, or more responsive than, a P3-650 with Windows 98. Yes, computationally-bound processes took a while, but in sheer responsiveness it really impressed me. I think that a dual 1.6 GHz Athlon would have a tremendously long usable life span.

      steve
  • Anyone else notice that MSOfficesque error on the AMD link? Where it says "ClawHammer". I just thought that was funny.
  • Stop the train! (Score:3, Interesting)

    by niekze ( 96793 ) on Monday December 03, 2001 @11:15AM (#2648025) Homepage
    Am I the only one who notices that every week /. posts a news article about Intel or someone coming up with supar-dupar-mega-fantabulous technology that we never hear about again?

    Like New Optical DSPs With Tera-ops Performance [slashdot.org]
    Or Intel Cites Breakthrough In Transistor Design [slashdot.org]
    Perhaps Clockless Chips [slashdot.org]
    Not forgetting Intel Promises A Cool Billion (Transistors) [slashdot.org]
    Notwithstanding Intel Claims Smallest, Fastest Transistor [slashdot.org]
    But who could forget Intel Claims 10Ghz Transistor [slashdot.org]
    Which looks a lot like Intel Says 10GHz By 2005 [slashdot.org]
    But is just as vapor as Intel Creates 30-Nanometer Transistors [slashdot.org]
    or my personal favorite: Intel Goes for Display Encryption [slashdot.org]

    How can they get any work done when they're too busy telling us what they predict in a bajillion years?
    • The display encryption idea from Intel is the much-ballyhooed HDCP, which was the subject of Niels Ferguson's work a few months ago (also on Slashdot), and one week ago Slashdot posted a news story announcing that it is completely broken.

      BTW, it's not vapor. Apparently a ten-thousand-buck 42-inch rear-projection TV from JVC actually is using the piece of digital control crap.
  • by entrox ( 266621 ) <slashdot@@@entrox...org> on Monday December 03, 2001 @11:26AM (#2648133) Homepage
    I'm looking forward to ever-increasing clock speeds, as this could get us away from programming applications in a low-level language like C/C++. Let's face it: most of the bugs in current programs stem from the fact that C was not designed to handle sloppy or lazy coding. Dangling pointers, buffer overflows, memory leaks etc. result from the low-levelness of C (that's OK -- for it to be efficient it needs the ability to do all kinds of things with the hardware directly). C should only be used for developing operating system kernels and device drivers, as no higher-level language would handle those tasks well.

    Faster processors and more memory would make higher-level languages such as Lisp or Python viable for applications (such as browsers, desktop environments, etc.), which in turn would result in fewer bugs and increased stability when applied correctly. The current state of software makes me sick. I don't blame it on C per se, but on programmers using the wrong tool for the job.

    Writing in such a higher-level language would probably even increase portability (which C can't deliver by a long shot), as you would program at a higher abstraction level. No need for autoconf/automake or ugly defines scattered throughout the code, making maintenance more difficult.

    I hope that more coders switch to a language better suited than C/C++ for application development. I've switched to Lisp myself.
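
    For anyone who hasn't been bitten yet, the failure modes mentioned above look roughly like this in deliberately tiny, contrived C form; the dangerous calls are left commented out so the snippet still compiles and runs cleanly:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* 1. Buffer overflow: the destination holds 8 bytes, the source needs more. */
            char buf[8];
            const char *input = "definitely more than eight bytes";
            /* BUG (commented out): strcpy(buf, input); writes past the end of buf. */
            strncpy(buf, input, sizeof buf - 1);   /* the bound is the programmer's job */
            buf[sizeof buf - 1] = '\0';

            /* 2. Dangling pointer: p outlives the allocation it points to. */
            char *p = malloc(16);
            if (p != NULL) {
                strcpy(p, "hello");
                free(p);
                /* BUG (commented out): printf("%s\n", p); would read freed memory. */
                p = NULL;
            }

            /* 3. Memory leak: allocate, lose the pointer, never free. */
            for (int i = 0; i < 3; i++) {
                char *leak = malloc(1024);
                (void)leak;   /* BUG: intentionally never freed -- 3 KB quietly gone */
            }

            printf("buf = \"%s\"\n", buf);
            return 0;
        }

    None of this is exotic; it is just the bookkeeping that a garbage-collected, bounds-checked language does for you, at the cost of the extra cycles being argued about.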
    • by Anonymous Coward
      but I'm sick of this "C for kernel, bloatlang for everything else" BS. I'm writing an ALife sim of language evolution (for my thesis) in C and I'm thinking about writing agent ai code in *asm*, because compiler generated code won't be fast enough. I wrote simple AI apps for fun, multivariable optimization programs and finite element solvers for modelling at work, and ran sound and video editing programs, and ALL could use some more optimization at a lower level. C, asm and Fortran aren't going anywhere until compiler technology advances to the point that it can produce better code from a problem description than a human can.
      • I'm thinking about writing agent ai code in *asm*, because compiler generated code won't be fast enough.

        You're in a position to write a thesis involving AI, so I assume that you already have a bachelor's in CompSci.

        Did you learn nothing from it?

        I mean, do you honestly believe that you can increase the speed of your programs enough by abandoning an easier-to-maintain language to make it worthwhile? What happens when the next generation of compilers comes around that's faster than your hand-tooled assembler, and you have to re-write your code yet again to squeeze out those extra cycles? What if your code gets executed on a modern processor with deep pipelining, advanced branch prediction, and out-of-order execution? Are you that confident that your manual re-write will take full advantage of the hardware it's running on, moreso than a computer-optimized version?

        I'm sorry, but unless your AI consists of a very few tightly-rolled loops that you can super-optimize, I just can't see the benefit of throwing away 30+ years of compiler design experience for a theoretical gain that may or may not appear.
      • by Anonymous Coward on 10:57 03 December 2001 (#2648773)

        but I'm sick of this "C for kernel, bloatlang for everything else" BS. I'm writing an ALife sim of language evolution (for my thesis) in C and I'm thinking about writing agent ai code in *asm*, because compiler generated code won't be fast enough.

        I assume you registered at the university instead of attending as an "Anonymous Coward"--otherwise, the diploma isn't going to do you much good.

        -- MarkusQ

    • Unfortunately your wishes have already come true, and I (and many in the industry) would draw the exact opposite conclusion you have. Faster CPUs encourage laziness (which you seem to be advocating by claiming that C[++] is bad because it prohibits laziness -- which in itself is not strictly true either). Laziness is bad; look at all the bloated, useless Visual Basic code out there. Business-critical, enterprise-level software should almost never be written in a scripting language (even a good one, like Perl). Faster CPUs together with easily abused and easily learnt (at least to a basic level) scripting languages produce undisciplined programmers. These, especially MS VB script hackers (in the correct, though non-complimentary, sense of the term), are the scourge of the industry -- churning out buggy, insecure, and monstrously inefficient code. That's not to say that all VB/Lisp/Perl is bad (I've seen good VB coders take bad production code and speed up business processes by orders of magnitude), just that every language has its place and scripting languages are of little use for hard computing and business tasks.
      • Don't confuse so called "scripting languages" with high-level languages. Lisp is by no means a scripting language and can be compiled to native machine code. Perhaps Python was a bad example, so substitute Java if you like.

        Further, I was not claiming that C prohibits laziness -- it's just that laziness can produce disastrous results (buffer overflows are a good example). Laziness when writing code is _generally_ bad and should be tackled more.

        You raise some valid points, but I'd prefer a slightly slower but more stable and bug-free program over a slightly faster, unstable and probably insecure one.
        • C is *so* not a cure for laziness. Some of the most lazy and careless code I've ever seen has been C.

          It could be argued that a language that lets you express yourself tersely is less prone to laziness problems. If there aren't a million t's to cross and i's to dot, then there's no way to have a million uncrossed t's and undotted i's laying around at the end of the day.
      • On the contrary, laziness is one of the three chief virtues of a programmer.

        It's especially good when it keeps them from writing code in the first place ... less code means fewer bugs!
    • Personally, I would rather drum all the sloppy and lazy coders out of the business.
      Screw ease of programming and code readability. I want someone who is smart enough to figure it out to be coding, not some high-level-coder wanna-be.
    • Look at the transition from processor-specific assembly to (possibly portable) C code. (Were computers not originally programmed with direct binary instructions?) More software is being written in interpreted languages (such as Perl and Java) than ever before. The transition you are asking for is occurring, whether you realize it or not. It has been occurring since computers came to be.

  • I consider the Northwood to be the "real" Pentium 4, just as other second-generation products like the 100MHz Pentium and "Coppermine" Pentium III have proven to be the "real" versions of Intel processors in the past.

    I agree with this. The Pentium 4s we see today are just puppies with very big feet. They will grow up and become something much more impressive.
  • Yeah that was a bit of a troll.

    I'm currently awaiting my first new PC in a long time: Soyo Dragon+ mobo, AMD Athlon XP 1600+ with 512 MB DDR RAM, ATA/100 WD 100GB disk (yes, /me likes SCSI, but likes $$$ more), generic DVD-ROM, and Netstream2000 H/W MPEG2 board. In preparation for its arrival I downloaded a copy of Red Hat Linux 7.2 with the intention of installing it on an old spare 1.5 GB drive I had free in my old, ailing PC (Intel P200, 80 MB RAM), just to give it a whirl.

    Well, things were real tight with the small drive, and my on-board IDE controllers were acting flaky anyway, so I ended up getting a "spare" 20 GB ATA/100 Maxtor drive and a Maxtor (re-labeled Promise) ATA/100 PCI controller. The 20 GB Maxtor was now UDMA5 hde on ide2.

    The point of all this history is to illustrate that I now have a "soon to be spare" computer where the limiting factor is the CPU and, to a lesser extent, RAM. I go ahead and install RH Linux 7.2 on the new drive.

    After a bit of farkling around with kernel boot options (ide2=d000,c802 is your friend!) I boot into RH Linux 7.2, in all its X 4.0.1 glory.

    .... and it struck me as slower than RH 6.2 on the same box, running from a slower drive on a slower, flaky mobo IDE interface (prolly not even ATA/33). Not much slower, but slower nevertheless.

    I'm fairly sure that the new box would make the speed difference between RH 6.2 and 7.2 imperceptible, but the experience left me wondering about the extent of bloat in RH Linux releases. Not that I'd want to run anything significant on the P200 anymore, but I might want to use it as some type of low-duty server with an up-to-date kernel. In a nutshell: what got slower?

    No doubt, the new machine will be welcome.

    • You're absolutely right. I recently did a clean install of RH 7.2 on my Athlon 700 and it runs significantly slower than Win2k on the same machine. And to rub it in even more, I'm letting Linux run on a 10,000 RPM IBM UW SCSI drive while Win2k is relegated to a 5400 RPM IDE drive that isn't even running in DMA mode because of driver issues.

      I've been using linux since '95 and I am amazed at the fact that it runs so slowly now. I'm sure there are things I could do to speed it up. But that won't change the fact that Win2k smokes it on lesser hardware. Woe is me. I wish things were different.
      • Point noted, except you're comparing apples to oranges and I compared apples to apples.

        In my case, unless I'm doing some long-term processing, the key isn't "fastest", but rather fast enough. I wouldn't spend much $$$ to get a kernel build down to 30 seconds from a minute, for example -- a minute is fine for me for the few times that I build a kernel. 30 minutes, of course, is annoying.

        • I was just giving another example. RH 6.2 on my computer was faster than Win98SE. I recently wiped everything and went to Win2k and RH 7.2 without changing any hardware.

          Everyone who wants to see Linux succeed on the desktop (including myself) needs to recognize that all those bad words people hurl at MS won't change the fact that Linux + XF4.0 runs significantly slower on the same hardware.

          A lot of the advantages of Linux on the desktop start to disappear when you realize that it takes a lot of power to run it. It's not agonizingly slow on my computer, but it's pretty frustrating. Especially when Win2k just hums along on a slower disk with an "inferior" interface.
          • A lot of the advantages of Linux on the desktop start to disappear when you realize that it takes a lot of power to run it.

            True enough. I remember the days, probably up to RH 6.2, when [GNU/]Linux distros were generally snappier than bloated Microsoft offerings, even as the user productivity apps were less mature. It would be a sad day indeed when the standard GUI and productivity apps available under a [GNU/]Linux distro were slower just to get more features "out there" -- stick to the tried, true, and efficient until the polished can compete with its peers on performance.

  • When it comes to newer, better and faster technologies, even geeks turn out to be the ones who go against their better judgment and buy whatever shows the higher unit of measure/product number/version count.

    It's a pity; whilst there is nothing wrong with spending your time compiling that new Linux kernel every three days or so, it is plain stupid to scrap a 1400 MHz CPU for, say, an 1800 MHz CPU. The gain in work efficiency is minimal relative to the cost in an example like that.

    I have asked myself the question: what advantage will a new CPU give me? Will it make that windowed OS which I love so much boot faster? Will it make my email download faster? As funny as it sounds, that's what Intel is advertising their P4 chips with in my country.

    When I now look at how I could possibly speed up this already incredibly fast FreeBSD toy of mine even more, in terms of effective results, which steps do I need to take? First off, I need to get rid of this old and awkward IDE hard disk. Preferably I'd put in a SCSI RAID, with lots of cache on those hard disks. That would probably give me a serious advantage, probably the biggest I could achieve this easily; though that would be redundant, because my X starts in less than two seconds (with Enlightenment and GNOME) when I start it the second time anyway.
  • Cyrix and Transmeta (Score:2, Informative)

    by Anonymous Coward
    Yawn. Another article purely pushing Intel's and AMD's chips going up another notch in clock speed. In the meantime, Cyrix and Transmeta have both shipped CPUs based on new cores, the Cyrix one at 0.13 micron, and no one bothers to mention it.
  • by Animats ( 122034 ) on Monday December 03, 2001 @01:22PM (#2648955) Homepage
    The AMD roadmap [amd.com] shows their 64-bit CPU in late 2002. Is that a delay from previous announcements?

    That's too late. They need it sooner to compete with the Inanium.

  • Should I be surprised that I've never noticed the P4 ads until I saw the one popping up at the top of this thread?

    --Blair

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"

Working...