Hardware

Pentium 4 Under Linux

A reader writes "I just ran across this article over at LinuxHardware.org that reviews the Pentium 4 under Linux. It gives a lot of insite as to why anyone would want to buy a Pentium 4 and has some great clips from Alan Cox and Jan Hubicka (from the GCC team). Very thorough job."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    GCC is hardly the appropriate tool to compare processor performance.

    You want to use the best-available compiler for each processor. One that has optimisations for the specific hardware. Often it's one supplied by the vendor.

    Sorry, GCC doesn't qualify there, except for specific vendor-tweaked versions. GCC is a swiss army knife. Good enough in most cases but almost never the best.

  • by Anonymous Coward
    I'm beginning to think that many sites that are linked on Slashdot willingly shut their sites down (or at least pull the pages that were linked to and replace them with light-duty static HTML content) to avoid the potential problems. This includes break-in attempts (you think lame script kiddies don't read this site?) and the high cost of having even just a T1 connection to the 'net. Data ain't cheap, y'know. The more bandwidth they use, the more they pay through the nose.

    Slashdot ought to consider local mirrors of the pages that they link to. They're already set up to handle the bandwidth; it would be courteous and not too difficult for them to do so. (Or haven't you ever heard of `wget -r`?)

  • I presume you're not talking about Sprint's Integrated Network System Interface Terminal Equipment [acronymfinder.com], eh? I'd recommend "insight [dictionary.com]".

    Alex Bischoff
  • Actually, most applications of today are, if you are lucky, optimized for the Pentium Pro. Getting most software to support P4 will take about three to five years. Until then, you have to live with the fact that other processors perform better.
  • It appears the power of Linux Hardware is no match for the power of a good, old-fashioned Slashdotting. ;)


    Chas - The one, the only.
    THANK GOD!!!
  • Idiot savants, as I understand it, are mentally retarded yet can perform some skill extremely well. So, I supposed that it was spelling in this case (essentially memorizing letter sequences). Is this not a correct view of the disorder?

    And hey, if my C-64 had had net access instead of a 300 baud modem, I would never have upgraded.

    -Kevin
  • How the hell is that a wasteful design? It means that there is no support for the P4 arch. Get real.
  • It was a good troll until this part: You may wonder why I know this. Let's just say I have inside knowledge of Intel products. :-) All of your "facts" are public information.
  • Actually, most applications of today are, if you are lucky, optimized for the Pentium Pro.

    The same thing happens on Suns. People complain that they didn't see a big jump in speed from UltraSPARC-II based machines to US-III based... when the vast majority of programs are still compiled simply for SPARC32, i.e. not even Ultra 1 optimizations!

    Vendors take forever to optimize their products on the newer architectures.

    To stay a little more on topic:

    I did some tests encoding the same wavs to mp3s on a P2, P4 and Athlon system, and the 1.4GHz Athlon was a good deal faster (about 16%)... the Athlon had DDR RAM and the P4 had RDRAM (and 1 Gig vs the Athlon's 512 Meg). What's even sadder is the Athlon was reading the wav file via NFS and the P4 was using local disk... Ultra-160 SCSI disk (IDE on the Athlon). Of course, CPU is more important than I/O in such a situation.

  • Rate this up. Let the test results fall where they may, but if they're going to compare to a P4 with RDRAM they need to use DDR 2100 RAM on their 266MHz FSB Athlon.
  • Until AMD starts including that extremely popular processor ID# in its chips, Intel may be the only chip "allowed" to run the "reliable" Micro$oft OS's.

    We're all looking forward to that day, aren't we?

    Hmmm: Intel == Windows.
    AMD == Free O/S ?
  • by Webmonger ( 24302 ) on Sunday July 15, 2001 @09:10PM (#83695) Homepage
    As Ace's Hardware discovered [aceshardware.com], the best way to optimize is to use Intel's latest beta compiler. But you can't use this compiler to compile Linux, because Linux uses gcc-specific extensions to C that the Intel compiler does not support.
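    (For readers wondering what "gcc-specific extensions" means here, below is a toy illustration of the kind of GNU C features the kernel leans on: statement expressions, extended inline assembly, and attribute syntax. It is not actual kernel code, just a hand-written sketch; the point is that a compiler lacking these extensions cannot build such code unmodified.)

        /* Toy illustration (not actual kernel code) of gcc-specific C extensions
         * of the sort the Linux kernel relies on. Requires gcc on x86. */
        #include <stdio.h>

        /* Statement expressions: a ({ ... }) block that yields a value. */
        #define min_t(type, a, b) ({ type _a = (a), _b = (b); _a < _b ? _a : _b; })

        /* GNU-style extended inline assembly (x86 bit-scan-forward). */
        static inline unsigned long first_set_bit(unsigned long word)
        {
            __asm__("bsf %1, %0" : "=r" (word) : "rm" (word));
            return word;
        }

        /* __attribute__ syntax, e.g. forcing the alignment of a structure. */
        struct packet {
            unsigned int id;
            unsigned char payload[28];
        } __attribute__((aligned(16)));

        int main(void)
        {
            printf("%d %lu %zu\n", min_t(int, 3, 7),
                   first_set_bit(0x40), sizeof(struct packet));
            return 0;
        }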
  • It is for this exact reason that libc (and other key libraries) and kernel modules on Solaris can have platform-specific optimized versions.

    e.g.: My program links against /usr/lib/libc.so.1 but at run time some of the functions actually get run out of: /usr/platform/`arch`/lib/libc_psr.so.1

    The `arch` in this case isn't limited to sun4c, sun4m, sun4u, sun4u-us3 but can actually be a full platform spec like SUNW,Ultra-Enterprise-10000
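    (The effect can be sketched with a rough user-level analogue in C: choose an optimized implementation of a routine at run time based on the reported platform. This is not Solaris's actual ld.so/libc_psr mechanism, which does the substitution in the runtime linker, and the sun4u check below is only a hypothetical example.)

        /* Rough user-level analogue (not the real libc_psr mechanism): pick an
         * optimized implementation at run time based on the platform string. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/utsname.h>

        static void *copy_generic(void *d, const void *s, size_t n)
        {
            return memcpy(d, s, n);              /* portable fallback */
        }

        static void *copy_ultra(void *d, const void *s, size_t n)
        {
            /* Imagine a block-load/VIS-tuned copy here; we just delegate. */
            return memcpy(d, s, n);
        }

        static void *(*copy_impl)(void *, const void *, size_t) = copy_generic;

        static void select_platform_impl(void)
        {
            struct utsname u;
            if (uname(&u) == 0 && strncmp(u.machine, "sun4u", 5) == 0)
                copy_impl = copy_ultra;          /* hypothetical platform check */
        }

        int main(void)
        {
            char dst[16];
            select_platform_impl();
            copy_impl(dst, "hello", 6);
            puts(dst);
            return 0;
        }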
  • Yours was not the only one; I noticed SEVERAL posts that were modded down when they shouldn't have been. It looks to me like some moderator was simply on crack or something and marked a lot of posts as offtopic when they weren't, redundant when there was only one, and otherwise unfairly modded posts. Thankfully there are a lot of moderators to counteract the idiot ones.

    I think what must happen (probably a lot) is that someone will receive moderator access, not understand what it is, and end up clicking the nifty boxes to see what will happen. Other times, I'm sure people get bored with moderating and simply burn the points off to no good purpose.

  • Except the PIII is a PII with a faster clock. The PIII should have been a lot faster on the same optimizations, since the core did not change at all. Guess what, it was faster. Are you sure you weren't thinking of the Pentium and the Pentium Pro / PII?
  • That's corporate, marketing English, used to confound any effort to draw real, litigable meaning from an advertisement. :)

    mefus
    --
    um, er... eh -- *click*
  • There's a really long thread in the archives (some of it is still going on), but this message [gnu.org] starts in the middle. The 16 byte stack alignment is on by default.

  • Hemos is American, writing in American English, and he wrote 'insite'. Bzzt, next please.

    The best thing to do with spelling mistakes is to silently update your opinion of the writer's level of education and intelligence, and then move on to something more worthwhile.
  • Did you actually run benchmarks on a P4?
  • I didn't know "price/performance ratio" and "real-world application" were in consumers' heads. Hence the whole 1.7GHz P4 we're talking about ;)
  • That's actually a Dell problem. Check out how much crap they've loaded up by default. When I bought my Inspiron 8000 laptop from them (PIII-850), it took WinME a full 35+ seconds to load from start to finish. For kicks, I did a fresh install with WinME before I installed Mandrake, and it loaded from start to finish in under 10 seconds. Who leaves the default install from a manufacturer, anyways?
  • What is wrong with the chipset? We can use the same one that Apple uses (i.e. the one made by Motorola; it's been a long time since Apple made their own chipset).
  • Intel is not dead, nor will they ever be.

    When it comes to large corporations buying large-scale servers, they ONLY BUY INTEL. Intel has the corporate market monopolized and will continue to. AMD is JUST NOW breaking into the multiple-CPU market, and it takes some time to optimize that.

    That said, I am an AMD fan. I own a P4 1.7 and an AMD 1.3... I love both machines; they both are faster than I will ever need. There's no reason to say that AMD will kill Intel, because the cold hard facts are that they not only won't, but they can't. Intel is in other markets, not just CPUs.
  • The Macquarie, on the other hand prefers 'organise,' meaning that any Australian who writes 'optimise' ... will appear ... to be educationally challenged.

    I presume you meant to write any Australian who writes 'optimize' will appear challenged, since you said that organise was preferred.

  • The Quake 3 "source" excludes the renderer, Quake VM, and networking code -- the most "interesting" parts of the game. It's just enough for you to write a mod with, but Quake 3 engine itself is hardly "open source".
  • Maybe now, but as the article indicates: "Keep an eye on what the kernel and GCC teams produce though. A couple of releases here and there could really turn the tides on AMD."

    If the P4 is a great system for an avid gamer it will be THE system for desktop usage... games can, and I think often do, dictate what hardware people buy for their home systems.
  • I'll agree that Intel's strategy of going after clock cycles above all else is tragic. Similarly, A/I/M is chasing after that integer performance with great ferocity; their claims that their systems are fastest (which they properly scratch down to only an integer benchmark in the fine-print disclaimer, of course) depend on it.

    Now, if only I could buy a G4-based system from someone other than Apple, who presumes -- apparently with moderate accuracy -- that nifty translucent plastic will make the phrases "price/performance ratio" and "real-world application" disappear from consumers' heads... =)

    Is there someone out there selling G4 motherboards with standard form factors and accessory support at a competitive price point? Otherwise, there's no basis for comparison.
  • I seem to remember similar statements and conversations when it came to the benefits/detriments of the PIII over the PII.

    ---------------
    [Darth]Snowbeam
  • So all that fancy, schmanchy spelling I learnt in college was just a waste? Dangit. I hate it when they change the rules just after I finished something!
  • Yes, Intel does the very same thing. I should probably have made that clear. I was just refuting the claim that AMD chips do not do this.
  • I'm not going to waste my time explaining everything about the article, but I'll summarize it: Who do you think knows more about how to design a processor: A guy who makes emulators for a living, or Intel?

    Wow. I guess I'd better buy lots of RDRAM then, since Intel says it's great. I guess I'd better stop buying Athlons, since Intel says Pentium 4 is better.

    The emulators guy explains in detail why the Pentium 4 sucks, with examples, so we don't just have to take his word for it. Could you summarize those examples in one sentence for us too?

    Did you know if the L1 cache on the Pentium 4 was increased, the latency also increases? Did you know that the higher latency would hurt performance more than the additional cache?

    The Athlon has a much larger cache than the Pentium 4 and it out-performs the Pentium 4 at equivalent clock speeds... and I'm sure you don't want to waste your time explaining how this could be true.

    steveha

  • The Intel engineers looked at the pros and cons and decided on the lowest figure for L1 cache. There's no reason why they wouldn't include the extra space if there was a noticeable performance delta with it.

    The Pentium 4 is huge, which makes it more expensive to produce. I'm sure Intel was trying to shrink the die size a bit when they pared down the trace cache to 8K, and thus keep costs more under control. That's not "no reason".

    From the preliminary benchmarks, RDRAM /is/ better than SDRAM now that the front-side bus is fast enough for the extra bandwidth to matter.

    For certain problems, RDRAM is better. In particular, for cranking through lots of data in a sequential order (e.g. encoding or decoding compressed audio or video!) RDRAM is faster. But for random access to data, DDR SDRAM will crush RDRAM due to much lower latency.

    The emulators.com guy is just pissed off because the Pentium 4's core doesn't work as well with emulators as the P6 core did. It's more for multimedia, not for heavy logic programs like emulators are.

    This is just another way of saying that the Pentium 4 is broken except for multimedia, which is pretty much what I have been saying all along. The Athlon has all-around good performance, and if you look at price/performance ratios, the Athlon totally wins.

    steveha

  • Again, with lack of execution units he's focusing primarily on the weak FPU, and ignores the very fast SSE stuff. With the release of ICL5 which is smart enough to parallelize loops for SSE2 by itself, there's no excuse for that.

    I disagree completely. SSE2 is not the solution to all problems, and besides one of his big points was that the Pentium 4 loses on code that ran fast on earlier chips. Code that runs fast on a Pentium Pro runs even faster on a Pentium II, for example, but with the Pentium 4 that is no longer true. But the Athlon runs existing code very quickly. It's just not good enough to say that because SSE2 can run fast, and there exists a compiler that takes advantage of SSE2, that the Pentium 4 isn't broken.

    And he didn't so much blast a "lack of execution units" as the lack of ability to keep them all working. The Pentium 4 can only feed RISC micro-ops to 3 execution units in one clock cycle. Also bad, the Pentium 4 can only decode a single x86 instruction per clock, so instructions that aren't already in the trace cache are unduly expensive.

    steveha

  • The Pentium 4 is still broken. It's not what it could have been, and the Athlon is better. But to quote myself from the top of this thread:

    the chips are so fast these days that few people will really notice any difference between a good AMD system and a good Intel system.

    You and I seem to agree on what the situation is. The difference is that I hold the Pentium 4 in contempt for being broken, and you seem to think it is a good-enough design. I don't think either of us will convince the other.

    steveha

  • by steveha ( 103154 ) on Sunday July 15, 2001 @07:08PM (#83718) Homepage
    The article talked a bit about how future versions of gcc and the kernel will be working to take better advantage of the Pentium 4. That's sort of nice, but it doesn't really matter because the Pentium 4 is still broken.

    The Pentium 4 has several glaring faults that cripple it.

    the level 1 cache is way too small

    it can only pass the decoded micro-ops to 3 of its internal execution units per clock, so it can only execute 3 micro-ops per clock (compare to the Athlon, with up to 9 micro-ops executed per clock)

    instructions that execute very quickly on other Pentium chips now execute slowly (in particular, anything involving bit-shifting)

    These faults and more are discussed here [emulators.com].

    Unlike the Pentium 4, the Athlon executes existing x86 code very quickly. You don't need fancy optimization tricks to get code to run fast on an Athlon; it has no major faults to work around.

    A Pentium 4 system, with its expensive high-speed RDRAM, will be very fast for certain uses. And it has the lead in raw clock speed. If Intel can crank the clock speed way up, say to double what AMD can do, it won't matter that the Pentium 4 is broken; it will still be the fastest chip you can get. I predict this will not happen; AMD will continue to make ever-faster Athlon chips, which will remain competitive with anything Intel can make. (And of course if you look at the performance-over-price ratio, the AMD chips totally crush the Intel chips.)

    Of course, it must be said that the chips are so fast these days that few people will really notice any difference between a good AMD system and a good Intel system. The AMD may out-benchmark the P4, but if both of them can run Quake 3 nice and fast, few people will actually care about the differences.

    steveha
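    (To make the third fault above concrete, here is a tiny shift/rotate-heavy C loop, the sort of integer code the emulators.com critique says regressed on the Pentium 4. It is only meant to show what "code full of bit-shifting" looks like; it is not a rigorous benchmark, and timing it on real hardware is left to the reader.)

        /* A hash-style inner loop dominated by shifts and rotates. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t mix(uint32_t h, uint32_t x)
        {
            h ^= x;
            h = (h << 13) | (h >> 19);   /* rotate left by 13 bits */
            h += (h >> 7) ^ (h << 3);    /* more dependent shift work */
            return h;
        }

        int main(void)
        {
            uint32_t h = 0x9e3779b9u;
            for (uint32_t i = 0; i < 100000000u; i++)
                h = mix(h, i);
            printf("%08x\n", h);         /* print so the loop isn't optimized away */
            return 0;
        }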

  • by steveha ( 103154 ) on Sunday July 15, 2001 @06:43PM (#83719) Homepage
    The AMD core is primarily x86, where the P3's and P4's are more RISC-like.

    This is so wrong. The AMD core breaks up an x86 instruction into RISC-like "micro-ops" or ROPs, and then various RISC-like execution units go to work executing the ROPs. Up to 9 ROPs can be executed at the same time! This is why the Athlon so thoroughly stomps all over the Intel chips at equivalent clock rates--the AMD chips can get more done per clock. This is especially true for floating point, where the Athlon can execute 3 floating point instructions at once.

    Full details here [anandtech.com] in the AnandTech [anandtech.com] article. I linked to page 8, the one that has the discussion of how instructions get executed.

    This is the reason why Pentiums cost more than AMD's

    Total nonsense. Intel chips cost more because Intel charges more. The Pentium 4 is expensive because its die size is freaking huge.

    Let's just say I have inside knowledge of Intel products. :-)

    You don't seem to know very much about AMD products.

    steveha

  • by 11thangel ( 103409 ) on Sunday July 15, 2001 @11:46AM (#83720) Homepage
    The P4 has all the 3D optimizations, just like the old P3's. The only thing is, most of the programs (not all, but most) that depend on those optimizations and don't use Athlon optimizations were originally designed as Wintel programs, like Quake 3. Those programs are also available as binary only, not source. While the P4 is apparently a great system for an avid gamer, the AMD line will probably remain cheaper and more useful to *nix developers like myself.
  • Pentium 4 SUCKS! (Score:2, Insightful)

    Does anyone else find that funny?

    Anyway, I didn't even know there was a 1.2GHz Pentium 4.

  • I really wish people would stop linking to the emulators.com article, it's akin to linking to the Weekly World News.

    I'm not going to waste my time explaining everything about the article, but I'll summarize it: Who do you think knows more about how to design a processor: A guy who makes emulators for a living, or Intel?

    Example: The L1 cache thing. Did you know if the L1 cache on the Pentium 4 was increased, the latency also increases? Did you know that the higher latency would hurt performance more than the additional cache? Probably not, but then again, neither did this emulators.com guy. Why? Because he designs emulators for a living, not microprocessors.

  • Oh please. The situations are identical.

    Intel is Nvidia, and AMD is ATI. ATI has very promising upcoming chips and some alternative solutions to fixing the problem, just like AMD. Intel and Nvidia are both the dominant makers, but ATi/AMD are gaining on them.

    Anyway, AMD is adopting SSE/SSE2 now too, so why WOULDN'T you optimize for it? You're not just optimizing for the Pentium 4, you're optimizing for all future Intel 32-bit processors, and probably upcoming AMD 64-bit processors too.

  • The Pentium 4's L1 cache is significantly different than the Athlon's. The Pentium 4's is a trace cache.

    The Intel engineers looked at the pros and cons and decided on the lowest figure for L1 cache. There's no reason why they wouldn't include the extra space if there was a noticeable performance delta with it.

    Sometimes you just gotta think things through logically...

    BTW, SDRAM support will be here in 1-2 months. From the preliminary benchmarks, RDRAM /is/ better than SDRAM now that the front-side bus is fast enough for the extra bandwidth to matter.

    The emulators.com guy is just pissed off because the Pentium 4's core doesn't work as well with emulators as the P6 core did. It's more for multimedia, not for heavy logic programs like emulators are.

  • "The restrictions on ordering of instructions is again something of a compiler issue, IMHO. For example, if you're dealing with a RISC processor, ordering of instructions is very important. It's something the compiler can do before hand - the job doesn't need to be done by the CPU. Leaving it to the compiler saves transistors which can then be used by something else, like the SSE2 units which the article glosses over.

    Again, with lack of execution units he's focusing primarily on the weak FPU, and ignores the very fast SSE stuff. With the release of ICL5 which is smart enough to parallelize loops for SSE2 by itself, there's no excuse for that.

    The fourth point was the small instruction cache. Intel doesn't use a normal instruction cache on the P4, it uses what it calls a trace cache. P2, P3, P4, Athlon, they all decode instructions into smaller micro-ops, as you know. Unlike the other instructions, the P4 doesn't cache x86 instructions at all. It caches the decoded micro-ops in the trace cache instead, saving the job (and several pipeline stages) of decoding instructions. The theory is that because the P4 works, for the most part, on the level of micro-ops instead of normal instructions as earlier instructions are, it doesn't need as much cache."
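    (For anyone who hasn't seen it, "parallelize loops for SSE2" means roughly the following: process two doubles per instruction using the emmintrin.h intrinsics. This is a generic hand-written sketch, not output from ICL5; a vectorizing compiler is expected to produce the equivalent from the plain scalar loop on its own.)

        /* Hand-vectorized SSE2 sketch: add two arrays of doubles, two at a time. */
        #include <emmintrin.h>
        #include <stdio.h>

        static void add_arrays(const double *a, const double *b, double *out, int n)
        {
            int i;
            for (i = 0; i + 2 <= n; i += 2) {
                __m128d va = _mm_loadu_pd(a + i);           /* load two doubles */
                __m128d vb = _mm_loadu_pd(b + i);
                _mm_storeu_pd(out + i, _mm_add_pd(va, vb)); /* two adds per instruction */
            }
            for (; i < n; i++)                              /* scalar tail */
                out[i] = a[i] + b[i];
        }

        int main(void)
        {
            double a[5] = {1, 2, 3, 4, 5}, b[5] = {10, 20, 30, 40, 50}, c[5];
            add_arrays(a, b, c, 5);
            printf("%g %g %g %g %g\n", c[0], c[1], c[2], c[3], c[4]);
            return 0;
        }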

  • This is just another way of saying that the Pentium 4 is broken except for multimedia, which is pretty much what I have been saying all along. The Athlon has all-around good performance, and if you look at price/performance ratios, the Athlon totally wins.

    The only apps that the consumer needs that demand raw CPU power are multimedia apps. You do not need a several GHz processor to run business apps. You need it for: Gaming (ala Quake III), Encoding (ala FlasK), Decoding, etc.

  • 25MB/sec? Is that an IDE disk? No shit. 30-35MB/s is all a top-notch IDE/ATA100 disk will do, and that's at the beginning of the disk. You'd better use SCSI RAID for your movies.
  • actually, the American spelling is insight, insite is probably an British spelling. .

    Fantastic. Good luck in high school.

  • By the time there is software out there written to exploit the P4 architecture, the P5 will already be on the market. Do you get, today, software written especially for the P3?
  • The AMD core breaks up an x86 instruction into RISC-like "micro-ops" or ROPs, and then various RISC-like execution units go to work executing the ROPs. Up to 9 ROPs can be executed at the same time! This is why the Athlon so thoroughly stomps all over the Intel chips at equivalent clock rates

    Bull. Intel does the very same thing. Maybe not as well, but they both use RISC cores, translate x86 to RISC internally and benefit from it by having multiple execution units and deep pipelines.

  • Possibly because it's fairly easy to figure out. Hence, having a post about something that everyone already knows is redundant. Likewise, if there was an article linked to http://www.cnncom, then posting "it should be http://www.cnn.com" would be redundant.

    The only "intuitive" interface is the nipple. After that, it's all learned.
  • Well, the Slash source is available [slashcode.com], nobody's stopping you...

    The only "intuitive" interface is the nipple. After that, it's all learned.
  • The P4 streams memory faster than the K7, but most applications don't only need raw sequential throughput. The problems with the P4:

    1. the long pipeline means if you stall or miss a branch prediction you lose a lot more cycles

    2. the L2/L1/trace caches are too small and programs will wind up going to main memory

    3. RDRAM is great for streaming sequential bits, but it has high latency for random access. The P4 needs a much larger L2 cache to sit in front of the RDRAM to reduce the random access to main memory.

    So that 3.2GB/s figure is not the whole story.

    The P4 Xeons have the potential to be great chips if they get some more cache on them. They're going to get a die shrink and more L3 which will help greatly. They could also use larger L2/L1/trace caches to reduce cache thrashing during context switches. The vanilla P4 will probably always have too little cache and will suck hard, though.

    It looks like Intel made a decision to go after the high end of the market in a few years time and in the mean time to produce crippled chips that just have really high MHz ratings. And I'd guess that they're going to be fucking the consumer market over pretty hard for awhile to come.

    What I'd really like to see: the interleaved DDR SDRAM from the nForce chipset in a multiprocessor server chipset like the 760MP. Ideally something like 4x DDR interleaving with a quad CPU chipset.... *droool*
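    (A minimal C sketch of why point 3 matters: sequential streaming exercises raw bandwidth, while random pointer chasing is dominated by access latency. The array size and visiting order are arbitrary and the timing is coarse; this is an illustration, not a calibrated benchmark.)

        /* Sequential streaming vs. random pointer chasing over the same array. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 22)              /* 4M ints, far larger than any cache */

        int main(void)
        {
            int *a = malloc(N * sizeof *a);
            int *next = malloc(N * sizeof *next);
            long sum = 0;
            if (!a || !next)
                return 1;

            srand(1);
            for (int i = 0; i < N; i++) {
                a[i] = i;
                next[i] = rand() % N;    /* random visiting order */
            }

            clock_t t0 = clock();
            for (int i = 0; i < N; i++)  /* sequential: bandwidth-bound */
                sum += a[i];
            clock_t t1 = clock();
            for (int i = 0, j = 0; i < N; i++) {  /* random: latency-bound */
                j = next[j];
                sum += a[j];
            }
            clock_t t2 = clock();

            printf("sum=%ld sequential=%.1f ms random=%.1f ms\n", sum,
                   (t1 - t0) * 1000.0 / CLOCKS_PER_SEC,
                   (t2 - t1) * 1000.0 / CLOCKS_PER_SEC);
            free(a);
            free(next);
            return 0;
        }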

  • Well the site's slashdotted anyway.
  • I don't think the G4s are 64-bit, are they?
  • Should every vendor make a special version for every modern architecture?

    Why not? If you are shipping your software on CD, or even more so on DVD, you could put half a dozen optimised versions on there.
    _O_

  • by small_dick ( 127697 ) on Sunday July 15, 2001 @11:49AM (#83737)
    for even more insite, try learning to spell [learntospell.com].


    Treatment, not tyranny. End the drug war and free our American POWs.
  • How was that modded as redundant? It was the second post.

    Now THIS post is redundant.

  • by prog-guru ( 129751 ) on Sunday July 15, 2001 @11:41AM (#83739) Homepage
    I assume http://www.linuxhardware.org%3C/a means http://www.linuxhardware.org ;)
  • Did anyone ever consider that there is no good reason to recompile for the P4? THERE ARE OTHER CHIPS IN USE!! Should every vendor make a special version for every modern architecture? Most people I know are still running PII's and III's.... ATM I'm on a 200MHz Pentium because my Athlon's in for warranty. Maybe games and renderers could be released in multiple binaries... but generally it's not worth it. Another good thing about open source, though, is that my entire system is custom compiled :-D
  • by BadBlood ( 134525 ) on Sunday July 15, 2001 @07:18PM (#83741)
    It seems to me that the P4/Athlon debate has brought out a lot of bashing of the P4, as it benchmarks slower than comparable or even slower Athlon CPU's.

    These same people, however, don't seem to be bashing the GeForce 3, which in many cases benchmarks slower than some GeForce 2 ultra cards. Sure, it's OK for a video card to change its architecture but not the CPU????

    People seem to understand that eventually the GF3 will be the card to get IF games are written to that architecture. The same could be said of the P4 IF APPS are written to the new architecture.

  • You hardly need to be a Savant to know how to spell. And besides, aren't Slashdot editors supposed to actually edit the submissions?
  • Well, given that a 64-bit processor can do 64-bit calculations in the same time as a 32-bit processor of the same clock speed can do 32-bit calculations, I'd say it's hardly something worth 'getting over'.
  • While you're accurately describing the situation today, it need not stay this way. There are some very interesting projects, out in academia, which might address this very important issue.

    Take a look at "slim binaries" and "Dynamic Code Reoptimizers" here [uci.edu] for a starting point.

    The interesting aspect to this, from a social and economic perspective, is that it is projects like this which could reduce the benefit of any existing monopolistic position on the desktop. Given this, I'm somewhat saddened that these ideas haven't been picked up by companies like SUN or communities like Linux. Perhaps this really isn't ready for 'prime time', but cash and interest from SUN could go a long way to aiding this work.

    Intel would also gain from this. As you've pointed out, software tends to be optimized towards the least common denominator of hardware. That eliminates much of the advantage of newer architectures. Techniques such as these would increase the incentive for hardware upgrades, as existing softwares' performance would be immediately improved.

  • by BiggestPOS ( 139071 ) on Sunday July 15, 2001 @11:54AM (#83745) Homepage
    According to a lot of benchmarks I've run, my Pentium 3 850 is FASTER when running at 950 (112MHz FSB) than a Pentium 4 1.2GHz. Now THAT'S a wasteful architecture. Sure, the longer pipeline will allow the clock speeds to hit the stratosphere, but until they do, I'd stay away from this paperweight.

  • I was intreaged by this lynk, since my speling is quiet attrocious... But it seems to be a ded link!

    ---

  • by IvyMike ( 178408 ) on Sunday July 15, 2001 @12:44PM (#83747)

    Somebody needs to work on an ispell module for slashcode; in theory it shouldn't be that difficult. Put computers to work for you. Everybody would be happier, and would look smarter to boot!

  • Under Linux, I would not buy a P4. It is just too damn expensive. If you want performance and are running something other than Win 9x, go with the dual Athlon. It kicks butt and costs less than a P4. The only thing faster right now is the dual 1.7 P4 Xeons, but you could buy at least a couple of dual Athlons and cluster them for the price of a dual P4 Xeon.
  • I agree, as my speeling sucks; however, aspell is far better at guessing my random spellings than ispell is.

  • This is sooooo true. Besides the better design and better usage of megahertz, the chip is also 64-bit and more modern than the ancient x86 that still has to do an 8088 JMP instruction to get to the boot loader! Now I'm a fan of x86 due to the price/performance ratio... besides, I don't want to buy from Apple, else I'd get a PowerPC myself. :-/

    This comment should be "Score 1; Duh!"...

  • Three to five years?!? What makes you think there won't be a Pentium... let's say "8"... by that time? (Considering Intel's marketing guys keep the lame naming scheme.) Actually, I remember being asked what processor I had by a non-techie. I said "Pentium 100", which obviously meant Pentium generation 1, 100 MHz. He/she didn't get it, having heard the latest generation of Pentium was the 3rd. I now have a Duron. What's that? 686? 786? "The budget model of a P-III compatible." or... "Pentium II with MMX+, 3DNow and 3DNow+" (according to wCPUid).
  • Sounds a little exaggerated. Which benchmark tests? Often these "runs 80% better than" claims are based on some obscure benchmark that has nothing to do with running real software.

    Of course, those claims do sell computers.

  • If I recall, the article said the G4 was about 25% faster per megahertz than an equivalent P3.

    Most Apple benchmarks only do tests with Adobe Photoshop, sadly, but they are better processors; they have a lot fewer transistors and use less power. I wish I could remember the URL to show you. Apple has some multimedia extensions built into the chip that Photoshop uses, which are far superior to MMX2 in the P3's and make it run Photoshop really well. Anyway, Apple pressured Motorola to make faster G4's to combat the high-speed P3 problem. The newer 733MHz G4's are out and should be close to the same speed as a 1GHz P3 or 1.4GHz P4 for ordinary unix/app use. I am sure for a Photoshop user the results would be even better. Apple should have seen this coming. The G4 and G3 PowerPC processors are truly RISC, unlike the P3 and P4, which are a combo CISC/RISC.

    If I had money to burn I would love an Apple PowerBook, where I could save battery power due to the fact that the PowerPC has fewer transistors and runs equivalently at fewer megahertz. Running Linux on it, of course.

  • Only available as binary? Better choose a better example than Quake 3 as the following page has a link to "Quake 3: Arena 1.17 Game Source":

    http://www.idsoftware.com/archives/quake3arc.html

    Here is a direct link to the source, albeit, I haven't gotten the link to work, but lots of ftp links on ID's site seems to be broken lately:

    ftp://ftp.idsoftware.com/idstuff/quake3/source/q3agamesource_117.exe

    Harold
  • A programmer who doesn't understand the architectures he works with won't produce code that gets the best performance.

    This is not quite true - I guess certain things are slow on any architecture (bad algorithms), and the compiler/interpreter should be the one to decide what works great on the platform in question. Nowadays people just don't have time to optimize, and it's a bad idea anyway - look at The art of unix programming [tuxedo.org].

    The number of threads is one thing to consider, though... and anyone knows that more processors => better multithreading.

    I think programmers (and geeks in general) know stuff about processors because it's interesting and fun, not because they really need to.

  • The difference is, certain organizations are /.-approved and others are not:

    Approved:

    AMD, nVidia, Transmeta

    Not Approved:

    Intel, Microsoft, ATI

    Regardless of what these companies do, the response on /. is determined by which list they are in.

  • People only look at the clock speed when picking out a machine. Here's a did you know: The 500 MHz G4 processor by Apple performs roughly the same in benchmark tests as the 1 GHz Pentium III.

    Which benchmarks would those be?
  • Not quite true. Not everybody shops for speed - some people (e.g. my non-geek friends) shop for speed+reliability. If their system freezes when they are trying to sell tumbling stock options they would lose much more than the cost of that frigging Pentium 4.

    By the way, do you think that people buy ECC memory only for servers?

  • Is this (optimise) a new spelling? (It's spelled this way many times in the article.)

  • Take a quick look on Pricewatch. Does anyone do their research and form their own opinions anymore, or just automagically adopt the high-scored ones on Slashdot? Intel's slashed prices to unprecedented (for them, anyway) levels... obviously in response to AMD, but nonetheless, the _PRICE_ motivation simply isn't as great as it used to be.
  • Actually, spell checkers don't address the more odious problem of people making the "you're" vs. "your" style of mistake, which seems to be popular these days...
  • by darkov ( 261309 ) on Sunday July 15, 2001 @12:08PM (#83762)
    Looks like the P4 goes pretty much as fast as anything else unless they turn on the chip-specific optimisations, but I don't think that will matter at all, since the average PC purchaser will look at the 2GHz(ish) ratings and go "Ohmygod - must be weelly fast!" I'm surprised they didn't have a 40-stage pipeline and really get people excited.

    I wonder if they'll consider on-board 802.11b when they hit 2.5GHz?
  • by ryants ( 310088 ) on Sunday July 15, 2001 @01:42PM (#83763)
    All the talk of SSE and SSE2 was fairly interesting, but for us user space coders it's pretty useless since gcc doesn't properly align stack variables on x86 (see GNATS [gnu.org], problem report 3299, as well as this [gnu.org], this [gnu.org], and this [gnu.org].)

    If any gcc hackers out there are reading, just let me know where to start poking and I'll try and implement a solution.

    Ryan T. Sammartino
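    (To illustrate what the 16-byte alignment requirement in those reports means in practice: the aligned SSE load/store forms such as _mm_load_ps fault on addresses that are not 16-byte aligned, so if the compiler cannot guarantee stack alignment, code has to force it explicitly or fall back to slower unaligned loads. A hand-written sketch, not code from the cited reports.)

        /* SSE requires 16-byte alignment for its "aligned" load/store forms. */
        #include <xmmintrin.h>
        #include <stdio.h>

        int main(void)
        {
            /* Force the alignment ourselves, since the stack may not provide it. */
            float buf[4] __attribute__((aligned(16))) = {1.0f, 2.0f, 3.0f, 4.0f};

            __m128 v = _mm_load_ps(buf);   /* aligned load: faults if buf is misaligned */
            v = _mm_add_ps(v, v);          /* double each element */
            _mm_store_ps(buf, v);          /* aligned store back */

            printf("%g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
            return 0;
        }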

  • But if it can't run my current software faster it does suck. I don't buy new processors that require new software to get better performance. I buy what is a better solution for my needs now. Software optimization is almost always a generation behind the processors and that means radical changes are generally a waste. There is no compelling reason for me to purchase a P4 right now. I will reconsider the situation when I do upgrades in a couple of years not the upgrade I am doing this year. But, I'm betting that in a couple of years my decision will be to purchase a Clawhammer or even a Sledgehammer.
  • Duh, where did you get the idea that everyone who reads /. is a programmer? Second point, any really good programmer will know quite a bit about chip architectures and what works best in a given situation. A programmer who doesn't understand the architectures he works with won't produce code that gets the best performance. A real nerd, geek, or hacker, depending on your bent and preference, will know quite a bit about software and the hardware it runs on. You can be competent and not know about both, but you will never be really good without knowing a lot about both.
  • I don't think that was the processor ID they were talking about, since only a few Intel chips have them. They're probably using the ID that every processor has that tells its make, model, stepping etc.
  • Ah but was it a spelling mistake or a very subtle pun?
  • I presume you meant to write any Australian who writes 'optimize' will appear challenged

    Oops, My bad! Yes Australians (who accept the authority of the Macquarie) would use the 's', not the 'z' for this kind of word.

  • Is this (optimise) a new spelling?

    Quite the opposite. 'Optimize' is the websterised (sic) version of the older 'optimise' (and so on with the whole lot of 'organise,' 'antagonise' etc etc).

    The use of 'z' in place of the 's' has traditionally been indicative of an American author. AFAIK, the latest edition of the OED lists 'organize' as the primary spelling and 'organise' as the variant, meaning that all right-spelling Englishmen should now write 'optimize.' Though the more sophisticated (and those who won't allow the editorial boards of dictionaries to dictate their spelling to them) will continue with 'optimise,' if only to demonstrate their sophistication. The Macquarie, on the other hand prefers 'organise,' meaning that any Australian who writes 'optimise' (unless for US publication) will appear, in the eyes of more erudite compatriots, to be educationally challenged.

    This, BTW, is why it is a really stupid idea to automagically spell check submissions. There's more than one way to spell 'colour'.

  • Oh, there WILL be a "Pentium 8" around that time. That isn't the point really.

    The point (more of a dilemma actually) is that the developers at gnu and microsoft are extremely slow at implementing the optimisations the newer processors support. And even if they catch up, the applications need to be recompiled to actually take advantage of it. So there's a lot of ancient software out there, compiled for obsolete P2's and stuff.

    There's really not much you can do about that situation, so we'll just have to take that for granted.

    Now, as hardware developer (intel, amd) you've got the dilemma and basically two choices -

    A) Try and tune that old engine some more. Same core, smaller circuits, higher mhz, etc. Try to run existing 386 code as fast as possible.

    B) Implement those new state-of-the-art designs, instruction sets, etc. At the cost of losing backwards compatibility, and with the knowledge that software needs to be properly optimized to get the most performance.

    Intel chose B. Their P4 is slower than Athlons and sometimes even P3's at executing old code. But when software is properly optimized for the P4, the thing whoops ass. (See graphs in article.)

    However, all that people see now are the lower results, and for that Intel is slaughtered. But, IMHO, in the long term Intel made the right choice.

    Saying that the P4 sucks even before there is software out there that can properly drive it would be unwise. I don't think the P4 will go down in history as a crappy processor - I think it just needs a while to warm up and start kicking ass.

    Don't get me wrong here - I'm not saying it is better than the AMD solutions, I'm just saying it doesn't suck as hard as some people here think it does. It's their software that just can't handle the P4 correctly.
  • You should turn that around. Your software is crappy and needs an upgrade if it can't properly drive the latest generation of processors.

    A Ferrari doesn't suck just because you don't know how to work the stick.
  • The difference is that nVidia holds a large enough share of the high-end gaming market (their main target market) that the emergence of GeForce3-optimized games is guaranteed to happen soon. This is because graphics-intensive 3d games actually require all the raw graphics processing speed they can lay their hands on. Games rely on image far more than apps, so games shops will be scrambling to make use of the GeForce3 optimizations so they can include badass gfx fx in their products for display on the TV's at Software etc.

    With P4, Intel no longer holds enough of the market to force software shops to use their extensions. With a few limited exceptions, the cost of developing a separate app to take advantage of the P4 optimizations would be too high to justify the expenditure of extra resources. Also, most apps do not require massive speed. Basically the only consumer-level products that actually need huge processing power are games, and the graphics card is more important than the CPU at that point.
    ...
    string* plamenessFilter =

  • The flaw in your example is that AMD's processors, aside from anyone's proprietary optimizations, are just as good (sometimes better, sometimes worse... usually better for my purposes). With ATI vs nVidia, ATI makes budget gfx cards, and it shows. If they make a successful move into the high-end gfx card market, more power to them, but at the present time they don't have anything that can seriously challenge the GeForce3.
    ...
    string* plamenessFilter =
  • Except the PIII is a PII with a faster clock.

    What about SSE instructions? What about on-die cache running at the same speed the CPU does? What about...

    The PIII should have been a lot faster on the same optimizations, since the core did not change at all. Guess what, it was faster.

    Indeed, it compiles a 2.2.14 kernel an average 1.5% faster, if I remember my initial benchmarks correctly (comparing a PII450 and a PIII450). The Coppermine core, and the increase in bus speed from 100 -> 133MHz, was a bigger step forward, despite the fact that the Katmai version proudly got a PIII label, while the Coppermine was announced as an "evolution of the PIII". Oh well, it's all about marketing I guess.

    The P4 actually is a completely different architecture. Comparing it to a PIII does it no justice whatsoever. The marketing guys have one major advantage: the increase in speed is a real bonus.
  • by pjgunst ( 452345 ) <`pjgunst' `at' `skynet.be'> on Sunday July 15, 2001 @12:28PM (#83776)
    Errr, I tend to disagree on this.
    1) First of all, the Pentium4 is indeed slower according to some benchmarks. And indeed, it doesn't perform as well as you might have expected. Why? Because of its "revolutionary" design. It's a completely different architecture; you may want to go to the specs on Intel's website for more detailed info on this (I did).
    2) The P4 outperforms the P3 when it comes to memory-intensive applications. Using the Intel850 chipset, it has far superior memory bandwidth. An Intel845 chipset is in the making, which will be able to use more common SDRAM instead of Rambus. Although this solution might be less expensive, it will seriously hurt performance. Intel has finished the design of a similar board using DDR chips. This will by far be the most cost-effective solution. Don't expect it before Christmas though, since they have a deal with Rambus until 2002.
    3) Bandwidth from CPU -> northbridge: a stunning 3.2GB/s
    4) If you're running an open-source OS, no one's gonna stop you from recompiling the source and optimizing programs for your architecture. I would.
    5) The P4 currently sold, as well as the mainboards, don't offer an upgrade path. If you upgrade regularly, I'd stick with AMD for a while. Intel will soon release a different chipset and a new version of the P4.
    6) Needing a little more beef than a uniprocessor platform? I would wait a little. Since AMD designed their multiprocessor chipset to scale beautifully (2 CPUs / northbridge), one would expect some mainboard manufacturer to design a "hot rod" with at least 4 or 8 CPUs in the near future. The P4's future is uncertain. I really don't know what kind of rabbit Intel will pull out of their hat to counter AMD.
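    (On point 4, a sketch of what "recompile and optimize for your architecture" looks like in practice. The exact -march names depend on the gcc version: pentium4 support only arrived with gcc 3.1, and most distributions still ship generic i386/i686 builds, which is part of the problem being discussed.)

        /* saxpy.c: the kind of inner loop whose code generation -march affects.
         * Possible compile lines (flag names vary with gcc version):
         *   gcc -O2 -march=i686     -o saxpy saxpy.c   # generic Pentium Pro-era code
         *   gcc -O2 -march=pentium4 -o saxpy saxpy.c   # P4 scheduling, SSE2 available
         *   gcc -O2 -march=athlon   -o saxpy saxpy.c   # tuned for the K7
         */
        #include <stdio.h>

        static void saxpy(float a, const float *x, float *y, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        int main(void)
        {
            float x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
            saxpy(2.0f, x, y, 4);
            printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);
            return 0;
        }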
  • actually, the American spelling is insight, insite is probably an British spelling.
  • What hardware was your software written for? Probably a Pentium II. Most of today's software doesn't fully support the P4, hence apps written for previous chips may have varying results on the new chip. Until there is software out there written to exploit the architecture of the Pentium 4, most apps may still be better on a P III.
  • oops. I was originally gonna say it was Australian, but then something made me change it to British. Good thing all I do bad at are english classes, not classes on programming ;)
  • I don't think they are. There aren't many 64-bit processors yet that are designed for desktop machines.
  • The ones I did at work with some simple algorithms using Linux and OS X. Same source code for both programs, compiled on each platform using gcc 2.96.
  • Check the lenses in your glasses. I was comparing G4s to P3s.
  • That would explain why the most moderated posts are those found near the top of an article. Also modded are the accompanying threads of replies attached to them.
  • by jeffy124 ( 453342 ) on Sunday July 15, 2001 @12:26PM (#83785) Homepage Journal
    You're absolutely correct. People only look at the clock speed when picking out a machine. Here's a did you know: The 500 MHz G4 processor by Apple performs roughly the same in benchmark tests as the 1 GHz Pentium III.

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...