Hardware

x86 vs PPC Linux benchmarks

Jay Carlson writes "We've all heard about how Apple's hardware is really fast compared to PCs. ("Supercomputer!" "Twice as fast as a Pentium!" "Most powerful laptop on the planet!") So, if you *aren't* going to use Photoshop and Final Cut Pro, how fast is it? I care more about integer apps like compilers, so I did some careful benchmarking of a few x86 and PPC Linux boxes. Submissions welcome."
  • by Anonymous Coward
    I would have thought that, by now, almost all slashdot readers would be aware that benchmarks really ought to come with one of those warnings that you see on certain late-night commercials: "For entertainment purposes only."
  • by Anonymous Coward
    A 500MHz Apple box outperforms a 1.7GHz P4 box by a good margin. The reason these benchmarks don't reflect this is that the compilers for Apple generally aren't as tight as IA-32 compilers. The Apple version of GCC doesn't even make use of half the opcodes!!

    So this really proves nothing. If you want to benchmark two boxes like this what you have to do is time each operation individually. Store on an Apple box is about 3x faster and load is 6x faster. ALU operations are usually about 5x faster. It seems to me that Apple is the Mercedes of computers while IA-32 is like a Kia.
  • by Anonymous Coward
    Comparing *apples* to oranges! (rimshot)
  • by Anonymous Coward
    That is completely false! GCC has known problems mainly with register allocation and instruction scheduling (one of the scheduling passes is not even run on x86 for this reason) on machines with a small number of registers, like x86. Other optimizations also perform suboptimally for x86.
  • by Anonymous Coward
    that, among people of equal age and education, Mac users have better English grammar and non-technical spelling skills?
  • by Anonymous Coward
    Actually gcc does a poor job of optimizing on any platform - compare it to Sun's C compiler on SPARC or Intel's C compiler on x86, and it loses fairly badly. gcc's strength really is that it's portable and free, not that it produces very good code.
  • by Anonymous Coward
    Considering that was only one 533MHz G4 processor, I am kind of impressed. It held up well without AltiVec support or a second processor. The rumor mill is sayin' Apple is going to go back to a fully MP pro lineup again now that OS X is out. That is a smart move. Clearly a dual-processor G4 could have kept up quite well with the AMD and Dell boxes. Why didn't this guy do his benchmarks with Darwin instead of LinuxPPC? Darwin is cross-platform and MP-aware. Sounds like more of a fair fight. He should have at least used Yellow Dog Linux 2.0. Isn't that PPC-native and MP-aware? LinuxPPC kind of sucks compared to what others are dishing out.
  • Instructions for Do-It-Yourself x86-Biased Benchmarks

    Step 1) Disable one of the Mac's processors
    Step 2) Run x86-biased benchmark suite
    Step 3) Publish useless benchmarks
    Step 4) Congratulate self yet again for saving money by building your own x86-compatible PC so you can use it to create useless benchmarks (Uses 5x the power of a Mac! Generates 5x the heat! May be faster under some circumstances when plugged into wall power! x86 ... the dream CPU!)
    Step 5) Reward yourself by enjoying music, movies, TV, books, artwork, advertising, and Web sites all produced and encoded on Macs by people who are too busy enjoying their work to have time to pause and make useless benchmarks
  • by Anonymous Coward
    Just wait until you see my 1.33 GHz DDR (double-doodoo-rate) coleon (food)processor
  • by Anonymous Coward
    Intelligent? Huh! Mr Carmack just said what others have already said, with the difference that he is famous... That is all there is to his remark.

    Please, be cool about him. He's only a demi-god. Nothing more.

    As for the moderators... Brrrrr... Post #292 even has benchmarks posted, with a link to the source, but got Score:0... Mr Carmack had only vapour claims to offer, beyond his persona.

    Do we talk fat facts or do we not?
  • by Anonymous Coward
    Interesting article found below as far as the "supercomputer" argument goes for scientific computing, cost/performance analysis.
    G4 fares quite well even with incomplete Altivec support in the FORTRAN libraries.

    http://developer.apple.com/hardware/ve/acgresearch.html

    >>>>>>

    Research from outside laboratories/developers/users

    An Evaluation of PowerMac G4 Systems for FORTRAN-based Scientific Computing with Application to Computational Fluid Dynamics Simulation
    by Craig A. Hunter, NASA Langley Research Center

    http://ad-www.larc.nasa.gov/~cah/NASA_G4_Study.pdf
  • by Anonymous Coward
    And I've done them myself, and I'm as much of a Mac nut as anyone. And yes, the PC came out faster. But the Mac was a 500 MHz and the PC was 1 GHz, last time, and the PC came out (when I averaged everything together) a tad less than 55% faster. I.e. the 1 GHz PC was the equivalent of a 775 MHz Mac. Cycle for cycle, the Mac wins. All out, the PC wins. In Photoshop (or elsewhere where one can really take advantage of those beautifully-engineered Altivec instructions) the Mac wins, at least sometimes. Why is it so hard for a single person to swallow all three of those conclusions? It's amazing; you get people on the Mac side who just can't stand to think that their computer is slower than their best friend's. You get people on the PC side who can't stand to think that the Mac's processor really IS more efficient than the PC's. (And, given how well the P4 runs x86 code compared to the P3, this gap is widening. :) Grow up, people. Sheesh. Now, with a well-done app on a dual-processor G4, you can see some pretty wild results. Yes, you can get the same thing with a dual Pentium, I hear. I've never seen any hard numbers on this, given different OSes (preferably X on the Mac, since 9 doesn't take advantage of dual processors). It'd be interesting to know how well a well-threaded app can do. --Fred Fnord
  • by Anonymous Coward
    Apple says that the G4 is faster because of its more advanced FPU and its far superior Velocity Engine (what Motorola calls AltiVec); they make no claims that the integer calcs are twice as fast. Never have. Realising that OS X and its apps can easily be AltiVec-accelerated (click a checkbox in the compiler) would crush your silly theory. It's like saying that the P1 with MMX is faster than the P1 without... but if you don't use MMX apps then it's not... well, no shit!
  • by Anonymous Coward
    The Athlon has the best price/performance. Not too surprising.

    TO THE AUTHOR OF THE BENCHMARK (Jay): Having the machine names in the comparison boxes is silly. We don't care what you've named your machines. Try CPU abbreviations next time. (e.g. P3/733-192, G3/450-320, ...)
  • by Anonymous Coward on Sunday June 03, 2001 @10:07PM (#179867)
    From http://fampm201.tu-graz.ac.at/karl/timings40.html.

    Numbers are relative to a G3-300MHz (higher are faster).

    There are more numbers on the homepage.

    Athlon 1.2 GHz, 512MB, Windows 2000 [73]: 4.78993
    Athlon 1.2 GHz, 512MB, Linux [72]: 4.4734
    Gateway Select 1000, AMD Athlon 1000 MHz (1GHz), 512KB L2, 192 MB, Linux [65]: 3.77305
    Kryotech 1GHz AMD Athlon, 512k cache, 512MB, Linux [66]: 3.69674
    Gateway Select 1000, AMD Athlon 1000 MHz (1GHz), 512KB L2, 192 MB, Linux [60]: 3.57748
    Dell Dimension XPS B1000r, 512MB Ram, Win98 SE [64]: 3.38084
    Dell 4100, 933MHz, 128MB, Linux [63]: 3.19988
    COMPAQ AlphaStation XP1000, 2 GB RAM, 4MB L2, Digital Unix 4.0F [50]: 3.16987
    AMD Athlon, 800 MHz, 512 KB L2, 256 MB, Linux [61]: 3.00154
    Dual Xeon 866, 512 MB, Windows 2K [71]: 2.9618
    PenguinComputing, dual 800Mhz PIII, 128Mb, Linux [67]: 2.76745
    Athlon 700, asus k7m, 512 mb, win98 [43]: 2.70199
    Dell 800 Mhz, Pentium III, 512 MB RDRAM, Win 98 SE [57]: 2.69821
    Athlon 700 MHz, 128 MB, Linux [51]: 2.67027
    Athlon 650 MHz, 256 MB, 100 MHz, Win NT 4.0 [42]: 2.60662
    Compaq-Digital Alpha 8200, 625MHz, 1GB RAM, DEC-UNIX [11]: 2.48495
    SONY VAIO F409 notebook, PIII 650 MHz, 128MB RAM, Red Hat LINUX 6.1 [55]: 2.21137
    Athlon 650 Mhz, 32 Mb, Windows 98 [32]: 2.14558
    Athlon 550MHz, 128MB, Red Hat Linux 6.1 [45]: 2.13797
    Dell XPS T600r, P3 Copper wl 256kb, 128MB, Linux-2.0.36 [39]: 2.09939
    Dell Precison 410, 2 PIII 550MHz, 256Mb RAM, Windows NT 4 [28]: 1.8561
    Dell Precision 210, 550 MHz, 128 Mb, NT 4.0 [34]: 1.83606
    Dell XPS T550, PIII-550, 128MB, RedHat-5.2, Linux [24]: 1.777
    Gateway GP7-500, 500 MHz, 192 MB, WinNT 4 [6]: 1.70318
    PowerMac 8500, 500 MHz MACh Carrier G3, 1 MB L2, 256 MB, MacOS 8.6 [35]: 1.68249
  • by Anonymous Coward on Sunday June 03, 2001 @12:44PM (#179868)
    Instead of pitting his machines against one another, he should've assembled them into a BEOWULF CLUSTER so they could WORK TOGETHER.
  • give me a fucking break, you don't optimize for an architecture, you recompile with it as the target. debian probably has a horrible installer for ppc, but it has a horrible installer on x86 too, so it probably doesn't matter.
  • I'm a PC head but I was going to buy myself one of those nice G3/500 iBooks because of the killer combo of a superb hardware feature set, Unix core, and a nice GUI.

    Now you're telling me that they have SUCKY performance? I was at least expecting that the G3/500 would be able to compete with a P3/750 and a G4/500 with a P3/1GHz. Damn it!! Why must you do this to me, Slashdot? Why must you cast doubt in my heart?
  • Is that like pie a la mode?

    Or more like 3.14159 + 2010?

    --
    Forget Napster. Why not really break the law?

  • I can't say as I care.

    First of all, because my older Mac basically does what I want in a style I like. I won't give Microsoft a penny- we're not talking about that, so that's moot, we're talking about Linux platforms. I also won't give Intel a penny if I have a choice- and am seriously questioning whether it's good to give nVidia money either at this point. They'll only use it to impede progress, they are already doing it, strong-arming vendors.

    Second of all, the CPUs are so different anyway that it seems crazy to try and compare them. It's like the x86 chips are model airplane engines screaming at 60,000 rpm and the PPC is a truck rumbling at 2000 rpm. It's a difference between top-end horsepower and bottom-end torque. PPC is register-rich and has a relatively shallow pipeline. I daresay the version of GCC used could have been better, but even if it was fully optimised, the 'torque curves' of the chips are DIFFERENT!

    The x86 has been designed for years to scream for doing certain very narrow tasks- the simple repetitive processing of games, the focussed processing of well optimised OS routines. In many ways this is the most common situation (though I tell you, I've seen PCs sag and go unresponsive... admittedly running windows...). However, PPCs are a decent general purpose tool for _broader_ tasks. Anything that can use all the registers at once, slog through really big amounts of data in complicated ways... hence, the way Photoshop filters keep coming into the spotlight.

    I don't know what all this proves, nor do I especially care since any of it is 'good enough'. I guess the bottom line for me is that I can't see performance metrics as being the end of the story. When you look beyond the actual performance and consider what happens as a result of your buying decisions, it becomes clear that the better the immediate information people have, and the more prone they are to consider nothing whatever but raw results defined as narrowly as possible, the easier it is to see the real reason why Microsoft is choking IT to death, why nVidia is currently threatening vendors to cut off the air supply of competing 3D chipmakers, and why Intel destroyed other choices for so many years.

    The end result of "X>Y, therefore all your base are belong to X" is cartels, stagnation, and the choking off of true progress. This is the case even when X is indeed >Y. You can't have a market if you're only allowed to buy one thing- and if you're not free to do whatever the hell you want, for any or no reason, you're being restricted by your own anal-retentive perfectionism and playing right into their hands. Two years from now these CPUs will ALL look like crap, but if you can successfully and publically make the case that there is only one greatest choice and nothing else will do- why, you are part of a market force handing complete dominance to that choice (like with Microsoft) and trusting it to keep on deserving your support AFTER it has no competition left and can do what it wants: and when have we EVER seen ANY company deserve our trust after it has controlled its market? It's not in their nature.

    Which is to say- I'd still get a Mac. And people can do as they please, but I really can't have much respect for those who'd denigrate me for my choices- I have enough time and patience to run an older 300Mhz G3 machine in relative comfort, so that counts as 'enough', plus I cannot forget the larger situation, all these companies busily trying as hard as they can to do away with all capitalism and become the single source for whatever it is they do. That disgusts me- it's not what I call a suitable model for society. So rather than whine or write long dissertations about it, I ACT in accordance with my beliefs.

    If any of you guys REALLY believe that PPC should go away- you should be running Windows, not Linux. People have all kinds of motivations for what they do, and in the larger scheme of things, 50% or 80% or even 300% processing speed disparity is pretty insignificant. Have some historical perspective.

    End long, rambling, crotchety ol' rant ;)

  • by Have Blue ( 616 ) on Sunday June 03, 2001 @12:48PM (#179873) Homepage
    The sole reason Photoshop is faster on Macs than PCs (recently), and the reason Steve always brings it out to show off, is altivec. We need benchmarks of vectorized vs non-vectorized versions of the same task, running on the same G4 or against an x86 box. Unfortunately I can count the programs that use altivec in the real world on one hand; hopefully this will change with OS X arriving (altivec in system libraries).

  • You're overlooking the obvious: solaris is a very slow operating system. Especially its filesystem and disk-access layers are very slow, and the entire OS has been optimized for the MANY-processor cases. Running solaris on a box with less than 8 CPUs and less than 4GB of memory is just silly. And if you can afford a box that big, you can afford hardware RAID and VxFS and so on to cover up Solaris's atrocious performance.
  • Hmm. I'm afraid my experience hasn't matched yours. I ran Solaris 8 on a dual-CPU Ultra 2 for a month or two, and then switched back to Linux. It was my main NFS server, and also the compile box...it was just too slow, even under essentially no load. It's nice and zippy now... I have a number of systems running solaris 8 at work also - I have yet to be impressed; even our 2x450 systems run pretty slow. It's pretty stable on Sun hardware (but then, so is every other OS that runs on it), but it seems universally slow.

    XFS on solaris would indeed be nice; UFS is slooooooow. I'd even settle for ext2 - no journaling but it's a helluva lot faster than UFS. Actually it'd be nice if XFS/linux worked on bigendian systems too.

  • I guess you've never tried Linux on sparc64. I consider this the best-supported of all architectures. I've used m68k, ppc, mips, mips64, sparc, sparc64, and i386 and have used linux for over 7 years in total, including a fair bit of mips and sparc hacking, so I consider myself in a good position to judge. When I install Linux on a sparc64 system it Just Works; the last time (last year) I installed on x86 I had no end of problems - different problems on each of the 40 or so systems I was maintaining at the time. I'll never go back to x86; draw your own conclusions.
  • As I said, Solaris is optimized for very large machines running many processes simultaneously. If you can afford to drop a million bucks or two for 64 CPUs and 64 gigs of memory and lots of striped disks on FC controllers, Solaris performs fairly well. Not as well as IRIX, but fairly well nonetheless. OTOH Solaris is all but useless for the 1 and 2 CPU systems commonly used for workstations and smallish servers. The hardware is great, however, and I highly recommend running Linux instead of Solaris. You don't seem to lose anything in stability but the performance increase is very nice indeed.
  • Actually there are at least 4 variables: The target architecture, the host architecture, the host OS, and the host libraries. As you point out, the first can be const'd by using cross-compilers. The host architecture is the variable of interest. The host OS can be const'd by using either linux on both or solaris on both. Doing that would also const the system libraries (unfortunately since glibc doesn't work on solaris any longer it's impossible to test only this variable).

    Unfortunately unless you only use gcc, it isn't a very good benchmark for CPUs. Unless every CPU ran in the same otherwise identical system, other differences would greatly affect the outcome. The gcc test is in fact a fairly good exercise for the system as a whole - it covers disk i/o, memory bandwidth, cpu power, and the abilities of the OS. Unfortunately the CPU is not usually the bottleneck in this scenario. If you have little memory, it's the disk subsystem. If you have lots of memory, it's either the ability of the OS to use it, or the memory bandwidth itself. I'm actually a believer in the gcc benchmark - but not for CPUs.
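
    For instance, here's a minimal C sketch of the kind of wall-clock wrapper such a whole-system test boils down to (the build command is just a placeholder; point it at whatever tree you actually care about). The interesting number is how the same command fares on different boxes:

      /* Times an arbitrary build command with wall-clock time.  Because it
       * goes through system(), it exercises the compiler, the disks, the
       * memory system, and the OS together, exactly the point made above. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/time.h>

      int main(void)
      {
          const char *cmd = "make > build.log 2>&1";  /* placeholder command */
          struct timeval t0, t1;

          gettimeofday(&t0, NULL);
          int status = system(cmd);
          gettimeofday(&t1, NULL);

          double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
          printf("exit status %d, wall time %.2f s\n", status, secs);
          return 0;
      }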

  • It's not that the processor sucks. It's just that Apple has put a heck of a markup on them, so that they make a hell of a lot more off of them than the manufacturer does. This is the opposite of the situation in the PC world, where the PC builder makes much less profit than Intel... so what should be a superior processor architecture gets marginalized.

    Thank you, Apple.

  • The comparisons of a G4 to a supercomputer are a bit disingenuous. At one time, 1 Gigaflop served as a criterion for Supercomputing status, and some documents pertaining to munitions sales may have used similar numbers. Theoretically, the G4 is capable of 4 GTOPs (billions of theoretical operations per second), which exceeds this government export standard [doc.gov]. Notice that the threshold is theoretical, not "sustained" or "real life."

    In general, though, scientists don't care about integer results-- floating point is more important. I have heard that some Crays were notoriously slow at integer arithmetic.

  • You're right about at least one thing. I did oversimplify.

    However, this whole set of semantics has been one oversimplification after another.

    1. At one time, a supercomputer was extremely helpful in designing nuclear weaponry--the faster the better, whether that was an ENIAC or a Beowulf.

    2. So the Commerce department defined a supercomputer in terms of whether one could design a bomb on it-- and failed to recognize that PCs could catch up to that standard.

    3. Apple designs a system with commendable floating point performance-- which approached the theoretical limits of a Cray (don't ask me which one). Much as Intel tried to hype the i860 as a "Cray on a chip" (albeit a very slow one), Apple hyped up the "Banned in Iraq" angle.

    As a student in computational science, I recognize that certain problems demand more than just fast fp units. Some bioinformatics problems, as you mentioned, are essentially memory intensive-- multiple alignment, etc. Others, such as tertiary structure prediction, may be more floating point intensive.

    Some scientific problems truly stretch the limits of currently available computing power. It would be a waste of time and effort to cobble together a generic i386 or ppc binary, and slap them on a dual processor Macintosh with only one working processor, or an overclocked Athlon with a screwy IDE controller.
  • by Jeremy Erwin ( 2054 ) on Sunday June 03, 2001 @02:57PM (#179887) Journal
    Yeah, well, the "Supercomputing" definition is based on whether you can process enough data to support the creation of nuclear weaponry. But it's not always relevant.
    I really was not too impressed with the benchmarks presented here-- and not because they were anti-Apple. The fastest machine had poor disk performance-- but this is really very relevant to compilation and development. Because the benchmark was very specific-- cross compiling for MIPS machines-- (wtf?), I have no idea how far these benchmarks can be stretched.
    I think he should have tried to optimize the binaries-- after all, he was compiling from source. And how do his compilers compare with the latest gcc releases (not snapshots, though)?

    BTW, although I own both a Mac and a PC, my mac is old, and I mostly use my (faster) PC. I am not an advocate of either platform...
  • It's actually not at all silly, since it forces you to read the full configurations of the systems in question.

    If he'd simply put "G4/450" and "P3/733" in the tables, I for one would have been suspicious about the amount of RAM, etc.

  • Doing this today, I got 1,261 patents, but some of them don't apply here.

    Er, that is to say, I got 1,261 search results each representing a patent. I don't have 1,261 patents myself. :-)

    Thanks for clearing that up... with the USPTO's track record, I wouldn't have been much surprised to learn that you actually obtained 1,261 patents in the course of looking up some patents. ;)

  • by dangermouse ( 2242 ) on Sunday June 03, 2001 @02:36PM (#179891) Homepage
    This is just based on my experience with a handful of architectures and a handful of OSes on a few of those, including PowerMacs...

    But it seems to me that GCC builds much slower code on powermacs than whatever it is Apple builds their binaries with. (If Apple's using a modified GCC, which wouldn't much surprise me, I sure wish they'd throw us their patches.)

    Now, if I'm right, and GCC produces relatively unoptimized binaries, shouldn't you compile GCC itself (and maybe even the rest of the system) with another compiler before pitting machines against each other in a compiling race? It seems to me that you're using a slow, badly-compiled GCC binary, otherwise.

    Granted, this seems an excellent real-world test, since nobody does that. But I can't help but feel that we're (by "we", I mean the Community[TM]) currently incapable of exploiting the PowerPC, and it seems unfair to blame the chip.

    Maybe I'm wrong. I would love to see some discussion from the GCC team. I just thought since nobody else seemed to have brought this up...

  • by jht ( 5006 ) on Sunday June 03, 2001 @04:46PM (#179896) Homepage Journal
    Welcome to the wonderful world of vendor-supplied memory. All the big Wintel vendors rape you equally when you buy their RAM, except for the occasional promotion for "FREE 128MB RAM with system purchase, for a limited time!!!" that you see in the mags and in all the mailorder catalogs.

    It's not just Apple.

    However, to give you a bit of good news, all Macs sold in the last several years have user-installable RAM, and with the exception of the original (Rev. A through D) iMac, it's very easy to do - easier than in many PC's.

    The new G4s use standard PC133 SDRAM; the other desktop models (the Cube and iMac) use PC100; the TiBook uses PC100 SO-DIMMs; and the iBook (old and new models) uses PC66 SO-DIMMs, though PC100 works fine, too.

    PC133 SO-DIMMS seem a tad flaky so far - I just got a Gateway 9500 laptop at work and still haven't gotten a 3rd party 256MB SO-DIMM that will work with it (we've tried Samsung, Micron, and Hitachi, with Infineon on the way). Apple will probably start using them with a TiBook revision at some point, after the interoperability issues vanish.

    - -Josh Turiel
  • Right, but just try putting together a Linux system with AltiVec-aware code tools. You have two choices: you can either get Yellow Dog Linux, a horrible, nasty Red Hat knock-off that barely installs on most hardware, or you can install some other PPC distribution and patch GCC yourself. Oh, but you'll need GCC 2.95.2. If you want patches for 2.95.3, you'll need to get them off some obscure Japanese web site I can't recall right now. Forget about 2.95.4. Also you can join the AltiVec mailing list at AltiVec.org, but their archive is useless and infuriating, so you'll also need to just ask the same questions on the list over and over again.

    My point is that Motorola and Apple have really dropped the ball with respect to AltiVec and developer relations.

  • There's absolutely nothing sexier than the titanium PowerBook, however (well, no notebook computer is, anyhow ;). That's the *real* reason everyone wants one, even if they won't admit it.
  • by dej05093 ( 7707 ) on Sunday June 03, 2001 @01:20PM (#179901) Homepage
    According to an article in Linux Journal, the AltiVec unit handles only single-precision floats
    -> only useful for special tasks or for calculations where you can allow for the reduced precision,
    -> not for general scientific numerical calculations
  • As a long-time PC user, I just ordered my first Mac last week. I've used them before at many jobs, but never owned one of my very own. I chose to get the new iBook.

    Why? Because Apple actually has the x86 world beat in the notebook category. The cost of my iBook (I work in education) was $1545 + $237 for the AppleCare warranty. Try and configure a PC laptop the same way for $1545. My iBook has an XGA display (12", yeah, but still XGA), an AirPort card, built-in ethernet and 56K modem, as well as DVD. I swear by Dell systems, but I couldn't come close to touching the iBook for the same price, and I would have had to tolerate an external wireless antenna, as you can't have ethernet and 802.11b in a PC laptop yet...

    I think that the G4 desktops are still overpriced, but the iBook line is very reasonable. The iMacs are okay, but the 15" CRT is dead, Apple. If they came out with a 17" version, they'd see a renewed interest in them...
    ---
  • A dual P2-450 running SETI doesn't get you crap because SETI doesn't launch a number cruncher for each processor like d.net does. Unless you're running two different SETI clients in different directories or something, your second processor is running idle. I average 10 hours/WU on my dual P3-500, of which only half the processing power gets used. With d.net, on the other hand, I get 100% utilization.
  • Yeah, there are lots of ways you can "benchmark" a particular processor or operating system. The problem is not all OSes or processors are made the same. I'm not defending Apple or Intel or AMD either. If this were an AMD/Intel benchmark, the fact that a 733 was pitted against a 1200 would get tons of people's panties in an uproar. Secondly, you know marketing is bullshit. How many PC makers tout their boxes as the fastest PC you can buy or some shit? Even between individual chips, Intel claims they're the fastest and so does AMD. Benchmarks are not magic bullets, they're merely magic bullet theories.
    This benchmark in particular is bullshit. What should be looked at is peak work done per clock, and total price per clock if you want a price comparison test. Or work per watt, or maybe even cool factor per work/clock. PPC chips have the advantage of not needing an instruction decoder and of a larger register space. You can do a bunch of LOAD operations on the same clock, so more operations are running on the next clock than with x86-based systems. On the other hand, x86 processors have the advantage of a higher clock speed, which begins to negate the smaller register space. In general, anything you compile with gcc is going to suck ass. Do you REALLY think Apple uses gcc? Goddamn, they use Motorola's compiler! They give you gcc because they don't want to pay the licensing fees on the fucking thing for every copy of OS X they sell. Motorola's compiler is oodles better than gcc ever will be, ever. In the benchmark's preamble he says "well I don't use Photoshop"; well goddammit, the Apple statement just became invalid, as did his entire premise. Since he wasn't benchmarking Photoshop 6, he ought to have been using a 450MHz P3 and Athlon rather than shit that is OBVIOUSLY going to be faster in mere clock speed. If you can't tell from the tests, the compilations have a lot to do with moving data between the processor and memory. The G3 in the iMac and the G4 are already at a disadvantage due to their bus speeds, while the Athlon gets to chug along on its EV6-derivative bus. Testing the processor ought to involve shit that can be held entirely in the chip's cache (which can be considered part of the chip because, after all, it's the memory cache). Then the chip's FSB speed and register space size become important. You can then get a good measure of how much the processor is doing on every clock and how many clocks it needs to get a particular job done. At this point you've got a real test. Work done per clock or per second, or watts used per work cycle, or something. Here is where you say X processor is better than Y processor. This is why SPEC tests cost so much money. They have LOTS of different tests that test things in different ways, and they still don't show a true measure of real world performance.
    In the real world you might be running Final Cut Pro or Photoshop 6, in which case you damn well better have a system they are optimized for (yes, I'm aware FCP is only available on Macs). If your real-world workload involves compiling, go with the system that works best for your particular task at hand. This dude did a compiling benchmark test; he has benchmarked COMPILERS compiling binaries for whatever. Everyone here's been arguing "well such and such is faster than such and such." Fuck that. This is not an x86 vs. PPC test or something. This is somebody feeling righteous because they keep getting reeled in by marketing quotes and feel like they've been lied to. Use what works, not what a marketing department says works.
  • Okay, I see your point then. It's a shame SETI hasn't taken a cue from d.net and done a better job handling multi-processor boxes. A majority of the clients they have running are Win32 (which usually means Win9x/Me), but if they launched a number cruncher for every processor or ran two crunchers on the same WU, the bigger boxes some people run S@H on would benefit a lot.
  • Combined with the fact that the PowerPC has a nice quiet and fairly energy efficient air-cooled chip you might have some nice machines.

    Oh yeah, nothing's worse than those damn noisy Intel chips.

    --

  • You wrote: In general, though, scientists don't care about integer results-- floating point is more important.

    Ugh. This makes some assumptions that are incorrect. Several sciences make extremely heavy use of integer computations versus floating point. What's more, these are the sciences that are driving large scale computational projects today, consuming cycles at an exponentially growing rate.

    Specifically I am writing about bioinformatics and related information theoretic sciences, where data mining operations, large scale (GB -> TB) database comparisons are the norm and the coming norms.

    "Supercomputer" has always been a nebulous term. Ask 10 people for a definition, and you will get 11 answers. Fundamentally there is no single quantifiable differentiating factor between a computer and a supercomputer. It is more of a subjective view than an objective, firm quantification.

    My AMD Athlon based system can theoretically hit 3.6 GFLOPs. But can I really pull that much work through it? The G4's can hit about the same amount, theoretically. Does this make them supercomputers?

    IMO, hell no. I have a simple definition (non-quantitative) of a supercomputer. A supercomputer allows you to tackle the large problems you need to handle rapidly, in order for you to effectively do your science. If your calculation runs fast on your pocket calculator, over the whole range of problem sizes you are willing to consider, then for your particular problem, your calculator is a supercomputer. If you need a massive supercluster of > 1000 processors to run your BLAST jobs in a reasonable period of time (this is my domain), then that defines your supercomputer for you. If the Sony PS2 vector units tied together on a network do wonders for your problem, well...

    The definition is subjective. You cannot quantify it in any reasonable way. The work on clusters is giving the folks setting up export restrictions fits on what to do for these.

    And finally, science is NOT only floating-point intensive, even in simulation codes. In fact the majority of codes using modern methods are more limited by memory latency and bandwidth than they are by the core FP system. The quality of the compilers matters far more than the FP ability of the chips.

    Just my observation as a reformed computational physicist (learning to be a bioinformatics type). Aside from this, gross generalizations tend to be incorrect....

  • Well, *586* optimisations would be pretty stupid. Pentium optimisations generally reduce performance on P6 or later generation cores. Dunno how much difference 686/K7 optimisations make.

    Rob
  • by mihalis ( 28146 ) on Sunday June 03, 2001 @01:07PM (#179926) Homepage

    Thus any scores over .61 and .72 respectively, indicate that the PowerPC is doing more per clock cycle than the PIII. If Motorola can ever get their act together (and that is not certain), normal code on the PowerPC will run every bit as fast and faster than the x86 processor

    This illustrates a common fallacy - if chip A does more work per cycle than chip B, then chip A is "better" and as soon as those "idiots" who make it get the clockspeeds up there it will perform better. This neglects the fact that B may do less work per cycle precisely because it is designed for extreme clock speeds, and in fact there are plenty of instances where the "speed demon" CPUs (high clock speeds, simple instructions) outdo the brainiac chips (lower clock, beefier instructions). The reason the Pentium 4 trounces any PowerPC is that it is designed to scale to 2GHz. Of course it does less work per clock, but overall it does more per second, and that is the more important metric.
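
    To put rough numbers on it, here's a toy C calculation; the IPC and clock figures are invented for illustration, not measurements of any real chip:

      /* "Work per clock" vs. clock rate, with made-up numbers: a "brainiac"
       * doing more per cycle can still lose to a "speed demon" clocked high
       * enough, because throughput is (instructions per cycle) x (cycles/s). */
      #include <stdio.h>

      int main(void)
      {
          double brainiac_ipc = 1.8, brainiac_mhz = 533.0;    /* hypothetical */
          double demon_ipc    = 0.9, demon_mhz    = 1700.0;   /* hypothetical */

          printf("brainiac:    %.0f M instr/s\n", brainiac_ipc * brainiac_mhz);
          printf("speed demon: %.0f M instr/s\n", demon_ipc * demon_mhz);
          return 0;
      }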

  • by Jay Carlson ( 28733 ) on Sunday June 03, 2001 @12:34PM (#179927) Homepage
    By the way, the "per bogohertz" comparison was outright dishonest. It doubles the actual cost of the G4, even by these tests, since the G4 is a dual processor.

    No, the Apple Store [apple.com] price I quoted in that table is for a single-processor G4/533. Go price it yourself.

    (I'd give a URL but the Apple Store is too sessional.)

  • by Jay Carlson ( 28733 ) on Sunday June 03, 2001 @01:13PM (#179928) Homepage
    Firstly, the action of compiling on different architectures is very different, even without considering optimisation strategies. To compile code into the CISC code of the x86 architecture is very different from compiling for a RISC chip such as the PowerPC. For a start, instruction ordering etc. for a RISC chip, even for not really optimised code, can take far more processing time. Then, if you add optimisation, in a RISC architecture that is a FAR more complex task.

    All this means is that compiling on a RISC architecture is bound to be a great deal slower.

    I'm aware of that, and I talk about it on that page. See the "Choice of workloads" section.

    The install-egcs test does measure native compilation performance. This is relevant for people who use the box for development.

    The install-glibc and cross-gcc tests are both compiling to a single RISC architecture, little-endian MIPS. The amount of effort required to optimize for PPC or x86 doesn't factor into this.

    If you don't care about development compile times, just look at the cross-only numbers.

  • I disagree. It's easier for a code generator/optimizer to generate code for a simple/orthogonal (RISC) instruction set than for a CISC one. I agree though that compiling native code isn't a good benchmark since it's doing different things on different machines - better just to compile for, say, x86 on all platforms if you want to use it as a benchmark.
  • No... you should test the boxes the way you are expecting to use them. If you expect to write code in C and compile with gcc, then test that. If you are expecting to write hand-optimized assembler, then test that!
  • Compiler optimizations patented?! :-(

    Do you have any examples?

    Surely they can't have patented emitting certain code though - just the algorithms they use to produce it?
  • by SpinyNorman ( 33776 ) on Sunday June 03, 2001 @03:25PM (#179940)
    Peak work done per clock or price per clock are meaningless. You should simply look at what you can get for your money at a particular price and see which is faster. If a $1000 PC beats a $1000 Mac running your application, then who cares what clock speed they're running at - it's irrelevant.
  • by Foxman98 ( 37487 ) on Sunday June 03, 2001 @12:33PM (#179943) Homepage
    It seems that the PC comes out on top when compared to Apple. However, did anyone really expect anything else? It always surprises me when people bring up Apple's quotes (supercomputer, faster than a P3, etc.) What do you expect them to say? "Well our G4 is slower than the other new chips out there but costs more! Please place orders here!" I mean come on. The one thing that Apple has going for them is their incredibly loyal userbase. My accounting professor uses Macs. I have gone over the benefits of PCs many many times with him. And on a lot of points he agrees that the PC is better.

    He just bought a titanium powerbook.

    'nuff said.

    Although that cinema display is super slick (as it should be for its price tag...).
  • by Sun Tzu ( 41522 ) on Sunday June 03, 2001 @12:44PM (#179947) Homepage Journal
    Though it might more accurately be described as a GCC/Linux PPC vs. x86 benchmark, it's still interesting to those of us who are committed to GCC and Linux. Those two items, and their spawn, suck up almost all the (used) cycles on all of my machines.
  • by devphil ( 51341 ) on Sunday June 03, 2001 @02:57PM (#179951) Homepage


    Hi. I'm a GCC maintainer. I don't work on this part of the compiler, but I can speak to this point:

    It's optimized for the x86.

    Nearly all of the optimizations in GCC are not machine-specific. Those kinds of optimizations, ones which are specific to the processor, are called peephole optimizations, and while every little bit helps, they don't make that much of a difference. The big ones are done at an intermediate level, before the compiler "knows" what processor it's using and starts to chunk out the opcodes.
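
    As a purely source-level illustration (GCC really does this on its intermediate representation, not on C text), here is the sort of machine-independent transformation meant here, hoisting a loop-invariant computation out of a loop; nothing about it depends on which back end eventually emits the opcodes:

      #include <stdio.h>

      /* Before: the loop-invariant product is recomputed on every iteration. */
      static void scale_before(double *a, int n, double scale, double offset)
      {
          for (int i = 0; i < n; i++)
              a[i] = a[i] * (scale * offset);
      }

      /* After: the invariant is computed once, outside the loop. */
      static void scale_after(double *a, int n, double scale, double offset)
      {
          double k = scale * offset;
          for (int i = 0; i < n; i++)
              a[i] = a[i] * k;
      }

      int main(void)
      {
          double v[4] = {1, 2, 3, 4};
          scale_before(v, 4, 2.0, 3.0);
          scale_after(v, 4, 2.0, 3.0);
          printf("%g %g %g %g\n", v[0], v[1], v[2], v[3]);
          return 0;
      }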

    More specifically, unlike the Linux kernel, glibc, and other major projects, GCC is not designed for and targeted primarily at Intel chips. The x86 is just one more back-end like any other; sometimes it falls behind and sometimes it pulls ahead, development-wise.

    Those changes may not have been rolled back into the tree yet,

    Some have, some have but won't be in the upcoming 3.0 release in a few weeks, and some are yet to come.

    The biggest problem is that many of the really cool optimizations -- the ones that make a big difference and aren't CPU-specific -- have been patented by IBM and other major players.

  • by devphil ( 51341 ) on Sunday June 03, 2001 @04:38PM (#179952) Homepage


    I can give you over a thousand examples, with the help of our f[r]iendly patent office (my tax dollars at work). Just go to http://www.uspto.gov/ [uspto.gov], look under the green Patent Grants area, follow the Advanced Search link, and search on "compiler and optimization". Doing this today, I got 1,261 patents, but some of them don't apply here.

    Er, that is to say, I got 1,261 search results each representing a patent. I don't have 1,261 patents myself. :-)

  • Count yourself lucky you don't have to use any other UNIX vendor's compilers. Sun's compiler is about another factor of 2 faster than HP's aCC or SGI's compiler (obviously running on different, though nominally comparable hardware).

  • by Szynaka ( 65273 ) on Sunday June 03, 2001 @04:15PM (#179960)
    Lies,
    Damn Lies, and
    Benchmarks
  • by Sir Joltalot ( 66097 ) on Sunday June 03, 2001 @02:12PM (#179961) Homepage
    I read through these comments and get the impression that slowly a conclusion is being reached: PPC-type hardware is good for some things, x86 hardware is good for others. Nothing really new there, is there? For running Linux, it seems from this little (and far from in-depth) benchmarking session that PCs are a bit better, especially given costs. You can probably get a 1.2GHz Athlon box for the cost of a 533MHz G4, and it'll be better for Linux, so if you run Linux, why not?

    MacOS X and stuff like Maya, Final Cut Pro, etc. quite obviously run better on PPCs, barring some strange circumstances. I imagine that with enough "brute force" (RAM, dual processors, etc.) one could get a PC to run this stuff faster than a Mac... but what's the point? You might as well just keep it simple and buy a Mac that'll run it pretty well outta the box.

    I agree though, that cost is an important consideration. With the 760MP around the corner, if it ever does surface in quantity, dual Athlons might give dual G4s a bit of a whippin', especially considering AMD's prices as of late. In general I find you can buy a PC with a much faster proc, more RAM, etc. for the same cost as a Mac from the Apple Store.

    Still, even a 1.2GHz Athlon would probably choke on OS X, and the G4 will at most hiccup...
  • > When will this obsession with speed end ?

    When we developers don't have to wait a full hour for a full rebuild when we're compiling today's games. (If you think that's bad, the Windows 2k guys had to wait *12* hours for a full build.)

    Hardware is SLOW SLOW SLOW.
  • in order for RISC to make up for the lack of instructions. it has to execute more, becuase the processor know's less. as for x86, there are ton's of pre set instruction's, enabling it to process more important information. and thus performing faster.

    Since the others who corrected this statement did so in AC mode, I really thought that I should correct this in a way that will get read.

    Most instructions in either a RISC or CISC chip take more than 1 clock cycle to complete. This is why pipelining works. Typically the RISC processor (like MIPS or PPC) will take more machine instructions to accomplish a given task. The advantage here is that the chip can be better optimised for a smaller set of possible instructions and is likely to finish each instruction in a shorter amount of time. The drawback is that the larger number of instructions required to complete a given task is likely to take up a larger amount of space in RAM and on disk.

    When dealing with a CISC processor like the x86 family, you have a larger number of instructions to choose from and can thus accomplish a given task in a fewer number of instructions. Because the length of each instruction can vary, the most used instructions are usually the shortest and can therefore save RAM and disk space. The drawback is that the processor is less optimised for each particular task that it knows how to do and may therefore require more clock cycles to execute each instruction.

    Please note that my comments above are generalizations. There are a large number of tradeoffs involved in creating a processor. Consideration must be given to cost, power consumption, supported instructions, compatibility, clock rate, pipeline length, and so forth. This is not an easy task to achieve because there are so many different variables. This also means that when you are evaluating the finished product you must consider more than just the number of instructions (CISC vs. RISC) or the clock rate or the number of instructions executed per second (MIPS).

    Furthermore, the processor is only one part of the whole computing solution. In addition to the processor, you need the supporting motherboard and RAM systems, which introduce a whole slew of bottlenecks to the system. You also need a good compiler which optimises appropriately for your priorities (program size vs. execution speed, etc.). Also consider that a system may be well optimised for a problem that you are not trying to solve. Some chips are better at solving floating point problems while others are better at solving mostly integer problems.

    ________________________

  • To be sure, but yet Intel has been the one (along with AMD) that has been pushing the "MegaHurts Wars" full speed.

    Only on desktop. Itanium runs at a mere 800 MHz (and yet has the fastest FP performance in the world).

    Itanium is actually a pretty good argument against the sheer cluelessness of people who insist on doing performance-per-clock comparisons. It is manufactured in P858, the same process as the Pentium 4, yet runs at less than half the clock speed. It uses much more power, has much lower integer performance, has higher FP performance, has lower memory performance, and is more scalable for multiprocessing. What sort of generalization are you going to derive from THAT?

    For what may be hoped to be introduced in the near future, a 1 GHz PPC chip should outperform a 1.1 Intel and have some other potential redeeming characteristics.

    This is meaningless. For starters, Intel has a 1.7 GHz processor out now, so it doesn't make sense to compare it to a 1.1 GHz processor (or various vapor PPC products). Second, by the time a 1 GHz PPC is finally shipping, Intel will have something a lot faster. Third, you are assuming that performance scales linearly with clock speed, which is a horrible assumption (clue: the memory performance is not affected by clock speed changes).

    Having the long pipeline so you can scale past 2 GHz is not all that it's cracked up to be in the real world. Mis-predicts cause too many pipeline flushes with other bad potential side effects. For some stuff it's fine, for many things it ain't. The PPC runs with a very short pipe.

    It doesn't matter. If you have two computers, one with double the pipeline length but half the cycle time of the other, the misprediction penalty will be identical. One will have to recover double the number of stages, but since each stage takes half as long, it's the same.
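
    A toy calculation with made-up numbers shows the arithmetic:

      /* Hypothetical chips: A has twice the pipeline stages of B but half the
       * cycle time.  The absolute penalty for a flushed pipeline is the same. */
      #include <stdio.h>

      int main(void)
      {
          int a_stages = 20; double a_cycle_ns = 0.5;   /* long pipe, fast clock  */
          int b_stages = 10; double b_cycle_ns = 1.0;   /* short pipe, slow clock */

          printf("A misprediction penalty: %.1f ns\n", a_stages * a_cycle_ns);
          printf("B misprediction penalty: %.1f ns\n", b_stages * b_cycle_ns);
          return 0;
      }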

  • That was interesting, but I'm afraid you permitted two variables in your test. If I understand correctly, you compiled for Intel on Intel, and for SPARC on SPARC. The benchmarks under discussion were all compilations for MIPS in order to eliminate the effect of some target architectures being more intensive to compile for.
    Given that your SPARC compilation was slower than your Intel compilation, there are at least three possible explanations:
    1. SPARC architecture is a harder target to compile for, regardless of the compilation platform.
    2. GCC is less efficient on SPARC
    3. The Sun/Solaris machine is less efficient than the Intel/Linux machine.
    Use of a constant target architecture would have eliminated #1.
  • by John Carmack ( 101025 ) on Sunday June 03, 2001 @11:40PM (#179978)
    I wind up doing my own internal PPC vs X86 benchmarks almost every year.

    I'll set up whatever current game I am working on to run with the graphics stubbed out so it is strictly a CPU load. We just did this recently while putting the DOOM demo together for MacWorld Tokyo.

    I'll port long-running offline utilities.

    I'll sometimes write some synthetic benchmarks.

    Now understand that I LIKE Apple hardware from a systems standpoint (every time I have to open up a stupid PC case, I think about the Apple G3/G4 cases), and I generally support Apple, but every test I have ever done has had x86 hardware outperforming PPC hardware.

    Not necessarily by huge margins, but pretty conclusively.

    Yes, I have used the Mr. C compiler and tried all the optimization options.

    Altivec is nice and easy to program for, but in most cases it is going to be held up because the memory subsystems on PPC systems aren't as good as on the PC.

    Some operations in Premiere or Photoshop are definitely a lot faster on Macs, and I would be very curious to see the respective implementations on PPC and x86. They damn sure won't just be the same C code compiled on both platforms, and it may just be a case of lots of hand-optimized code competing against poorer implementations. I would like to see a Michael Abrash or Terje Mathisen take the x86 SSE implementation head to head with the AltiVec implementation. That would make a great magazine article.
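
    For the curious, here's a rough sketch of what such a head-to-head might start from: the same float loop written as plain C and with x86 SSE intrinsics (an AltiVec version would use vec_add() from <altivec.h> in the same spirit). This is obviously not Adobe's code, just an illustration of the scalar-versus-vector gap being talked about:

      #include <stdio.h>
      #include <xmmintrin.h>   /* SSE intrinsics; needs an SSE-capable x86 compiler */

      #define N 1024

      static void add_scalar(const float *a, const float *b, float *c, int n)
      {
          for (int i = 0; i < n; i++)
              c[i] = a[i] + b[i];
      }

      static void add_sse(const float *a, const float *b, float *c, int n)
      {
          for (int i = 0; i < n; i += 4) {            /* four floats at a time */
              __m128 va = _mm_loadu_ps(a + i);
              __m128 vb = _mm_loadu_ps(b + i);
              _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
          }
      }

      int main(void)
      {
          static float a[N], b[N], c[N];
          for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
          add_scalar(a, b, c, N);
          add_sse(a, b, c, N);
          printf("c[10] = %g\n", c[10]);
          return 0;
      }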

    I'll be right there trumpeting it when I get a Mac that runs my tests faster than any x86 hardware, but it hasn't happened yet. This is about measurements, not tribal identity, but some people always wind up being deeply offended by it...

    John Carmack
  • by MROD ( 101561 ) on Sunday June 03, 2001 @12:48PM (#179981) Homepage
    I'm sure this person THINKS he's testing the same thing on each machine. Unfortunately, he's not.

    Firstly, the action of compiling on different architectures is very different, even without considering optimisation strategies. To compile code into the CISC code of the x86 architecture is very different from compiling for a RISC chip such as the PowerPC. For a start, instruction ordering etc. for a RISC chip, even for not really optimised code, can take far more processing time. Then, if you add optimisation, in a RISC architecture that is a FAR more complex task.

    All this means is that compiling on a RISC architecture is bound to be a great deal slower.

    Basically, this "benchmark" is measuring not only the intrinsic speed differences of the architectures and chips but also degree of optimisation the native compilers used can cope with and the extra processing power needed to generate the code during the comile stages.

    Basically, using compilation as a benchmark is not at all useful, other than to test the difference in speed of two similar pieces of equipment using identical software (ie. compilers & OS) or the difference between two versions of the same OS or two versions of the same compiler. Basically, you can only change one variable to deliver a meaningful benchmark if using the method chosen in this "study."

    The only way to get a half-way meaningful benchmark for the two systems used here would be to write a program which did lots of disk I/O and integer manipulation, worrying about whether it's being biased for or against certain types of architecture or use (eg. loops sitting in cache etc.). This would give you an idea of the real-world speed differences between the two systems. However, you won't be just measuring the intrinsic speed of the machine but also the different ways the kernels have to do things on the two architectures, the degree of optimisation the compilers building the kernel and the program could generate, and the speed of the hard disk built into the machine.

    As you can see, it's a tricky thing comparing two types of machine.
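
    For what it's worth, here's a minimal sketch of the kind of mixed integer/disk program described above; the sizes and scratch file name are arbitrary placeholders and it is deliberately naive, so it inherits all of the caveats just listed:

      #include <stdio.h>
      #include <sys/time.h>

      #define N      (1 << 20)   /* 1M-element integer working set    */
      #define PASSES 16
      #define BLOCKS 2048        /* 2048 x 4 KB written and read back */

      int main(void)
      {
          static unsigned data[N];
          static char block[4096];
          struct timeval t0, t1;
          gettimeofday(&t0, NULL);

          /* integer manipulation */
          for (int p = 0; p < PASSES; p++)
              for (int i = 0; i < N; i++)
                  data[i] = data[i] * 1664525u + 1013904223u + (unsigned)(i ^ p);

          /* disk I/O: write a scratch file, then read it back */
          FILE *f = fopen("scratch.bin", "w+b");
          if (!f) { perror("scratch.bin"); return 1; }
          for (int i = 0; i < BLOCKS; i++) fwrite(block, sizeof block, 1, f);
          rewind(f);
          for (int i = 0; i < BLOCKS; i++) fread(block, sizeof block, 1, f);
          fclose(f);
          remove("scratch.bin");

          gettimeofday(&t1, NULL);
          printf("wall time: %.2f s\n",
                 (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
          return 0;
      }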
  • Doing a world build of anything using gcc is a benchmark of your hard drive, NOT your CPU.

    You mentioned in the article that the test isn't disk intensive, but every stage of GCC feeds the next via a file. Tons of RAM or not, you're still benchmarking your disks.

    And BTW- your hunch that gcc produces shitty PPC code is correct. Run the bytemark tests if you want a more interesting benchmark of CPU performance. Make sure to test using different compilers on the same CPUs to show how much a compiler can affect efficient CPU utilization in the software it's building.
  • by sommere ( 105088 ) on Sunday June 03, 2001 @12:30PM (#179984) Homepage
    I think I should point out that these benchmarks measure both the speed of the hardware and the efficiency of the software (such as the compiler). For instance, if you don't use the cache as efficiently, then the computer may have the potential to be much faster. Much more time has been spent on optimizing for the PC hardware, and that could account for at least some of the difference in speed.

    ----
    Althea [sourceforge.net] verified to run quickly on Mac and PC hardware.

  • Yeah, check out this [apple.com] at the Apple Store. It's the pricing breakdown of the powerbook G4.

    The "Faster" model:
    500MHz PowerPC G4
    1MB L2 cache
    256MB SDRAM memory
    20GB Ultra ATA drive
    DVD-ROM w/DVD-Video
    ATI Rage Mobility 128
    10/100BASE-T Ethernet
    56K internal modem
    Two USB ports
    One FireWire port

    The "Fastet" model:
    500MHz PowerPC G4
    1MB L2 cache
    256MB SDRAM memory
    30GB Ultra ATA drive
    DVD-ROM w/DVD-Video
    ATI Rage Mobility 128
    10/100BASE-T Ethernet
    56K internal modem
    Two USB ports
    One FireWire port
    Extra AC adapter
    Extra battery

    So here's my question: Why is the "Fastest" G4 any faster than the "Faster" G4?
    Because the hard drive is 10 gigs larger?! (they're all 4200RPM).

    Or is it the extra AC adapter that somehow makes it "fastest"?

    Friggin Apple. Buncha liars.
  • by The_Messenger ( 110966 ) on Sunday June 03, 2001 @03:36PM (#179987) Homepage Journal
    Exactly... Adobe is Apple's bitch, and Photoshop is written to scream on Macs. Also, on any modern Mac with a modern version of Photoshop, Altivec affects the performance to the degree that you aren't getting any useful CPU benchmarks from Photoshop anyway.

    I applaud what Jay did. With the release of OS X Server, it's obvious that Apple is no longer only pandering to the Photoshop market, and plain-Jane int benchmarking is very valuable in evaluating the use of Apple as a server platform; what the hell do I care if my Mac server can run Photoshop?

    But Photoshop continues to be the "benchmark" of choice for Apple. No discussion of its Java compilation speeds, or its applicability in distributed computing, or large-scale simulations, or anything else that would matter to the real computing community. Just... Photoshop. FTN, I'll stick with Solaris and NT.

    --

  • Yeah, well, I want continuous wireless power :) Miniature microwave guns!
  • Most Mac apps are compiled with CodeWarrior, which has an excellent PPC code generator.
  • by ekrout ( 139379 )
    I doubt this guy's benchmarks are very accurate. Here's one pitting the Pentium II against the PowerPC from 1997 [byte.com] (kind of old, but still very relevant), which proves to be a much closer battle.

  • So would this mean that EU hackers (where there are no software patents) could distribute patches to give GCC these optimisations (like the crypto people do?)
  • Memory prices on apple.com: [apple.com] (pc100 SDRAM)
    64 megs: $100
    128megs: $200
    256megs: $400

    Prices for PC133 memory on pricewatch.com

    64 megs [pricewatch.com]: $9
    128megs [pricewatch.com]: $15
    256megs [pricewatch.com]: $26

    Unless you think paying $400 for something that you can get for $26 is somehow fair, I think it's you who needs to stop 'being a troll'. Loser.
  • by Demonicbunny ( 145834 ) on Sunday June 03, 2001 @12:32PM (#180008)
    It's optimized for the x86. Sure, OS X ships with it, but Apple did do heavy optimization for it. Those changes may not have been rolled back into the tree yet, so I would not trust these benchmarks. You're running an OS that is optimized for Intel, and a compiler that is optimized for Intel, and comparing them to a ported copy.

    Most people know that gcc is slower on Suns than the Sun compiler. That is all about optimization. So why wouldn't it be the same for PPC?

    Try doing this benchmark on Darwin, and I'm sure the Macs will do better against the Intel boxes running Darwin than running Linux. I'm not saying that it will be faster, I'm just saying you're comparing apples and oranges.
  • by jmichaelg ( 148257 ) on Sunday June 03, 2001 @12:42PM (#180010) Journal
    Yet another benchmark that reports similar findings. [burgernet.de]

    The comparable data are:

    • Machine -- Work (Seti Blocks/Week)
    • Athlon @ 1200 MHz -- 25
    • P3 @ 750 MHz -- 18
    • Mac G4 @ 400 MHz -- 10

  • But for your accounting prof's needs a mac may still be better.

    I have 5 computers: 2 FreeBSD, 2 WinMe, 1 MacOS. Each is good for one thing in particular. The Mac is my system of choice for productivity -- spreadsheets, publishing, webdev, Photoshop, that stuff.

    The PCs do have some advantages in some areas, but they have plenty of disadvantages too. In the end they are web browsers and game boxes, to me anyway.

    Let that poor guy enjoy his powerbook in peace!

    I'll say it out loud: It's OK to use a Mac!
  • Apple has well and truly lost, their market is now people to whom image matters more than performance.

    What you are not taking into account is how the apps and the OS work -- the "feel" of the computer is MORE IMPORTANT than the raw speed for many users. Performance isn't measured just in how fast a file opens; it's how fast can you get to that file, and say force it to open with some other app than the default, and how much time is setting up the *%(@_! printer going to take?

    I do a lot of pro-level publishing work on a 400MHz PowerBook. You know what? As sick as this sounds, it is fast enough. I experience no delay in any operation long enough to make me think, "Crap, I need a new computer." And this Mac sits on my desk next to an 850MHz PC.

    I think it sucks that Apple's clock speed is lagging, but the fact that they are still in business is a testament to a couple of things:

    1. Speed isn't the most important thing. We've passed some threshold where even a 1-year old computer is just plain fast ENOUGH for a lot of tasks. And that's not complaining or compromising; it's genuinely good performance.

    2. The MacOS continues to remain a lot more useful than Windows to a lot of people... to enough people, anyway, to keep Apple afloat.

    It's OK that raw speed matters to you. But don't make it the center of the debate.

  • My mom had a G3 Powerbook. Towards the end of its warranty period, she had to send it in like 4 times for service. The last time, Apple said "enough is enough" and they sent her a new G4 powerbook.

    That's not a typo. Apple replaced a G3 powerbook with a new G4 powerbook that was much more expensive. And it didn't take a ridiculous amount of bitching at them -- THEY OFFERED IT. She had it in about 1.5 weeks. (had to ship the dud back first, in a freely-provided shipping box.)

    Some aspects of Apple's service are bad, but in my experience they come through when it counts.

  • I had a Toshiba Portege laptop at work. It needed more RAM. We cracked it open and popped a laptop-format memory module in.

    When we turned it on, the display said, "Please remove the non-compatible memory module and replace it with an authentic Toshiba part" or something like that.

    It was the ultimate insult.... Tosh forced you to buy their super expensive memory. At least when Apple was doing that crap they'd make weird arbitrary changes to the SHAPE of the memory board, so you wouldn't feel teased!
  • why would you ever want to build a beowulf cluster? they're difficult to assemble, tough to maintain and hog VAST amounts of power (x86 is, if nothing else, a power hog)... a much better solution is the appleseed cluster [ucla.edu] based entirely on mac hardware. it's fast and easy to set up, a breeze to maintain and cheap to run (oh, and much quieter too).

    beowulf? pah!

  • anything else that would matter to the real computing community

    ah yes, real computing by definition excludes photoshop... of course.

    let me explain something to you: the number of people who use photoshop so massively outweighs the number of people who use gcc that the notion of compilation benchmarks applying to the "real world" is almost laughable.

    its applicability in distributed computing...

    are awesome. you should really look into it if you are, indeed, "serious" about distributed computing. the project is called appleseed [ucla.edu]... point, click, cluster....

    No discussion of its Java compilation speeds,

    now, if you'd been paying any attention at all to this board for the last, oh, four weeks, you might have noticed the wwdc banner ad touting mac as the Next Big Java Platform. did you go and check out any of their material on java? you should really give project builder and interface builder a whirl... with those tools i'll beat you to market even if you have a compile time of zero.

    I'll stick with Solaris and NT.

    i assume from this that you're running solaris on an x86. i don't even need to go there...

  • The PPC is a great architecture, powering the RS/6000 AIX machines for years

    The POWER chips that are in AIX machines aren't quite the same architecture as the PowerPC chips in Macs. They've got bigger buses, bigger caches, and some extra instructions, I think. IBM only puts PowerPC chips into their low-end RS/6000 workstations (604e chips, I think). They don't use them in the server range.

  • The above statement is in error:
    Macs run WinNT 4 for PPC and OS/2 Warp for PPC as well as DebianPPC, SuSE PPC, YellowDog, LinuxPPC 2000. Soon Mandrake will join the list.

    Not that you'd want to when Mac OS X is so sleek.
    (okay, Linux has its place on the Mac, but X is so sweet!)

    The PPC is a great architecture, powering the RS/6000 AIX machines for years. No sense in knocking the Mac when using standard, non-optimized code. Now, perhaps this would have been a fair run if Amiga OS had been tried all the way 'round... but all Jay showed is that his current setup suits him fine.

    Yes, Jay can say that if gcc is prepared by Apple for the G4, he'll only use it when Apple sends it to the FSF... he's letting his GNU philosophy get in the way of fair-minded benchmarks. Apple complied with anything the FSF would have wanted by posting the source to their take on gcc. If he's hung up on licensing issues, that's his right, but that's no excuse for presenting the benchmarks as even-handed.

    A host is a host from coast to coast, but no one uses a host that's close
  • More Photoshop tests... sigh. If you read the article, that's what he was trying to avoid. He wanted comparisons on the type of apps HE uses.
  • by gatesh8r ( 182908 ) on Sunday June 03, 2001 @12:41PM (#180030)
    The author did mention that there are perhaps some problems with the benchmarks (for example, gcc for PPC is not as optimized or mature) and that MacOS is more optimized for Motorola's/IBM's PowerPC. It's difficult to design for something that is so proprietary in design.

    Let's see: a good Macintosh, $2000; a good x86, $1000. There's another thing I would consider a lot: the price/performance ratio. It really depends on what you are doing. Of course, most /. readers already know this. Integer-based calculations are the majority of what programs out there do. Period. Few programs are heavy on floating-point calculations, such as video and image editing. So what makes the Macintosh seem like this "supercomputer"? Adobe Photoshop 6.0??? You can't judge a computer's overall performance by one application!

    Though, dear Mac fans, don't bark yet that the whole thing is a sham. There is still some credibility in the whole thing and it needs to be looked into... prove you're better; don't flame!

  • By the way, the "per bogohertz" comparison was outright dishonest. It doubles the actual cost of the G4, even by these tests, since the G4 is a dual processor. Presumably most people who buy dual processor computers are actually planning to use both...

    Agreed! I'd really like to see how things would have come out in more SMP friendly conditions. The dual G4 seemed to be an effort by Apple to overcome the MHz war, and was priced accordingly. OTOH, the classic OS has had only limited support for SMP, so the uniprocessor numbers aren't totally misleading. Apple released a machine a few years ago - I think it was 604 based - that supported multiple processors, but because it was the only machine available, developers never got into it.

  • The sole reason Photoshop is faster on Macs than PCs (recently), and the reason Steve always brings it out to show off, is altivec.

    You've got your cart before your horse. Just like a Photoshop "benchmark" is useless to you, a gcc "benchmark" is useless to media professionals who care about things like Gaussian blurs and MPEG encoding. How many Macs do you think Apple would sell with an ad campaign centered around how fast they compile kernels?

    When you hear Apple say that a Mac is twice as fast as an Intel system, just assume they're talking about the kinds of tasks their target market would care about.

  • Everybody knows Linux is so fast it can execute an endless loop in 5 seconds flat.
  • Vector instructions can be used for many applications that you might not first think of. For example, check out this link to Stepwise.com for a good explanation of how AltiVec can be used to speed up string comparisons (a rough portable sketch of the same idea appears at the end of this comment).

    http://www.stepwise.com/Articles/Technical/StringLength.html

    What these benchmarks really show is that the gcc compilers for 80x86 are vastly superior to those for PowerPC. They also show how much of Linux (kernel and user-space apps) is unoptimized for PowerPC. That doesn't mean that PowerPC is slow - just that its full potential isn't being taken advantage of. A more useful comparison would be to compare binaries compiled with CodeWarrior. I'm sure a 1.2GHz Athlon would still kill any PowerPC, but not to the same extent these benchmarks show.

    Willy
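
    As a rough illustration of the idea behind the Stepwise article (this is a hypothetical, portable C sketch, not the AltiVec code from the article): scan the string a word at a time instead of a byte at a time, using the classic "does this word contain a zero byte?" bit trick. An AltiVec version applies the same principle 16 bytes per iteration with vec_cmpeq; the function names and the test string below are made up for the example.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Byte-at-a-time strlen: the baseline. */
        static size_t strlen_scalar(const char *s)
        {
            const char *p = s;
            while (*p)
                p++;
            return (size_t)(p - s);
        }

        /* Word-at-a-time strlen: (w - 0x01010101) & ~w & 0x80808080 is nonzero
           exactly when one of the four bytes in w is zero. */
        static size_t strlen_word(const char *s)
        {
            const char *p = s;
            while (((uintptr_t)p & 3) != 0) {   /* advance to 4-byte alignment */
                if (*p == '\0')
                    return (size_t)(p - s);
                p++;
            }
            for (;;) {
                uint32_t w;
                memcpy(&w, p, sizeof w);        /* read one aligned word */
                if ((w - 0x01010101u) & ~w & 0x80808080u)
                    break;                      /* this word holds the NUL */
                p += 4;
            }
            while (*p)                          /* pinpoint the NUL byte */
                p++;
            return (size_t)(p - s);
        }

        int main(void)
        {
            const char *msg = "vector-style string scanning, several bytes at a time";
            printf("%zu %zu\n", strlen_scalar(msg), strlen_word(msg));
            return 0;
        }

    The point is simply that "string length" turns into a data-parallel scan; on a G4 the 16-byte AltiVec variant is where the large speedups described in the article come from.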

  • I don't think Apple's more extreme claims of superior performance based on a handful of carefully selected Photoshop filters are to be taken seriously. But I think it is legitimate to assume use of the G4's AltiVec extensions, given that CodeWarrior and other tools used in MacOS development support them with virtually no added effort. Performing benchmarks with gcc misses a huge part of the G4's performance advantage, one that any normal Mac user's performance-sensitive apps would be using.

    While I've been typing this, I bet at least five people have already pointed out the same thing... ;-)

    Unsettling MOTD at my ISP.

  • Another important thing to consider is that Linux is mostly optimized for x86. The PowerPC port works, but there is a lot of improvement that needs to be done before comparing it with the x86 code. Memory management is a major area for improvement in the PPC code and possibly the reason why the benchmark spends most of its time in kernel space.

    It is also important to consider that this was not a very good comparison and I don't consider the benchmarks to be precise.
  • by abumarie ( 306669 ) on Sunday June 03, 2001 @12:39PM (#180074) Homepage
    I don't think that you will ever get an argument from anyone over how very, very, very badly Motorola has messed up by being unable to deliver faster versions of the PowerPC. However, you should look at a couple of issues with these benchmarks. First, 450/733 = .61 and 533/733 = .72, so any scores over .61 and .72 respectively indicate that the PowerPC is doing more work per clock cycle than the PIII (a tiny normalization sketch follows this comment). If Motorola can ever get their act together (and that is not a certainty), normal code on the PowerPC will run every bit as fast as, and faster than, the x86 processor. Combined with the fact that the PowerPC is a nice, quiet, and fairly energy-efficient air-cooled chip, you might have some nice machines.

    Second, all benchmarks can have some rather unintentional bias. My 1.2 Athlon would do 105 SciMarks in Windoze 98, 113 SciMarks in Wine under Redhat, and 119 under Windoze 2000 on Tim Wilkin's Science Mark benchmark. Same machine, same memory, same disks, etc.; the only difference was the OS. Even given the same OS, the tweaks a benchmark goes through are also a function of the author's machine. Please pass the salt, I need a grain.
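
    To make the per-clock arithmetic above concrete, here is a tiny, hypothetical C sketch that divides a relative benchmark score by the clock ratio. The machine names mirror the ones discussed, but the scores are made-up placeholders, not measured numbers.

        #include <stdio.h>

        int main(void)
        {
            /* Placeholder scores relative to the PIII/733 (= 1.0); not real data. */
            struct { const char *name; double mhz, score; } m[] = {
                { "G4/450",   450.0, 0.70 },
                { "G4/533",   533.0, 0.80 },
                { "PIII/733", 733.0, 1.00 },
            };
            const double base_mhz = 733.0;

            for (int i = 0; i < 3; i++) {
                double clock_ratio = m[i].mhz / base_mhz;   /* e.g. 450/733 ~= 0.61 */
                /* per-clock > 1.0 means more work per cycle than the reference PIII */
                printf("%-9s  clock ratio %.2f  per-clock %.2f\n",
                       m[i].name, clock_ratio, m[i].score / clock_ratio);
            }
            return 0;
        }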

  • http://www.jdkoftinoff.com/eqtest.tar.gz
    (GPL'd with source)

    450 MHz G4:
    1.7 gigaflops with AltiVec
    410 megaflops without AltiVec

    500 MHz Pentium:
    220 megaflops

    --jeff
  • You just sit and look at synthetic benchmark results all day? Seriously, though, what should matter to you is how well your system performs on the programs you personally use. You can toss around all the numbers you like, but ultimately, whatever gets the job done cheapest and fastest is the best thing for you to get (factoring in, of course, your personal preference). If you just point at a sheet with lots of large numbers from synthetic tasks, you're kidding yourself if you think you have a true picture of what's going on.
  • x86 is, if nothing else, a power hog

    I would encourage you to go to Motorola's and Intel's sites and look up the power consumption of the various processors. The results may surprise you. When I last checked the PPC 7400 against the PIII Coppermine, I found that the PIII used just slightly more power (1 watt or so) than an equal-MHz 7400. Of course, the Coppermine is a fully integrated solution, cache and all, whereas Apple adds 1MB of L2 cache to the 7400 to make it the G4. SRAM uses quite a bit of power and pushes the G4 over the PIII in the power-per-clock category. Or did you think Apple stuck those gigantic heatsinks in there just for fun?

  • by yerktoader ( 413167 ) on Sunday June 03, 2001 @12:37PM (#180093) Homepage
    I once saw a letter that a Macatista wrote to Popular Science (if memory serves). It was in response to the feature article on breaking the Giga-hurtz barrier.

    Unsurprisingly, the Macatista wrote that meggy-hurtz don't matter, and besides, Macs are 3 times more powerful than a Pentium 3 anyway! Pop.Sci. wrote back, saying that breaking the GHz barrier IS a milestone that's important to note, and that tests have shown that yes, in some areas, Macs are 3 times more powerful than a Pentium 3. However, these same tests show that in some areas, x86 platforms are 3 times more powerful than the Mac.

    This argument has long bored me. The architectural differences between x86 and PPC were vast until the last year or so. According to an article at Ars Technica [arstechnica.com], the Intel Pentium 3 chip is somewhat like the PPC, but the AMD Athlon is even more similar to the RISC design found in the Mac. Even if that's the case, have you ever seen the price difference between the two platforms? Plus, add in your options (and the price thereof) when buying a Mac. You take what Apple will give you. Apple's prices on memory are so laughable as to be a great stand-up routine.

    Now don't get me wrong: I have nothing against the hardware, and the only problem I have with the OS as an intermediate user is the file organisation system. I just think that Apple's management sucks, and that I get more bang for my buck going out to a local computer store, where I can support the mom-and-pops of America. A one- or two-year parts and three-year labor warranty is good enough for me.

  • This is so very true. My own limited testing of Linux (RedHat 7, 2.2 kernel) on a PIII 600 against a Sun Netra t1 105 (440MHz USII) running Solaris shows this:

    Linux, gcc 2.9.2 compile of openssl 0.9.6a: ~4 minutes
    Solaris 8, gcc 2.9.2, same compile: ~5.5 minutes

    Running make-test on the above binaries, Linux was approximately 20% faster. Using Sun's C compiler, the build took 2x longer, but the performance difference in 32-bit mode was under 10%. In 64-bit mode, the difference went back up to 18%. I have always assumed that a 440MHz USII is approximately as fast as a 600-733 MHz PIII, but maybe I'm a little off. After this, I can definitely say that gcc does fewer useful optimizations on the SPARC platform (and therefore most likely PPC as well). I still haven't figured out why the Sun compiler takes twice as long to compile the same code, though...
  • I'm not going to prerelease the results, but they will be available soon. Poorly done benchmarks are worse than none at all. A benchmark based on compiling code for different processor architectures is an example of a very poor benchmark.

    My benchmarks are real world, web application suite numbers. Nothing special, nothing rigged, I have a 20M website that's a combination of static pages, static images, PHP/mySQL dynamic pages, Perl forms-driven pages that write and update flat files and PHP/mySQL pages that update the database.

    The benchmark itself is script driven and simulates users on the site. There are 10 different user scripts, and they run 500 times each in 10 different fixed orders - currently as 10 simultaneous users. I'm looking at adding clients to increase the number of user tests, as I've been unable to max out the OSX box with this test suite.

    The simulation results are mined from the Apache log file and show the activity that you would expect from this near-real-world example. CPU time is not captured, only successful page requests. Total elapsed time is interesting. (A minimal sketch of that log-counting step appears after this comment.)

    The only thing that is "rigged" in any way, is that the pages are all set no-cache, so that all images and pages are delivered each time they are requested. As far as I can tell based on status returns, there is no caching being done.

    The website and the client scripts will be available to download from the benchmark page so you can run them yourselves if you wish. If you do run them, running the analysis script against the log file will allow you to upload your results to the benchmark server - should be an interesting set of data for different server configurations.

    I will say that the dual 500MHz OS X Server box currently outperforms the dual 850MHz Intel Linux box by a significant margin.

    I know that some will still not believe, but that's OK. You can run these tests yourself and post your findings on the benchmark page.

    I'll publish the URL soon.

    -t
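
    For the log-mining step described a few paragraphs up, here is a minimal, hypothetical sketch in C that counts successful (2xx) requests in an Apache common/combined-format access log. It is not the poster's analysis script -- just an illustration of what "mined from the Apache log file" can boil down to; the filename handling is an assumption.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
            if (!f) { perror("fopen"); return 1; }

            char line[8192];
            long total = 0, ok = 0;

            while (fgets(line, sizeof line, f)) {
                total++;
                /* The HTTP status code follows the closing quote of the
                   request field, e.g. ... "GET /index.html HTTP/1.0" 200 ... */
                char *open_q  = strchr(line, '"');
                char *close_q = open_q ? strchr(open_q + 1, '"') : NULL;
                if (!close_q)
                    continue;
                int status = atoi(close_q + 1);
                if (status >= 200 && status < 300)
                    ok++;
            }
            printf("%ld successful of %ld logged requests\n", ok, total);
            if (f != stdin) fclose(f);
            return 0;
        }

    Run it as "./logcount access_log" (or pipe a log into it); elapsed time and per-URL breakdowns, as described above, would be layered on top of the same parse.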
  • by thewitt ( 457112 ) on Sunday June 03, 2001 @02:36PM (#180103)
    I've nearly completed a web serving benchmark with multiple PPC configurations running OSX Server, and Intel and AMD hardware running Linux.

    The results look nothing like the compiling benchmark, and have convinced me to start a web hosting company using OSX Server on Macintosh hardware.

    The benchmark utilities will be downloadable and you can run them on your own favorite hardware. Benchmark requires PHP and mySQL database support, but will run on more than just Apache. I'll also set up a site where you can upload your results - configuration and resulting data.

    -t
  • by ChrisCox ( 457745 ) on Tuesday June 05, 2001 @03:10PM (#180120)
    Mr. Carmack: Just how exactly have you optimized the code for both platforms? I mean, you're famous for shipping applications with highly optimized x86 code and near debug-quality PPC code.

    Speaking as someone who has optimized a major application for both platforms: PowerPC wins in a fair test, for integer and floating point. (I have to admit, the just-announced AMD dual 1.2 and single 1.4 GHz systems do edge out the G4/733 on speed - but not by as much as most people think.)

    As for memory systems -- have you really tried them? The only things that a P4 system can do faster in main memory than a G4 system are memcpy, memset and memcmp. Anything more complicated bogs down the system and the G4 flies ahead. The P3 was worse, with RDRAM or SDRAM. Even the AMD DDR systems aren't as fast as Apple's G4 with PC133.

    As for the implementations in Photoshop -- call me when you're in San Jose. In some cases it's identical C code (with full optimizations enabled), and in many common cases it's highly optimized for each platform. Oh, and Intel has worked extensively on the x86 code.

    Chris Cox
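
    To give a feel for the distinction being drawn above between memcpy-style streaming and "anything more complicated," here is a crude, hypothetical C microbenchmark. The buffer size, stride, and pass count are arbitrary assumptions, and clock() granularity is coarse, so treat any numbers it prints as ballpark only -- it says nothing about any particular G4 or P4.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        #define BUF_BYTES (32 * 1024 * 1024)   /* well past any L2 cache */
        #define PASSES    8

        int main(void)
        {
            char *src = malloc(BUF_BYTES);
            char *dst = malloc(BUF_BYTES);
            if (!src || !dst) return 1;
            memset(src, 1, BUF_BYTES);

            /* Streaming copy: the friendly case for every memory system. */
            clock_t t0 = clock();
            for (int p = 0; p < PASSES; p++)
                memcpy(dst, src, BUF_BYTES);
            double copy_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

            /* Strided reads: touch one byte every 512 bytes, a crude stand-in
               for access patterns that defeat simple streaming/prefetch. */
            volatile long sum = 0;
            t0 = clock();
            for (int p = 0; p < PASSES; p++)
                for (size_t i = 0; i < BUF_BYTES; i += 512)
                    sum += src[i];
            double stride_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

            printf("memcpy:  %.1f MB/s\n",
                   PASSES * (BUF_BYTES / 1048576.0) / copy_s);
            printf("strided: %.3f s for %ld byte touches (sum=%ld)\n",
                   stride_s, (long)PASSES * (BUF_BYTES / 512), (long)sum);
            free(src);
            free(dst);
            return 0;
        }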

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight
