x86 vs PPC Linux benchmarks
Jay Carlson writes "We've all heard about how Apple's hardware is really fast compared to PCs. ("Supercomputer!" "Twice as fast as a Pentium!" "Most powerful laptop on the planet!") So, if you *aren't* going to use Photoshop and Final Cut Pro, how fast is it? I care more about integer apps like compilers, so I did some careful benchmarking of a few x86 and PPC Linux boxes. Submissions welcome."
Benchmarks (Score:1)
apple hardware is _MUCH_ faster (Score:1)
So this really proves nothing. If you want to benchmark two boxes like this, what you have to do is time each operation individually. Store on an Apple box is about 3x faster and load is 6x faster. ALU operations are usually about 5x faster. It seems to me that Apple is the Mercedes of computers while IA-32 is like a Kia.
Re:Gcc is an x86 compiler... (Score:1)
Re:Gcc is an x86 compiler... (Score:1)
Has anyone else noticed (Score:1)
Re:here's a thought... (Score:1)
good job for the g4 (Score:1)
Instructions for Do-It-Yourself x86-Biased Benchmarks (Score:1)
Step 1) Disable one of the Mac's processors
Step 2) Run x86-biased benchmark suite
Step 3) Publish useless benchmarks
Step 4) Congratulate self yet again for saving money by building your own x86-compatible PC so you can use it to create useless benchmarks (Uses 5x the power of a Mac! Generates 5x the heat! May be faster under some circumstances when plugged into wall power!)
Step 5) Reward yourself by enjoying music, movies, TV, books, artwork, advertising, and Web sites all produced and encoded on Macs by people who are too busy enjoying their work to have time to pause and make useless benchmarks
Re:The tests that matter to me (Score:1)
Re:I do this every year... (Score:1)
Please, be cool about him. He's only a demi-god. Nothing more.
As for the moderators... Brrrrr... Post #292 even has benchmarks posted, with a link to the source, but got Score:0... Mr. Carmack had only vapour claims to offer, beyond his persona.
Do we talk facts or do we not?
Re:Report on price/perf of x86 vs PPC (Score:1)
G4 fares quite well even with incomplete Altivec support in the FORTRAN libraries.
http://developer.apple.com/hardware/ve/acgresea
Research from outside laboratories/developers/users
An Evaluation of PowerMac G4 Systems for FORTRAN-based Scientific Computing with Application to Computational Fluid Dynamics Simulation
by Craig A. Hunter, NASA Langley Research Center
http://ad-www.larc.nasa.gov/~cah/NASA_G4_Study.
Certainly true (Score:1)
Shows Nothing. (Score:2)
Nice, what we all expected (Score:2)
TO THE AUTHOR OF THE BENCHMARK (Jay): Having the machine names in the comparison boxes is silly. We don't care what you've named your machines. Try CPU abbreviations next time (e.g. P3/733-192, G3/450-320).
Mathematica 4.0 on various platforms (Score:4)
Numbers are relative to a G3-300MHz (higher are faster).
There are more numbers on the homepage.
Athlon 1.2 GHz, 512MB, Windows 2000 [73]: 4.78993
Athlon 1.2 GHz, 512MB, Linux [72]: 4.4734
Gateway Select 1000, AMD Athlon 1000 MHz (1GHz), 512KB L2, 192 MB, Linux [65]: 3.77305
Kryotech 1GHz AMD Athlon, 512k cache, 512MB, Linux [66]: 3.69674
Gateway Select 1000, AMD Athlon 1000 MHz (1GHz), 512KB L2, 192 MB, Linux [60]: 3.57748
Dell Dimension XPS B1000r, 512MB Ram, Win98 SE [64]: 3.38084
Dell 4100, 933MHz, 128MB, Linux [63]: 3.19988
COMPAQ AlphaStation XP1000, 2 GB RAM, 4MB L2, Digital Unix 4.0F [50]: 3.16987
AMD Athlon, 800 MHz, 512 KB L2, 256 MB, Linux [61]: 3.00154
Dual Xeon 866, 512 MB, Windows 2K [71]: 2.9618
PenguinComputing, dual 800Mhz PIII, 128Mb, Linux [67]: 2.76745
Athlon 700, asus k7m, 512 mb, win98 [43]: 2.70199
Dell 800 Mhz, Pentium III, 512 MB RDRAM, Win 98 SE [57]: 2.69821
Athlon 700 MHz, 128 MB, Linux [51]: 2.67027
Athlon 650 MHz, 256 MB, 100 MHz, Win NT 4.0 [42]: 2.60662
Compaq-Digital Alpha 8200, 625MHz, 1GB RAM, DEC-UNIX [11]: 2.48495
SONY VAIO F409 notebook, PIII 650 MHz, 128MB RAM, Red Hat LINUX 6.1 [55]: 2.21137
Athlon 650 Mhz, 32 Mb, Windows 98 [32]: 2.14558
Athlon 550MHz, 128MB, Red Hat Linux 6.1 [45]: 2.13797
Dell XPS T600r, P3 Coppermine w/ 256kb, 128MB, Linux-2.0.36 [39]: 2.09939
Dell Precision 410, 2 PIII 550MHz, 256Mb RAM, Windows NT 4 [28]: 1.8561
Dell Precision 210, 550 MHz, 128 Mb, NT 4.0 [34]: 1.83606
Dell XPS T550, PIII-550, 128MB, RedHat-5.2, Linux [24]: 1.777
Gateway GP7-500, 500 MHz, 192 MB, WinNT 4 [6]: 1.70318
PowerMac 8500, 500 MHz MACh Carrier G3, 1 MB L2, 256 MB, MacOS 8.6 [35]: 1.68249
Benchmarks are so controversial (Score:5)
Re:Who buys an Apple Macintosh to run Debian (Score:1)
This sucks (Score:2)
Now you're telling me that they have SUCKY performance? I was at least expecting that the G3/500 would be able to compete with a P3/750 and a G4/500 with a P3/1GHz. Damn it!! Why must you do this to me, Slashdot? Why must you cast doubt in my heart?
PI with MMX? (Score:2)
Or more like 3.14159 + 2010?
--
Forget Napster. Why not really break the law?
Bah (Score:2)
First of all, because my older Mac basically does what I want in a style I like. I won't give Microsoft a penny- we're not talking about that, so that's moot, we're talking about Linux platforms. I also won't give Intel a penny if I have a choice- and am seriously questioning whether it's good to give nVidia money either at this point. They'll only use it to impede progress, they are already doing it, strong-arming vendors.
Second of all, the CPUs are so different anyway that it seems crazy to try and compare them. It's like the x86 are model airplane engines screaming at 60,000 rpm and the PPC is a truck rumbling at 2000 rpm. It's a difference between top-end horsepower and bottom-end torque. PPC is register-rich and has a relatively shallow pipeline. I daresay the version of GCC used could have been better, but even if it was fully optimised, the 'torque curves' of the chips are DIFFERENT!
The x86 has been designed for years to scream for doing certain very narrow tasks- the simple repetitive processing of games, the focussed processing of well optimised OS routines. In many ways this is the most common situation (though I tell you, I've seen PCs sag and go unresponsive... admittedly running windows...). However, PPCs are a decent general purpose tool for _broader_ tasks. Anything that can use all the registers at once, slog through really big amounts of data in complicated ways... hence, the way Photoshop filters keep coming into the spotlight.
I don't know what all this proves, nor do I especially care, since any of it is 'good enough'. I guess the bottom line for me is that I can't see performance metrics as being the end of the story. Look beyond the actual performance and consider what happens as a result of your buying decisions: the better people's immediate information, and the more prone they are to consider nothing whatever but raw results defined as narrowly as possible, the more you see the real reason why Microsoft is choking IT to death, why nVidia is currently threatening vendors to cut off the air supply of competing 3D chipmakers, why Intel destroyed other choice for so many years.
The end result of "X>Y, therefore all your base are belong to X" is cartels, stagnation, and the choking off of true progress. This is the case even when X is indeed >Y. You can't have a market if you're only allowed to buy one thing- and if you're not free to do whatever the hell you want, for any or no reason, you're being restricted by your own anal-retentive perfectionism and playing right into their hands. Two years from now these CPUs will ALL look like crap, but if you can successfully and publically make the case that there is only one greatest choice and nothing else will do- why, you are part of a market force handing complete dominance to that choice (like with Microsoft) and trusting it to keep on deserving your support AFTER it has no competition left and can do what it wants: and when have we EVER seen ANY company deserve our trust after it has controlled its market? It's not in their nature.
Which is to say- I'd still get a Mac. And people can do as they please, but I really can't have much respect for those who'd denigrate me for my choices- I have enough time and patience to run an older 300Mhz G3 machine in relative comfort, so that counts as 'enough', plus I cannot forget the larger situation, all these companies busily trying as hard as they can to do away with all capitalism and become the single source for whatever it is they do. That disgusts me- it's not what I call a suitable model for society. So rather than whine or write long dissertations about it, I ACT in accordance with my beliefs.
If any of you guys REALLY believe that PPC should go away- you should be running Windows, not Linux. People have all kinds of motivations for what they do, and in the larger scheme of things, 50% or 80% or even 300% processing speed disparity is pretty insignificant. Have some historical perspective.
End long, rambling, crotchety ol' rant ;)
Photoshop versus standard benchmarks (Score:4)
Re:Gcc is an x86 compiler... (Score:1)
Re:Gcc is an x86 compiler... (Score:1)
XFS on Solaris would indeed be nice; UFS is slooooooow. I'd even settle for ext2 - no journaling, but it's a helluva lot faster than UFS. Actually it'd be nice if XFS/Linux worked on big-endian systems too.
Re:Linux is best on x86 (Score:1)
Re:Gcc is an x86 compiler... (Score:1)
Re:Gcc is an x86 compiler... (Score:2)
Unfortunately unless you only use gcc, it isn't a very good benchmark for CPUs. Unless every CPU ran in the same otherwise identical system, other differences would greatly affect the outcome. The gcc test is in fact a fairly good exercise for the system as a whole - it covers disk i/o, memory bandwidth, cpu power, and the abilities of the OS. Unfortunately the CPU is not usually the bottleneck in this scenario. If you have little memory, it's the disk subsystem. If you have lots of memory, it's either the ability of the OS to use it, or the memory bandwidth itself. I'm actually a believer in the gcc benchmark - but not for CPUs.
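To make the parent's point measurable, compare the wall-clock time of a build against the CPU time its child processes actually used: if CPU time is far below wall time, the disk (or the OS), not the processor, was the bottleneck. A minimal sketch in C (the command is whatever build you care about; error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Run a command, then compare elapsed wall-clock time with the
       CPU time charged to the child processes.  A large gap means
       the build spent its time waiting on I/O, not computing. */
    int main(int argc, char **argv)
    {
        struct timeval t0, t1;
        struct rusage ru;
        double wall, cpu;

        if (argc < 2) {
            fprintf(stderr, "usage: %s 'command'\n", argv[0]);
            return 1;
        }
        gettimeofday(&t0, NULL);
        system(argv[1]);                  /* e.g. "make vmlinux" */
        gettimeofday(&t1, NULL);
        getrusage(RUSAGE_CHILDREN, &ru);  /* CPU time of the build */

        wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        cpu  = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
             + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        printf("wall %.1fs, cpu %.1fs (%.0f%% CPU-bound)\n",
               wall, cpu, 100.0 * cpu / wall);
        return 0;
    }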
Re:Benchmark downplay G3/G4 claims hosted on Mac.Com (Score:1)
It's not that the processor sucks. It's just that Apple has put a heck of a markup on them, so that they make a hell of a lot more off of them than the manufacturer does. This is the opposite of the situation in the PC world, where the PC builder makes much less profit than Intel... so what should be a superior processor architecture gets marginalized.
Thank you, Apple.
"SuperComputers" have fast fp, not int (Score:2)
In general, though, scientists don't care about integer results-- floating point is more important. I have heard that some Crays were notoriously slow at integer arithmetic.
Re:"SuperComputers" have fast fp, not int (Score:2)
However, this whole set of semantics has been one oversimplification after another.
1. At one time, a supercomputer was extremely helpful in designing nuclear weaponry--the faster the better, whether that was an ENIAC or a Beowulf.
2. So the Commerce department defined a supercomputer in terms of whether one could design a bomb on it-- and failed to recognize that PCs could catch up to that standard.
3. Apple designs a system with commendable floating point performance-- which approached the theoretical limits of a Cray (don't ask me which one). Much as Intel tried to hype the i860 as a "Cray on a chip" (albeit a very slow one), Apple hyped up the "Banned in Iraq" angle.
As a student in computational science, I recognize that certain problems demand more than just fast fp units. Some Bioinformatics problems, as you mentioned, are essentially memory intensive-- multiple alignment, etc. Others, such as tertiary structure prediction, may be more floating point intensive.
Some scientific problems truly stretch the limits of currently available computing power. It would be a waste of time and effort to cobble together generic i386 and ppc binaries, and slap them on a dual processor Macintosh with only one working processor, or an overclocked Athlon with a screwy IDE controller.
Re:"SuperComputers" have fast fp, not int (Score:3)
I really was not too impressed with the benchmarks presented here-- and not because they were anti-Apple. The fastest machine had poor disk performance-- but this is really very relevant to compilation and development. Because the benchmark was very specific-- cross-compiling for MIPS machines (wtf?)-- I have no idea how far these benchmarks can be stretched.
I think he should have tried to optimize the binaries-- after all, he was compiling from source. And how do his compilers compare with the latest gcc releases (not snapshots, though)?
BTW, although I own both a Mac and a PC, my mac is old, and I mostly use my (faster) PC. I am not an advocate of either platform...
Re:Nice, what we all expected (Score:2)
If he'd simply put "G4/450" and "P3/733" in the tables, I for one would have been suspicious about the amount of RAM, etc.
Re:Examples? Suuuuuure.... (Score:2)
Er, that is to say, I got 1,261 search results each representing a patent. I don't have 1,261 patents myself. :-)
Thanks for clearing that up... with the USPTO's track record, I wouldn't have been much surprised to learn that you actually obtained 1,261 patents in the course of looking up some patents. ;)
here's a thought... (Score:3)
But it seems to me that GCC builds much slower code on powermacs than whatever it is Apple builds their binaries with. (If Apple's using a modified GCC, which wouldn't much surprise me, I sure wish they'd throw us their patches.)
Now, if I'm right, and GCC produces relatively unoptimized binaries, shouldn't you compile GCC itself (and maybe even the rest of the system) with another compiler before pitting machines against each other in a compiling race? It seems to me that you're using a slow, badly-compiled GCC binary, otherwise.
Granted, this seems an excellent real-world test, since nobody does that. But I can't help but feel that we're (by "we", I mean the Community[TM]) currently incapable of exploiting the PowerPC, and it seems unfair to blame the chip.
Maybe I'm wrong. I would love to see some discussion from the GCC team. I just thought since nobody else seemed to have brought this up...
Re:PPC vs. X86 (Score:3)
It's not just Apple.
However, to give you a bit of good news, all Macs sold in the last several years have user-installable RAM, and with the exception of the original (Rev. A through D) iMac, it's very easy to do - easier than in many PC's.
The new G4's use standard PC133 SDRAM, all other model desktops use PC100 (the Cube and iMac), and the TiBook uses PC100 SO-DIMMs, the iBook (old and new models) uses PC66 SO-DIMMs, though PC100 works fine, too.
PC133 SO-DIMMS seem a tad flaky so far - I just got a Gateway 9500 laptop at work and still haven't gotten a 3rd party 256MB SO-DIMM that will work with it (we've tried Samsung, Micron, and Hitachi, with Infineon on the way). Apple will probably start using them with a TiBook revision at some point, after the interoperability issues vanish.
- -Josh Turiel
Re:Photoshop versus standard benchmarks (Score:2)
My point is that Motorola and Apple have really dropped the ball with respect to AltiVec and developer relations.
Re:Hmmm (Score:2)
Re:The tests that matter to me (Score:4)
the Altivec unit handles only single precision floats
-> only useful for special tasks or for calculations where you can take care of the reduced precision
-> not for general scientific numerical calculations
Re:PPC vs. X86 (Score:2)
Why? Because Apple actually has the x86 world beat in the notebook category. The cost of my iBook (I work in education) was $1545 + $237 for the AppleCare warranty. Try to configure a PC laptop the same way for $1545. My iBook has an XGA display (12", yeah, but still XGA), an AirPort card, built-in ethernet and 56K modem, as well as DVD. I swear by Dell systems, but I couldn't come close to touching the iBook for the same price, and I would have had to tolerate an external wireless antenna, as you can't have ethernet and 802.11b in a PC laptop yet...
I think that the G4 desktops are still overpriced, but the iBook line is very reasonable. The iMacs are okay, but the 15" CRT is dead, Apple. If they came out with a 17" version, they'd see a renewed interest in them...
---
Re:yab (Score:2)
Fairview (Score:2)
This benchmark in particular is bullshit. What should be looked at is peak work done per clock, and total price per clock if you want a price comparison test. Or work per watt, or maybe even cool factor per work/clock. PPC chips have the advantage of not needing an instruction decoder and having a larger register space. You can do a bunch of LOAD operations on the same clock, so more operations are running on the next clock than with x86 based systems. On the other hand, x86 processors have the advantage of a higher clock speed, which begins to negate the smaller register space.

In general, anything you compile with gcc is going to suck ass. Do you REALLY think Apple uses gcc? Goddamn, they use Motorola's compiler! They give you gcc because they don't want to pay the licensing fees on the fucking thing for every copy of OS X they sell. Motorola's compiler is oodles better than gcc ever will be, ever.

In the benchmark's preamble he says "well I don't use Photoshop" -- well goddammit, the Apple statement just became invalid, as did his entire premise. Since he wasn't benchmarking Photoshop 6 he ought to have been using a 450MHz P3 and Athlon rather than shit that is OBVIOUSLY going to be faster in mere clock speed.

If you can't tell from the tests, the compilations have a lot to do with moving data between the processor and memory. The G3 in the iMac and the G4 are already at a disadvantage due to their bus speeds, while the Athlon gets to chug along on its EV6-derivative bus. Testing the processor ought to involve shit that can be held entirely in the chip's cache (which can be considered part of the chip because, after all, it's the memory cache). Then the chip's FSB speed and register space size become important. You can then get a good measure of how much the processor is doing on every clock and how many clocks it needs to get a particular job done. At this point you've got a real test: work done per clock or per second, or watts used per work cycle, or something. Here is where you say X processor is better than Y processor. This is why SPEC tests cost so much money. They have LOTS of different tests that test things in different ways, and they still don't show a true measure of real world performance.
In the real world you might be running Final Cut Pro or Photoshop 6, in which case you damn well better have a system they are optimized on (yes, I'm aware FCP is only available on Macs). If your real world workload involves compiling, go with the system that works best for your particular task at hand. This dude did a compiling benchmark test; he has benchmarked COMPILERS compiling binaries for whatever. Everyone here's been arguing "well such and such is faster than such and such." Fuck that. This is not an x86 vs. PPC test or something. This is somebody feeling righteous because they keep getting reeled in by marketing quotes and feel like they've been lied to. Use what works, not what a marketing department says works.
Re:yab (Score:2)
Re:More work per clock. (Score:2)
Oh yeah, nothing's worse than those damn noisy Intel chips.
--
Re:"SuperComputers" have fast fp, not int (Score:2)
Ugh. This makes some assumptions that are incorrect. Several sciences make extremely heavy use of integer computations versus floating point. What's more, these are the sciences that are driving large scale computational projects today, consuming cycles at an exponentially growing rate.
Specifically I am writing about bioinformatics and related information theoretic sciences, where data mining operations, large scale (GB -> TB) database comparisons are the norm and the coming norms.
A supercomputer has always been a nebulous term. Ask 10 people for a definition, and you will get 11 answers. Fundamentally there is no single quantifiable differentiating factor between a computer and a supercomputer. It is more of a subjective view than an objective, firm quantitation.
My AMD Athlon based system can theoretically hit 3.6 GFLOPs. But can I really pull that much work through it? The G4's can hit about the same amount, theoretically. Does this make them supercomputers?
IMO, hell no. I have a simple definition (non-quantitative) of a supercomputer. A supercomputer allows you to tackle the large problems you need to handle rapidly, in order for you to effectively do your science. If your calculation runs fast on your pocket calculator, over the whole range of problem sizes you are willing to consider, then for your particular problem, your calculator is a supercomputer. If you need a massive supercluster of > 1000 processors to run your BLAST jobs in a reasonable period of time (this is my domain), then that defines your supercomputer for you. If the Sony PS2 vector units tied together on a network do wonders for your problem, well...
The definition is subjective. You cannot quantify it in any reasonable way. The work on clusters is giving the folks setting up export restrictions fits on what to do for these.
And finally, a science is NOT only floating point intensive in simulation codes. In fact the majority of codes using modern methods are more limited by memory latency and bandwidth than they are on the core FP system. The quality of the compilers matter far more than the FP ability of the chips.
Just my observation as a reformed computational physicist (learning to be a bioinformatics type). Aside from this, gross generalizations tend to be incorrect....
Re:Gcc is an x86 compiler... (Score:2)
Pentium optimisations generally reduce performance on P6 or later generation cores. Dunno how much difference 686/K7 optimisations make.
Rob
Re:More work per clock. (Score:3)
Thus any scores over .61 and .72 respectively indicate that the PowerPC is doing more per clock cycle than the PIII. If Motorola can ever get their act together (and that is not a certainty), normal code on the PowerPC will run every bit as fast as, and faster than, the x86 processor
This illustrates a common fallacy - if chip A does more work per cycle than chip B, then chip A is "better" and as soon as those "idiots" who make it get the clock speeds up there it will perform better. This neglects the fact that B may do less work per cycle precisely because it is designed for extreme clock speeds, and in fact there are plenty of instances where the "speed demon" CPUs (high clock speeds, simple instructions) outdo the brainiac chips (lower clock, beefier instructions). The reason the Pentium 4 trounces any PowerPC is that it is designed to scale to 2GHz. Of course it does less work per clock, but overall it does more per second, and that is the more important metric.
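A worked example with made-up numbers shows why per-clock comparisons mislead:

    throughput = (work per cycle) x (cycles per second)
    "brainiac":    1.4 instructions/cycle x  500 MHz = 700 million instructions/sec
    "speed demon": 0.9 instructions/cycle x 1000 MHz = 900 million instructions/sec

The chip doing less per clock still gets more done per second, and seconds are what you actually wait for.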
Re:Ah, statistics (Score:4)
No, the Apple Store [apple.com] price I quoted in that table is for a single-processor G4/533. Go price it yourself.
(I'd give a URL but the Apple Store is too session-based.)
Re:Oh deary me... (Score:4)
All this means is that compiling on a RISC architecture is bound to be a great deal slower.
I'm aware of that, and I talk about it on that page. See the "Choice of workloads" section.
The install-egcs test does measure native compilation performance. This is relevant for people who use the box for development.
The install-glibc and cross-gcc tests are both compiling to a single RISC architecture, little-endian MIPS. The amount of effort required to optimize for PPC or x86 doesn't factor into this.
If you don't care about development compile times, just look at the cross-only numbers.
Re:Oh deary me... (Score:2)
Re:apple hardware is _MUCH_ faster (Score:2)
Re:No, not really. (Score:2)
Do you have any examples?
Surely they can't have patented emitting certain code though - just the algorithms they use to produce it?
Re:Fairview (Score:3)
Hmmm (Score:3)
He just bought a titanium powerbook.
'nuff said.
Although that cinema display is super slick (as it should be for its price tag...).
But still an interesting test... (Score:4)
No, not really. (Score:5)
Hi. I'm a GCC maintainer. I don't work on this part of the compiler, but I can speak to this point:
Nearly all of the optimizations in GCC are not machine-specific. Those kinds of optimizations, ones which are specific to the processor, are called peephole optimizations, and while every little bit helps, they don't make that much of a difference. The big ones are done at an intermediate level, before the compiler "knows" what processor it's using and starts to chunk out the opcodes.
More specifically, unlike the Linux kernel, glibc, and other major projects, GCC is not designed for and targeted primarily at Intel chips. The x86 is just one more back-end like any other; sometimes it falls behind and sometimes it pulls ahead, development-wise.
Some have, some have but won't be in the upcoming 3.0 release in a few weeks, and some are yet to come.
The biggest problem is that many of the really cool optimizations -- the ones that make a big difference and aren't CPU-specific -- have been patented by IBM and other major players.
Examples? Suuuuuure.... (Score:5)
I can give you over a thousand examples, with the help of our f[r]iendly patent office (my tax dollars at work). Just go to http://www.uspto.gov/ [uspto.gov], look under the green Patent Grants area, follow the Advanced Search link, and search on "compiler and optimization". Doing this today, I got 1,261 patents, but some of them don't apply here.
Er, that is to say, I got 1,261 search results each representing a patent. I don't have 1,261 patents myself. :-)
Re:Gcc is an x86 compiler... (Score:2)
The Three types of Lies (Score:5)
Lies,
Damn Lies, and
Benchmarks
Nothing really new here, is there? (Score:5)
MacOS X, stuff like Maya, Final Cut Pro, etc. etc. quite obviously runs better on PPCs, barring some strange circumstances. I imagine that with enough "brute force" (RAM, dual processors, etc.) one could get a PC to run this stuff faster than a Mac.. but what's the point? You might as well just keep it simple and buy a Mac that'll run it pretty well outta the box.
I agree, though, that cost is an important consideration. With the 760MP around the corner, if it ever does surface in quantity, dual Athlons might give dual G4s a bit of a whippin', especially considering AMD's prices as of late. In general I find you can buy a PC with a much faster proc, more RAM, etc. for the same cost as a Mac from the Apple Store.
Still, even a 1.2GHz Athlon would probably choke on OS X, and the G4 will at most hiccup...
Re:Meanwhile...in the real world (Score:2)
When we developers don't have to wait a full hour for a full rebuild when we're compiling today's games. (If you think that's bad, the Windows 2k guys had to wait *12* hours for a full build.)
Hardware is SLOW SLOW SLOW.
Re:there's a reason it does more per clock (Score:2)
Since the others who corrected this statement did so in AC mode, I really thought that I should correct this in a way that will get read.
Most instructions in either a RISC or CISC chip take more than 1 clock cycle to complete. This is why pipelining works. Typically a RISC processor (like MIPS or PPC) will take more machine instructions to accomplish a given task. The advantage here is that the chip can be better optimised for a smaller set of possible instructions and is likely to finish each instruction in a shorter amount of time. The drawback is that the larger number of instructions required to complete a given task is likely to take up a larger amount of space in RAM and on disk.
When dealing with a CISC processor like the x86 family, you have a larger number of instructions to choose from and can thus accomplish a given task in a fewer number of instructions. Because the length of each instruction can vary, the most used instructions are usually the shortest and can therefore save RAM and disk space. The drawback is that the processor is less optimised for each particular task that it knows how to do and may therefore require more clock cycles to execute each instruction.
Please note that my comments above are generalizations. There are a large number of tradeoffs involved in creating a processor. Consideration must be given to cost, power consumption, supported instructions, compatibility, clock rate, pipeline length, and so forth. This is not an easy task to achieve because there are so many different variables. This also means that when you are evaluating the finished product you must consider more than just the number of instructions (CISC vs. RISC) or the clock rate or the number of instructions executed per second (MIPS).
Furthermore, the processor is only one part of the whole computing solution. In addition to the processor, you need the supporting motherboard and RAM systems, which introduce a whole slew of bottlenecks to the system. You also need a good compiler which optimises appropriately for your priorities (program size vs. execution speed, etc.). Also consider that a system may be well optimised for a problem that you are not trying to solve. Some chips are better at solving floating point problems while others are better at solving mostly integer problems.
________________________
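As a schematic illustration of the instruction-count tradeoff described above (the comments show typical code shapes; real compiler output varies, so treat this as an illustration, not actual emitted code):

    #include <stdio.h>

    /* a += b, with both variables in memory.
     *
     * Typical CISC (x86) shape -- memory operands allowed:
     *     mov  eax, [b]
     *     add  [a], eax       ; 2 variable-length instructions
     *
     * Typical RISC (PPC/MIPS) shape -- load/store only:
     *     lw   r1, a
     *     lw   r2, b
     *     add  r1, r1, r2
     *     sw   r1, a          ; 4 fixed-length instructions
     */
    void accumulate(int *a, const int *b)
    {
        *a += *b;
    }

    int main(void)
    {
        int a = 40, b = 2;
        accumulate(&a, &b);
        printf("%d\n", a);    /* prints 42 */
        return 0;
    }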
Re:More work per clock. (Score:2)
Only on the desktop. Itanium runs at a mere 800 MHz (and yet has the fastest FP performance in the world).
Itanium is actually a pretty good argument against the sheer cluelessness of people who insist on doing performance-per-clock comparisons. It is manufactured in P858, the same process as the Pentium 4, yet runs at less than half the clock speed. It uses much more power, has much lower integer performance, has higher FP performance, has lower memory performance, and is more scalable for multiprocessing. What sort of generalization are you going to derive from THAT?
For what may be hoped to be introduced in the near future, a 1 GHz PPC chip should outperform a 1.1 Intel and have some other potential redeeming characteristics.
This is meaningless. For starters, Intel has a 1.7 GHz processor out now, so it doesn't make sense to compare it to a 1.1 GHz processor (or various vapor PPC products). Second, by the time a 1 GHz PPC is finally shipping, Intel will have something a lot faster. Third, you are assuming that performance scales linearly with clock speed, which is a horrible assumption (clue: the memory performance is not affected by clock speed changes).
Having the long pipeline so you can scale past 2 GHz is not all that it's cracked up to be in the real world. Mis-predicts cause too many pipeline flushes, with other bad potential side effects. For some stuff it's fine, for many things it ain't. The PPC runs with a very short pipe.
It doesn't matter. If you have two computers, one with double the pipeline length, and the other with half the cycle time, the misprediction penalty will be identical. One will have to recover double the number of stages, but since each stage takes half as long, it's the same.
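Worked out with round (hypothetical) numbers:

    mispredict penalty = (stages to refill) x (time per stage)
    deep pipe:    20 stages x 0.5 ns/cycle = 10 ns per mispredict
    shallow pipe: 10 stages x 1.0 ns/cycle = 10 ns per mispredict

Measured in wall-clock time the penalty is identical; the deep pipe just counts it as more cycles.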
Re:Gcc is an x86 compiler... (Score:2)
Given that your SPARC compilation was slower than your Intel compilation, there are at least three possible explanations:
I do this every year... (Score:5)
I'll set up whatever current game I am working on to run with the graphics stubbed out so it is strictly a CPU load. We just did this recently while putting the DOOM demo together for MacWorld Tokyo.
I'll port long-run time off line utilities.
I'll sometimes write some synthetic benchmarks.
Now understand that I LIKE Apple hardware from a systems standpoint (every time I have to open up a stupid PC case, I think about the Apple G3/G4 cases) , and I generally support Apple, but every test I have ever done has had x86 hardware outperforming PPC hardware.
Not necessarily by huge margins, but pretty conclusively.
Yes, I have used the MrC compiler and tried all the optimization options.
Altivec is nice and easy to program for, but in most cases it is going to be held up because the memory subsystems on PPC systems aren't as good as on the PC.
Some operations in Premiere or Photoshop are definitely a lot faster on Macs, and I would be very curious to see the respective implementations on PPC and x86. They damn sure won't just be the same C code compiled on both platforms, and it may just be a case of lots of hand optimized code competing against poorer implementations. I would like to see a Michael Abrash or Terje Mathisen take the x86 SSE implementation head to head with the AltiVec implementation. That would make a great magazine article.
I'll be right there trumpeting it when I get a Mac that runs my tests faster than any x86 hardware, but it hasn't happened yet. This is about measurements, not tribal identity, but some people always wind up being deeply offended by it...
John Carmack
Oh deary me... (Score:4)
Firstly, the act of compiling is very different on different architectures, even without considering optimisation strategies. Compiling code into the CISC code of the x86 architecture is very different from compiling for a RISC chip such as the PowerPC. For a start, instruction ordering etc. for a RISC chip, even for not-really-optimised code, can take far more processing time. Then add optimisation, which on a RISC architecture is a FAR more complex task.
All this means is that compiling on a RISC architecture is bound to be a great deal slower.
Basically, this "benchmark" is measuring not only the intrinsic speed differences of the architectures and chips but also the degree of optimisation the native compilers can cope with, and the extra processing power needed to generate the code during the compile stages.
Basically, using compilation as a benchmark is not at all useful, other than to test the difference in speed of two similar pieces of equipment using identical software (i.e. compilers & OS), or the difference between two versions of the same OS, or two versions of the same compiler. You can only change one variable to deliver a meaningful benchmark using the method chosen in this "study."
The only way to get a half-way meaningful benchmark for the two systems used here would be to write a program which did lots of disk I/O and integer manipulation, while worrying about whether it's being biased for or against certain types of architecture or use (e.g. loops sitting in cache etc.). This would give you an idea of the real-world speed differences between the two systems. However, you won't be just measuring the intrinsic speed of the machine but also the different ways the kernels have to do things on the two architectures, the degree of optimisation the compilers building the kernel and the program could generate, and the speed of the hard disk built into the machine.
As you can see, it's a tricky thing comparing two types of machine.
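For what it's worth, here is a bare-bones sketch of the kind of mixed test suggested above, with an integer phase and a disk phase timed separately so you can see which one a given box loses on. The file name, sizes, and iteration counts are arbitrary placeholders:

    #include <stdio.h>
    #include <sys/time.h>

    #define CHUNK  (1024 * 1024)
    #define CHUNKS 64              /* 64 MB of file traffic */
    #define ITERS  50000000L

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        static char buf[CHUNK];
        unsigned v = 12345;
        double t;
        FILE *f;
        long i;

        /* integer phase: a multiplicative PRNG keeps the ALU busy */
        t = now();
        for (i = 0; i < ITERS; i++)
            v = v * 1664525u + 1013904223u;
        printf("int phase:  %.2fs (v=%u)\n", now() - t, v);

        /* disk phase: write, then re-read, a largish file */
        t = now();
        f = fopen("bench.tmp", "wb");
        for (i = 0; i < CHUNKS; i++)
            fwrite(buf, 1, CHUNK, f);
        fclose(f);
        f = fopen("bench.tmp", "rb");
        while (fread(buf, 1, CHUNK, f) == CHUNK)
            ;
        fclose(f);
        remove("bench.tmp");
        printf("disk phase: %.2fs\n", now() - t);
        return 0;
    }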
Brilliant. You've benchmarked your hard drive. (Score:2)
You mentioned in the article that the test isn't disk intensive, but every stage of GCC feeds the next via a file. Tons of RAM or not, you're still benchmarking your disks.
And BTW- your hunch that gcc produces shitty PPC code is correct. Run the bytemark tests if you want a more interesting benchmark of CPU performance. Make sure to test using different compilers on the same CPUs to show how much a compiler can affect efficient CPU utilization in the software it's building.
Just to be fair (Score:3)
----
Althea [sourceforge.net] verified to run quickly on Mac and PC hardware.
Apple Has Always Been Deceptive - Look at this! (Score:2)
The "Faster" model:
500MHz PowerPC G4
1MB L2 cache
256MB SDRAM memory
20GB Ultra ATA drive
DVD-ROM w/DVD-Video
ATI Rage Mobility 128
10/100BASE-T Ethernet
56K internal modem
Two USB ports
One FireWire port
The "Fastet" model:
500MHz PowerPC G4
1MB L2 cache
256MB SDRAM memory
30GB Ultra ATA drive
DVD-ROM w/DVD-Video
ATI Rage Mobility 128
10/100BASE-T Ethernet
56K internal modem
Two USB ports
One FireWire port
Extra AC adapter
Extra battery
So here's my question: Why is the "Fastest" G4 any faster than the "Faster" G4?
Because the hard drive is 10 gigs larger?! (They're all 4200RPM.)
Or is it the extra AC adaptor that somehow makes it "fastest"?
Friggin Apple. Buncha liars.
Re:Hmmm (Score:3)
I applaud what Jay did. With the release of OS X Server, it's obvious that Apple is no longer only pandering to the Photoshop market, and plain-Jane int benchmarking is very valuable in evaluating the use of Apple as a server platform; what the hell do I care if my Mac server can run Photoshop?
But Photoshop continues to be the "benchmark" of choice for Apple. No discussion of its Java compilation speeds, or its applicability in distributed computing, or large-scale simulations, or anything else that would matter to the real computing community. Just... Photoshop. FTN, I'll stick with Solaris and NT.
--
Re:Hmmm (Score:2)
Re:here's a thought... (Score:2)
Hmmm (Score:2)
Re:Examples? Suuuuuure.... (Score:2)
Re:PPC vs. X86 (Score:2)
Apple's prices:
64 megs: $100
128 megs: $200
256 megs: $400
Prices for PC133 memory on pricewatch.com:
64 megs [pricewatch.com]: $9
128 megs [pricewatch.com]: $15
256 megs [pricewatch.com]: $26
Unless you think paying $400 for something that you can get for $26 is somehow fair, I think it's you who needs to stop 'beeing a troll'. Loser.
Gcc is an x86 compiler... (Score:3)
Most people know that gcc is slower on Suns than the Sun compiler. That is all about optimization. So why wouldn't it be the same for PPC?
Try doing this benchmark on Darwin, and I'm sure the Macs will do better against the Intel boxes running Darwin than running Linux. I'm not saying that it will be faster, I'm just saying you're comparing apples and oranges.
yab (Score:3)
The comparable data are:
Re:Hmmm (Score:2)
But for your accounting prof's needs a mac may still be better.
I have 5 computers: 2 FreeBSD, 2 WinMe, 1 MacOS. Each is good for one thing in particular. The Mac is my system of choice for productivity -- spreadsheets, publishing, webdev, Photoshop, that stuff.
The PCs do have some advantages in some areas, but they have plenty of disadvantages too. In the end they are web browsers and game boxes, to me anyway.
Let that poor guy enjoy his powerbook in peace!
I'll say it out loud: It's OK to use a Mac!
Re:Hmmm (Score:2)
What you are not taking into account is how the apps and the OS work -- the "feel" of the computer is MORE IMPORTANT than the raw speed for many users. Performance isn't measured just in how fast a file opens; it's how fast you can get to that file and, say, force it to open with some other app than the default, and how much time setting up the *%(@_! printer is going to take.
I do a lot of pro-level publishing work on a 400MHz PowerBook. You know what? As sick as this sounds, it is fast enough. I experience no delay in any operation long enough to make me think, "Crap, I need a new computer." And this Mac sits on my desk next to an 850MHz PC.
I think it sucks that Apple's clock speed is lagging, but the fact that they are still in business is a testament to a couple of things:
1. Speed isn't the most important thing. We've passed some threshold where even a 1-year old computer is just plain fast ENOUGH for a lot of tasks. And that's not complaining or compromising; it's genuinely good performance.
2. The MacOS continues to remain a lot more useful than Windows to a lot of people... to enough people, anyway, to keep Apple afloat.
It's OK that raw speed matters to you. But don't make it the center of the debate.
Re:Hmmm (Score:2)
My mom had a G3 Powerbook. Towards the end of its warranty period, she had to send it in like 4 times for service. The last time, Apple said "enough is enough" and they sent her a new G4 powerbook.
That's not a typo. Apple replaced a G3 powerbook with a new G4 powerbook that was much more expensive. And it didn't take a ridiculous amount of bitching at them -- THEY OFFERED IT. She had it in about 1.5 weeks. (had to ship the dud back first, in a freely-provided shipping box.)
Some aspects of Apple's service are bad, but in my experience they come through when it counts.
Re:PPC vs. X86 (Score:2)
I had a Toshiba Portege laptop at work. It needed more RAM. We cracked it open and popped a laptop-format memory module in.
When we turned it on, the display said, "Please remove the non-compatible memory module and replace it with an authentic Toshiba part" or something like that.
It was the ultimate insult.... Tosh forced you to buy their super expensive memory. At least when Apple was doing that crap they'd make weird arbitrary changes to the SHAPE of the memory board, so you wouldn't feel teased!
Re:Benchmarks are so controversial (Score:2)
beowulf? pah!
Re:Hmmm (Score:2)
ah yes, real computing by definition excludes photoshop... of course.
let me explain something to you: the number of people who use photoshop so massively outweighs the number of people who use gcc that the notion of compilation benchmarks applying to the "real world" is almost laughable.
its applicability in distributed computing...
are awesome. you should really look into it if you are, indeed, "serious" about distributed computing. the project is called appleseed [ucla.edu]... point, click, cluster....
No discussion of its Java compilation speeds,
now, if you'd been paying any attention at all to this board for the last, oh, four weeks, you might have noticed the wwdc banner ad touting mac as the Next Big Java Platform. did you go and check out any of their material on java? you should really give project builder and interface builder a whirl... with those tools i'll beat you to market even if you have a compile time of zero.
I'll stick with Solaris and NT.
i assume from this that you're running solaris on an x86. i don't even need to go there...
Re:Hmmm (Score:2)
The POWER chips that are in AIX machines aren't quite the same architecture as the PowerPC chips in Macs. They've got bigger busses, bigger caches, and some extra instructions, I think. IBM only puts PowerPC chips into their low-end RS/6000 workstations (604e chips, I think). They don't use them in the server range.
Re:Hmmm (Score:2)
Macs run WinNT 4 for PPC and OS/2 Warp for PPC as well as DebianPPC, SuSE PPC, YellowDog, LinuxPPC 2000. Soon Mandrake will join the list.
Not that you'd want to when Mac OS X is so sleek.
(okay, Linux has its place on the Mac, but X is so sweet!)
The PPC is a great architecture, powering the RS/6000 AIX machines for years. No sense in knocking the Mac when using standard non optimized code. Now, perhaps this would have been a fair run if Amiga OS had been tried all the way 'round... but all Jay showed is that his current setup suits him fine.
Yes, Jay can say that if gcc is prepared by Apple for the G4, he'll only use it when Apple sends it to the FSF... he's letting his GNU philosophy get in the way of fair-minded benchmarks. Apple complied with anything the FSF would have wanted by posting the source to their take on gcc. If he's hung up on licensing issues, that's his right, but that's no excuse for publishing benchmarks as even-handed.
A host is a host from coast to coast, but no one uses a host that's close
Re:Hmmm (Score:2)
Re:Hmmm (Score:3)
Let's see: good Macintosh, $2000; good x86, $1000. There's another thing I would consider a lot: price/performance ratio. It really depends on what you are doing. Of course, most /. readers already know this. Integer-based calculations are in the majority of programs out there. Period. Few programs are heavy on floating-point calculations, such as video and image editing. So what makes the Macintosh seem to be this "supercomputer"? Adobe Photoshop 6.0??? You can't judge a computer's overall performance by one application!
Though, dear Mac fans, don't bark yet that the whole thing is a sham. There is still some credibility in the whole thing and it needs to be looked into... prove you're better; don't flame!
Good point! (Score:2)
Agreed! I'd really like to see how things would have come out in more SMP friendly conditions. The dual G4 seemed to be an effort by Apple to overcome the MHz war, and was priced accordingly. OTOH, the classic OS has had only limited support for SMP, so the uniprocessor numbers aren't totally misleading. Apple released a machine a few years ago - I think it was 604 based - that supported multiple processors, but because it was the only machine available, developers never got into it.
Re:Photoshop versus standard benchmarks (Score:2)
You've got your cart before your horse. Just like a Photoshop "benchmark" is useless to you, a gcc "benchmark" is useless to media professionals who care about things like Gaussian blurs and MPEG encoding. How many Macs do you think Apple would sell with an ad campaign centered around how fast they compile kernels?
When you hear Apple say that a Mac is twice as fast as an Intel system, just assume they're talking about the kinds of tasks their target market would care about.
With Linux, hardware performance doesn't matter (Score:5)
Vector Instruction can help everywhere.. (Score:2)
http://www.stepwise.com/Articles/Technical/StringLength.html
What these benchmarks really show is that the gcc compilers for 80x86 are vastly superior to those for PowerPC. It also shows how the majority of Linux (kernel and user space apps) is un-optimized for PowerPC. It doesn't mean that PowerPC is slow - just that its full potential isn't being taken advantage of. A more useful comparison would be to compare binaries compiled with CodeWarrior. I'm sure a 1.2GHz Athlon would still kill any PowerPC, but not to the same extent these benchmarks show.
Willy
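The Stepwise article linked above covers the AltiVec version; the same idea in portable C, examining a whole machine word per iteration instead of one byte, looks roughly like this (a sketch of the technique, not the article's code):

    #include <stdint.h>
    #include <stdio.h>

    /* Word-at-a-time strlen: 4 bytes per loop iteration.  The bit
       trick flags a word containing a zero byte; a vector unit like
       AltiVec applies the same idea 16 bytes at a time. */
    size_t wstrlen(const char *s)
    {
        const char *p = s;

        while ((uintptr_t)p & 3) {         /* align to 4 bytes */
            if (*p == 0)
                return p - s;
            p++;
        }
        for (;;) {
            uint32_t w = *(const uint32_t *)p;
            if ((w - 0x01010101u) & ~w & 0x80808080u)
                break;                     /* some byte of w is zero */
            p += 4;
        }
        while (*p)                         /* pin down which byte */
            p++;
        return p - s;
    }

    int main(void)
    {
        printf("%lu\n", (unsigned long)wstrlen("hello, world"));
        return 0;
    }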
How about Altivec? (Score:2)
While I've been typing this, I bet at least five people have already pointed out the same thing... ;-)
Unsettling MOTD at my ISP.
Linux is best on x86 (Score:2)
It is also important to note that this was not a very good comparison, and I don't consider the benchmarks to be precise.
More work per clock. (Score:5)
The tests that matter to me (Score:5)
(gpl'd with source)
450 mhz g4:
1.7 gigaflops with altivec
410 megaflops without altivec
500 mhz pentium:
220 megaflops
--jeff
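For anyone wanting to reproduce numbers like these, the core of such a test is just a timed chain of multiply-adds. A minimal scalar sketch follows (iteration count and constants are arbitrary, and being a serial dependency chain it measures FPU latency more than peak throughput; a vectorized AltiVec/SSE version does four single-precision operations per instruction, which is where gaps like the one above come from):

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        const long n = 100000000L;    /* 1e8 iterations, 2 flops each */
        double a = 1.0, b = 1.000000001, c = 0.5;
        struct timeval t0, t1;
        double secs;
        long i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < n; i++)
            a = a * b + c;            /* one multiply + one add */
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        /* print a so the compiler can't delete the loop */
        printf("%.0f megaflops (a=%g)\n", 2.0 * n / secs / 1e6, a);
        return 0;
    }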
So basically.... (Score:2)
Re:Benchmarks are so controversial (Score:2)
I would encourage you to go to Motorola's and Intel's sites and look up the power consumption of the various processors. The results may surprise you. When I last checked the PPC 7400 vs the PIII Coppermine, I found that the PIII used just slightly more (1 watt or so) power than an equal-MHz 7400. Of course, the Coppermine is a fully rolled solution, cache and all, whereas Apple adds 1MB of L2 cache to the 7400 to make it the G4. SRAM uses quite a bit of power and pushes the G4 over the PIII in the power/clock category. Or did you think Apple stuck those gigantic heatsinks in there just for fun?
PPC vs. X86 (Score:3)
Unsurprisingly, the Macatista wrote that meggy-hurtz don't matter, and besides, Macs are 3 times more powerful than a Pentium 3 anyway! Pop. Sci. wrote back, saying breaking the GHz barrier IS a milestone that's important to note, and that tests have shown that yes, in some areas, Macs are 3 times more powerful than a Pentium 3. However, these same tests show that in other areas, x86 platforms are 3 times more powerful than the Mac.
This argument has long bored me. The architectural differences between x86 and PPC were vast until the last year or so. According to an article at Ars Technica [arstechnica.com], the Intel Pentium 3 chip is somewhat like the PPC, but the AMD Athlon is even more similar to the RISC chip found in the Mac. Even if that's the case, have you ever seen the price difference between the two platforms? Plus add in your options (and the price thereof) when buying a Mac. You take what Apple will give you. Apple's prices on memory are so laughable as to be a great stand-up routine.
Now don't get me wrong, I have nothing against the hardware, and the only problem I have with the OS as an intermediate user is the file organisation system. I just think that Apple's management sucks, and that I get more bang for my buck going out to a local computer store, where I can support the mom&pop's of America. A one- or two-year parts and three-year labor warranty is good enough for me.
Re:Gcc is an x86 compiler... (Score:2)
Re:Web benchmark coming (Score:2)
My benchmarks are real world, web application suite numbers. Nothing special, nothing rigged, I have a 20M website that's a combination of static pages, static images, PHP/mySQL dynamic pages, Perl forms-driven pages that write and update flat files and PHP/mySQL pages that update the database.
The benchmark itself is script driven and simulates users on the site. There are 10 different user scripts, and they run 500 times each in 10 different fixed orders - currently as 10 simultaneous users. I'm looking at adding clients to increase the number of user tests, as I've been unable to max out the OSX box with this test suite.
The simulation results are mined from the Apache log file and show the activity that you would expect from this near-real world example. CPU time is not captured, only successful page requests. Total elapsed time is interesting.
The only thing that is "rigged" in any way is that the pages are all set no-cache, so that all images and pages are delivered each time they are requested. As far as I can tell based on status returns, there is no caching being done.
The website and the client scripts will be available to download from the benchmark page so you can run them yourselves if you wish. If you do run them, running the analysis script against the log file will allow you to upload your results to the benchmark server - should be an interesting set of data for different server configurations.
I will say that the dual 500MHz OSX Server currently outperforms the dual 850MHz Intel Linux box by a significant margin.
I know that some will still not believe, but that's OK. You can run these tests yourself and post your findings on the benchmark page.
I'll publish the URL soon.
-t
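Until the URL is up, the skeleton of such a driver is straightforward. Here is a hedged sketch using libcurl (the URLs are placeholders; the real scripts mix static pages, PHP/mySQL pages, and form posts as described above):

    #include <stdio.h>
    #include <sys/time.h>
    #include <curl/curl.h>

    /* Replay a fixed "user script" of page fetches and report the
       total elapsed time and pages served. */
    static size_t sink(char *p, size_t sz, size_t n, void *u)
    {
        (void)p; (void)u;
        return sz * n;                /* discard the page body */
    }

    int main(void)
    {
        const char *script[] = {
            "http://testbox/index.html",
            "http://testbox/page.php?id=1",
            "http://testbox/form.pl",
        };
        const int nurls = sizeof(script) / sizeof(script[0]);
        const int runs = 500;
        struct timeval t0, t1;
        int r, i, pages = 0;
        CURL *h;

        curl_global_init(CURL_GLOBAL_ALL);
        h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, sink);

        gettimeofday(&t0, NULL);
        for (r = 0; r < runs; r++)
            for (i = 0; i < nurls; i++) {
                curl_easy_setopt(h, CURLOPT_URL, script[i]);
                if (curl_easy_perform(h) == CURLE_OK)
                    pages++;
            }
        gettimeofday(&t1, NULL);

        printf("%d pages in %.1fs\n", pages,
               (double)(t1.tv_sec - t0.tv_sec)
               + (t1.tv_usec - t0.tv_usec) / 1e6);
        curl_easy_cleanup(h);
        curl_global_cleanup();
        return 0;
    }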
Web benchmark coming (Score:3)
The results look nothing like the compiling benchmark, and have convinced me to start a web hosting company using OSX Server on Macintosh hardware.
The benchmark utilities will be downloadable and you can run them on your own favorite hardware. Benchmark requires PHP and mySQL database support, but will run on more than just Apache. I'll also set up a site where you can upload your results - configuration and resulting data.
-t
Re:I do this every year... (Score:3)