Hardware

The Battle in 64-bit Land, 2003 and Beyond

An anonymous reader writes "Paul DeMone has an excellent article up at Real World Technologies on the future of 64-bit computing. Find out where MIPS, HP, Intel, AMD, Sun, Fujitsu, and IBM are headed."
This discussion has been archived. No new comments can be posted.

  • 64 bits? (Score:3, Funny)

    by Anonymous Coward on Sunday February 02, 2003 @07:54PM (#5212373)

    when we get to 1mbit is when things start to get interesting, until then..............
  • by trmj ( 579410 ) on Sunday February 02, 2003 @07:57PM (#5212386) Journal
    Intel will release a 64 bit processor first, but 2 months later AMD will come out with a 61 bit processor that runs twice as fast. Don't ask me how, or even why speed is relevant to the computing power, but they will do it.

    Then, 6 years later, China will come out with their own.
  • Here's a better URL (Score:4, Informative)

    by wiggys ( 621350 ) on Sunday February 02, 2003 @08:01PM (#5212400)
  • Heat and power (Score:5, Interesting)

    by Autonymous Toaster ( 646656 ) on Sunday February 02, 2003 @08:02PM (#5212404) Homepage
    The article is very detailed on many points, but doesn't seem to have much mention of environmental aspects like heat dissipation. I can remember when this was a big issue with every new CPU, but lately it seems to have been swept under the rug. What's changed?

    I'm certainly interested in the speed of CPUs, but heat production in the embedded space happens to be a bigger issue for me.

    • Re:Heat and power (Score:3, Informative)

      by Exitthree ( 646294 )

      I think Intel drove this change. With the emphasis on speed, speed, speed (MHz is everything), Intel spent all of its effort on clock speed and lost sight of efficiency.

      Now this is partially coming back to bite them. They can't market the Itanium 1 successfully at 800 MHz, even if it performs like a 2 GHz chip, because of the perceived differential. The Itanium 2 fares better, but it's still a power hog. The companies that focus on a balance between clock speed and efficient design are the only winners (namely IBM) because their chips have wider application. You won't see an Itanium 2 in a laptop, but you might see a PPC 970.

      • With the emphasis on speed speed speed, MHz is everything, Intel spent all of its money on clock-cycles and lost sight of efficiency.

        Intel's emphasis for the P4 was on performance, and one way to increase performance is to increase the pipeline depth [colorado.edu]. This approach is just as valid as AMD's strategy, and judging by AMD's troubles increasing clock speed and manufacturing problems, perhaps Intel is on to something.

        If you are looking for efficiency from Intel, take a look at the P4M or centrino lines.

        You won't see an Itanium 2 in a laptop, but you might see a PPC 970

        Intel would cringe to see an Itanium in a laptop; it wasn't even designed for desktop use. Itanium is competing against the UltraSPARCs of the world for huge $20k+ MP servers.
      • Totally wrong (Score:5, Insightful)

        by zealot ( 14660 ) <xzealot54x@NOSpaM.yahoo.com> on Sunday February 02, 2003 @10:19PM (#5212899)
        They can't market the Itanium 1 successfully at 800 MHz, even if it performs like a 2 GHz chip, because of the perceived differential. The Itanium 2 fares better, but it's still a power hog. The companies that focus on a balance between clock speed and efficient design are the only winners (namely IBM) because their chips have wider application. You won't see an Itanium 2 in a laptop, but you might see a PPC 970.


        You couldn't possibly have a worse understanding of the markets involved here. Itanium is targeted at technical computing workstations and massively parallel processing supercomputers. The people buying these things know exactly what they're looking for, they're not Joe Consumer "tricked" by MHz over what constitutes actual performance.

        I can't believe so many posters here believe that pushing MHz in the desktop space troubles Intel in the high end space where clock speeds are lower. It doesn't. People in the desktop space buy on MHz; people in the high end space buy on performance, reliability, scalability, and more (not necessarily in that order either). Power usually isn't a concern (it's accepted that a costly cooling solution will be necessary).

        By the way, the reason the Itanium 1 has problems is because its performance is not good. The Itanium 2 is much, much better. Get a clue.
    • Re:Heat and power (Score:5, Insightful)

      by Gothmolly ( 148874 ) on Sunday February 02, 2003 @08:16PM (#5212469)
      Do you really need a 4GHz, 64bit chip for your embedded app?
    • Re:Heat and power (Score:5, Informative)

      by chill ( 34294 ) on Sunday February 02, 2003 @08:23PM (#5212499) Journal
      Heat dissipation, in watts, is listed in the table near the end. I believe it was mentioned a couple times, especially in conjunction with the PPC 970 processor.

      This article didn't address the embedded space. Who in their right mind is going to stick a CPU with a die size about that of a pack of playing cards in an embedded device?

      Notice the absence of the XScale and Hitachi lines of embedded processors? This was a preview of the direction of 64-bit SERVER and WORKSTATION processors.

      While you are right, power is a concern, it is way down on the list for the target audience of that article.
      • Re:Heat and power (Score:3, Insightful)

        by ppanon ( 16583 )
        Who in their right mind is going to stick a CPU with a die size about that of a pack of playing cards in an embedded device?

        Hmmm, maybe the military, if the application demands that level of computing horsepower?
  • 64 bits? (Score:5, Funny)

    by Anonymous Coward on Sunday February 02, 2003 @08:04PM (#5212414)
    Hah! My Commodore 64 has 64 BYTES! Hah!
    • Re:64 bits? (Score:3, Funny)

      by Squarewav ( 241189 )
      reminds me of a friend who didn't want to buy an N64 'cause it "has the same amount of RAM as a C64". I couldn't convince him otherwise.
  • Who cares how fast the x86 64-bit processors are? They are stuck on platforms with lousy I/O. Why bother to have a 64-bit processor for anything other than a server, and if you want a server, you better get one with good I/O.
    • Re:Who cares ... (Score:5, Interesting)

      by NerveGas ( 168686 ) on Sunday February 02, 2003 @08:13PM (#5212454)
      Lousy I/O?

      A few weeks ago, I was looking into buying a $35,000 Sun system. I needed a machine with better memory bandwidth than a PC could offer. The machine in question interleaved its memory 8 ways, if you had all of the processors!

      Then, I noticed that each bank ran at 75 MHz. Boy, was I shocked. That means that all 8 banks together run at the equivalent of 600 MHz. The new Granite Bay chipsets, with dual DDR 333, give you the equivalent of 666 MHz.

      Both systems use PCI to connect to the outside world. The PC has a 533 MHz front-side bus, and an AGP port. I can't think of anywhere that the Sun would have had any better I/O.

      Now, when you get into 8-way systems, the I/O between processors is better on the "high end" machines. But before you can come up with more I/O than a modern PC, you have to spend about 6 figures. In other words, two ORDERS OF MAGNITUDE HIGHER!

      steve
      • Sun is in deep doo-doo. Their SPARC chip is slower than commodity Pentium chips. Their O/S has never been that hot and their prices are ridiculous.

        Methinks that the reason McNealy spends all his time bitching about Microsoft is that he knows the ship is sinking and wants to prepare his alibi.

      • Re:Who cares ... (Score:3, Interesting)

        by ostiguy ( 63618 )
        You are correct. This week I was building out a couple compaq^H^H^H^H^H^H hp proliants - 1 64-bit 133MHz PCI-X slot, and 2 64-bit 100MHz hot-swappable slots. This was on a mere 2-Xeon-CPU box. The enormous size of the x86 market will push its IO capabilities more quickly than Sun can on its own. With integrated FireWire and USB 2.0, SOHO motherboards need to have great north/south bridge and CPU interconnects.

        ostiguy
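
        As a rough sketch of what those slot figures mean in peak-bandwidth terms (theoretical maxima only; real throughput is lower, and the 32-bit/33 MHz line is the plain PCI baseline for comparison):

        #include <stdio.h>

        /* Peak (theoretical) bandwidth of a parallel bus in MB/s:
         * bytes per transfer x clock rate in MHz. */
        static double peak_mb_per_s(int width_bits, double clock_mhz)
        {
            return (width_bits / 8.0) * clock_mhz;
        }

        int main(void)
        {
            printf("PCI   32-bit @  33 MHz: %6.0f MB/s\n", peak_mb_per_s(32, 33.0));
            printf("PCI-X 64-bit @ 100 MHz: %6.0f MB/s\n", peak_mb_per_s(64, 100.0));
            printf("PCI-X 64-bit @ 133 MHz: %6.0f MB/s\n", peak_mb_per_s(64, 133.0));
            return 0;
        }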
      • by aphor ( 99965 ) on Monday February 03, 2003 @12:44AM (#5213374) Journal
        http://www.sun.com/servers/workgroup/880/

        Let's all make sure we're talking about the same thing.

        The IO on a server is rarely going to run through an AGP port. That's because you're not going to use a V880 to pump textures to a GPU card for playing games. A V880 is designed to kick any PC's ass up and down the street as an entry-level fast fileserver and database server.

        The V880 has several PCI busses for all of its PCI slots (count em).

        Some of the PCI slots are 66MHz 64 bit wide PCI slots. How many of those do you have in your PC? (clue: AGP doesn't count).

        What kinds of PCs can you get that can have 64GB RAM? And 8 way concurrency on access to that RAM? (Clue: do your homework on Intel SMP limitations).

        How can you possibly saturate that 533MHz FSB on the PC? You do it swapping textures across the AGP port! Try loading up your PC with FCAL adapters, hooking them to smart disk arrays with gigs of write-through cache and see how much IO you can get.

        • by TheLink ( 130905 )
          We're not talking about the same thing, but the x86 stuff isn't as bad as you say.

          http://www.dell.com/us/en/biz/products/model_pedge_3_pedge_4600.htm

          -- excerpts
          6 x 64-bit/100MHz PCI-X (supports 3V or Universal PCI Adapters); Legacy: 1 x 32-bit/33MHz PCI (supports 5V or Universal PCI Adapters)

          See the 100MHz?

          512MB - 12GB 200MHz ECC DDR (Double Data Rate) SDRAM Features four-way memory interleaving for higher performance (requires DIMMs to be added in set of four of equal capacity)
          --- end excerpt

          And that's maybe USD15K (add 4GB etc). How much is that Sun you're talking about? USD37K?

          Comparing the price, IO, Mem bandwidth, SPEC scores, it looks pretty attractive compared to the entry level SunFire (2x900MHz 4GB).

          I've seen benchmarks on Sun vs x86s and so far my impression is the Suns get slaughtered easily in the low to low-mid end ranges.

          As for 64GB RAM X-way concurrency stuff, you could get that if AMD Opteron succeeds.

          I don't count Itanium as x86 because so far it isn't - you might as well be running POWER/Alpha/SPARC and an x86 emulator.
    • Re:Who cares ... (Score:3, Insightful)

      by cheezedawg ( 413482 )
      Lousy I/O? Care to be a bit more specific? Are you talking about FSB or memory bandwidth? Bandwidth between northbridge and southbridge (v-link, i-link, etc)? I/O to integrated HDD or USB controllers? PCI? Super IO? LPC?

      There are dozens of types of I/O on an x86 system - some of them are great, some of them are "lousy", and many of them don't require anything faster than they already have (like the keyboard). But as is, your comment doesn't make very much sense.
    • lousy I/O?! (Score:5, Informative)

      by Fefe ( 6964 ) on Sunday February 02, 2003 @09:47PM (#5212783) Homepage
      PCs do not even today have lousy I/O. In fact, because the PC architecture has fewer registers, code needs to store stuff in memory more often, which led to PCs outperforming RISC machines in memory bandwidth over the years. Sun and IBM in particular have been outperformed in RAM bandwidth for over a decade. They made up for it with good floating point performance, but now the PCs are catching up there as well.

      By the way, AMD's HyperTransport and Hammer memory infrastructure is quite similar to the "perfect scalability" Alpha memory hardware that has been making headlines recently. I expect Hammer to rule the planet here. Madison also has huge memory bandwidth, but it wastes most of it reading NOPs and instructions that are predicated away or otherwise discarded. ;-)

      Also, if you actually read the article, you will notice that even the PowerPC translates its ugly and complex instruction set to an internal instruction set, which is more RISCy. This is the very thing that RISC aficionados have been using as an argument against x86 for years!

      The world isn't that black and white.
  • here [realworldtech.com].
  • 64 bits.. (Score:3, Insightful)

    by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Sunday February 02, 2003 @08:06PM (#5212428) Homepage Journal
    There needs to be a true revamping of CPU architecture, not simply adding bits. 64 bits is fine and dandy, but the convoluted instruction set, seemingly random usage of registers, and an inability to do fast floating point operations really hamper the x86 system. Seeing as how IA64 is based on x86, this will be a problem into the future.

    And with IBM announcing further support of the Intel architecture, there doesn't seem anywhere for the computer industry to expand.

    It isn't even an argument of "what are we going to do with all this power?" It's more like "where's the fucking power?"
    • Re:64 bits.. (Score:5, Insightful)

      by NerveGas ( 168686 ) on Sunday February 02, 2003 @08:21PM (#5212489)

      "Where's the power?"

      Easy. It's in the PC.

      Yeah, I know. Some of the super-expensive RISC chips blow away PCs on floating point. But look at the cost per FLOP. Chances are the PC will be at least an order of magnitude lower.

      It's been trendy to bash PC's for quite a while. However, if you've been "in the business" for two decades, and had your eyes open, you've realized that things have been slowly changing.

      In the "bad old days", PC's sucked hard. Companies like Sun, DEC, and IBM were the only choices if you needed more computing power than an average automobile.

      Because of the economies of scale in the commodity market - and the competition - PC chip makers like Intel, AMD, Cyrix, etc. kept improving their products steadily. Now, modern PC chips compete with "big iron" chips very well in integer work, and are fast approaching (in some cases, BEATING) them in floating point work - and all at a tenth to a hundredth of the price.

      Back in the bad old days, it didn't matter how fast of a computer you bought, it still wouldn't run a desktop at an acceptable speed. These days, it practically doesn't matter how SLOW of a chip you buy, it'll still run a desktop at an acceptable speed.

      There will always be a market for big iron and specialty hardware, but PC technology has improved by leaps and bounds over the years.

      Now, don't get me wrong. There's still room for massive amounts of improvement. I would love to see the x86 architecture, and all of the legacy cruft, dropped like a hot potato. I'm not confident that it will happen in this decade, but it sure would be nice if it did.

      steve
      • But look at the cost per FLOP. Chances are the PC will be at least an order of magnitude lower.

        Problem is that not all FLOPS are born the same. The highly iterative algorithms that the scientific and engineering communities need to run start to accumulate significant errors when calculated using "only" 32 bits.
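
        A minimal sketch of that effect (purely illustrative, not from the thread): summing a small constant many times drifts visibly in 32-bit floats, while 64-bit doubles stay essentially exact at this scale.

        #include <stdio.h>

        /* Sum 0.1 ten million times.  In 32-bit floats the running total grows so
         * much larger than the addend that each addition loses low-order bits;
         * 64-bit doubles keep essentially the full result at this scale. */
        int main(void)
        {
            const int n = 10 * 1000 * 1000;
            float  sum_f = 0.0f;
            double sum_d = 0.0;

            for (int i = 0; i < n; i++) {
                sum_f += 0.1f;
                sum_d += 0.1;
            }

            printf("exact        : %.6f\n", n * 0.1);
            printf("float  (32b) : %.6f\n", (double)sum_f);
            printf("double (64b) : %.6f\n", sum_d);
            return 0;
        }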

        I would love to see the x86 architecture, and all of the legacy cruft, dropped like a hot potato. I'm not confident that it will happen in this decade, but it sure would be nice if it did.

        Of course it's happening, what do you think Itanium is? What about ARM? It's not like ARM processors are rare either. Of course we'll never see it dropped on a "pee cee" circa 1984, but the newer processors are doing a great job of pretending the cruft isn't there.

        Dave

        • Yes, ARM isn't x86, and Itanium's moving away. But I still believe that ten years from now, commodity desktop systems very well may still have x86 crap in them.

          As I recall, if you asked Intel and/or MS fifteen years ago, by 1990 (or thereabouts) things were supposed to be fully 32-bit and multiprocessor. It wasn't until 2000 that MS produced a really viable, completely 32-bit OS for the desktop. (As good as NT4 was at the time, there were quite a few run-of-the-mill home "desktop" tasks that simply couldn't be done.)

          So sure, they'll tell us that in ten years, the legacy stuff will be gone, that chips will be at ten gigahertz, and that we'll all have web-enabled refrigerators and a levitating car like the Jetsons. But if there's one thing that history has proven, it's that progress tends to move more slowly than the pundits (especially those with financial ties to the industry) would have us believe.

          steve
    • Re:64 bits.. (Score:5, Interesting)

      by Waffle Iron ( 339739 ) on Sunday February 02, 2003 @09:17PM (#5212675)
      64 bits is fine and dandy, but the convoluted instruction set, seemingly random usage of registers, and an inability to do fast floating point operations really hampers the x86 system.

      The instruction set and register layout is irrelevant. All modern X86 CPUs translate the instruction stream on-the-fly to an internal RISC-like architecture with multiple parallel execution units. Using register renaming, all modern X86 CPUs have dozens of general-purpose physical registers that can be simultaneously mapped onto the legacy logical registers.
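
      A toy sketch of the renaming bookkeeping (purely illustrative, not any real CPU's logic): every write to an architectural register is handed a fresh physical register, so two pieces of code that happen to reuse the same legacy register name no longer have to wait on each other.

      #include <stdio.h>

      /* Toy register renaming: 8 architectural names are mapped onto a larger pool
       * of physical registers.  Each instruction that writes an architectural
       * register gets a fresh physical register, so back-to-back reuses of "EAX"
       * that carry no real data dependence land in different physical registers
       * and can execute out of order.  This is only the bookkeeping, not a pipeline. */

      enum { NUM_ARCH = 8, NUM_PHYS = 32 };

      static const char *arch_name[NUM_ARCH] =
          { "EAX", "EBX", "ECX", "EDX", "ESI", "EDI", "EBP", "ESP" };

      static int rename_table[NUM_ARCH];   /* architectural -> current physical */
      static int next_free = NUM_ARCH;     /* trivial allocator */

      struct insn { int dst, src1, src2; };            /* dst = src1 op src2 */

      static void rename(struct insn in)
      {
          int p1 = rename_table[in.src1];              /* sources read the current map */
          int p2 = rename_table[in.src2];
          int pd = next_free++ % NUM_PHYS;             /* destination gets a fresh reg */
          rename_table[in.dst] = pd;
          printf("%s = f(%s, %s)  ->  p%d = f(p%d, p%d)\n",
                 arch_name[in.dst], arch_name[in.src1], arch_name[in.src2], pd, p1, p2);
      }

      int main(void)
      {
          for (int i = 0; i < NUM_ARCH; i++)
              rename_table[i] = i;                     /* initial 1:1 mapping */

          /* Two independent computations that both recycle EAX: after renaming they
           * no longer share a physical register, so they need not run in order. */
          struct insn program[] = {
              { 0, 1, 2 },   /* EAX = EBX op ECX                        */
              { 3, 0, 0 },   /* EDX = EAX op EAX (depends on insn 1)    */
              { 0, 4, 5 },   /* EAX = ESI op EDI (reuses the name only) */
              { 1, 0, 0 },   /* EBX = EAX op EAX (depends on insn 3)    */
          };
          for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
              rename(program[i]);
          return 0;
      }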

      There is no need to expose the internals of any particular CPU generation to the software because the details change with each new design. The CPU's on-the-fly recoding knows how to optimize for the details of its particular internal implementation better than a C compiler. (Exposing the implementation details to the compiler is one reason why I think that the whole Itanium concept is a bad idea in the long run.)

      The floating point performance is a function of the target market. If a CPU manufacturer was so inclined, they could create an X86 with world-record FPU performance. It's just not needed for the majority of places where X86's get used today.

      • Re:64 bits.. (Score:3, Interesting)

        by ppanon ( 16583 )

        Exposing the implementation details to the compiler is one reason why I think that the whole Itanium concept is a bad idea in the long run.

        While I agree with the general idea of implementation hiding, if your programming model is significantly different from your implementation model, you are going to need extra work or pipeline stages for your instruction decoding/register mapping, which will negatively affect your CPU performance.

        The CPU's on-the-fly recoding knows how to optimize for the details of its particular internal implementation better than a C compiler.

        Not always. You can feed runtime profiling information back into your compiler to optimize branching code and other constructs better than hardware can. This is great for whole classes of applications (such as scientific or engineering simulations). In those cases the C (or Fortran) Compiler definitely knows more about how to optimize for a particular processor than the processor would because it has more resources at its disposal to do so.

        On the other hand it's less clear to me how much of a benefit the compiler has in general-purpose computing applications such as databases, MS Office, or windowing system functions where the input is much more random. In those cases, the processor's recent runtime statistics may prove better at branch prediction and instruction re-ordering than
        a compiler's prediction on global statistics. I would think each approach has different areas of strength (such as the difference between a global optimizer versus a peephole optimizer).

        I wonder, would it be possible to combine the two approaches while keeping most of the EPIC/VLIW goals of low runtime instruction decoding/ordering overhead? For instance, having some way to mark certain branch instructions as having more random statistics over time and requiring the CPU to gather runtime branching statistics to improve branch prediction.
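
        For what it's worth, mainstream compilers already offer both halves of this in a limited form: profile-guided optimization feeds measured branch statistics back into the compiler, and GCC's __builtin_expect lets the programmer mark a branch as heavily biased. A hedged sketch (the -fprofile-generate/-fprofile-use spellings are from later GCC releases; older versions used -fprofile-arcs and -fbranch-probabilities):

        #include <stdio.h>
        #include <stdlib.h>

        /* Profile-guided optimization: build with instrumentation, run on
         * representative input, then rebuild using the recorded statistics.
         *
         *   gcc -O2 -fprofile-generate hot.c -o hot && ./hot 10000000
         *   gcc -O2 -fprofile-use      hot.c -o hot
         *
         * Independently of profiling, __builtin_expect lets the programmer assert
         * a branch bias directly in the source. */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        int main(int argc, char **argv)
        {
            long n = (argc > 1) ? atol(argv[1]) : 1000000;
            long sum = 0;

            for (long i = 0; i < n; i++) {
                if (unlikely(i % 65536 == 0))   /* rare path, 1 in 65536 */
                    sum -= i;
                else                            /* hot path, laid out fall-through */
                    sum += i;
            }
            printf("sum = %ld\n", sum);
            return 0;
        }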
      • Good post, but I have one quibble.

        The floating point performance is a function of the target market. If a CPU manufacturer was so inclined, they could create an X86 with world-record FPU performance. It's just not needed for the majority of places where X86's get used today.

        An x86-based FPU will always be slower than a comparable FPU in a processor with a RISC instruction set. The x86 uses stack-oriented instructions for the FPU. It turns out that register renaming can't be done as aggressively on stack-based instruction sets as it can on register-based instruction sets. This can cause the input stream to the FPU to choke on the x86 for an FPU-intensive application. So x86 FPU operation will always be slower than a similarly designed register-based FPU. This is probably the only real performance bottleneck with the x86 instruction set that hasn't been overcome by advanced architectural techniques.
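
        A sketch of the difference being described (the instruction sequences in the comments are typical compiler output for each mode, not taken from the article): the legacy x87 unit funnels everything through the top of its 8-entry register stack, while SSE2 scalar math uses flat, individually named registers that rename cleanly.

        #include <stdio.h>

        /* One multiply-add, two ways.  Compiled with -mfpmath=387 a compiler
         * typically emits stack-style x87 code where everything passes through
         * st(0):
         *
         *     fld   a        ; push a             -> st(0) = a
         *     fmul  b        ; st(0) = a * b
         *     fadd  c        ; st(0) = a * b + c
         *     fstp  result   ; pop the result
         *
         * With -mfpmath=sse the same expression uses flat SSE2 registers:
         *
         *     movsd xmm0, a
         *     mulsd xmm0, b
         *     addsd xmm0, c
         *     movsd result, xmm0
         *
         * Because x87 results always land on the top of the stack, independent
         * operations contend for st(0) (compilers shuffle it with fxch), which is
         * what makes aggressive renaming harder than with named registers. */
        double muladd(double a, double b, double c)
        {
            return a * b + c;
        }

        int main(void)
        {
            printf("%f\n", muladd(2.0, 3.0, 4.0));
            return 0;
        }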

    • Re:64 bits.. (Score:4, Informative)

      by CapnFreedom ( 642783 ) on Sunday February 02, 2003 @09:48PM (#5212786)
      Have you even looked at the IA64 ISA? It is a significant departure from x86. IA64 supports x86 instructions through emulation only (which is why x86 perf on IA64 is lagging.) It is not a 64-bit extension of IA32, which is an extension of IA16.

      With IA64, there are 128 integer and floating point registers, 64 1-bit predication registers, eight branch registers, instructions are fixed length, every instruction can be predicated, speculative execution is supported at the instruction level, registers are preserved across function calls with register stacks, and register rotation can explicitly prevent antidependencies in tight instruction loops. Each of these is a departure from x86.

      I'm not too sure of this fact (and I am too lazy to double check) but I'm pretty sure IA64 requires a 64-bit operating system to even boot, unlike x86 which boots to 16-bit real-mode, and then is switched to 32-bit protected mode by the OS.
    • by Fefe ( 6964 ) on Sunday February 02, 2003 @09:53PM (#5212808) Homepage
      1. x86 has been revamped many times. That's why it is still competitive, although its doom has been predicted numerous times.

      2. x86 actually has faster floating point than most RISC CPUs. Why don't you actually read the article and look at the stats they give there? In particular thanks to SSE, x86 not only has directly addressable floating point registers but it has huge performance gains to offer for vectorizable calculations. Did you ever ask yourself why all the movie special effects farms have moved their render farms to x86?

      3. "Seeing as how IA64 is based on x86"... Care to pass that crack pipe around or are you going to smoke it all alone?

      4. "And with IBM announcing further support of the Intel architecture"... ?! What the fsck are you talking about? The only Intel architecture IBM recently announced support for is IA64. You seem mighty confused, man.
  • by march ( 215947 ) on Sunday February 02, 2003 @08:13PM (#5212455) Homepage
    It amazes me that this discussion is even taking place.

    I would have thought that by now, we'd be discussing 128bit or 512bit computers. I mean, I've been working on Dec Alpha chips for 8 years now. A nice, fast, 64 bit processor. (Tru64 kinda sux though).

    8 years in computer time is like 800 years in human time. What's up? 64-bit processors should be old news by now...

    • Re:It amazes me... (Score:4, Informative)

      by NerveGas ( 168686 ) on Sunday February 02, 2003 @08:29PM (#5212526)

      You're not likely to see 128- or 512-bit general-purpose computers in your lifetime, I'm afraid. The increase from 32-bits to 64-bits isn't for performance reasons, it's for memory addressing.

      A 32-bit computer can address up to 4 gigs natively. Intel has some extensions to allow up to 64 gigs, but with a performance penalty.

      By moving to a 64-bit computer, the address space becomes astronomical - it is 4 billion times larger than the 32-bit address space. In the last twenty years, the average amount of memory in a computer has gone from about 512k to 512 megs - it's increased by about a thousand times. At that growth rate, a 64-bit address space would easily last through our lifetimes.
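
      A back-of-the-envelope sketch of that arithmetic (the 1000x-per-20-years growth rate is just the parent's figure extrapolated forward):

      #include <stdio.h>

      int main(void)
      {
          double space32 = 4294967296.0;        /* 2^32 bytes = 4 GiB       */
          double space64 = space32 * space32;   /* 2^64 bytes, about 16 EiB */

          printf("32-bit space: %.0f bytes\n", space32);
          printf("64-bit space: %.0f bytes\n", space64);
          printf("ratio       : %.0f (about 4 billion)\n", space64 / space32);

          /* Naive extrapolation of ~1000x growth per 20 years (about 1.41x per
           * year), starting from 512 MB. */
          double ram = 512.0 * 1024 * 1024;
          int years = 0;
          while (ram < space64 && years < 500) {
              ram *= 1.41;
              years++;
          }
          printf("64-bit space exhausted after roughly %d more years at that rate\n",
                 years);
          return 0;
      }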

      When you see video cards and dedicated gaming hardware that have a 128-bit (or higher) processor, it's done for different reasons. Usually, they need to perform complex mathematical operations that are very repetitive and easily parallelizable, which is not generally the case with a general-purpose CPU.

      steve
      • Re:It amazes me... (Score:3, Informative)

        by Zeinfeld ( 263942 )
        You're not likely to see 128- or 512-bit general-purpose computers in your lifetime, I'm afraid. The increase from 32-bits to 64-bits isn't for performance reasons, it's for memory addressing.

        Actually, Very Long Instruction Word machines were in vogue about ten years ago. Yale built a 512-bit machine. The compiler technology ended up being the most interesting part, however; it was bought by Cray and resold to various companies, ending up in the Intel compilers.

      • Re:It amazes me... (Score:3, Interesting)

        by Q Who ( 588741 )

        By moving to a 64-bit computer, the address space becomes astronomical - it is 4 billion time larger than the 32-bit addressing space. In the last twenty years, the average amount of memory in a computer has gone from about 512k to 512 megs - it's increased by about a thousand times. At that growth rate, a 64-bit address space would easily last through our lifetimes.

        Err... 20 years aren't exactly a "lifetime". What about 3 times that? Whoah, billion times the memory. Also, recall that about 60 years ago, memory was counted in bits.

        And there is always possibility for a breakthrough.

        I think your prediction is a bit... exaggerated.

      • sounds like somebody I heard who once said that 64k of memory would be enough memory for any application a person would ever want to use...

        for the record, there are a lot of people in the world of biology, radiology, and bioinformatics who would love to have a 128 or 256 or 1024 bit computer. applications like nMRI could then address the individual hydrogen atoms they excite... astronomers could address all of the stars, planets, and meteorites in the sky... historians could address all of the people who have lived in the past and will live in the future... etc. etc. lots of interesting, non-gaming applications become possible with the advent of high-bit processors... (just going to show that Isaac Asimov was way ahead of his time...)

        • for the record, there are a lot of people in the world of biology, radiology, and bioinformatics who would love to have a 128 or 256 or 1024 bit computer.

          for the record, there are not 2^128 atoms of silicon on the planet. More than 64 bits may be useful, but it will require quite a bit more technology before that ever happens.

      • At that growth rate, a 64-bit address space would easily last through our lifetimes.

        Actually, I'd bet that the growth rate increases (exponentially), just like the growth rate of everything else in the entire computing industry. All in all, however, I agree with your final points.
      • by Tomster ( 5075 ) on Sunday February 02, 2003 @10:26PM (#5212918) Homepage Journal
        You're not likely to see 128- or 512-bit general-purpose computers in your lifetime, I'm afraid.

        With advances in medicine, regeneration, nanotech, and cybernetic replacements/augmentations, I fully expect to live at least 200 years. Did you take that into consideration when making your prediction? :)

        -Thomas

        • by f97tosc ( 578893 ) on Monday February 03, 2003 @02:33AM (#5213681)
          With advances in medicine, regeneration, nanotech, and cybernetic replacements/augmentations, I fully expect to live at least 200 years. Did you take that into consideration when making your prediction? :)

          What you fail to realize is that these replacements/augmentations will not be possible until research labs have access to 128- or 512-bit general purpose computers.

          Tor
      • A 32-bit computer can address up to 4 gigs natively. Intel has some extensions to allow up to 64 gigs, but with a performance penalty.

        36 bits, actually. Also, there's still a 4gig/process limit.

        When you see video cards and dedicated gaming hardware that has a 128-bit (or higher) processer, it's done for different reasons.

        It should be noted that when someone calls a video card 128-bit, it doesn't mean all that much, since you almost never program a video card directly, and the only registers you are likely to see are for vertex shaders. Typically, a video card does a lot of vector operations, like you say, but it only addresses 32 bits. The newer cards have memory busses 128 or 256 bits wide, but that's mainly for high bandwidth, low latency memory access.
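
        As a rough sketch of why those wide busses are about bandwidth rather than addressing (the clock figures below are assumptions in the ballpark of a 2002-era high-end card and a dual-channel DDR333 PC, not numbers from the thread):

        #include <stdio.h>

        /* Peak memory bandwidth = bus width in bytes x transfers per second.
         * DDR moves data on both clock edges, so transfers = 2 x clock. */
        static double gb_per_s(int bus_bits, double clock_mhz)
        {
            return (bus_bits / 8.0) * (clock_mhz * 2.0) * 1e6 / 1e9;
        }

        int main(void)
        {
            printf("GPU, 256-bit bus, ~310 MHz DDR : %5.1f GB/s\n", gb_per_s(256, 310.0));
            printf("PC, dual-channel DDR333 (128b) : %5.1f GB/s\n", gb_per_s(128, 166.5));
            return 0;
        }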

  • by Mr_Tulip ( 639140 ) on Sunday February 02, 2003 @08:16PM (#5212470) Homepage
    Microsoft is eagerly awaiting 64 bit processors, as they will "greatly decrease the incidence of Integer overflow exceptions, and memory overwrites"
    • While this is an obvious Microsoft bash, there ARE things in the new 64bit processors that will help do this. See the discussion of systrace in the BSDs, and why it can't be fully implemented on i386, for details.
  • 0.13 mm? (Score:5, Funny)

    by dido ( 9125 ) <dido&imperium,ph> on Sunday February 02, 2003 @08:22PM (#5212496)

    Despite shipping 0.13 mm x86 devices for about a year, Intel's first 0.13 mm IA64 MPU, code named Madison, won't be introduced for another 5 or 6 months. The EV79, a 0.13 mm shrink of the 0.18 mm EV7, will be even later, shipping in about a year.

    Holy cow... I didn't know microprocessor features were still so freaking huge! Methinks the author needs to remember that there is an HTML entity readily available as &micro;. :) Unfortunately it seems slashdot is stripping out most of my entities so we can't see it here. 0.13 mm is 130 microns, which is roughly where IC technology was in the mid- to late-1980's if I'm not mistaken. That can't possibly be right. If use of the entity is out of the question (just as it seems to be on ./), maybe they could have said 0.00013 mm or even spelled the word "micron" right out.

    • Yah, the micron character only shows up properly in IE.

      Presumably it's due to some stupidity of only using MS tools such as MS Word, Frontpage, and IE for article processing and submission. If I was the author, I would be more than a little annoyed at the editor/webmaster since they probably mandated the submission format and should have checked the resulting article and corrected the references. The end result reflects poorly on an otherwise good article.
    • Or called it 130 nm, as seems to be the wave of the future with the terminology (90 nm, 65 nm).
  • Alpha (Score:2, Interesting)

    by camusatan ( 554328 )
    Interesting - although according to the article the Alpha's been sorta EOL'ed for years and years now, it still kicks unbelievable amounts of ass.

    I presume Digital...Compaq...whoever.. killed it for purely political reasons? Or are there some technical reasons I don't get?

    • Re:Alpha (Score:3, Interesting)

      by NerveGas ( 168686 )
      The reason that Alphas are in the state they're in now is purely economical, not technical.

      They're great processors for floating-point work. For integer work, though, they're not competitive. I've beaten a $25,000 Alpha at RDBMS work with a $12,000 PC. Now that doesn't mean that the Alpha sucks, just that it only excels in certain areas.

      Unfortunately, because it only excels in certain areas, it appeals to a much smaller audience. Things didn't work out, and it's sad.

      steve
      • by Fefe ( 6964 )
        In my experience with Alpha it *is* competitive in Integer performance. You won't notice that if all you do is run gcc all day, because gcc historically does more work on Alpha than on x86.

        But on my integer applications Alpha has always been competitive. Caveat: it has been a few years since I had an Alpha at my disposal.

        But please read the article you are commenting on. Their stats also show that Alpha integer performance is top notch.
  • by AtariDatacenter ( 31657 ) on Sunday February 02, 2003 @08:29PM (#5212530)
    In and of itself, a 64 bit processor with a 64 bit operating system really doesn't mean better performance. You've really got to have applications which leverage that kind of platform. And there aren't many. On my SPARC servers (which all have 64 bit CPUs), going from a 32 bit OS to a 64 bit OS showed no real improvement or degradation in performance across a wide variety of applications. Going 64 bits for most people means nothing.

    The main selling point for SPARC, which most people who aren't dealing with Sun don't understand, is not the CPU itself or the speed of a uniprocessor box.

    It is the total package. (Admittedly, the lower part of that is the uniprocessor performance.) On the upside, Sun has some very compelling benefits. Almost all major commercial UNIX programs are developed for SPARC, often as the primary development platform. The binary compatibility is awesome. The binary that I compiled on my workstation (with 5-year-old technology that is several CPU generations behind) will continue to run on the most modern hardware. There's no recompiling for different/newer architectures (unless you're looking to gain a specific advantage of a new processor and your compiler can do it). And probably one of the best features is an awesome scalability story. If your code does threads, or uses more than a processor at a time, you can scale from a 1 CPU to a 100+ CPU configuration. No special programming to worry about clusters or to take advantage of new hardware. Additionally, because the hardware is (mostly) single vendor, you gain a great deal of reliability over platforms which have an incredible amount of diversity (Wintel). Okay, that's a double-edged sword, admittedly.

    That said, it is too bad that Sun just can't keep up in the uniprocessor world. But it has quite a number of real-world advantages beyond performance which keep it afloat, which may surprise people.
    • Most of the vendors have various advantages like this. SGI has been packing more and more chips into smaller boxes, and you can plug a bunch of machines into each other and have them run as a single system image. Itanium 2 is more powerful than all of them but it's also far less flexible than any of the established 64-bit architectures. Whether this will prove a problem remains to be seen.
    • by nathanh ( 1214 ) on Sunday February 02, 2003 @10:00PM (#5212832) Homepage
      In and of itself, a 64 bit processor with a 64 bit operating system really doesn't mean better performance. You've really got to have applications which leverage that kind of platform. And there aren't many. On my SPARC servers (which all have 64 bit CPUs), going from a 32 bit OS to a 64 bit OS showed no real improvement or degradation in performance across a wide variety of applications. Going 64 bits for most people means nothing.

      Nonsense.

      64-bits means a larger address space. This means clean support for more than 4GB ram. This already affects my work - the ability of software to use exactly 110% of actual RAM must be a physical law of the universe - and I'm hardly working with the top end of equipment. Oh sure, there are nasty hacks in Linux and NT to use more than 4GB RAM, but the kernel guys have been very clear on the matter: if you use more than 4GB ram then you should use a 64-bit CPU.

      64-bits also means larger SIMD instructions. More data shovelling. Faster processing. Maybe that won't make your compiler faster, but it is almost certainly going to make for faster encryption and decryption of your emails, more vertex calculations per second for your OpenGL application, and faster image processing for filters in Photoshop.

      I strongly disagree with your claim that for "most people" there is no benefit from having 64-bit CPUs. The benefits are there: you just aren't looking hard enough.

      • While you're more correct than the parent poster, I feel I should raise the point that "most people" don't have > 4GB of ram, don't encrypt their emails, don't use Photoshop, etc. In the high-end desktop and of course server markets, 64-bit obviously makes a difference. But "most people" aren't in that market.
      • Clarification:
        64-bits means a larger address space. This means clean support for more than 4GB ram

        Though it may be necessary for RISC chips (they need a fixed instruction/data size and known, predictable boundaries to be efficient), this is not a requirement for CISC chips. The natural word size of a processor doesn't necessarily equal the size of the data bus. In fact, the two being equal, at 32 bits or 64 bits, is a fairly recent development. The I8086, I8088, I80286, M68000, and MOS6502 all had address bus sizes different from the natural word length. Later M680x0 chips with the new MMU had a 32 bit natural address bus, but I think it could be switched back to 68000-style 24 bits for bad programs (anyone remember 32-bit addressing in MacOS?). I think some of the new Pentiums have the ability to address more than 4GB, but no one is going to rewrite the VM code in their OS to take advantage of this. Maybe some Linux or FreeBSD hackers with more skill than I have can try. It would be interesting to see the performance of this, but many things would have to be rewritten (gcc obviously) and recompiled (pointers are now different sizes) for anybody to bother, I think.
        • The data and address buses aren't the same size in any current processor. Data bus is 64-bits in all x86 chips, address bus is 32-bits. It's unusual for a CPU to have a larger address bus than its natural word size. That means that pointer variables become a special case (because how do you add an offset to a 64-bit pointer if you can't do 64-bit integer math?) The current Pentiums have a 36-bit *physical* address bus, and both Linux and Windows support it. But pointers are still 32-bit, and all you get is a 4GB virtual address space that maps to a 64GB physical address space. This allows you to have memory-window type operations, but that's hardly a clean design.
    • I contest that. First of all, 5 years of backwards compatibility is not an argument for SPARC, it's an argument for x86. x86 has 20 years of backwards compatibility.

      Second of all, there are only so many applications that people really buy "server hardware" for, and as soon as you put more than 3 Gigs of memory in the machine, you gain performance from 64-bit hardware.

      The performance gain is particularly big for databases and applications like full text search that have a large working set. Also, crypto software can in many cases reap substantial gains from the native 64-bit arithmetic. In layman terms that means: 64-bit is good for databases and web servers. And, believe it or not, those are the applications people buy server hardware for.
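
      To make the crypto point concrete, here is a minimal sketch (unsigned __int128 is a GCC/Clang extension on 64-bit targets; that choice is an assumption, not something from the thread): public-key code spends its time multiplying big integers limb by limb, and one 64-bit multiply replaces four 32-bit multiplies plus carry juggling.

      #include <stdint.h>
      #include <stdio.h>

      /* A 64x64 -> 128-bit product built from 32-bit operations: four partial
       * products plus carry propagation, roughly what a 32-bit CPU must do for
       * every limb of a bignum multiply (RSA, Diffie-Hellman, ...). */
      static void mul64_via_32bit(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
      {
          uint64_t a0 = (uint32_t)a, a1 = a >> 32;
          uint64_t b0 = (uint32_t)b, b1 = b >> 32;

          uint64_t p00 = a0 * b0, p01 = a0 * b1;
          uint64_t p10 = a1 * b0, p11 = a1 * b1;

          uint64_t mid = p01 + (p00 >> 32) + (uint32_t)p10;   /* cannot overflow */
          *lo = (mid << 32) | (uint32_t)p00;
          *hi = p11 + (mid >> 32) + (p10 >> 32);
      }

      /* On a 64-bit CPU the same product is a single wide multiply. */
      static void mul64_native(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
      {
          unsigned __int128 p = (unsigned __int128)a * b;     /* GCC/Clang extension */
          *lo = (uint64_t)p;
          *hi = (uint64_t)(p >> 64);
      }

      int main(void)
      {
          uint64_t a = 0xfedcba9876543210ULL, b = 0x0f1e2d3c4b5a6978ULL;
          uint64_t h1, l1, h2, l2;

          mul64_via_32bit(a, b, &h1, &l1);
          mul64_native(a, b, &h2, &l2);
          printf("32-bit style : %016llx %016llx\n", (unsigned long long)h1, (unsigned long long)l1);
          printf("native 64-bit: %016llx %016llx\n", (unsigned long long)h2, (unsigned long long)l2);
          return 0;
      }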

      Application servers are cheaper and more reliably done using a cluster of el-cheapo off-the-shelf x86 machines than one big iron, independent of the number of bits.
  • 64Bit Apache (Score:2, Insightful)

    by Anonymous Coward
    Hopefully, 64bit will bring some performance to the Apache project :)

    ---------

    FunPic [funpic.de]
    Happy Tree Friends [funpic.de]
    Ownage [funpic.de]
  • P970 vs. Itanium (Score:2, Informative)

    by Anonymous Coward
    Wow. You could get FOUR P970s from the transistor count of ONE Itanium 2. But the Itanium 2 isn't four times faster than one P970; it's not even as fast as two P970s.

    Seems that IA64 is dead. People will go x86-64 for compatibility's sake, and IBM P970 if efficiency is important.

    • > Wow. You could get FOUR P970 from the transistor count of ONE Itanium 2

      Wow, you just compared a high end server processor with a desktop processor.

      POWER4+ is IBM's competitor to Itanium 2. The article compares the PPC970 with an upcoming _mobile_ processor from Intel.

      Also, I think you'll find that most of the difference in transistor count is cache...

      From the looking-forward chart in the article, you see that the PPC970 will probably have a total of less than 600KB of cache, whereas the Madison and Deerfield Itanium 2s will probably have around 6.25MB and 3.25MB of cache respectively...

      Then notice how Deerfield, with 3MB less cache than Madison, also has 180 million fewer transistors...

      hmmm..but at the same time, that chart shows the POWER4+ with 128MB of L3 cache...and only 184 million transistors. I'm not sure if that's a mistake, or if the POWER4+'s L3 cache will be off die...
  • by bkontr ( 624500 ) on Sunday February 02, 2003 @08:38PM (#5212563) Homepage Journal
    I'm definitely interested in how well IBM's 970 processor will compare with Intel and AMD processors. Apparently the chip will eventually power Apple products and IBM's midrange Linux lineup. Also, IBM appears to be making a move to challenge the Wintel monopoly by marrying hardware (970) and software (OS X, Linux). This hardware is an excellent way to bring the ease of use of OS X and the server abilities of Linux to hardware that IMO will energize both. Microsoft's stranglehold on Intel prevents Windows from full-strength competition with Linux, so no company can really count on Intel to say it is 100% behind Linux. I don't say Intel wants to be manipulated by Microsoft, but much of their business is directly tied to and manipulated by Microsoft. The idea is that making chips that Apple and IBM (Linux) can use at the same time will boost volume and in time reduce the overall price of the chip, and therefore the cost of the products they provide. But that is just part of the big picture: if IBM Linux and Apple sales grow, it will (hopefully) FORCE Intel and Microsoft to become hyper-competitive AND COST-effective... this can only be a good thing for the computer industry.
    • Despite the estimated performance of the PPC 970 being 'only' around that of _current_ high-end desktop processors, there are some interesting things to note about its design.

      First, the power consumption is a very low 42W at 1.8GHz. That's pretty sweet. Also note its die size is quite small. Less than half that of competing (at the time) Intel processors, and 50% or so less than that of AMD's Opteron. That's going to make for some nice price savings.

      True, it's not going to be anywhere near as fast as top of the line desktop processor competitors at the time it's released, but, will it be fast _enough_? Yes, of course it will. And with the price savings potential based on die-size, and the low power consumption, putting two 970s in one box is quite feasible, and will make for a pretty sweet workstation. The flexibility this provides for is always welcome.

      Yeah, I'd prefer it if Apple switched over to x86-64 (AMD), but I don't think they can do that and keep their developer community intact.

      • Yeah, I'd prefer it if Apple switched over to x86-64 (AMD), but I don't think they can do that and keep their developer community intact.

        Very true; not a chance Apple will retain the interest of developers if every product on the market for Mac must be recompiled, CDs re-duplicated, possibly packaging materials and manuals reprinted, and shipped. That, and consumers would be pissed at having to buy another version. "ANOTHER $200 for MS Office? I just BOUGHT one last week! F this, I'm going to buy a Dell!" The Apple camp is just now starting to settle in very nicely with OS X, and very recently a large interest in making "the UNIX side of things" work more smoothly has come underway. They're not ready for another big change in the way things are done.

        A PPC emulation layer is not that feasible on x86; the 68k emulator was only successful on PPC because it was an inferior design (albeit a great one, for its time), so that problem is not so easily solved, either.

        This is already a disaster without even CONSIDERING the implications for 3rd party hardware. I'm sure Apple wouldn't just start dropping ATX motherboards in their cases; there is too much custom design/proprietary addition to the hardware now to just toss. AirPort card slots, FireWire onboard, the DAC chips, etc. That would be a total waste of money, and since the OS is coded to work with the exact specs of the current design, all kinds of drivers and OS modifications would be needed.

        Besides, the x86 family of chips is a little long in the tooth, not enough has been done with the chips other than tooling them to win the MHz war. Sure there have been improvements, but PPC was never about clock speed, and it shows in the design. If only there was as much consumer force driving PPC as there is driving x86, there might be more successful R&D done with it, and we'd have these wonderful 3GHz clock speeds drawing 40W with multiple short-stage pipelines and large caches. *sigh*
  • Units? (Score:3, Insightful)

    by rabidcow ( 209019 ) on Sunday February 02, 2003 @08:42PM (#5212576) Homepage
    What's with this graph? http://www.realworldtech.com/includes/images/articles/battle64-2003-fig1.gif [realworldtech.com]

    Am I the only one who likes seeing UNITS on things?

    Itanium 2/1000 scores a little over 1400 somethings at just above 800 something elses. Is this better or worse than the Athlon XP/2250, which scores less than 800 whatever-they-ares at 900 who-knows-whats?
    • Re:Units? (Score:2, Funny)

      by dynoman7 ( 188589 )
      Am I the only one who likes seeing UNITS on things?

      No. No. Me too. I did some looking around and found that the x axis is in WhatsAGiggers and the y is in GammaLammaBingBongs. Hope that helps.

      AMD ROX! DUKE SUCKS!
  • by acomj ( 20611 ) on Sunday February 02, 2003 @08:56PM (#5212626) Homepage
    I mean, we all know by now those spec benchmarks really don't translate well into real world performance. He's got nothing else to go on, but to say machine A is faster than machine B based on spec2000 alone is kinda nutty. Bus speed, memory bandwidth and a host of other factors affect machine speed.

    Also, I know POWER4 chips are made very conservatively so they don't fail as often; I'm assuming it's the same for many of these other workstation chips.

    Also, the power consumption issue is glossed over quickly, but I hear it's getting to be a big deal. Power/cooling costs are making some of these a difficult sell in the server room.

  • by MisterP ( 156738 ) on Sunday February 02, 2003 @09:08PM (#5212646)
    I realize the article was about CPU's but what about the software?

    Are HP and SGI porting HP-UX and Irix and all the associated apps to IA64 or are they focusing on Linux for this platform?

    What about IBM and Power4? What OS (AIX?) and applications run on that platform?

    I think an equally important and even more interesting aspect of this looming 64-bit war is going to be the software.

    • by foonf ( 447461 ) on Sunday February 02, 2003 @09:36PM (#5212739) Homepage
      Power4 runs AIX and OS/400 (and if it doesn't run Linux now, it should very soon). At one point a combination of AIX and SCO Unix called "Project Monterey" would have been available for IBM's Itanium systems, but it was scuttled in favor of linux.

      HP-UX has already been ported and is shipping on HP Itanium machines right now (and you probably saw that post yesterday about OpenVMS also being on the way). SGI seems to be leaning more toward linux, they were going to port Irix at one point but I don't know whether it is available yet. SGI's page on their Altix Itanium systems does not mention any OS other than Linux.
    • What about IBM and Power4? What OS (AIX?) and applications run on that platform?

      Besides the in-house IBM OSes (OS/400, or whatever it's called, pOS I think..., and AIX), remember that the PowerPC architecture is a subset of the POWER architecture, and therefore a subset of POWER4. At my work we have a test IBM 6xx series box with SuSE PowerPC Linux on it (not going to be used for anything besides testing; not enough third party software really for PowerPC Linux yet). With IBM software already being ported to x86 Linux, I'm sure they'll have a large amount of software shortly. I'm assuming NetBSD would run as well without a hitch.
  • Floating point (Score:3, Interesting)

    by Anonymous Coward on Sunday February 02, 2003 @09:10PM (#5212652)
    It seems Intel's got a great floating point beast in the Itanium. But is this really that hard to do from a technical standpoint?

    For example, the Power4 can issue 4+1 instructions per cycle (the +1 being a branch). If IBM was targeting rendering simulations (BTW, with OpenGL 2.0 your VPU/GPU will do this instead of your CPU! There is already a plugin for Maya that lets your ATI 9700 do the final rendering instead of your CPU!) or science work, couldn't they simply add additional floating point pipelines to handle 4 instructions per cycle?

    It doesn't seem that hard to create a CPU to score well on SpecFP. Just give it lots of bandwidth and FP execution resources. Things like branching and OOOE don't really matter like they do for SpecINT. I know it's not that simple, but it seems that a company would find it easier to win SpecFP than SpecINT.
    • Re:Floating point (Score:5, Interesting)

      by Fefe ( 6964 ) on Sunday February 02, 2003 @10:09PM (#5212862) Homepage
      Actually, the IA64 performance is very bad in the real world. True, their benchmarks look impressive, but I haven't been able to reproduce that.

      I had the opportunity to log in to a 4-way 900 MHz itanic-2 box, which was outperformed by my lowly 900 MHz Pentium 3 notebook by a factor of 4 (single CPU benchmark). I did some mp3 en- and decoding and compiling on the box.

      Also, ia64 is spectacularly bad for MPEG-4. Go ask Google for ia64 and xvid and you will find a computer science class in Germany trying to optimize xvid for ia64; they found that after their optimizations (which yielded a big speed-up) ia64 was still handily outperformed by their el-cheapo desktop Athlons.

      Take those benchmarks with a grain of salt. I basically think that IA64 is a big flop. Intel needs a miracle to make people buy this crap. But, as they say, your mileage may vary. ;)
  • Itanium Roadmap (Score:3, Interesting)

    by Best_Username_Ever ( 582302 ) on Sunday February 02, 2003 @09:36PM (#5212741)
    One thing the article hasn't been updated to mention is that Intel have changed the Itanium roadmap. They will be introducing a dual core processor in 2005 (Montecito); this is no longer a rumour. Intel are playing catchup here, IBM and Sun are already much further along this path. Intel do however have the resources to throw into development to do this successfully, and the gains they have made from Itanium-1 to Itanium-2 suggest that catching up is not beyond them.

    I wonder how much of the battle for domination in the server market will be decided by economics rather than technology. I suspect that if Intel can kill off AMD (how long can AMD sustain their current losses?) then they could use their dominance in the desktop market to subsidise the development of Itanium and really drive it into the server market, killing off the strugglers like Sun by seriously undercutting them with price/performance. In the long term I think only IBM stands in Intel's way.
  • by wowbagger ( 69688 ) on Sunday February 02, 2003 @10:34PM (#5212936) Homepage Journal
    It is such a shame to see good CPU architectures die, and crap live on.

    The Motorola 68K family was a joy to work with - lots of registers, and a very orthogonal instruction set - you could use any A register for pointers, any D register for data - none of this "ECX is for loops, EDI for destination pointer, ESI for source pointer" crap of the x86.

    It's dead now, save for use as a microcontroller.

    The Alpha was an ass-kicking, name-taking monster. While I never seriously programmed on it, it was 64 bits long before anybody else knew how to spell it - it had well-established software and compiler technology. It is STILL one of the leaders.

    But for all intents and purposes, it's dead, Jim. Yet Itanic, with an unproven design concept, is flourishing (sorry, having worked with DSPs that implemented the VLIW idea, I have doubts about the real-world performance of VLIW in a multitasking environment).

    As Billy Joel said, "Only the good die young...."
  • ... When will Big Blue buy Sun?

    (or is it just too much fun turning the hose [mcspotlight.org] on them...?)
  • by Anonymous Coward on Monday February 03, 2003 @12:12AM (#5213272)
    Finally someone tells it like it is! Computer architects have known for a LONG time (e.g., 10 years) that MIPS and SPARC were horrible architectures (designed by people who clearly misunderstood the whole RISC concept) and that Alpha was a fantastic architecture that got the 801-idea spot on. As IBM Fellow and Turing Award winner John Cocke pointed out, the whole idea was FAST instructions that were simple enough for compilers to generate and optimize. It had nothing at all to do with the number of instruction types or their complexity. Not only was Alpha the first 64-bit architecture, but it's the only one that has legitimately scaled over a 10+ year period. While it is a tragedy to see the Alpha die due to incompetent marketing, it is gratifying to finally see an informed article that gives credit where credit is due. Long live the Alpha!

    The only thing that the author fails to note is HP's responsibility for the wretched Itanium 1. The first IA64 architecture was designed by HP and Intel in collaboration, and HP was the one who pushed the idiotic EPIC idea.
  • bang-for-the-buck (Score:5, Interesting)

    by g4dget ( 579145 ) on Monday February 03, 2003 @12:17AM (#5213292)
    While there are quite a few IT managers who either don't know any better or have painted themselves into a corner when it comes to servers, and therefore need the biggest single processor bang for any buck, most people who really care about CPU performance care about bang-for-the-buck.

    Unfortunately, none of the current crop of 64 bit processors deliver: the cost of true 64 bit systems (those capable of actually using more than 4 Gbytes of memory) generally starts somewhere upwards of $10000, and for that you do not get anywhere near 10 times the performance of a $1000 PC.

    The main reason right now to get a 64 bit system at current prices is because the applications just cannot be shoehorned into a mere 2-4 Gbytes. If AMD can change that equation and deliver comparable bang-for-the-buck to current PCs, with 64 bit addressing being icing on the cake, they have a winner. None of the other players seem to be capable of doing that--they have tried and failed miserably so far.

  • ...just buy an Atari Jaguar.

    I mean seriously, can't you do the math?
  • AMD just postponed the Hammer introduction for six months.

    That's annoying, but I can see it. Not enough new equipment is being sold to support the introduction of a new architecture right now. If you want a server farm, you can buy one real cheap from someone going out of business. Might not even need to move it from the co-location site.
