Digital Hardware

End Of The Line For Alpha

Scareduck writes "InfoWorld reports HP has released the last iteration of the Alpha chip. I used these babies in the late '90s, and for a time, they were da bomb. Sadly, the economics weren't there, DEC management really didn't have much of a clue, and Alpha has, at long last, bitten the dust. Alpha-based servers will continue to be sold through 2006, and supported through 2011. Farewell, Alpha; the world's line of chips seems to have declined to Intel and a handful of niche guys." Slashdot ran for the first 7 or 8 months off an Alpha box.
This discussion has been archived. No new comments can be posted.

  • by Stop the war now! ( 662586 ) on Wednesday August 18, 2004 @03:32PM (#10005815)
    to "Omega" then?
    • You Bastards! You blew it up!

      Oops, wrong movie.
    • "Niche guys"? (Score:5, Insightful)

      by ScottGant ( 642590 ) <scott_gant@sbcgl ... minus herbivore> on Wednesday August 18, 2004 @04:48PM (#10006750) Homepage
      the world's line of chips seems to have declined to Intel and a handful of niche guys

      Didn't know that AMD is out of the game now. Guess they don't sell 64-bit CPUs anymore...but we got those 64-bit Intel chips in everything now, don't we? Whoa...look-at-em go!

      I also didn't hear that the PowerPC architecture was all gone too...guess they're just selling what little inventory they have to the "niche" Apple market...but everyone knows that Apple's dying....any...day...now....

      Pfft...the submitter should remove head from rectum...
      • Re:"Niche guys"? (Score:5, Insightful)

        by Tassach ( 137772 ) on Wednesday August 18, 2004 @05:06PM (#10006880)
        By "Intel" the article should have said "x86". The x86 architecture, as fundamentally flawed as it is, has driven virtually everything else out of the market. Alpha's gone, PA-RISC is going, SPARC is on its way out. The Power/PowerPC architecture is hanging in there, so there's still some choice left for main-line computing.

        Of course, the power of the various embedded processors (DragonBall, StrongARM) and single-chip computers is rising to the point that they could meet most users' computing needs. We've reached the point where average users don't need any more power; they need the same power with less heat & noise and more reliability & stability.

        • "The x86 architecture, as fundimentally flawed as it is, has driven virtually everything else out of the market."

          So fundamentally flawed, in fact, that x86 CPUs are the highest-performing, most compatible CPUs in the world.

          Seriously, who cares what the hell your code compiles to anymore? What's wrong with x86?
          • Re:"Niche guys"? (Score:5, Insightful)

            by Tassach ( 137772 ) on Wednesday August 18, 2004 @09:03PM (#10008662)
            What's wrong with x86?
            In Two words: Little Endian

            In Three words: Variable Length Instructions

            The RISC guys had it right. So right, in fact, that even current x86 chips are RISC on the inside, and then waste close to half their transistor count on circuitry that does nothing besides transform the x86 instruction set into something that isn't brainfucked. That Athlon 64 would cost half as much, draw half as much power, and generate half the heat if you ripped out the x86 emulation layer.
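For readers following along, a minimal sketch of what "little endian" means in practice: the same 32-bit value viewed byte by byte. This is a generic illustration rather than anything from the thread; note, as a reply below points out, that Alpha is also little-endian, so endianness alone doesn't separate it from x86.

```c
/* Prints the bytes of a 32-bit value in memory order.
 * Little-endian machines (x86, Alpha):            0d 0c 0b 0a
 * Big-endian machines (SPARC, classic PowerPC):   0a 0b 0c 0d
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x0A0B0C0D;
    const unsigned char *bytes = (const unsigned char *)&value;

    for (size_t i = 0; i < sizeof value; i++)
        printf("%02x ", bytes[i]);
    printf("\n");

    return 0;
}
```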

            • Re:"Niche guys"? (Score:5, Informative)

              by RzUpAnmsCwrds ( 262647 ) on Thursday August 19, 2004 @12:54AM (#10009650)
              "The RISC guys had it right. So right in fact that even current x86 chips are RISC on the inside, and then waste close to half their transistor count on circutry that does nothing besides transform the x86 instruction set into something that isn't brainfucked. That Athlon-64 would cost half as much, draw half as much power, and generate half the heat if you ripped out the x86 emulation layer."

              According to AMD and Intel comments, the translation circuitry is less than 5% of the total CPU. In fact, over half of the transistor count comes from L2 cache.
              • X86 costs. (Score:5, Informative)

                by JollyFinn ( 267972 ) on Thursday August 19, 2004 @04:11AM (#10010408)
                The x86 pain in the ass is more than just die area for the translation circuitry!
                A) Legacy instructions, legacy exceptions, legacy everything... plus self-modifying-code detection. Pain in the ass.
                B) Strong memory model. It reduces freedom to reorder memory operations, or simply costs extra time.
                C) The small number of programmer-visible registers, and the lack of three-operand instructions.
                D1) In the P4 the trace cache holds relatively few instructions, because they are MUCH bigger than RISC instructions, and there are more of them for equivalent code.
                D2) The Athlon line carries extra predecode bits in its I-cache and three large decoders. Those consume POWER!
                E) The amount of parallelism exposed through the ISA is limited.
                F) The cost of adding parallelism is a LOT bigger in x86, because:
                widening the decoders or the trace cache costs more in power and in latency/clock speed;
                all the myriad exception models have to stay compatible;
                more memory renaming is required, with all the pain that brings;
                and FLAGS! Renaming them, and all the trickery needed so they don't hurt parallelism,
                even though they're touched by most execution units!
                G) Clock speed is hurt by all of this. Remember that IBM and Sun ran at a third of Alpha's clock speed the whole time, because of their design methodology, until Alpha lost its fab. Clock speed is mostly a function of design methodology, but the ISA adds complexity to some structures, complexity increases the distance signals travel, and that hurts clock speed. Intel, though, has superior fabbing and design methodology for doing full-custom designs.
                Now A and D bring us to a nice little point: LEAKAGE POWER, which is a growing component. Logic transistors leak about 30 times as much as cache transistors. Besides, even in in-order RISC CPUs, decode and fetch consume most of the power, so that is where the x86 complexity hurts most.

                Economies of scale are the reason x86 is as fast as it is. When you do full-custom circuit design, there is no way an ASIC-style (semi-custom) design methodology will catch you in performance or performance/watt, if the goals are the same. If you want to compare RISC vs. x86 with similar design methodologies, use VIA as the x86 candidate and the G4+ as the RISC. Intel, AMD and Alpha are comparable up until the 0.35-micron EV6. Yes, that's a 600 MHz out-of-order, 4-instructions-per-cycle RISC design made in a process similar to that of sub-300 MHz PIIs, and it trounced everything. Too bad it came too late for Digital. After that, there is no high-performance-targeted RISC with a full-custom design methodology available. POWER is highly limited by its design methodology in terms of clock speed and instruction latencies, and a different methodology would simply increase IBM's fixed costs so much that the economies of scale are not there. And for the embedded market they prefer the ability to customize the processor for customers, so the methodology choice is obvious for them.

                One small point: in power consumption, execution units are CHEAP; it's fetch, reorder, and decode that cost power. Cache is cheap in power terms too. So lots of cache and execution units cost little power, and the rest -- exceptions, decode, fetch, and reorder -- is where most of the power goes. In ALL the things on this list, the x86 ISA makes things more complex than an equivalent RISC, and spends more transistors there.
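To make point B concrete, here is a minimal, generic C11 sketch of release/acquire message passing; the names (ready, data, producer, consumer) are only illustrative, not from the comment. On a weakly ordered ISA such as Alpha or POWER the marked store and load compile to explicit barrier or ordered memory instructions, while x86's near-total-store-order model already gives this ordering to plain loads and stores, which is exactly the "less freedom to reorder" cost being described.

```c
/* Release/acquire message passing, C11 atomics. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int data;                 /* payload written before the flag */
static atomic_int ready = 0;     /* flag that publishes the payload */

static int producer(void *arg)
{
    (void)arg;
    data = 42;                                           /* ordinary store */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return 0;
}

static int consumer(void *arg)
{
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                /* spin until ready */
    printf("data = %d\n", data);                         /* guaranteed 42 */
    return 0;
}

int main(void)
{
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}
```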
            • Re:"Niche guys"? (Score:4, Informative)

              by akuma(x86) ( 224898 ) on Thursday August 19, 2004 @02:27AM (#10010016)
              1) The Alpha is also little endian.

              2) Complicated instruction decode can be removed from the critical circuit paths with pre-decoded caches. On one extreme, AMD uses predecode bits to mark where instructions begin in the i-cache. On the other extreme, Intel caches the fully decoded micro-ops in their trace-cache. When the variable length decode is out of the critical path, it can be made slower and therefore smaller.

              I don't know where you get your "half" numbers from, but I can assure you that the x86 overhead is nowhere close to "half". There is MAYBE 5-10% overhead in power/area. Most of the non-cache transistors in modern x86 CPUs go towards the out-of-order control logic (re-order buffers, schedulers, highly-ported register files, memory ordering buffers etc...) which attempt to extract instruction level parallelism from the program. High performance CPUs need this logic whether they are RISC or not.

              Another note -- variable-length instructions encode your program more efficiently, so you don't need as big an i-cache or as much bandwidth to the i-cache as a RISC processor does. It's not all bad. Compile something on x86 and then cross-compile it for some RISC processor and tell me how much bigger your binary is...
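As a small, hedged illustration of that code-density argument: the byte values below are the classic 32-bit x86 encodings of a trivial "return 0" function (actual compiler output varies with flags and ABI), while every Alpha or MIPS instruction is a fixed 4 bytes, so even the minimal two-instruction equivalent occupies 8 bytes; the gap tends to widen on real code with larger immediates and memory operands.

```c
#include <stdio.h>

int main(void)
{
    /* 32-bit x86 encoding of: int zero(void) { return 0; } */
    const unsigned char x86_zero[] = {
        0x55,             /* push ebp          (1 byte)  */
        0x89, 0xE5,       /* mov  ebp, esp     (2 bytes) */
        0x31, 0xC0,       /* xor  eax, eax     (2 bytes) */
        0x5D,             /* pop  ebp          (1 byte)  */
        0xC3              /* ret               (1 byte)  */
    };

    /* Alpha: fixed 4-byte instructions, so the minimal equivalent
     * (clear v0; return) occupies 2 * 4 = 8 bytes.                 */
    const unsigned alpha_zero_instructions = 2;

    printf("x86 encoding:   %zu bytes\n", sizeof x86_zero);
    printf("Alpha encoding: %u bytes (fixed 4-byte instructions)\n",
           alpha_zero_instructions * 4);
    return 0;
}
```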

              Instruction sets are not where performance comes from. Circuit technology and underlying microarchitecture are FAR bigger components to performance and how much power your chip burns.
      • Re:"Niche guys"? (Score:3, Insightful)

        by fm6 ( 162816 )
        Didn't know that AMD is out of the game now.
        They're not (thank God! Imagine Intel with no real competition). But we're talking architecture here, and in that area AMD is more Intel than Intel.
  • Beta (Score:5, Funny)

    by Anonymous Coward on Wednesday August 18, 2004 @03:33PM (#10005825)
    Damn, sure took them a while to get to Beta...
    • Re:Beta (Score:5, Interesting)

      by attam ( 806532 ) on Wednesday August 18, 2004 @03:48PM (#10006059)
      Incidentally, at MIT there is a course called 6.004 (Computation Structures) that all CS and EE undergrads have to take... in that class we implement a simulator for a processor called the "Beta", which is essentially a scaled-down Alpha...
  • Sad (Score:5, Interesting)

    by AKAImBatman ( 238306 ) <akaimbatman@gmaiBLUEl.com minus berry> on Wednesday August 18, 2004 @03:33PM (#10005835) Homepage Journal
    It's truly scary how Intel is becoming the only mainstream chip architecture left alive. Pretty good for something that Intel originally created as a stopgap solution! I'm just hoping that UltraSPARCs don't go anywhere.

    BTW, better colors [slashdot.org].
    • Re:Sad (Score:5, Insightful)

      by plover ( 150551 ) * on Wednesday August 18, 2004 @03:39PM (#10005922) Homepage Journal
      I read this in the article too, and all I could think was "but what about the PowerPC family?" Is that all the Mac is: a "niche" player?

      And who knows what the future will bring? AMD may diverge so far from Intel that they may eventually be considered their own architecture.

      I think the chip market is about as dead as *BSD (*according to Netcraft.)

      • Re:Sad (Score:5, Insightful)

        by sp0rk173 ( 609022 ) on Wednesday August 18, 2004 @03:52PM (#10006103)
        Yeah, I'm agreeing with this one. I hope PPC starts really moving - it's got some damn nice architecture behind it...POWER5s are going to be awesome. I would love to see the market open up for PPC, and start to see them sold next to Athlons and P4s.

        As far as AMD goes, they did a damn fine thing with AMD64. Hopefully they keep it up and keep diverging from Intel, while still offering a cheaper and (in some cases) technologically superior competing product. I would hate to see the day when Intel really does own the processor market.
    • by PCM2 ( 4486 ) on Wednesday August 18, 2004 @03:40PM (#10005935) Homepage
      I'd say the PowerPC is a pretty mainstream architecture, considering how it shows up in everything from workstations to Power Macs to Cisco routers. Also -- sad, maybe, but scary? PC computers are kind of a niche market compared to all of the embedded applications out there. So what if it's all based on old Intel ideas, so long as you've got folks like AMD and Transmeta to keep pushing the envelope?
    • Re:Sad (Score:3, Interesting)

      by dj245 ( 732906 )
      I think it's OK that there's only one mainstream architecture, as long as there is more than one company making it. That way, they can compete against each other to make architectures that will be used in the future, and the best architecture hopefully will win. We're already seeing that with AMD64 and Itanium. Arguably, the better architecture won.

      As long as there is competition for architectures, advancements in architecture will continue. Does it really matter that there is only one mainstream archit

    • Re:Sad (Score:5, Insightful)

      by 4of12 ( 97621 ) on Wednesday August 18, 2004 @04:32PM (#10006604) Homepage Journal

      It's truly scary how Intel is becoming the only mainstream chip architecture left alive.

      That dominant 386 instruction set has grown larger than life, threatening even Intel, who was responsible for its initial creation.

      Intel's Itanium line has been a business flop, while AMD stuck to x86 compatibility in its K8 x86-64 development and is thereby making inroads into Intel's market.

      The realities of a market demanding

      1. cheap,
      2. standard, and
      3. backward-compatible
      products are dictating to mighty Intel where it has to go if it doesn't want to end up dead-ended in the high-end RISC market like SPARC, PA-RISC, MIPS and Alpha.
  • Goodbye Alpha, I barely knew ya... I remember at Fermilab when we got our first batch of Alpha-powered VAXes how wicked fast they were. And I think AltaVista was running on Alphas in those days too.
    • Re:Barely Knew Ya... (Score:5, Informative)

      by jpmkm ( 160526 ) on Wednesday August 18, 2004 @03:43PM (#10005992) Homepage
      IIRC, AltaVista (originally altavista.digital.com) was just a little demo project used to show off the Digital Alpha systems that it ran on.
        • DEC had the world's fastest, most reliable hardware, a flavor of Unix that was rock solid, and a heritage on the Internet that went back to the mid-'70s (Ethernet, firewalls, VPNs, wireless LANs; even Dave Mills's Fuzzball router ran on PDPs). What it didn't have were marketing people who could find their way out of a wet paper bag -- Ken Olsen saw to that.

        Enter the Internet Boom, DEC's last chance at a comeback. How do you market a capable platform around DEC's chimp-loving marketeers? Why, do something tha

  • Heh (Score:5, Funny)

    by Burgundy Advocate ( 313960 ) on Wednesday August 18, 2004 @03:35PM (#10005859) Homepage
    Isn't this the fourth or fifth time Alpha has died? Let it rest already!

    Zombie Alpha needs brains, badly.
  • Niche guys.... (Score:3, Insightful)

    by Chicane-UK ( 455253 ) <chicane-uk@ntlwor l d . c om> on Wednesday August 18, 2004 @03:35PM (#10005867) Homepage
    Yeah, like that little-known outfit called AMD. I know you might not have heard of them, but they do make some good chips ;) :)
    • Re:Niche guys.... (Score:4, Informative)

      by Anonymous Coward on Wednesday August 18, 2004 @03:40PM (#10005930)
      He meant the Intel architecture. You could argue that AMD64 is a new arch, but it's still x86. What sort of nerd are you anyway?
    • Re:Niche guys.... (Score:3, Informative)

      by drgonzo59 ( 747139 )
      AMD is still an Intel architecture.
      • Re:Niche guys.... (Score:5, Informative)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday August 18, 2004 @04:03PM (#10006240) Homepage Journal
        No it isn't. Stop repeating this garbage. AMD has been making their own RISC-internals processors since the K5. The K5 is not very RISCy, but the K6 certainly is, although both of these processors, as well as the K7 (Athlon) and K8 (Hammer), all emulate the x86 instruction set. The Hammer-core processors in particular do not resemble the cores of the older Intel processors, or did you totally fail to notice the 16 externally-expressed 64-bit registers? Intel's cores, meanwhile, have also changed dramatically since the simple days of the 486; they have many more registers than are directly addressable, and utilize register renaming (among many other techniques) to speed up execution.
      • Re:Niche guys.... (Score:3, Interesting)

        by Just Some Guy ( 3352 )
        You mean, AMD64 is just like IA32, with the exception of different operators, wider paths, different supporting chipsets, different interconnects, and a metric buttload of registers (but other than that they're identical)?

        No, I wouldn't say that AMD is an "Intel architecture", although they make a line of chips that implement an Intel ISA. Their new stuff is markedly different.

        However, I admit that there are better examples of non-Intel architectures, such as those made by the small upstarts IBM and Mo

    • Re:Niche guys.... (Score:5, Insightful)

      by erikharrison ( 633719 ) on Wednesday August 18, 2004 @03:50PM (#10006082)
      Let's say x86 instead, and then the meaning becomes clear. The reason we say "Intel" when we mean "x86" is that, no matter how many other manufacturers make x86 chips (Via, AMD, and doesn't Unisys have their own x86 chip?), the technology is Intel's. All the other companies are niche players when it comes to controlling x86 technology. Via is for embedded, AMD is for price-to-power in the midrange market, and Unisys is x86 for mainframes.

      The fact that AMD seems to be getting the upper hand in driving x86 technology doesn't change the fact that there is one technology which dominates the market, and everybody else either controls a nice slice with another technology, or competes with the major x86 player in a more specialized niche.

      Alpha is dead, UltraSPARC is in doubt, and Via seems intent on shoving ARM out of the market. m68k is an aberration. There are two battles left: the battle of the architecture (x86-64 vs. POWER5/PowerPC), and the battle of x86 innovation (AMD vs. Intel). That's sad.
      • Re:Niche guys.... (Score:3, Interesting)

        by drinkypoo ( 153816 )

        The technology isn't even Intel's, only the instruction set is. The technology in between fetch and store is entirely different not only between AMD and Intel (at least since the K5; the Am386 is not significantly different from the i386 as far as I know) but also between one generation of Intel processor and the next - from what I understand there's not all that much difference between P2 and P3, but P3 and P4 are pretty different, as evinced by the fact that Intel has decided that they have to base

  • Cost of the servers (Score:4, Informative)

    by wolfemi1 ( 765089 ) on Wednesday August 18, 2004 @03:36PM (#10005875)
    "Pricing for the ES47 and ES80 systems with the new 1.15GHz EV7 will start at $29,200 and $49,300, respectively."

    Holy crap! And here I was, thinking that the Xeon servers were expensive!
  • AMD (Score:4, Insightful)

    by Snowdog668 ( 227784 ) on Wednesday August 18, 2004 @03:37PM (#10005884) Homepage
    Does AMD count as one of the "niche guys"? Granted, they're not as big as Intel but I've always thought of them as the chip to buy when you don't want to buy Intel.

    • Re:AMD (Score:3, Funny)

      by JLyle ( 267134 )
      Does AMD count as one of the "niche guys"? Granted, they're not as big as Intel but I've always thought of them as the chip to buy when you don't want to buy Intel.
      I think the author was lamenting that, given Intel's dominance of the microprocessor market, it seems truer than ever that niche guys finish last.
  • only intel? (Score:5, Insightful)

    by lavaface ( 685630 ) on Wednesday August 18, 2004 @03:37PM (#10005886) Homepage
    What about IBM's PowerPC???
    • Re:only intel? (Score:5, Insightful)

      by akuma(x86) ( 224898 ) on Wednesday August 18, 2004 @04:03PM (#10006252)
      IBM is a niche. Sun is a niche. Alpha, even in its glory days, was a niche. AMD has 15-20% of the x86 market and is just slightly larger than a niche.

      Intel ships 1 million Prescotts a week (http://www.xbitlabs.com/news/cpu/display/20040512151634.html). This is not even full production capacity. This is all done in 90nm technology -- a full 6 months ahead of anyone else. There were on the order of hundreds of millions of Northwoods sold and they are still selling.

      That's probably more volume in a single week than the entire IBM + Sun + Alpha volume for an entire year.

      Why is this the case? It is RIDICULOUSLY expensive to manufacture CPUs in this day and age. If you DON'T ship on the order of 1 million a week, you will never recover the costs necessary to build all of the fabs.

      This is why Sun will eventually abandon SPARC. This is why IBM loses money in their microelectronics division, but will probably maintain POWER and eat the costs for strategic reasons. This is why HP/SGI and others have gone with Itanium.

      This is not to discount the technical achievements of these CPUs. I design processors for a living and have great respect for the Alpha design team. But at the end of the day, the only reason someone is going to fund the design of a computer is to make money. Only the profitable survive.
  • by sarahemm ( 707486 ) <{sarahemm} {at} {sarahemm.net}> on Wednesday August 18, 2004 @03:38PM (#10005898) Homepage
    I can't see this bringing in much revenue. If I was a company currently using Alpha, it seems like a dead-end choice to buy yet another Alpha-based machine, knowing this was the last one. Seems like a better decision to migrate away now, rather than just prolong it.
    Of course, that's just my opinion, and business decisions rarely make much sense ;)
    • I think the idea is that this is a migration move. This allows current Alpha users more time to migrate off of Alpha and onto another HP platform, rather than forcing them right now, particularly if a third-party app isn't available yet. HP would rather have customers on Alpha than not have them at all. They can migrate at their own pace.
    • by alexhmit01 ( 104757 ) on Wednesday August 18, 2004 @04:10PM (#10006341)
      If you have a system on the Alpha that is, say, 3 years old, and you were expecting to upgrade in 2 years, then this forces a decision: go through a PAINFUL migration expense now, or make a capital investment to push it off.

      Remember, buying equipment is easily depreciated over 3 years for PCs, and probably longer is reasonable for Big Iron (I don't mean for tax purposes, I mean for their financials). If it costs me $0.5M in capital costs spread out over 5 years to upgrade a LOT of Alpha machines, then even if it only costs me $200k to migrate off the platform, I may prefer to buy the Alphas that will only hit earnings by $100k a year...
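A rough sketch of the capital-vs-expense arithmetic in the paragraph above, using the commenter's own numbers ($500k of hardware depreciated straight-line over 5 years versus a $200k one-time migration). Purely illustrative; real depreciation schedules and tax treatment are more involved.

```c
#include <stdio.h>

int main(void)
{
    const double hardware_cost  = 500000.0;  /* new Alpha boxes            */
    const double years          = 5.0;       /* straight-line period       */
    const double migration_cost = 200000.0;  /* one-time porting expense   */

    double annual_earnings_hit = hardware_cost / years;   /* $100k/year */

    printf("Buy Alphas:  $%.0f per year against earnings for %.0f years\n",
           annual_earnings_hit, years);
    printf("Migrate now: $%.0f against earnings this year\n",
           migration_cost);
    return 0;
}
```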

      It also depends on what IT's budget is for new hardware vs. its budget for software migration expenses.

      Also, if you were planning to buy a new Alpha to replace your old one, this is a smart time to buy it, because you can avoid dealing with the software migration now. Let's say you need to upgrade within 12 months: would you rather rush a migration job, or buy the gear and deal with the migration in 3-4 years, when you have time to plan?
    • by johnalex ( 147270 ) on Wednesday August 18, 2004 @04:13PM (#10006384) Homepage
      Actually, we're signing for a new one in a few days now. If you have software running on OpenVMS, the Alpha is still the chip to have.

      BTW, we're retiring a 1994-model DEC (yes, Digital!) Alpha 2100 with a 200 MHz (yes, that's megahertz) processor. The thing has run 24x7 for nearly 10 years and probably averaged less than a day of downtime a year. We downed it only for hardware upgrades. We're replacing it with a DS25: 2 processors, 2 GB RAM (our original had a whopping 64 MB when we bought it) and five 36 GB drives (our original 2100 had four 1 GB drives, and we were top stuff in town!). My, I'm feeling old.

    • it seems like a dead-end choice to buy yet another Alpha-based machine

      Only that (1) Alpha still has more than good enough performance, (2) you stick to what you already have working, (3) competitors don't yet have a compelling story on the viability of their RISC offerings, (4) going Intel feels like downgrading, and (5) HP's migration proposals are still ridiculous, because (a) there is no good substitute for Digital Unix yet, HP-UX being much inferior, and (b) no one believes in Itanium.

    • by flaming-opus ( 8186 ) on Wednesday August 18, 2004 @04:40PM (#10006677)
      As it turns out, many HP customers are refusing to migrate to Itanium/HP-UX. When one is considering real server iron, the currentness of the processor is not always of utmost importance. If there's a legacy app that runs on Tru64 (I mean Ultrix, I mean OSF) and it's really expensive to port, a lot of shops are just going to keep running Alphas until the wheels fall off and burn. [Look at all the guys still running on Sperry 1100-series machines]

      True, it's a dead-end choice, but one that might limp along for another 6-8 years. Not everyone has the option of migrating NOW. That works if you're talking about Tru64/Apache to Linux/Apache, but not if you're talking about Tru64/legacy-app-from-company-no-longer-in-business to anything else. A migration might cost millions of dollars. A dead-end Alpha server might cost tens of thousands and put off the move for a long time.

      My call is that it makes lots of sense.
    • Those with critical VMS-based systems are breathing a sigh of relief that there will be support and replacement hardware for their old-but-reliable servers that have been running VMS non-stop, 24/7/365, for the past DECADE. If you are used to that kind of reliability, you are obviously the type that would be averse to changing the entire hardware architecture until the last possible moment. Many of them are the type of folks who wailed and gnashed teeth when they had to migrate from the old VAX hardware to
  • by Anonymous Coward on Wednesday August 18, 2004 @03:39PM (#10005915)
    *sniff*

    *sob*

    Oh, this is just too much for me to handle. The greatest Quake platform is dead.

    Good bye, cruel world!

    Really, tho, this is a shame. Alpha procs are (*sob* were *sob*) the fastest thing a mortal could get. Ignoring compile problems, I'd take an Alpha over an x86 or PPC any day.

    Back when Quake2 was the latest id title, I set up a dedicated server on my Alpha box (a tiny Multia). My roommate and I were amazed -- gameplay was smooth as glass -- it was actually better than running on an x86 dedicated server and better than running against a local server (same box). Could not believe it. It was so smooth.

    Sorry, I'm going to go get drunk and cry a lot (I'm working on Solaris today, and I just can't take all the pain).
  • Wikipedia (Score:5, Informative)

    by sometwo ( 53041 ) on Wednesday August 18, 2004 @03:41PM (#10005957)
    Here's the article about the Alpha: http://en.wikipedia.org/wiki/DEC_Alpha [wikipedia.org]
  • Alpha Envy (Score:5, Funny)

    by Neon Spiral Injector ( 21234 ) on Wednesday August 18, 2004 @03:42PM (#10005965)
    I was talking with CmdrTaco and Keith Packard along with a few of the other XFree86 people. They were all going on about heating their bedrooms with Alphas in the winter, and telling other Alpha-related stories. Then Keith looked at me and asked if I had an Alpha. I never felt so inadequate as a geek. So a couple of months later I did pick up a dual 21164 (EV56) based machine. Sure enough, it did keep my bedroom warm, that is, when it wasn't tripping the circuit breaker. So I moved it to the server room at work, where it sits now, still hosting my websites.
  • ARM? (Score:5, Informative)

    by nullset ( 39850 ) on Wednesday August 18, 2004 @03:42PM (#10005968)
    I'd hardly call Intel the biggest CPU architecture out there.... maybe for PCs.

    ARM comes to mind. What about the embedded market? Atmel's AVRs, Microchip PICs, Motorola HC08s, HC11s... there are billions of non-Intel-architecture CPUs shipped every year. To those guys, Intel is just a niche player....

    [flame suit off]
  • What's Changed? (Score:5, Insightful)

    by CommieOverlord ( 234015 ) on Wednesday August 18, 2004 @03:43PM (#10005989)
    Before, there was Intel x86 (and compatibles) plus a number of niche processors, and now there's still Intel and a number of niche processors. The submitter's closing statement seems a tad alarmist.

    We still have Itanium, two SPARC variants, a number of POWER variants, Transmeta, Opteron, and a whole bunch of other niche processors, most of which probably have more market share than Alpha.
  • Slashdot History (Score:5, Informative)

    by Lord Kano ( 13027 ) on Wednesday August 18, 2004 @03:43PM (#10005990) Homepage Journal
    Slashdot ran for the first 7 or 8 months off an Alpha box.

    If memory serves, Slashdot ran on a Multia. [obsolyte.com]

    LK
  • by Locutus ( 9039 ) on Wednesday August 18, 2004 @03:45PM (#10006012)
    IIRC, AMD licensed the Alpha memory bus design and it's still used today. It's how AMD ended up with such a fast bus and beat Intel for ~2 years with a faster FSB.

    So, if you run an AMD CPU, then you're keeping DEC Alpha technology alive. Also, don't forget that the DEC StrongARM was part of the DEC technical vision too. It's how Intel got into the handheld market. Too bad DEC thought Microsoft was its future....

    LoB
  • by kbahey ( 102895 ) on Wednesday August 18, 2004 @03:46PM (#10006028) Homepage

    In the early 90s, there was this hot debate about RISC vs. CISC, and the merits of each, ...etc.

    This has all died out now, with CISC (read: Intel) coming out as a winner.

    Regarding the number of chips out there, AMD is not really different from Intel; at least it is instruction-set compatible. Maybe this will change a bit in the 64-bit versions, but not right now. PowerPC is a good architecture, but not so widespread. Outside of some IBM servers, and the 3% that is Apple's share, they are not used much.

    • by mihalis ( 28146 ) on Wednesday August 18, 2004 @03:53PM (#10006116) Homepage

      In the early 90s, there was this hot debate about RISC vs. CISC, and the merits of each, ...etc.

      This has all died out now, with CISC (read: Intel) coming out as a winner.

      Well, maybe. Intel is a big winner, but every single Pentium or Athlon is remarkably RISC inside. In fact, these chips are so much more complex than any of the "pure" RISC or CISC chips that the statement that CISC won is practically meaningless.

      Which side does Out Of Order Execution come from? Intel did it fast first.

      Who uses OOOE now? Everyone.

      There's a huge laundry list of features in modern high-performance CPUs that do not fit into RISC vs. CISC: trace cache, micro-ops, CMT, CMP, etc. etc.

      • Sorry, out-of-order execution had been done for ages before Intel implemented it. As usual, different name, same concept. The ideas behind RISC, superscalar, out-of-order, pipelining, etc. have been around (and implemented) since the '60s.

        And Intel wasn't even first among micros: I believe Metaflow and other vendors had out-of-order CPUs out there way before Intel released the P6 microarchitecture.
    • by Kourino ( 206616 ) on Wednesday August 18, 2004 @03:56PM (#10006157) Homepage
      Pff, it's not that clear cut, as most people know.

      Much of the lower level workings of "IA-32" chips are a lot more RISCy than they started out being. More complex instructions are implemented in microcode. On the flip side, architectures like PowerPC (and even SPARC ... register windows are neat, but not very RISC) aren't very RISCy at all compared with stuff like MIPS.

      Neither side won absolutely. This is probably as it should be.
    • "they are not used much"

      The numbers of PowerPC embedded processors shipped every year dwarf the combined total numbers of desktop, workstation, and server CPUs shipped every year from every architecture.
    • by dutky ( 20510 ) on Wednesday August 18, 2004 @06:59PM (#10007825) Homepage Journal
      kbahey [slashdot.org] wrote:

      In the early 90s, there was this hot debate about RISC vs. CISC, and the merits of each, ...etc.

      This has all died out now, with CISC (read: Intel) coming out as a winner.

      That's an odd take on history, unless by 'win' you actually mean: "all but one CISC architecture (Intel x86) eventually capitulated and either exited the field altogether (often by adopting a new RISC architecture) or shifted to a niche (usually embedded) market."

      A little history lesson for all you folks who either didn't exist or weren't paying attention in the early days of the microcomputer revolution: back in the late seventies/early eighties there were a fair number of competing architectures in both the mini- and microcomputer markets.

      In the mini-computer world there were:

      • DEC PDP-11 and VAX
      • IBM S/360 and S/370
      • Data General Nova and Eclipse
      • Burroughs B5000
      • Hewlett Packard HP3000
      • and many others

      all of which were CISC designs (relatively few registers, memory-to-memory arithmetic operations, lots of addressing modes, etc.).

      In the microcomputer world there were:

      • Motorola's 6800 (8/16-bit) and 68000 (16/32-bit)
      • National Semiconductor's 32000
      • Texas Instruments TI9900
      • Zilog's Z80 (and 16 and 32-bit successors Z8000 and Z80,000)
      • Rockwell's 6502 and 65816
      • and, of course, Intel's 8080 and 8086

      all of which were, like the mini-computers of the day on which they were modeled, also CISC variants.

      Ever since the mid-seventies, various research groups (at universities and major corporations) had been toying with ways to make architecturally faster computers, that is, computers whose arrangement of registers and instruction set was inherently fast, rather than relying on faster transistors and shorter busses for speed increases. A number of these efforts stumbled upon the same set of concepts:

      1. eliminate all features that are not easily used by contemporary compilers
      2. eliminate most addressing modes
      3. eliminate memory operands for arithmetic and logical operations
      4. eliminate variable length and variable format instruction encoding
      5. eliminate micro-programming of instructions (hardwire everything), and
      6. break all instructions into parts that can be overlapped (pipelining)

      This was dubbed Reduced Instruction Set Computing, or RISC, as a contrast to the contemporary architectural practices, which the RISC camp lumped together under the term Complex Instruction Set Computing, or CISC.
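As a small illustration of principle 3 in the list above, here is a sketch only; the mnemonics in the comments are typical, not exact assembler syntax for any particular toolchain. The same C statement turns into a single memory-operand instruction on a CISC like x86, but a load/operate/store sequence on a RISC like Alpha or MIPS.

```c
#include <stdio.h>

static void bump(int *counter)
{
    /* CISC (x86) can do this in one memory-operand instruction:
     *     add dword ptr [counter], 1
     *
     * RISC (Alpha/MIPS style) uses a load/operate/store sequence,
     * with the arithmetic done only in registers:
     *     ldl  t0, 0(a0)      ; load the word
     *     addl t0, 1, t0      ; add in a register
     *     stl  t0, 0(a0)      ; store it back
     */
    *counter += 1;
}

int main(void)
{
    int hits = 0;
    bump(&hits);
    printf("hits = %d\n", hits);
    return 0;
}
```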

      The RISC approach paid off pretty quickly with processors that could easily execute one instruction every clock cycle (CISC architectures tended to take many clock cycles per instruction), and a few commercial products appeared in the mid-eighties from MIPS, Clipper, AMD and IBM. The main complaints against the RISC approach came down to one of

      1. fixed-width instructions waste too much memory
      2. RISC instruction sequences are too difficult for assembly language programmers to understand, or
      3. we can make better compilers that will be able to use CISC features to better advantage than do existing compilers (all we need is a measly little research grant and five more years).

      In the end, however, all three arguments proved false (memory capacities followed Moore's law into the stratosphere, most everyone moved to HLL compilers, and the genius-level optimizing compilers either didn't materialize or benefited the RISCs just as much as they did the CISCs).

      One by one, all the big players came around to the RISC way of seeing things:

      • Motorola and DEC dropped their existing CISC platforms and developed RISCs (M88k and
      • A key point here is that the original intent of the RISC designers was to design simple CPUs that would execute one instruction per clock. That was achieved. Early Alpha and MIPS machines represent that approach in its purest form.

        Then came the Intel Pentium Pro. It took 3000 people to design. It was far more complicated than any previous microprocessor, or, for that matter, most mainframe CPUs. And it executed more than one instruction per clock, while dealing with all the horrors of the x86 instruc

        • Actually, the key point was that before the RISC/CISC wars there were lots of different non-RISC architectures; after the RISC/CISC wars only one non-RISC architecture survives in any sort of non-niche application. Every major architecture on the market today, and for the past fifteen years, has been a RISC architecture, either outright or by subterfuge (as with post-PPro x86). The survival of x86 is not proof of RISC's defeat; it is the last holdout of the defeated CISC design philosophy, and that only in n
  • Niche Guys? (Score:3, Interesting)

    by LWATCDR ( 28044 ) on Wednesday August 18, 2004 @03:47PM (#10006052) Homepage Journal
    Farewell, Alpha; the world's line of chips seems to have declined to Intel and a handful of niche guys."

    You mean small players like IBM? I guess the G5 and Power line of chips are not really big time enough to worry about?
  • by MoralHazard ( 447833 ) on Wednesday August 18, 2004 @03:49PM (#10006069)
    Taking potshots like this at x86 chips is such bullshit. So what if it's not as optimal an architecture as the Alpha, or if the EV7 bus is pretty neat? The biggest advantage of using x86 systems over anything else isn't that they're the fastest chips, cycle-for-cycle, or that they're a particularly elegant solution. It's that they're CHEAP and FAST ENOUGH.

    Think about how many Intel Xeons you could get, on 9xx-chipset mobos, for $30,000. If you built them yourself, probably 15-20. Are one (or four) 1.5 GHz Alphas more useful than a cluster of 20 Xeons? Hell no!
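A back-of-the-envelope sketch of that node-count estimate; the per-node price is an assumption (a self-built Xeon box circa 2004), chosen only because it lands in the 15-20 range the comment quotes.

```c
#include <stdio.h>

int main(void)
{
    const double budget        = 30000.0;  /* same budget as the comment  */
    const double cost_per_node = 1750.0;   /* assumed self-built Xeon box */

    int nodes = (int)(budget / cost_per_node);
    printf("Roughly %d Xeon nodes for $%.0f at ~$%.0f each\n",
           nodes, budget, cost_per_node);
    return 0;
}
```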

    See, ever since Intel lost their de facto monopoly on powerful x86 chips (thank you, AMD!), their prices have dropped far enough that it's hard to beat x86 solutions on a price vs. performance basis. Even if you have to stack up more boxes in a rack to do it. Hell, Quad-CPU Xeons can still go for less than $6,000, if you build them from parts, so rackspace isn't really an issue.
    • by turgid ( 580780 ) on Wednesday August 18, 2004 @04:05PM (#10006270) Journal
      The biggest advantage of using x86 systems over anything else isn't that they're the fastest chips, cycle-for-cycle, or that they're a particularly elegant solution. It's that they're CHEAP and FAST ENOUGH.

      Thanks to the ruthless Intel vs. AMD competition of the last half decade, that is now the case, but it didn't use to be.

      Back in the early '90s, when the 64-bit RISC architectures were coming out, x86 was a joke. Now, Opteron is more or less a DEC Alpha with an x86 translation unit slapped on top, plus HyperTransport, which made its way down from Cray, via the Sun E10K, to the desktop.

      If it hadn't been for these radical RISC architectures, and the intel vs. AMD fight, things would be very different.

      Don't even think about multi-processor Xeon systems. The primitive bus architecture and interprocessor communications simply do not scale well at all past 2 processors. You can just about get away with 4 processors, but after that, you might as well just put space heaters in the box.

  • by glassware ( 195317 ) on Wednesday August 18, 2004 @03:52PM (#10006108) Homepage Journal
    As a CPU buff, I ordered a back-issue of Microprocessor Report where they discussed the introduction of the Alpha in glowing terms. The radical chip architecture and speed-at-any-price mentality was new at the time, but quickly proved itself to be the superior chip design approach. For most of the 1990s, the Alpha was the fastest chip on the market in both integer and floating point operations.

    Alpha was a RISC chip's RISC chip. The IBM Power architecture has dozens of operations and permutations; the Alpha has a handful. This contributed not only to the Alpha's speed, but also to its insatiable demands for memory. DEC introduced a code translator that allowed the Alpha to run x86-32 binaries at native speeds, but warned that memory requirements would grow substantially. The software never became cost effective.

    But, towards the turn of the millennium, something strange happened: the Pentium Pro architecture (happily renamed PII and PIII) inched towards the lead in integer operations. The P4 actually surpassed the Alpha chips. Intel had, by then, hired away some of the Alpha designers and began to adopt its performance enhancing strategies. How could Intel catch up to the Alpha when Intel was burdened with an architecture as convoluted as x86?

    Strangely, the x86 architecture can also be a benefit to chip design. Because x86 compresses commonly used instructions into tiny, awkward byte codes, the P4 generation of chips requires less memory and fewer cache misses - and the convoluted opcodes can be decoded quickly by the processor prior to dispatch. In the long run, Alpha's simplified instruction set proved to be less useful than machine-code x86 compatibility; and x86 chips are now little more than Alpha chips sitting behind an x86 instruction decoder. The Alpha design lives on in every CPU you buy, whether it be AMD or Intel.

    For further reading, check out CPU performance numbers on http://www.spec.org [spec.org] and read the commentary on Microprocessor Report [chipanalyst.com].
    • Revisionist crap !! (Score:5, Informative)

      by Macka ( 9388 ) on Wednesday August 18, 2004 @06:53PM (#10007785)

      But, towards the turn of the millennium, something strange happened: the Pentium Pro architecture (happily renamed PII and PIII) inched towards the lead in integer operations. The P4 actually surpassed the Alpha chips. Intel had, by then, hired away some of the Alpha designers and began to adopt its performance enhancing strategies.
      How could Intel catch up to the Alpha when Intel was burdened with an architecture as convoluted as x86?

      Not by your interpretation of events, and certainly not because Intel hired a bunch of Alpha engineers (that came much later). Unfortunately it's so old now that I can't find a reference to it on Google, but you seem to be blissfully unaware of the lawsuit that DEC brought against Intel over the theft of Alpha IP that mysteriously found its way into the Pentium architecture. I was working for DEC at the time as a Tru64/Alpha support engineer, so I remember it.

      Some time prior to that there had been a quiet attempt at collaboration between DEC and Intel over the Alpha chip. I believe it was a vain attempt to get Intel to adopt the Alpha architecture for future designs. Whatever the purpose, Intel were given extensive Alpha design docs to look at. Eventually they turned down the offer and went their own way.
      I remember eyebrows being raised inside DEC some time afterwards, when the Pentium architecture started to make some very surprising, unexpected and unforecast performance leaps.

      It took some time to gather the evidence, but eventually Bob Palmer launched a lawsuit against Intel for theft of Alpha IP. For a while DEC were threatening to halt all Pentium shipments and demand large unspecified damages. Bob P should have stuck to his guns and screwed Intel for all he could get, but instead (being the bean counter he was, and not a technologist) he saw this as an opportunity to unburden DEC of the escalating costs of constantly refitting the fab production plants, work that was needed to meet the next chip-shrink goals and keep Alpha ahead of the game.

      In the end a deal was done. Intel bought all the Alpha fabrication and production plants from DEC, including StrongARM, and agreed to guarantee to produce Alphas for DEC for a number of years (I forget how many).

      DEC still kept control of the Alpha design & development, and it wasn't until much later, after the Compaq buyout, in one last act of corporate infanticide from a cadre of incompetent senior managers, that Intel finally got their hands on the full set of Alpha technologies.

      But then that's what you get when Accountants run computer companies, not technologists and visionaries.

      Make no mistake about it: if DEC management had believed in Alpha technology as much as the rest of the people in the company, and DEC had kept the fab plants and invested in them as they had originally planned to do, and there had been no Compaq buyout, you would today be looking at SMT Alpha EV8 chips running somewhere around the speeds of today's Pentium chips... and NOTHING Intel, IBM or anyone else could produce would have even come close to touching it. It wasn't any technology shortcoming that killed Alpha, just bad management heaped on bad management heaped on even more bad management.

      Macka

  • by turgid ( 580780 ) on Wednesday August 18, 2004 @03:53PM (#10006120) Journal
    Their plan to move everyone to itanic appears to have backfired [theregister.co.uk]. Has itanic finally sunk?
  • by Embedded Geek ( 532893 ) on Wednesday August 18, 2004 @03:55PM (#10006143) Homepage
    I worked with about 400 other developers on the embedded software for the B-2 Bomber. As our groups grew, the VAX clusters we used began to suffer. We complained to management but there was never any money for better mainframes.

    Then we switched over to a trouble-report tracking program instead of doing everything on paper. The thing was implemented in house and made to run on the VAXes. Suddenly everything slowed to a crawl, both development and trouble tracking. Since managers were the primary users of the tracking software, we knew it would have visibility. There was much rejoicing when the company bought a DEC Alpha...

    ...and put only the tracking software on it. No development work was allowed at all on the new machine.

    SIGH. The salad days of youth...

  • by Greyfox ( 87712 ) on Wednesday August 18, 2004 @04:17PM (#10006441) Homepage Journal
    Every so often we see this story pop up on Slashdot. "Oh, that's sad," we think, reminiscing nostalgically about the VMS workstations of the '80s. We go on about our business and a year or so passes, then we get another story predicting the death of the Alpha. So to all you "Death of Alpha" submitters, I have one thing to say: "It's not dead. It's restin'."
  • And don't forget... (Score:3, Interesting)

    by callipygian-showsyst ( 631222 ) on Wednesday August 18, 2004 @04:22PM (#10006496) Homepage
    Years before Apple "invented" it, Microsoft had a 64-bit operating system on PCs [winnetmag.com] on the Alpha platform.

    Maybe that's why some countries banned [slashdot.org] Apple's misleading advertising!

  • by abcxyz ( 142455 ) * on Wednesday August 18, 2004 @04:25PM (#10006536) Homepage
    We're about 6 months into our 4-year lease of the OpenVMS cluster: four ES47s with 7 TB of storage. Built like a tank, runs forever, and is an excellent Oracle DB server. Problem is, the OS isn't a commodity operating system, and not much runs on it any more (that we need). Our vendors are dropping support for the platform as well, so the move is on to start a migration plan, probably to Linux.

    Have run Alphas for a long time, and they are still screamers. Problem is, you'll scream, then have a heart attack at the HP prices. Our current environment mentioned above was around $1.5M.
  • by lophophore ( 4087 ) on Wednesday August 18, 2004 @04:35PM (#10006630) Homepage
    Digital could not market for shit.

    And that was on a good day.

    Yes, there were certainly some engineering and management blunders (mostly management) but Marketing was completely inept.

    During the '70s the PDPs practically sold themselves, and during the '80s the VAX literally sold itself; it was the hottest thing you could hope to get. So when the big Unix wave came, with its cheap-ass Sun hardware and so-called software compatibility, the Marketing droids could not cope, and the former #2 computer manufacturer is now just a zit on HP's ass.

    Do I sound bitter? nooooooo.......

    • Half right. (Score:4, Insightful)

      by argent ( 18001 ) <(peter) (at) (slashdot.2006.taronga.com)> on Wednesday August 18, 2004 @07:01PM (#10007841) Homepage Journal
      It was definitely marketing, but it was more than that.

      Compaq dragged their heels on following Digital's development plan, and then pronounced its doom suspiciously close to the HP acquisition. Compaq *could* market, and if Compaq had understood what they'd got from DEC and really worked on expanding the Alpha business, instead of going toe-to-toe against Dell's lower margins, they and the Alpha would probably still be in business.

      Mentec, who *did* understand what *they* got from DEC, is still selling PDP-11s.
  • by erice ( 13380 ) on Wednesday August 18, 2004 @06:57PM (#10007806) Homepage
    In the late '80s RISC was an immensely powerful concept. Fabrication technology had advanced to the point where it was just barely practical to dispense with slow microcode and hardcode an entire useful instruction set. But you had to be very selective in what you implemented. Spending gates on performance rather than high-level instruction handling is what allowed 12 MHz SPARC and MIPS processors to stomp on 25 MHz 68Ks.

    In the '90s, Alpha's "RISC at any cost" approach allowed clock frequencies that CISC chips could only dream of.

    But today's CPUs are huge and obscenely complex. Instruction decoding is a tiny part of what these monster chips do. It almost doesn't matter what the user-visible instruction set is. It always gets chopped up and re-ordered anyway. What does matter is market share. Huge chips require a small army of front-end designers to design all the resource allocation and instruction re-ordering. They require a large army of back-end engineers to create a vast array of custom cells, lay out the chip, and tune the process. That means you must sell a very large number of parts if you want to keep those armies on staff. A superior instruction set helps only a little. Inadequately funded physical design hurts a lot. With the possible exception of PowerPC, RISC architectures just don't generate enough revenue to keep up.
    • doesn't matter what the user-visible instruction set is.

      Sure it does. The further the instruction set is from what the processor's doing internally, the more time it takes for the front end to feed reordered instructions or recompiled instructions to the real ALU. The more time it takes, even if it all happens in parallel, the more latency there is between instruction fetch and useful work. When you combine that with a small register file that requires extra copies in and out of cache, even if that's simulated by a top-of-stack cache, you end up with huge pipelines and lots of instructions (real instructions hitting the internal ALU) that are just doing busywork.

      The longer pipelines you need to implement these inappropriate instruction sets mean that cache misses and branch mispredictions are more expensive, because they cause huge bubbles in the pipeline and lots of wasted instruction cycles.

      Which means that your processors are running faster and hotter than RISC processors that do the same work ... the ones that were once thought outrageously hot but now seem merely tepid, and heat is turning into the next bottleneck in processor design.

      And that's why, *despite* having only a fraction of the resources directed to it that Intel or AMD have spent on their monster chips, and despite real neglect even before its doom was pronounced, the Alpha was still the fastest kid on the block right up until the day when, shortly before HP bought them, Compaq announced they were shutting down EV8 development and terminating the Alpha line.

      No, a superior instruction set helps a lot. Not enough to satisfy Compaq, clearly, but more than enough that if Compaq had understood what they'd got from DEC and stuck to their original plans... instead of trying to outslug Dell on its own turf... EV8 would be the fastest chip on the market today.
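For anyone who wants to feel the misprediction cost described a few paragraphs up, here is a generic sketch (not from the comment itself): summing the same data with a data-dependent branch, first in random order and then sorted. On a deeply pipelined core the random pass is usually several times slower purely because of pipeline flushes; exact timings depend on the machine and compiler flags, and an aggressive optimizer may convert the branch to a conditional move and hide the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;   /* values are 0..255, no overflow */
}

static long long sum_big(const int *v, size_t n)
{
    long long s = 0;
    for (size_t i = 0; i < n; i++)
        if (v[i] >= 128)            /* hard-to-predict branch on random data */
            s += v[i];
    return s;
}

static double time_sum(const int *v, size_t n, long long *out)
{
    clock_t t0 = clock();
    long long s = 0;
    for (int rep = 0; rep < 100; rep++)
        s += sum_big(v, n);
    *out = s;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *v = malloc(N * sizeof *v);
    if (!v)
        return 1;
    for (size_t i = 0; i < N; i++)
        v[i] = rand() % 256;

    long long s1, s2;
    double random_time = time_sum(v, N, &s1);   /* branch mispredicts ~50% */
    qsort(v, N, sizeof *v, cmp_int);
    double sorted_time = time_sum(v, N, &s2);   /* branch becomes predictable */

    printf("random: %.2fs  sorted: %.2fs  (sums %lld/%lld)\n",
           random_time, sorted_time, s1, s2);
    free(v);
    return 0;
}
```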
  • R.I.P. (Score:3, Insightful)

    by arsine ( 802473 ) on Wednesday August 18, 2004 @10:43PM (#10009147)
    As a former DEC flag-waver, this is a sad day. From the company that brought us the first 32-bit and 64-bit CPUs, helped develop X Windows, helped Microsoft with NT and provided it a server platform with some credibility, and whose platforms were among the first to run UNIX: I'm sorry to see one of the best lines of CPUs bite the dust.
