Hardware

Alpha-Based Samsung Linux Goodness 202

Peter Dyck writes: "This summer Compaq divested itself of the Alpha technology. The Alpha tech was purchased by Intel, which will most likely bury it after grafting its best aspects onto its own 64-bit IA-64 line. However, the non-exclusive terms of the deal allowed Samsung to continue producing and developing the best 64-bit processor architecture there is today. Now, as the happy owner of a four-year-old DEC AlphaPC164, I was delighted to see this announcement by Samsung Electronics. In short, the upcoming UP1500 motherboard will house a 64-bit 800+ MHz Alpha 21264B CPU, 4 GB of DDR memory, 10/100 Mbps LAN, USB and, yes, it will run Linux."
This discussion has been archived. No new comments can be posted.

  • but what about the 21364 chip? Is that gone?
  • Anyone know a place (online) that sells these chips?
    Motherboards for them?
    Any SMP motherboards out there?

    - jonathan.
  • Old News (Score:3, Insightful)

    by codealot ( 140672 ) on Monday November 05, 2001 @07:55PM (#2525281)
    The UP1500 was developed long before the Compaq/Intel Alphacide... it is not clear whether Samsung has any intention of continuing to support Alpha.
    • It may be old news, but it provides ample opportunity for overclocking weenies to pretend that they would know what to do with an Alpha.
  • I've been running Alphas as shell and firewall boxes on my home LAN for a few years now. Even though new ones are pricey, the cheap used ones are fast, and with the Compaq C compiler any real work I have to do is well taken care of. It's good to see that the processor line will continue. Now if they'd develop on FreeBSD a little more.....
  • I don't want to sound offtopic, but that sounds like the ultimate machine for any sub-50-employee business and/or graphic design shop. Personally, I didn't even know the Alpha architecture was still being made. It's great to see that after all of this time, the best there was is still around!
    Great Read,
    Thanks,
    Aj
  • I'm glad to see that Alpha processors are (temporarily, at least) going to continue to be an option. Alpha's are far from mainstream, but it's always good to see some competition in the market dominated by Intel.

    With that said, I feel that Intel makes a superior processor and Alpha's are already a bit outdated. Almost all modern apps require x-86 extensions such as MMX, SSE, and 3dNow, which Alphas do not support. I'd rather be running a hardware platform which supports these innovations and allows software to overcome x86 limitations. Alpha's are 64 bit processors, and they are quite fast, but they do not offer the specialised hardware instructions that x86 supports. Alpha's are like 1960's muscle cars. They're fast, but only because of the brute force under the hood. X86 machines are sleek and smoothe like a Porche because they use brilliant engineering and specialised extensions like SSE. I'll take the Porche over the outdated horsepower any day.

    Furthermore, Alphas are limited in the software platforms on which they support. Only certain flavors of Unix will run on an Alpha, while Almost all Unices, Windows, DOS, BSD, OS/2 etc. are supported by x86 based processors.

    • Alpha's are like 1960's muscle cars. They're fast, but only because of the brute force under the hood. X86 machines are sleek and smoothe like a Porche because they use brilliant engineering and specialised extensions like SSE. I'll take the Porche over the outdated horsepower any day.

      I don't think you know what you are talking about. The Alpha is the brilliantly designed Porsche. The x86 is the HORRIBLE bastard son of Intel; on that, anyone who has had even the slightest exposure to that architecture agrees.
      Yes, SSE, MMX, etc. give the x86 some advantage, but the Alpha already has them! MMX is basically the 64-bit adder the Alpha has had from the start.

    • it's always good to see some competition in the market dominated by Intel.

      I'm not sure you read that it is Intel that purchased Alpha.

    • by zulux ( 112259 ) on Monday November 05, 2001 @08:09PM (#2525335) Homepage Journal
      Most extended instruction sets (MMX, SSE, 3DNow, Velocity Engine) that work on large chunks of floating point data at a time are not designed for accuracy - they are designed for speed. In an environment where precision is required, only IEEE floating point is of value - the extended instructions are great for Quake, Photoshop and benchmarks, but hopefully nobody is using them for real work.

      Your assertion that x86 processors are 'brilliant engineering' is a bit odd - x86 processors have a lot of cruft around to deal with old 8-bit, 16-bit (Real and Protected) and 32-bit modes. The Alpha and other chips that have been introduced in the last few years don't have all that garbage lying around and can concentrate on doing things correctly - whereas x86 designers spend a lot of time keeping things backwards compatible. Instead of being a 'Porche' as you described it - they end up being a VW Bug with a turbine engine grafted onto the hood - it works, but it sure is ugly.
      • Your assertion that x86 processors are 'brilliant engineering' is a bit odd - x86 processors have a lot of cruft around to deal with old 8-bit, 16-bit (Real and Protected) and 32-bit modes.

        While there is no doubt that there is lot of cruft in the x86, you have to give Intel credit for getting way more performance out of it than anyone thought they wood. I remember back in the early 90s everyone kept talking about how RISC was going to kick Intel's ass for these very reasons: they would never be able to overcome the limitations of having to support backward compatibility. Yet, they are still standing, and RISC's advantages are very small in real terms.

        • "Wood"? Sheesh, preview is my friend.

        • I remember back in the early 90s everyone kept talking about how RISC was going to kick Intel's ass for these very reasons: they would never be able to overcome the limitations of having to support backward compatibility. Yet, they are still standing, and RISC's advantages are very small in real terms

          HAH! the P6 arch is RISC! There are really only a few instructions that a current P2/P3 processor completes fast; it's optimized for the most common cases (nothing wrong with that), and all those "left-over" instructions from previous generations of x86 are emulated in microcode. The CORE of all P6 processors is a superscalar RISC.
          • HAH! the P6 arch is RISC!

            Well, that's true and it's not true. There is no doubt that modern Intel design borrowed some tricks from RISC architectures, but RISC itself ("Reduced Instruction Set Computer") refers to making a processor fast by reducing the instruction set in order to gain speed through simplicity of the core. This idea has basically failed. You would think that a simpler architecture would allow much higher clock speeds, but it didn't happen.

            Incidentally, Intel has used microcode since (I think) the 486 (386?). Microcode and RISC instruction sets are two different concepts.

            • Well, that's true and it's not true. There is no doubt that modern Intel design borrowed some tricks from RISC architectures, but RISC itself ("Reduced Instruction Set Computer") refers to making a processor fast by reducing the instruction set in order to gain speed through simplicity of the core.

              It's not just "simplicity" of the core; it also has to do with the relative timings of instructions. Granted, when instructions get really short (5-bit..), the processor is likely to start getting hazards in the pipeline where it can't reshuffle instructions. So a "not-too-short-and-not-too-long" instruction performs best in a traditional pipeline design.

              This idea has basically failed. You would think that a simpler architecture would allow much higher clock speeds, but it didn't happen. Incidentally, Intel has used microcode since (I think) the 486 (386?). Microcode and RISC instruction sets are two different concepts.

              Microcode came with IA32, aka the P6. Microcode and RISC are very closely related in the x86; without microcode the P6 couldn't work as the x86 it is.
              Consider what Intel is doing with the Itanium: it's really a very RISC-ish system; they make the compiler do all the optimizations for them.

            • The x86 has been using microcode for a LONG time to do CISC instructions (there isn't any easy way otherwise); however, the core itself that executes these microinstructions was moved to RISC in the Pentium Pro. I'm not sure how dependent any of the hardware is on this instruction set, but it may be possible to have it interpret other machine codes that better use the internal features (tons of registers, 64-bit integers) with a microcode update.
            • When they say "Reduced Instruction Set Computer", they don't mean that the number of instructions has been reduced. It is a very common misnomer.

              A better name would be "Optimised ISC" or "Simplified ISC" etc. x86 is horribly restrictive, with nasty modes of execution and cruft that descends from an old technology constrained by the hardware limits of 1976! The Alpha was designed in the 90's for the future, and has no cruft or limitations in the ISA (Instruction Set Architecture) created by hardware limitations.

              I don't think anyone bemoans the lack of ADD R1, (R2) or whatever instructions (an add that requires a memory access at the address held in R2 to get the value to add). Also the lack of things like "POLY" (DEC VAX instruction) are not to be bemoaned.

              Before saying that RISC is something it isn't, go and read about it, and preferably do a course at university about computer architecture.

          • What would be great is if Intel exposed their RISC instructions so that you could run it like a true RISC. Maybe there are already secret backdoors which would allow you to do this - some instruction to make it jump into RISC mode. Anyone got any ideas on this?

            P
        • Since when does technical superiority mean that it succeeds? RISC architecture is far superior to x86; look at the performance of the Mac compared to Intel.

          But, superior hardware doesn't mean that it wins. Apple made a lot of choices that kept it from beating out Intel. While those choices hurt them in business, they helped to make the hardware superior.

          Or, for an example that is very popular here, Windows vs. Linux. Which is technically superior and which is most commonly used?

          RISC kicks Intel's arse in performance. Cost is the problem.
          • RISC architecture is far superior to x86; look at the performance of the Mac compared to Intel.

            Actually, that's a very good thing to look at. Clock-for-clock, the Power architecture is only about 20% faster than Intel. Of course, nothing lies like benchmarks, but that appears to be about the average case.

            Or, for an example that is very popular here, Windows vs. Linux. Which is technically superior and which is most commonly used?

            Depends on what you define as "technically superior". If you are talking about object integration with the operating system, Windows blows Linux (and Unix) out of the water. The flexibility of objects in Windows is its greatest strength. On the other hand, if you are talking about architecture, Unix is (possibly) superior primarily because of the very isolated nature of its components. The latter is also why Unix is generally more stable than Windows.

        • by Christopher Thomas ( 11717 ) on Tuesday November 06, 2001 @12:37AM (#2526252)
          While there is no doubt that there is lot of cruft in the x86, you have to give Intel credit for getting way more performance out of it than anyone thought they wood. I remember back in the early 90s everyone kept talking about how RISC was going to kick Intel's ass for these very reasons: they would never be able to overcome the limitations of having to support backward compatibility. Yet, they are still standing, and RISC's advantages are very small in real terms.

          You should probably doublecheck your sources, as they seem to have misinformed you on a couple of points.

          Firstly, the past several generations _are_ RISC chips, with a wrapper around them that translates x86 instructions. This is why Intel chips have more decode stages in the pipeline than any clean architecture would (and why they were so eager to use a trace cache in the Itanium - among other things, it lets them skip the decode stages for instruction batches the processor has seen recently).

          Secondly, there is a *huge* performance difference in practice between RISC and CISC architectures, for the simple reason that you can't pipeline CISC processors. You have instructions that do wildly varying amounts of work, taking wildly varying amounts of time to do it, sometimes without the total execution time being known (like the "loop" and "rep [foo]" instructions). Pipelining requires an instruction set with instructions that take roughly the same amount of time and that share many steps in common between instructions. RISC neatly provides all of this.

          You can partially pipeline a CISC machine by only pipelining some types of instruction - heck, even a RISC machine will need to special-case things like divide operations - but pipelining is far, far more effective with a RISC architecture.

          This was one more nail in the coffin of CISC cores (there are serious hardware and compiler complexity problems too).
      • What exactly is "REAL WORK???" Last time I checked the people in the GFX dept where I worked did real work and they do most of it in Photoshop...
      • Though I like Alphas a lot, one problem with them is they *don't* implement IEEE floating point natively. There are enough differences that porting math code to them is a pain.

        One obvious problem is that divide by zero causes a seg fault. Lots of code I have does things like:

        {double A = B/C; if (C!=0) do_something(A);}

        The fact that I divided by zero is irrelevant because the result is ignored later. Finding these and rewriting them is a major pain in the ass.

        You can compile with -mieee to get pretty good emulation, but that turns off all the parallel pipelines and slows things by 15% or so.

      • For those folk who continue the RISC vs CISC debate: you'll find that both design mentalities have been munged together to the point where modern CPUs show qualities from both camps. The best description comes from this piece on Ars Technica.... It's well worth the read.

        Article [arstechnica.com]
    • How ironic: I have no tolerance for people who claim to be smart yet can't spell.
    • Ummm.... (Score:3, Informative)

      by fingal ( 49160 )

      Would you care to have a read of here [microway.com] and then explain your car analogy again?

    • > X86 machines are sleek and smoothe like a Porche

      What are you smoking, dude? The x86 architecture is 20+ year old crap that should have been buried 10 years ago in favor of RISC architectures. RISC is sleek and smooth, not x86. phew.
    • Alpha's are like 1960's muscle cars. They're fast, but only because of the brute force under the hood. X86 machines are sleek and smoothe like a Porche because they use brilliant engineering and specialised extensions like SSE. I'll take the Porche over the outdated horsepower any day.
      Comparisons like that are pointless when the only real factor is speed/$. It makes no difference when you can pay 25% of the price for same performance.

      If you need 64-bit integers, huge amounts of RAM, very high-precision FP or large numbers of processors you'll want to avoid x86. But for the vast majority of applications there's little reason to go with anything else.

      A bit OT:
      I think the reason so many people are infatuated with Alpha is that the assembly code is 'clean' and the processor doesn't have backwards compatibility modes that require a little thinking to get around. The truth is, none of that matters when you need to get a job done.
    • Brilliant engineering? Comparing Intel processors to a fine Porsche? Are you stark raving mad? Intel chips are a mega-kludge harkening back to 16-bit DOS, with 20-bit segmented addresses, etc. That stuff has been glossed over but it's there, make no mistake. The current line is a kludge of a kludge of a kludge... a look at a "hello world" assembly program would tell you that.

      The user interface (machine/assembly language) to the intel monstrosity is so convoluted, arcane, and god-awful that very few CS/EE programs teach it. For being the most common PC processor that's pretty bad. If you want a Porche of a CISC chip, look at the motorola MC68040, or in the RISC universe, the IBM Power4 or Alpha.

      Intel chips are cheap, fast and just good enough, but they're Ford Tauruses, not Porches.
    • I honestly can't tell. I assume that it's sarcasm, but it is written somewhat like a troll, too.

      Just curious
    • Almost all modern apps require x-86 extensions such as MMX, SSE, and 3dNow, which Alphas do not support.

      Modern apps like Mozilla, GCC, Linux? Heck no. These run fine on Alpha.

      These "extensions" are mostly workarounds for deficient floating-point in the x86. They are very specific to x86 and irrelevant to any other ISA. (There are also vector extensions, which are supported on Alpha EV6 and up as the MVI extension.)

      X86 machines are sleek and smoothe like a Porche because they use brilliant engineering and specialised extensions like SSE.

      Boy have you been brainwashed. x86 has a butt-ugly ISA dating from the 1970's that only its mother could love. Alpha, PPC and SPARC (to some extent) are all redesigns that cure a lot of the problems in x86.

      Intel's 32-bit chips continue to thrive due to marketing, not technology.

    • by Svartalf ( 2997 ) on Monday November 05, 2001 @08:22PM (#2525396) Homepage
      Almost all modern apps require hacks like MMX and 3DNow? (Realize that while you're using either of those, you can't use the floating point pipeline because it uses some of the same paths as the SIMD engine. Also note that it costs cycles to switch back and forth and if you're not doing LOTS of matrix math, you're not going to use them- you're going to use hand tuned floating point/integer code.) How many really, really use them? Not a lot of them, in reality.

      x86 has hacks to get SIMD instructions, limited register space, weaker floating point, etc. AltiVec is a more rational scheme, PPC CPUs have much more useful register sets and rational instruction sets, and their floating point is nearly twice as fast.

      Hacks do not a "Porche" make. To use your analogy completely, the x86 is a Mustang GT to the PPC's Porche. Both will get you there. Both go fast- but one is higher performance and handles better.
    • by red_dragon ( 1761 ) on Monday November 05, 2001 @08:26PM (#2525419) Homepage

      I feel like I'm feeding the troll here, but anyway...

      Almost all modern apps require x-86 extensions such as MMX, SSE, and 3dNow,...

      You'd only worry about this if you don't have access to your software's source. Besides, why should a non-x86 architecture support x86 features?

      ... which Alphas do not support.

      However, the Alpha, in keeping with the "pure RISC" philosophy, has MVI (Motion Video Instructions), which consists of a "whopping" 4 instructions (really).

      Only certain flavors of Unix will run on an Alpha, while Almost all Unices, Windows, DOS, BSD, OS/2 etc. are supported by x86 based processors.

      Could you please specify which "certain flavors" of Unix run on the Alpha? Where do you get the impression that x86 boxes are supported by "almost all Unices"? Last time I checked, I could not run IRIX, Tru64, or AIX on an x86 PC (there used to be an x86 version of AIX, but those days are long gone). Windows definitely did run on the Alpha (up to NT 4.0). FreeBSD, NetBSD, and OpenBSD also run on it. And bringing up DOS, OS/2, or OpenVMS is not worth the trouble, as they only run on a single platform (Yes, I know about OS/2 on PPC, but did anyone pay attention? NT/Alpha got a lot more usage than that).

    • Alpha's are like 1960's muscle cars. They're fast, but only because of the brute force under the hood. X86 machines are sleek and smoothe like a Porche because they use brilliant engineering and specialised extensions like SSE.

      Surely you jest?

      Alphas are the Porsches. The x86 architecture is a horribly ugly mass of cancerous protrusions and cruft that still has to perform like an 8080. All those extended instructions like MMX etc. are done better by the Alpha.

      -atrowe: Card-carrying Mensa member. I have no toleranse for stupidity.
      Poor spelling is just fine though.

    • It's posts like this one that makes me wish there was a "+1, Troll" option. If you want to *really* reel them in, you should post that one to comp.os.vms or comp.sys.dec.

    • If Alphas are so outdated, why is it that the fastest computer on the face of the earth is a cluster of Alphas? The Human Genome Project uses GS-320 clusters running Tru64 to crunch all their numbers.

      CISC was a good idea when memory was expensive and access to peripherals and even RAM was slow - now none of that is a factor. Alpha was designed as the modern, RISC replacement to the dated CISC design of the VAX. The x86 is also based on that outdated CISC design.

      MMX, SIMD (KNI), and 3DNow that you speak of are super-instructions - hardware designed to do the work software should. While they are faster, few applications make use of them (RC5 loves them...)

      Alpha does not support as many operating systems as the PC (largely because x86 has been cheaper for so long) but it supports better OSes - Tru64 (your commercial Unix), OpenVMS ("unhackable" by DefCon standards), Linux, FreeBSD, and NT4 SP3. They were never designed to be cheap, mass market machines, they are big iron - except by that standard they are super fast and dirt cheap.

      Perhaps you should reexamine your perception of what is and isn't outdated, limited abandonware.

      My original comment on /. when the Compaq --> Intel transfer was announced:
      http://slashdot.org/comments.pl?sid=12932&cid=130007

      My website comment on the same topic:
      http://eisenschmidt.org/jweisen/alpha.html
    • Furthermore, Alphas are limited in the software platforms on which they support. Only certain flavors of Unix will run on an Alpha

      Would it shock you to learn that Windows NT was ported to alpha??? I thought so...

    • Talk about flame-bait... Geez.

      Anyhow. The x86 extensions are there to make up for its shortcomings, not to make it better than any other architecture - just to keep it as close as possible.

      x86s ARE very much like a Porche... They get a lot of press, and are very popular, but they certainly aren't the fastest or the best. The Alpha would be more like a Viper... Not very popular, gets less press, and beats the Porche at every turn.

      Finally, saying you are going to stick with something because of its installed base is why most people stick with Windows... It's not really any good, it's just so popular that anything will run on it.
  • by leandrod ( 17766 ) <l AT dutras DOT org> on Monday November 05, 2001 @07:59PM (#2525295) Homepage Journal
    Fact one: what distinguishes Alpha from IPF is not some "pieces" that could be copied over, but a superior design and architecture. In order to take advantage of that, Intel would have to dump IPF and start over, effectively selling Alpha under a different name. That would be an unthinkable about-face.

    There is a very nice Alpha-EPIC comparison white paper from Digital; a shame I don't have the URL.

    Fact two: the deal just preceded the HP-Compaq one. It's a marchitecture thing.
  • by Angry Black Man ( 533969 ) <vverysmartmanNO@SPAMhotmail.com> on Monday November 05, 2001 @08:02PM (#2525306) Homepage
    The present CPU employed in Compaq machines like the AlphaServer SC and the Wildfire, and in various cluster systems, is the Alpha EV67 processor. The previous chip shipped with a clock speed ranging from 666-833 MHz. IIRC, the EV67 was able to deliver up to two floating-point results per clock cycle. The load/store units could load 16 B/cycle, while the store bandwidth is slightly smaller: 10.6 B/cycle. The bandwidth to memory is 5.3 B/cycle; however, the type of memory determines the actual bandwidth through the bank cycle time of the memory. We were expecting a scaled-up version of this chip named the EV68. It was projected to have an 833 MHz clock speed. I believe that this is perhaps some version of it.

    The density used is 0.18 micron instead of 0.25, which enables the location of a 1.5 MB secondary cache on chip. The largest difference will be that there will be 4 dual channels from the chip to interconnect it with neighbouring chips at a bandwidth of 1.6 GB/s per single channel, for what Compaq has called "seamless SMP processing". The path to memory is implemented by 4x5 Rambus links, as the systems will be fitted with Rambus memory. The direct I/O dual link from the chip also has a bandwidth of 1.6 GB/s. Theoretically the chip could run at speeds upward of 1 GHz.

    I know that the Alpha 21264B is based loosely on the EV line of chips (more specifically the 67 and 68), can anybody further verify this with some more details? Thanks.
    • 21264 is based loosely on EV67 and 68, even EV6. IIRC, 21264B is based on EV68. Check out its reference manual [compaq.com].

    • hiya,

      ev4 = 21064
      ev45 = 21064A
      ev5 = 21164
      ev56 = 21164A
      ev6 = 21264
      ev67 = 21264A

      and so on. The ev's are the nicknames, the 21's are the release names. The "two-digit" ev's are minor modifications on the base architecture. For example, the 21064A increased the on-chip cache from 8kB to 16kB. The 21164A I think was just a process shrink, but not sure.

      nick
    • I've played with one of these boards before...really nice performance thanks to DDR:-)

      At the time, we were still using the Irongate chipset configs in the kernel (since the 761 is aka Irongate II) and it wasn't as stable as I hear that it has become.

      As for the processor, they were using EV68 833MHz Alpha on the board (same exact processor as in the CS-20, fyi). I'm pretty sure that they haven't varied from this since that is what they were halfway through QA with

    • The poster is close but has EV68 and EV7 confused.

      The internal code names EV6, EV67, EV68 correspond respectively to external part numbers 21264, 21264A, and (I'm 99% sure) 21264B. I say "99% sure" because I left Compaq 2 months ago and haven't checked with contacts there, but 21264B would be the natural part number for EV68.

      EV68 is mostly a process shrink of EV67, but I think with some bug fixes and minor improvements.

      EV7, which should be released as 21364, uses a core based on EV67/EV68, but has an all-new memory subsystem with multiple RAMBUS channels for fast memory access and for building grid-structured multiprocessors. That's what the parent to this article was talking about, but it's not in EV68. EV7 is still under development, very far along but not quite done yet, and Compaq is committed to finishing it and releasing a generation of servers using it, according to what was announced at the time of the Intel deal.

      EV8 was going to be an all new core with simultaneous multithreading, reusing the EV7 memory subsystem. It would have been released as the 21464. EV8 was cancelled with the sale of the Alpha IP and engineering group to Intel. My friends in the EV8 group are at Intel already, while the Alpha engineers who were on EV68 and EV7 are still at Compaq for the time being.

      I don't have any contacts at Samsung/API, so I'm not sure exactly what they're doing. But it would be quite weird if they released something called 21264B that was anything other than an EV68...
  • Dual boot? (Score:3, Interesting)

    by rice_burners_suck ( 243660 ) on Monday November 05, 2001 @08:04PM (#2525312)

    Heh heh... I'd like to run FreeBSD on it. IIRC, it supports the Alpha.

  • I think 21264B is the beefed up version with 0.18 Micron. You should look at the specs: here [geek.com], while 21264 is here [microprocessor.sscc.ru]. You can then compare it side by side.

  • hidden details (Score:4, Insightful)

    by JDizzy ( 85499 ) on Monday November 05, 2001 @08:13PM (#2525351) Homepage Journal
    You have to go to the link, and make sure to look at the large image near the bottom.

    The image shows the 32-bit PCI bus only running at 33 MHz! I mean... I own a DIGITAL AlphaStation 4/233 with a 33 MHz PCI bus, and that box is from '97.

    Just guessing from what I saw on the page... the kit is a strange amalgamation of old and new technology. The system has a 133 MHz memory bus (btw, nothing new for Alpha), but not the PCI bus.

    So... it is 64 bits... but it isn't that special either.
    • 33 MHz... that's it. You can't make PCI-32 go any faster than this, or you'll end up with a frozen system, malfunctioning cards, and other kinds of bizarre things. Remember the old overclocked 12 MHz PC-XT??? Well, I do. Corrupted files were the least of our problems when we had to deal with overclocked ISA buses.

      If you want a faster PCI bus you'll have to search for a PCI-64 motherboard. These boards have PCI slots with a 64-bit data path running at 66 MHz, but they require special cards to take advantage of the extra speed/bits. If you attach a normal PCI card to a PCI-64 slot, it'll work with a 32-bit data path at 33 MHz.

      Also, forget about the memory clock. There's a north bridge controller between the memory and the CPU. Take an overclockable Athlon motherboard like the Soyo Dragon as an example. You can boost the CPU front-side bus way beyond 133 MHz, but the memory clock will remain at 133 MHz.
      • Incorrect. You can and do overclock the memory. Most often the FSB and memory speeds are statically linked. In some BIOS revisions, you may be able to set the FSB as the memory clock + 33 MHz. This allows people to use 133 MHz FSB processors with slower PC100 memory.

        Ever wonder why overclockers are eagerly awaiting the widespread release of PC2700 DDR-SDRAM? Because it can be a bitch to overclock your PC2100 memory past a certain point.

        So, your point is basically totally wrong.

        Oh, and don't forget that you can also run 64-bit/33 MHz PCI cards. Nicely enough, most of these cards are backwards compatible with older buses. I have a newer 3Com Gigabit Ethernet card that supports 32-bit/64-bit and 33 MHz/66 MHz/133 MHz. Hell, I don't even know if you can get a motherboard with PCI-X yet, but the damn NIC already supports it.

        Anyway, I don't see how this has anything to do with the original poster's point. He may have worded it poorly, but it isn't that hard to figure out:

        Not having at least 64-bit/33 MHz PCI in a newer server-oriented board is a major flaw. 32-bit/33 MHz PCI is quickly becoming stretched thin by the likes of gigabit Ethernet, Ultra160 and now U320 SCSI.

        Hell, I even stress the PCI bus in my workstation systems at times. Thankfully I now have 64-bit/66 MHz PCI in my workstations. Thank you Tyan!
    • The image shows the 32bit pci bus only running at 33Mhz!

      Yeah, and like the submitter, I have a PC164 too... even it has 64-bit 33MHz PCI slots. I guess depending on what you want to do with the thing, it might not matter, but this seems like a really unbalanced board. Good for raw number crunching, but not so good as a database server (or anything else that wants a lot of disk I/O).

  • is this part of the specifications:

    "2MB of flash ROM
    - SRM Console for Linux Install"

    This means a REAL setup, with a command prompt, just like a REAL server should have (think of Sun, PA-RISC, etc.), not those crappy menus x86 machines have.

    Way to go Samsung. Add 2 or 3 more PCI slots and it'll be even better.

    Oh, and did you notice the AMD 761 north bridge? Nothing strange here. The Athlon shares the same bus with the Alpha. AMD licensed it a long time ago, so using an AMD chipset makes perfect sense.
    • Compared to the hacks needed to get Linux booted via ARC/AlphaBIOS, having the SRM console sure is nice, but all it really does is load the first sector(s) of a (possibly arbitrary) disk. This is not much different from how PCs boot their OS's. Contrast this with OpenFirmware-compliant systems, where the firmware can load a kernel directly from a partition.
      • SRM usually loads a secondary bootstrap loader, e.g. aboot, which understands ext2 filesystems. So this isn't a real limitation.

        Reasons SRM is better than a PC BIOS:

        1) It understands a serial console
        2) It can boot over a network (using bootp/tftp)
        3) SRM has no artificial limitations on memory size (as in x86 real mode).

        I have an Alpha 164LX motherboard in a case with Ethernet and memory... no floppy, hard disk, video, keyboard or mouse. Still, it can boot over the network and run web servers, etc.

        Of course Open Firmware can do the same...
  • Clock speed question (Score:3, Interesting)

    by Michael Woodhams ( 112247 ) on Monday November 05, 2001 @08:21PM (#2525390) Journal
    I remember about 8 years ago, the Pentium was just released with a maximum clock of 100MHz. At the same time, Alpha chips had clock speeds of 275MHz. How come Intel chips have increased clock speed by a factor of 20 while Alpha have increased by a factor of 3?

    (Yes, I know that performance depends on much more than just clock speed.)
    • Intel has boatloads of cash. Current Pentiums get their speed with a 20, count 'em, 20-stage pipeline, while most RISC processors use 4-8 stages. Pipelining is a great way to get speed, but very difficult to get working and keep working, particularly after branches and pipeline stalls/flushes, with interrupt support being the messiest part. Intel takes something like 3 years to rev a 1.0GHz design into 1.2GHz and has multiple teams working at once, i.e. one team finishing 2.0GHz, one 75% done for 2.2, one 50% done for 2.5, one 25% done for 2.7 and one just getting started for 3.0. I don't know how many teams they have, but they are not only pipelining the processor but also the development teams. While others can afford only 1 design team, Intel can afford 10 (a wild guess), and Intel knows that it will continue to dominate, so the large engineering budget really isn't risky at all. Money talks. PowerPC, MIPS, ARM and Alpha are better technologies, but money beats technology most of the time. (MS vs Linux)
    • >How come Intel chips have increased clock speed by a factor of 20 while Alpha have increased by a factor of 3?

      Without going too technical: Intel designed its Pentium 4 to be highly scalable in clock speed (but look at how poorly it performs MHz-for-MHz compared with AMD). Alpha had a good design from the start and they've built around it; Intel went for the marketing hype machine.

      Also keep in mind that for over 2 years, not much work or funding has been put into Alpha technology... basically it's the same chip with more cache: shrink the die to increase clock speed and stick on yet more cache, nothing much, nothing new. Intel did the same with the Pentium II/III... but in the same timeframe, Intel pushed a lot of R&D and $$$ to pump out its next-generation processors. There's NO DOUBT that with the same energy, you'd probably have a 21464 making the IA-64 a bigger joke than it is right now.

      The thing that pisses me off the most in this story is that I come from an Amiga background. I had a lot of respect for both Alpha and MIPS back then (remember the Raptor ScreamerNet render farm (MIPS-based) that you'd stick near your Amiga Toaster system, and it would render 25 to 40 times faster? Or the first Lightwave port to Alpha, screaming at over 40 times the speed of my poor Amiga 4000?). I knew that if my platform eventually died, I'd have a supersweet alternative.

      But what happened? Microsoft pulled the plug on Windows 2000 on the Alpha. OK, no problem, there are still some Unix alternatives (but kiss goodbye to seeing Alpha as a powerful Windows workstation). And as if that wasn't enough, Compaq bought it, waited, and left it to die... just like Gateway did with the Amiga: wait till the technology gets too old (the funny thing is that even 2 years later the Alpha CPU is still good and can be compared to current systems... 2 years... think about it).

      Anyway, the treatment the Alpha got is so unfair. It went the same way MIPS went, the same way the Amiga went, and it's proof that it's not the best technology that wins. When I was still dreaming about seeing Win2K on Alpha, and Compaq released its workstations shortly after buying DEC, I knew there was something wrong, because they would NEVER compare to Intel, NEVER. But NEVER did I think that one day the potential Intel competitor would get bought by... INTEL.

      There goes my dream of seeing Intel shove 64-bit technology into the mainstream, and of normal people and general benchmark sites noticing: "hey, speaking of 64 bits, there's that Alpha processor that is 3 times faster... woah, 3 times?!? It's worth checking out!!! It might be the next AMD!"

      It is... (even if it's pre-AMD) only geeks like us, and some power users/scientists, noticed.
    • At its beginning the Alpha was a pure speed demon: a very "simple" design (in-order, simple dispatching) but with a very high clock.

      Then its design became more of a brainiac; now it is an out-of-order design: they chose to increase instruction-level parallelism over frequency.

      So the Alpha's lead in clock speed shrank...
    • I'm not particularly convinced by the answers so far. Yes, RISC does more with a MHz than CISC - but 8 years ago, Alpha had a 2.5 times advantage in clock speed over Intel, an unheard of clock speed for the time. If it was so good at high clock speeds then, why is the design mediocre at clock speed now? Surely even a very modest R&D effort could increase clock speed by more than a factor of 3 given 8 years of advances in semiconductor fabrication technology.

      George Walker Bush says:
      "clock speed has been more important for the Pentium ... The alpha ... pure clock speed has not been such a priority."

      The question is not 'why is it 2.5 times slower now', it is 'why is it 2.5 times slower now given that it was 2.5 times faster 8 years ago.' (I realize 800 MHz is more than respectable for a RISC chip - I've used top-of-the-line SGI and Sun machines with fewer MHz than this (although with 20 to 32 processors.))

      Pagercam2 writes:
      "Intel has boat loads of cash..."

      tcc writes:
      "over 2 years, not much work or funding has been put on Alpha ... same chip with more cache, reducing die size to increase clock speed"

      I would have thought the 'simple' changes tcc describes alone would allow for more than a factor of 3 in 8 years.

      What was the source of Alpha's big clock speed advantage 8 years ago, and why does this advantage no longer apply today?
  • by HalfFlat ( 121672 ) on Monday November 05, 2001 @08:35PM (#2525465)

    An older board - the UP2000 - is a dual processor SDRAM (not DDR) based Alpha motherboard, which has 6 PCI slots, two of which are 64-bit.

    This new board has DDR ram, but only 32-bit PCI, and then only three slots. While nice and all - DDR is good, and of course it's for the Alpha 21264B, not 21264A - this does seem a bit of a step backwards in the IO stakes. Especially when it's noted that the UP2000 has onboard Ultra-2 SCSI as well.

    Perhaps this board was originally targeted at the 'lower-end' workstation segment? Does anyone know if a more server-oriented 21264B board is on the way? It seems sadly unlikely given the current circumstances.

    If one wants to have 64-bit multiprocessing on a budget, what are the current alternatives?

    • If one wants to have 64-bit multiprocessing on a budget, what are the current alternatives?

      At this point, 64-bit multiprocessing on a budget is an oxymoron.
    • I thought DDR RAM was a bit contentious? With patents and such - or am I confused?
    • An older board - the UP2000 - is a dual processor SDRAM (not DDR) based Alpha motherboard, which has 6 PCI slots, two of which are 64-bit.

      The UP1500 is the successor to the UP1000/UP1100, both of which were also based on AMD chipsets. And don't be fooled by the DDR on the UP1500. Compared with the crossbar switches used on the "real" alpha motherboards (e.g. the UP2000), the memory subsystem of the amd761, nice as it is, can't hold a candle. If you're going to confine yourself to a PC's memory architecture, you might as well drop a couple of 1.6GHz Athlon MPs in it.

      I'd go dig up the links (check www.alpha-processor.com and www.microway.com), but that would just make me want to drop $15k I don't have on a new toy...

  • Why bother (Score:3, Insightful)

    by pagercam2 ( 533686 ) on Monday November 05, 2001 @08:39PM (#2525477)
    The Alpha was a good architecture for its time, but with 2+GHz Pentiums I can't get excited about a 64-bit workstation. Especially from Samsung, who to the best of my knowledge has never been a player in the workstation market. Workstations are pretty much gone as a market: Sun seems to be the only one staying afloat, SGI is dead, HP has sold its soul to Intel. The x86 architecture isn't that great, but they've got the bucks to continue development and beat other, better architectures by the sheer size of their war chest. I hate to admit it, but good engineering often loses to strong marketing (kinda makes you want to cry); that's the unfortunate truth. I'm not sure IA-64 will do that well; I think it's going to be a tough transition. Intel will probably be forced to make more generations of x86, and AMD seems to be beating them using a lower clock rate, so it may just be a good time to invest in AMD. It's about time some revolutionary architecture comes in and shakes things up. Things like StrongARM are a step in the right direction, but not really competitive for the desktop. Transmeta has great technology, but why buy a simulation when you can afford the real thing? Intel has improved its technology by borrowing from Transmeta, so Intel is getting ahead, and Transmeta, without the huge sums of cash, is falling further and further behind.
    • The main reason I am interested in 64-bit platforms today is to take advantage of the bigger address space... People working on high-end graphics and scientific simulations are already needing several GB to run one program. Yes, you can shove up to 64GB in certain x86 machines, but even with 1GB you start running into problems, because you have to enable highmem for the kernel to be able to access all of it, and highmem is probably one of the least stable parts of the Linux kernel today... And then when you reach 3.5GB or so (per process) you are just plain out of address space, and it's game over.

      So, in the near future I will definitely be in the market for some 64-bit machines. The problem is, how to put them together cheaply? I can get a top-of-the-line dual Athlon for $2000 these days... But modern Alphas and SPARCs are still in the several-$k range, and basic IA-64 machines are going for $10,000+ today. Hopefully in the next year or two we will start to see much cheaper 64-bit platforms... (I'm mostly counting on x86-64. I think AMD has the right idea to make an incremental enhancement to x86 without throwing it all out the window; and hopefully their price points won't be as astronomical as the other 64-bit options).
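A quick way to see which side of that ceiling a given machine sits on is to check the native pointer width. This is just an illustrative sketch (Python, standard library only); the actual per-process limit also depends on the OS's user/kernel address split:

```python
import struct

# 'P' is a native pointer: 4 bytes on 32-bit x86, 8 bytes on Alpha or x86-64.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit user space")

# Upper bound on distinct virtual addresses a process can even name.
# On 32-bit Linux the practical per-process limit is nearer 3GB,
# since the kernel reserves the top of the address space.
print(f"addressable bytes: {2 ** bits:,}")
```

On a 64-bit platform the second number is so large that, as the parent says, the real constraints become RAM and chipset, not address space.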
  • I'm coming to have a lot of respect for Samsung lately, what with their flat panels with integrated TV tuners, HDTV ready flat panels, their nice cheap 770 TFT [samsungelectronics.com] (of which I have several tied together with a Matrox G200MMS card), their Yepp MP3 player [samsungyepp.com] (of which I have one)(it even plays my cdex/lame encoded vbr mp3s), and a host of other cool products, not to mention a nice website. (menu: who we are, what we sell, where we are. Just what we need to know.)

    This Alpha board is another in their seemingly endless line of cheap but good products, not cutting edge like IBM or Sony, but taking existing technology and getting it to the masses at a reasonable price and quality.

    (/jonbrewer thinks he'll head to etrade and put his money where his mouth is.)
  • From here: http://www.theinquirer.net/02040103.htm [theinquirer.net]

    Samsung Alpha board suffers from DDR famine

    And fails to deliver on 1GHz Alpha

    By Pete Sherriff , 31 March 2001

    THE JOINT VENTURE which produces mobos for the DEC (sorry Compaq) Alpha microprocessor is suffering from a severe shortage of DDR cache memory, according to sources acutely close to the acute famine.

    The UP 1500 Alpha, which supports a 21264 Alpha at up to 800MHz speed and comes with 4MB or 8MB of level two DDR cache, is intended to arrive in July, with typical systems costing around $4,500.

    But a shortage of cache for the processor is hampering production, leaving system integrators truly "up in arms" and Samsung embarrassed at the short-fall.
  • pant. pant. pant. (Score:2, Interesting)

    by GISboy ( 533907 )
    If you are hungry for knowledge, Slashdot is an all-you-can-eat smorgasbord... woof!

    (Still scanning all the pdfs)

    Man 'o man this brings back memories.

    I remember a discussion on architecture a while back when I was a newbie about which was better; the invariable "CISC vs RISC" discussion that degenerated into a flame war of mac vs pc.

    (being a newbie at the time, that was an introduction to what a flame war was. Glad I had the sense to lurk and listen.)

    As the discussion raged on with benchmarks of floating point and integer, FLOPS, expandability, usability and so forth, an Alpha user spoke up.

    I forget the exact words but it went something like this:

    "I've been reading this thread with great amusement for some time, because *everyone* in it points to a single benchmark run one at a time. On the machine I am posting from I run a NNTP server that transfers about 3G a day, an FTP server that does even more serving internally and externally, I'm a mirror for (I forget who he said) and, keep in mind, before posting to this forum, I was playing Quake @ 50fps. When you can do half of what I am doing on your pc's and mac's or even *touch* my frame rate, then we'll talk."

    To say the discussion ended abruptly would be an understatement.
    As a point of reference it was about 1994 or so and the Pentium was maybe at the 100MHz mark. 3G of data when 500M was an "incredible" amount of space. Getting Quake up to 30fps on your average PC was darn near impossible for mere mortals (much less a newbie such as myself at the time).

    After that, well, Alphas have always been awe inspiring because then, like now (reading the specs) these processors are beasts!

    And SMP systems that are becoming common today, well, Alphas and Suns were the only ones I was aware of (at the time) capable of such things...or were more common than their mac/pc counterparts.

    Aw, man, I've gone on long enough, sorry about that.

    /me wipes away a tear. {sniff}

    Thanks to all the posters of the specs, it is going to be a few days until I can wipe this stupid grin off my face.

    Cheers,

    GISboy
  • The Alpha stands alone as the fastest floating point processor for the money (bang for the buck). With the Compaq cc and fortran compilers, this beast blows away anything else. For scientific and engineering applications, none better. Of course, I recall Linus said years ago the future for Linux was in games not calculations. I think he has underestimated the calculations side.
  • page size (Score:2, Interesting)

    by Mike Hicks ( 244 )
    This is a little thing that people don't talk about much. Of course, it's quite possible that it doesn't deserve to be talked about much.

    Memory management is becoming more difficult to do efficiently these days due to the fact that the most commonly used processors (Intel-based) use a memory page size of 4 kilobytes. Each chunk of 4kB must be managed by the operating system. This is the unit of memory used for a great many operations. Swap space is also referred to as the `paging area', where little-used memory pages of running programs get sent.

    Of course, 4kB isn't the only page size that Intel CPUs support -- they can also handle 4MB pages (a little large)! 64-bit successors to the Intel x86 platform (both x86-64 and IA64) only support these same page sizes.

    Other CPUs can handle different page sizes. I think SPARCs generally have 32kB pages. Alphas apparently do 8kB. Many processors have variable page sizes as well.

    While I doubt the page-size issue is going to cause anything to completely keel over anytime soon, I do think that more flexibility could make memory management more efficient and increase performance.
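Portable code shouldn't hard-code the 4kB assumption the parent describes; the page size can be queried at runtime. A minimal sketch (Python, standard library, POSIX-ish systems assumed):

```python
import mmap
import os

# The kernel's memory-management granularity: 4096 on most x86 boxes,
# 8192 on Alpha, and other values elsewhere.
page = mmap.PAGESIZE
print("page size:", page, "bytes")

# os.sysconf reports the same figure through the POSIX interface.
print("sysconf says:", os.sysconf("SC_PAGE_SIZE"), "bytes")

# Whatever the architecture, the page size is a power of two.
assert page > 0 and page & (page - 1) == 0
```

The C equivalent is `sysconf(_SC_PAGESIZE)`; anything that rounds buffers or file mappings to page boundaries should use the queried value rather than a 4096 literal.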
  • don't get excited... (Score:2, Informative)

    by rbw ( 3143 )
    okay, let's review...

    The Inquirer has a story [theinquirer.net] posted March 31, 2001 about the UP1500. it says the product "is intended to arrive in July". it is now November.

    these mailing list posts [freebsd.org] (including some by yours truly) show that the Samsung page in question [samsungelectronics.com] has been around since at least April 2001, and so has a page [samsungelectronics.com] which has listed the UP1500 as "Under Development" ever since.

    now, i'm no expert, but i think it is fairly safe to call this vaporware [m-w.com]. maybe the motherboard will come out at some point, but for right now, it's silly to treat it as news [m-w.com].

    (i will refrain from making commentary about how certain news *cough* organizations should check their sources before posting stories. oops! i just did.)
  • I think I will stick with my Tyan Tiger, with 2 x 1.2GHz Athlons. $500 for the board, 2 processors and 256MB of RAM; life does not get better than this.
  • It is worth knowing that Microway will continue producing the 264DP motherboard that API dropped a while back. Thus Samsung isn't the only source for Alpha motherboards. And the 264DP rocks:

    *) Dual capable
    *) Dual memory busses, *each* with 2.6 GB/sec
    *) 4GB memory max (I wish this were higher)
    *) Dual 64-bit PCI buses, don't know the speed
    *) Built-in Adaptec SCSI, usb, etc. FWIW, Microway seems to prefer adding an Intraserver PCI SCSI controller (Symbios based) and avoiding the Adaptec controller.

    These motherboards can really push data. Systems at 500MHz and 667MHz built around these boards crush x86 cpus at twice or thrice their clock speed. These systems are somewhat expensive, but they're worth every penny. You just can't get similar floating point performance or memory bandwidth from x86 machines, even with the new ServerWorks chipsets.

    Because the Alphas are a 64-bit architecture, your per-process memory space is huge. You won't get above 3GB virtual memory per process on x86 under Linux; I believe NT has a similar or lower limit, and SCO has (had? ;-) an 8GB per-process VM limit. If you want more virtual memory (don't think swap, just virtual memory), you need to fiddle with your own segment/offset layer or similar.

    For what it is worth, we do in-memory data mining and number crunching in our lab. We regularly have processes with 15GB of virtual memory allocated (of course we're not swapping that much; we may be crazy but we're not stupid =-). For these purposes I love the Alphas. I have no knowledge about web serving, database serving, etc, from Alphas.

    -Paul Komarek
    • by Howie ( 4244 )
      somewhat expensive? $15,000 for a 1U dual-21264B node with 256MB and a 9GB drive according to Microway's website. I know it's a specialised market and scaling doesn't work linearly, but you can get a lot of dual P3-1GHz boxes for $15,000. The memory consideration would have to be very important to you.
      • Microway's prices on the website are horribly out of date. At the beginning of this year, we bought two dual 264DP machines with 4GB each in rackmount cases with slides for $13,500 each. I'm sure the "loaded" config is less now.

        That's still expensive, but if you need a *single* *fast* cpu, a bunch of dual P3-1GHz won't do it for you. We need the large virtual memory, and more than that, our code is single-process and single-threaded. We just aren't into clustering, primarily for historical reasons (large, old codebase among others). Other labs might do things differently -- deal with their own memory allocators to span processes, and handle the extreme NUMA-ishness of a cluster. We'd rather put our money up front and save time.

        We've got a bigger machine which is basically a 264DP with 4 cpus and lots of memory banks -- the cpus share the two mem and pci busses. It can take up to 32 1GB dimms, for a total of 32GB of ram. It's a Compaq ES40 Model II.

        Because we run primarily single-process, single-threaded code, we have one or more users per cpu, instead of one or more cpus per user. This also saves administrative costs, because there is less hardware to deal with.

        -Paul Komarek
