Hardware

Intel's New Pentium 4 Chipsets Reviewed

RainDog writes "Intel has released its 845PE and GE chipsets for the Pentium 4 processor, and reviews are hitting the web. The new chipsets officially support DDR333, but are stuck with AGP 4X and ATA/100 support. What's most interesting about these new chipsets is that they're faster than VIA and SiS' latest Pentium 4 offerings, both of which support the faster AGP 8X graphics and ATA/133 disk interfaces. As if that weren't enough, Intel's new "Blue Mountain" motherboard comes on a black PCB with all sorts of multimedia ports and memory timing options. Not bad for the traditionally conservative Intel."
  • by tshoppa ( 513863 ) on Monday October 07, 2002 @07:45AM (#4401925)
    Again, the most prominent, first-mentioned, feature of the Intel reference motherboard is its... Black Color.
    • Of course this is important; on Slashdot in particular. It makes those transparent plexiglass casings all the more attractive. If I could only get glow-in-the-dark IDE cables I'd be set.
    • When the first thing a motherboard review cites is how great it looks, you know we've finally crossed the threshold where extra speed is irrelevant. It's time to back off investment in hardware and put that money into developers. The computers are fast enough; now we need software that is more stable, more secure, and more usable.
      • What do you mean, fast enough? Fast enough for what? There are many things for which they are not fast enough at all. There are many applications of computers that I wish would take at most a minute (and that is just an arbitrary number, because if it did take a minute I would probably wish that it took a second) but with today's computers may take an hour or so. Granted, most people probably don't use 50% of their computer's potential, but there are those who do. Are we supposed to wait until everyone else catches up? While stability and security have nothing to do with the speed of a computer, usability may have a lot to do with it, since with faster computers better AI can be designed that will be better able to respond to the user's needs.
    • Black Cools Faster (Score:1, Informative)

      by Anonymous Coward
      Obviously, as any physics geek knows, these new BLACK motherboards will radiate infrared heat faster, thus meaning cooler computing.
    • Me thinks you don't see the reason BLACK is important.

      I did some early heat dissipation testing on BGA (ball grid array) devices, and more than half the heat is removed through the bottom of the device. How are you going to get rid of the heat? Dissipate it. What's the best color to use when you want to dissipate heat?
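For what it's worth, the physics behind the black-radiates-better claim can be sketched with the Stefan-Boltzmann law. Everything numeric below (board area, temperatures, emissivity values) is an illustrative assumption, not a measurement:

```python
# Rough sketch of net radiative heat transfer from a board-sized surface,
# using the Stefan-Boltzmann law: P = emissivity * sigma * A * (T^4 - T_amb^4).
# All numbers are assumed for illustration.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiated_watts(emissivity, area_m2, t_surface_k, t_ambient_k):
    """Net power radiated by a surface into cooler surroundings."""
    return emissivity * SIGMA * area_m2 * (t_surface_k**4 - t_ambient_k**4)

AREA = 0.05                # roughly an ATX board's area in m^2 (assumed)
HOT, AMBIENT = 330.0, 300.0  # 57C surface, 27C case air (assumed)

dark = radiated_watts(0.95, AREA, HOT, AMBIENT)   # matte dark finish (assumed)
light = radiated_watts(0.80, AREA, HOT, AMBIENT)  # lighter finish (assumed)
print(f"dark: {dark:.1f} W, lighter: {light:.1f} W")
```

Either way it's only a handful of watts; in a real PC, convection off the heatsinks dominates, and visible color says little about infrared emissivity anyway.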
  • by SEWilco ( 27983 ) on Monday October 07, 2002 @07:47AM (#4401935) Journal
    As if that weren't enough, Intel's new "Blue Mountain" motherboard comes on a black PCB...

    Wow. I eagerly await a candy-striped peppermint-flavored board, which surely will give better performance and more bang for the buck.

  • ...really looking cool - it's just that with a normal pc case you only see it for such a short time - during assembly.

    anyone know of any retailer selling these bundled with transparent case mods ?!

    • you only see it for such a short time - during assembly

      Unless you have something like a Lian Li PC65 [thinkgeek.com] and stick in some light strips so people can ooh and aah, "Ooh! it's a black motherboard in a black anodize cabinet! Aah! Does it actually run?"

      Of course, that black will show dust very well, don't you think?

  • No Credibility (Score:4, Insightful)

    by Anonymous Coward on Monday October 07, 2002 @07:50AM (#4401951)
    The reviewer loses all credibility with comments like

    (OK, I admit it: I made up the part about the Firewire ports. But you get the idea.)

    in the very next paragraph after listing Firewire as a feature.

    Also, is this a review or an advertisement?
    • Even worse, look at the far left of the picture of the ports on the board; there is a Firewire port, near as I can tell. Top of the bank to the right of the PS/2 connections. Looks an awful lot like 2 USB and an IEEE 1394...

      Hopefully, this guy knows what one is...
      • Hmm, you're right.
        It seems like the reviewer wrote about the 845PE, then reviewed the 845GE and somehow mixed up their specs. Still funny though.
      • First the comments. "It has firewire... Oh, I was just kidding. Or was I kidding about the midget"

        I saw the apparent Firewire port on the motherboard and got very confused.

        His chipset comparison summary page shows no Firewire on any of the mobos except for the SiS one though.

        So what's that port?
    • Re:No Credibility (Score:3, Insightful)

      by goldcd ( 587052 )
      That was a joke. "You all thought I was going to say it didn't come with an optional dancing midget and what I actually said was I made up the Firewire bit."
    • Re:No Credibility (Score:3, Insightful)

      by pacc ( 163090 )
      You don't fit in on slashdot.

      Anyone here should see that firewire is included in the features on an Intel motherboard and chuckle at the impossibly sublime humour of the reviewer, not needing any explanation.

      I mean, firewire, Intel, hello
    • Re:No Credibility (Score:3, Insightful)

      by Nintendork ( 411169 )
      This threw me off at first too as I scanned the text for the interesting parts. It took me a few seconds to link it up with the Midget joke.

      -Lucas
  • great (Score:4, Funny)

    by tps12 ( 105590 ) on Monday October 07, 2002 @07:55AM (#4401971) Homepage Journal
    845PE, GE, DDR333, AGP 8X, ATA/133, "Blue Mountain", black PCB (finally, someone for black kids to look up to...).

    Can someone cut through this heap of jargon and marketroid buzzwordsmithy and tell me how in the name of RMS this affects me, the Linux power user? Does it bother anyone that in three months we'll be reading an identical story about 928BE, TL, MOK444, LBJ 9X, PCP/420, "Grassy Knoll", and yellow LSD? When does it end, and why do we care?
    • Once again. (Score:3, Insightful)

      by Gekko ( 45112 )
      You do not equate to we.

      We are not all Linux power users. Some are Windows users, some are Solaris users, some are casual Linux users. That's what makes the world great. Diversity.

      This is news for nerds, stuff that matters. Not all nerds are the same. Stuff that matters to me may not matter to you, stuff that matters to you may not matter to me.

      If you don't like an article, just don't read it. Please don't whine about it. Plenty of articles that appeal to you will come along.
    • Simple...

      If you want fast memory (which is often useful) but don't feel like shelling out for RDRAM at greater than 2x the cost, DDR333 is now an option (actually it was long before this article), even supported by Intel. In some cases it even outperforms the rather expensive PC1066 RDRAM. It's been around for a bit from the other manufacturers, but this seems like an attempt by Intel to stick a fork in RDRAM despite the fact that they were the ones championing it all along. Cheaper & higher performance = good.

      The rest of it is just a bunch of bells and whistles. When will it end? Probably never.
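A back-of-envelope peak-bandwidth comparison shows why DDR333 winning some benchmarks is surprising. As a sketch (single-channel DDR333 vs. the dual-channel, 16-bit-per-channel PC1066 RDRAM configuration, latency ignored entirely):

```python
# Peak theoretical memory bandwidth, ignoring latency
# (latency is where DDR often claws back wins in practice).

def ddr_gbs(clock_mhz, bus_bytes=8):
    """Single-channel DDR: 64-bit bus, 2 transfers per clock."""
    return clock_mhz * 1e6 * 2 * bus_bytes / 1e9

def rdram_gbs(effective_mhz, channels=2, channel_bytes=2):
    """RDRAM: 16-bit channels at an 'effective' (already double-pumped) rate."""
    return effective_mhz * 1e6 * channels * channel_bytes / 1e9

print(f"DDR333, one channel: {ddr_gbs(166):.2f} GB/s")     # ~2.7 GB/s
print(f"PC1066 RDRAM, dual:  {rdram_gbs(1066):.2f} GB/s")  # ~4.3 GB/s
```

On paper RDRAM is well ahead; the cases where DDR333 comes out on top reflect its lower access latency, which this peak figure doesn't capture.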
  • by McVeigh ( 145742 ) <seth@holle n . o rg> on Monday October 07, 2002 @08:04AM (#4402006) Homepage
    http://anandtech.com/mb/showdoc.html
  • Is it me.... (Score:5, Informative)

    by trims ( 10010 ) on Monday October 07, 2002 @08:22AM (#4402083) Homepage

    ... or don't we see chipset manufacturers avoiding the hard problems completely? I realize that cost is an issue, but for the most part, we're talking about high-performance workstation and server boards, which cost $500 or more.

    The biggest issues these days are:

    • Data Starved Processors - and this is all about latency (and, to a lesser extent, bandwidth) to main memory. I don't care if there is DDR400 memory support; what I want to know is why isn't there an L3 cache? I mean, even the high-end Xeons these days have a max of 1MB or so on-die L2. Sure, that's great, but do you know how many common datasets blow right through that? It's often dozens (if not hundreds) of cycles to access main RAM. The Alpha architecture did L3 on the motherboard way back in 1994 or so. Why don't these modern server chipsets support 16MB or so of SRAM for L3 cache? Hell, they should probably support 64MB or so.
    • Improved hyperthreading support - go check out the Ars Technica [arstechnica.com] article on this. Hyperthreading can potentially really help performance, but it's being held back by (among other things) problems with cache coherency and loading. While much of this is on the CPU (and thus a chipset can't help), there is a bunch of stuff that could be moved into the chipset to help.
    • Useful shit in the Chipset - ATA/133 isn't that useful (vs ATA/100). Firewire is OK, as is USB 2.0, but what I want to know is: where is nice stuff like block data copy between video and RAM (like the SGI chipsets for the Indy/O2 had) for high-performance video processing? AGP is a joke for this (as anyone doing video processing will tell you). These chipsets are aimed at workstations, after all.
    • Standard interfaces for custom silicon - no, I'm not talking PCI-X or crap like that. There should be a standard interface directly to the chipset for people who want to do custom silicon ASICs and have them get direct access to the high-bandwidth internals of the chipset. I mean, even at the low end, why should an FCAL controller chip have to pass through the PCI bus? Or a hard-core encryption coprocessor? Or a hardware routing ASIC? All need several GB of bandwidth directly to memory (or each other), and I can't see any reason not to have them surface-mounted next to the north bridge with a dedicated interface.

    Unfortunately, there seems to be little innovation going on in chipsets these days. The high end looks very, very, very depressingly identical to the cheap consumer crap. WTF folks?

    -Erik

    • Re:Is it me.... (Score:5, Informative)

      by Pulzar ( 81031 ) on Monday October 07, 2002 @08:55AM (#4402311)
      Standard interfaces for custom silicon - no, I'm not talking PCI-X or crap like that. There should be a standard interface directly to the chipset for people who want to do custom silicon ASICs and have them get direct access to the high-bandwidth internals of the chipset. I mean, even at the low end, why should an FCAL controller chip have to pass through the PCI bus? Or a hard-core encryption coprocessor? Or a hardware routing ASIC? All need several GB of bandwidth directly to memory (or each other), and I can't see any reason not to have them surface-mounted next to the north bridge with a dedicated interface.

      Expect something like this in early 2004, when 3GIO chipsets come into production... most will have 4+ side-ports directly to the northbridge to use as you please. The plan is to use them for peripherals, but you'd be free to attach anything that talks 3GIO. It probably won't be quite "a few GB of bandwidth", but that really depends on the chipset designer, and is not a protocol/interface limitation.

      Improved hyperthreading support - go check out the Ars Technica [arstechnica.com] article on this. Hyperthreading can potentially really help performance, but it's being held back by (among other things) problems with cache coherency and loading. While much of this is on the CPU (and thus a chipset can't help), there is a bunch of stuff that could be moved into the chipset to help.

      What useful stuff can the chipset do for hyperthreading? I'd love to hear some ideas.

      The high end looks very, very, very depressingly identical to the cheap consumer crap.

      "Cheap consumer crap" is what sells the most, and most companies do not have the resources to do work on more than a couple of chipsets at a time, so most of the R&D time is spent on implementing the new standards and getting things to work at the new frequencies that CPUs and RAM require. Maybe things will get better when the economy picks up and high end becomes more profitable once again.

      • What useful stuff can the chipset do for hyperthreading? I'd love to hear some ideas.

        How about a huge L3 cache? The problem with hyperthreading is that by definition it is going to cause a larger number of cache misses, since you are maintaining 2 separate contexts in one processor. In order to speed that up you are going to need more cache. Faster main RAM will help, but won't solve it.

        This was already mentioned, but for Hyperthreading it seems to me that it is almost required if you want to get decent performance out of more types of applications. Hyperthreading is supposed to give the largest speed up to a multi-threaded application that processes similar data.
        • How about a huge L3 cache? The problem with hyperthreading is that by definition it is going to cause a larger number of cache misses, since you are maintaining 2 separate contexts in one processor. In order to speed that up you are going to need more cache. Faster main RAM will help, but won't solve it.

          Another level of cache in the chipset is not going to help much. Integrating fast memory into the northbridge itself is prohibitive from the cost standpoint -- who's going to pay $100 more for a northbridge when they can get another 500MHz added to their CPU for the same price, with much more impact? Using the separate, high-speed DRAM that today's graphics cards use would bring the cost increase down somewhat, but the latency improves negligibly -- most of it is spent on CPU-to-northbridge communication, and inside the northbridge itself, so the faster RAM might give you a 5% latency improvement.

          So, any additional cache would have to be hooked up to the CPU directly to possibly produce results, and that's out of chipset designer's hands.

          The best solution is, probably, to increase L2 cache size, or use a better sharing mechanism during hyperthreading to prevent two threads from thrashing each other's caches.
          • who's going to pay $100 more for a northbridge when they can get another 500MHz added to their CPU for the same price, with much more impact?

            Part of the issue, raised in a post a couple of parents down, is that if the CPU is starved for data, the extra clock speed won't do jack. And those who want the max performance would pay more for the fastest CPU AND get the most cache, rather than trading off on cost factors.

            For this very reason, I understand that one can get RISC workstations that have as much as 8MB of cache on die, on the processor card or next to the CPU somehow.

            So, any additional cache would have to be hooked up to the CPU directly to possibly produce results, and that's out of chipset designer's hands.

            Basically, if you want more on-die cache, that was in Xeon territory, where you pay a lot of money for a chip with the circuitry built in, but I'm sure Xeons weren't available with 8MB of L2 built in.

            Because the processor card idea seems to have been abandoned by Intel, recent iterations of Xeons max out at 512K L2. Cartridged PIII Xeons seem to be still available new in 1M and 2M iterations, with 700 and 900MHz clocks.
            • Part of the issue, raised in a post a couple of parents down, is that if the CPU is starved for data, the extra clock speed won't do jack. And those who want the max performance would pay more for the fastest CPU AND get the most cache, rather than trading off on cost factors.

              But, my point is that the performance of another level of cache at the northbridge is much less than the extra 500MHz. None of the CPUs are currently starved by the available bandwidth. Hungry, maybe, but there aren't many applications out there that even come close to requiring more than the 3.2GB/s that the P4 can take in right now (4.2GB/s when we switch over to 133MHz FSB). So, a 500MHz increase will matter in a large majority of applications.

              Now, I agree with you (and have said so in my message) that a cache on the CPU side can make a difference. $100 worth of cache with minimal latency from the CPU could be better than 500MHz.
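The bandwidth figures quoted above fall straight out of the front-side-bus arithmetic. A minimal sketch, assuming the P4's 64-bit, quad-pumped FSB:

```python
# P4 front-side bus: 64 bits (8 bytes) wide, "quad-pumped" (4 transfers per
# clock), so the marketing "400MHz" bus is a 100MHz clock moving 4 x 8 bytes
# per cycle.

def fsb_gbs(base_clock_mhz, transfers_per_clock=4, bus_bytes=8):
    """Peak front-side bus bandwidth in GB/s."""
    return base_clock_mhz * 1e6 * transfers_per_clock * bus_bytes / 1e9

print(fsb_gbs(100))  # "400MHz" FSB: 3.2 GB/s
print(fsb_gbs(133))  # "533MHz" FSB: ~4.26 GB/s
```

(The comment's 4.2GB/s is the same ~4.26 figure, rounded down.)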

    • oh man, the companies are money-sucking machines... they don't care about innovation that much, as long as users keep buying the "new" stuff.

      There are a lot of things to be done for improving performance, and the biggest problem is the memory and bus (ok that is two problems).

      Why are we still on these crap buses and memories? Video cards (the ATI 9700, I mean) can do 20 GB/sec data transfer. If I had that throughput for the main CPU, the PC would be vastly more powerful.

      And also more things can be moved to hardware, like thread scheduling and other I/O things (keyboard, mouse, etc). The chipset is a good place to put them, except thread scheduling of course.
      • You could get that kind of performance, but you would need special memory for your main CPU, and it would have to be installed in special banks in special quantities. Remember that RDRAM has to be installed two at a time, in matching sizes?

        It would be much more expensive for this ram and the motherboard. Most people's motherboards cost 1/5 what a new high end video card costs. Also, those high end video cards have a FIXED amount of ram which helps them a lot.

        Building a system that is expandable and over the top fast is not cheap.
      • Why are we still on these crap buses and memories? Video cards (the ATI 9700, I mean) can do 20 GB/sec data transfer. If I had that throughput for the main CPU, the PC would be vastly more powerful

        Because PCs would be a lot more expensive if 20 GB/s had to be provided to the CPU. First, the memory controller would have to be integrated into the CPU (which, mind you, AMD's Hammer has, though for latency rather than bandwidth reasons; the b/w is still the same), and it would have to support 4 memory channels, increasing the pin count by 200+ pins. That causes the CPU price to skyrocket. Then, the memory would have to run at DDRII-400MHz, instead of the 200 you get from the fastest DDR available for PCs. That would increase RAM prices dramatically. Finally, to handle those kinds of RAM speeds, motherboards would have to grow to at least 6 layers, tripling the motherboard cost.

        The question is -- who's going to do all that development work to sell it to a very small number of people who are willing to pay for that performance?
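The "20 GB/s" figure for the Radeon 9700 is also just bus-width arithmetic. A sketch using the commonly cited 310MHz DDR memory clock and 256-bit local bus (with AGP 4X included for contrast):

```python
# Peak bus bandwidth = clock x transfers-per-clock x bus width in bytes.

def bus_gbs(clock_mhz, transfers_per_clock, bus_bits):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

print(bus_gbs(310, 2, 256))   # Radeon 9700 Pro local memory: ~19.8 GB/s
print(bus_gbs(66.66, 4, 32))  # AGP 4X, for contrast: ~1.07 GB/s
```

The roughly 20x gap between a card's local memory and the AGP link is exactly why the thread above argues against shuttling data back and forth to main memory.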
    • If you want performance you've got to [ibm.com] know [hp.com] where [compaq.com] to [sgi.com] look. [sun.com]
      Unless you're playing games, real workstations blow away the fastest desktops.
      • The high-end IBM workstation from your link above is $32,000 (around 10x the price of the best desktops). Is it really that much better than a high-end desktop?
        • Re:Is it me.... (Score:3, Insightful)

          by Zathrus ( 232140 )
          As usual, the last 10% of performance does not equate to the last 10% of price.

          If you need the features/benefits that are available in that last 10%, then you're going to have to pay a huge premium. Or wait a few years for it to filter down to the other 90%.

          Go back a decade and try to get 3D video at 640x480 that runs close to 30 fps. You're talking an SGI Onyx with a RealityEngine2 costing about $500k or so.

          Now I can buy a card that does all that, at a higher resolution and fps, with better textures for about $50.

          But if you needed that ability 10 years ago then you paid the price. Such is life at the bleeding edge.
        • I don't know where you got your figure (around $3200). When I hear "high-end desktop" I assume you mean a desktop that offers VERY high performance. A 3Dlabs Wildcat III 6110 is over $2000 alone. And it pretty much depends on your task. Let's say your task is modeling the shear and stress on high-speed aircraft in CATIA. If it takes two days on the IBM box and two weeks on your cluster of 4 PC "workstations", then which costs less? These machines aren't for everyone. Ask the people that use them, they'll tell you why.
          • Ask the people that use them, they'll tell you why.

            That's why I asked. It wasn't meant to be a facetious question.

            I get the economics part of it, but was doubtful that the higher-end workstations (e.g., from the IBM link) could really reduce work time on certain tasks by that much. That was the basis of my question.
      • not true. check out some spec scores [specbench.org]... "real" workstations (where "real" often just means "real expensive") only become truly faster once you're spending *big* money. a cluster of PCs these days will outperform anything, $ for $, on any CPU-intensive task, and IMHO, are The Way To Go.

        empirically, we have low-end (dual) IBM p620's and p660's, and for our (CPU-intensive) applications, they are slower than most of my teams' desktops, whilst managing to be over an order of magnitude more expensive per unit.

        matt
    • "Data Starved Processors - Why don't these modern server chipsets support 16MB or so of SRAM for L3 cache? Hell, they should probably support 64MB or so."

      I always wondered about this. Even back in the days of the K6-III (256K of L2 cache on die), an L3 cache, which was actually the on-board L2 cache on the Socket 7 board, improved performance significantly.
    • The P4 already suffers really bad misprediction penalties, which can only be offset by the highest memory bandwidth possible, so I don't know that L3 would actually help very much. Aside from that, the added expense will keep this from happening any time soon in the mainstream consumer market, where every company is Scrooge.
      HyperThreading, to me, is the biggest con since MMX and AGP x (?) for 3D cards. It *requires* multithreaded applications to even work; the HyperT CPUs draw more power and run hotter than CPUs without it, and as a result require different motherboard support than the slower-than-3.06GHz non-HyperT P4s. This thing won't even be as useful to a desktop user as a dual-CPU Mac currently is for your average Mac user. In fact, if you need to run multithreaded applications (of which there are almost none in the standard software marketplace) you are *far* better off going dual-CPU SMP, and the performance difference would eat HyperT alive. What a con job this is turning out to be.
      Last, I wouldn't categorize HyperTransport, as "cheap, consumer crap," exactly...;) I think it's amazing to see it working down to the consumer desktop--High-end server systems have enjoyed its benefit for years.
    • Data Starved Processors - and this is all about latency

      That's mostly a function of main memory. With 512K of cache, your hit rate is typically in the 95-98% range. Throwing tons of cache on the motherboard rarely helps much, since it usually only bumps your hit rate up to around 96-98%. Generally speaking, if your data set doesn't fit into 512KB of cache, it usually won't fit into any amount of cache, no matter how big. So what we really need is a type of memory that offers very low latency but is cheap enough that it can be used as main memory. Some technologies like prefetching can help hide this latency, but sooner or later, all those break down.

      Ohh, and to make L3 cache really effective, it would probably have to be hanging off a backside bus of the processor anyway, not off the chipset. I know that Intel talked about doing this with their current line of Xeons, but I don't know what ever came of those plans.
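The hit-rate argument above is the standard average-memory-access-time (AMAT) model. A sketch, where all cycle counts and hit rates are assumptions for illustration, not measurements of any real chip:

```python
# AMAT = hit_time + miss_rate * miss_penalty.  The point: once the on-die
# cache already catches most accesses, a chipset-side L3 (with substantial
# hit latency of its own) only shaves a little off the average.

def amat(hit_time, hit_rate, miss_penalty):
    """Average memory access time in cycles."""
    return hit_time + (1.0 - hit_rate) * miss_penalty

L2_HIT, MEM = 10, 200       # cycles: on-die L2 hit / trip to main RAM (assumed)
L3_HIT, L3_RATE = 60, 0.5   # board-level L3 via the chipset (assumed)

no_l3 = amat(L2_HIT, 0.96, MEM)
# With L3, an L2 miss costs the L3 access, plus main memory on an L3 miss.
with_l3 = amat(L2_HIT, 0.96, amat(L3_HIT, L3_RATE, MEM))

print(no_l3, with_l3)
```

Even a generous 50%-hit board-level L3 only trims a couple of cycles off the average here, because the L2 already absorbs most accesses, which is the parent's point.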

      Improved hyperthreading support

      Hyperthreading performance is about 49.5% CPU, 49.5% software, and about 1% chipset. There is virtually nothing that can be done on the chipset to specifically improve hyperthreading performance. All the chipset manufacturers could do here are fairly generalized improvements that would end up helping out chips both with and without hyperthreading.

      where is nice stuff like block data copy between video and RAM (like the SGI chipsets for the Indy/O2 had)

      Uggg... copying data back and forth between main memory and the video controller? That's a sure way to hurt your performance! The SGI solution only made sense because it was cheaper/easier for them to have a single high-bandwidth bus and a single chunk of a GB of memory or so. However, if they could have had a GB of video memory and a GB of main memory, each with tons of bandwidth, they would have been better off. PCs are in that situation. These days, having a video card with 10-20GB/s of memory bandwidth and 128MB+ of memory is cheap, which essentially eliminates the need to read/write to (slow) main memory.

      There's really nothing wrong with AGP other than the fact that its original design idea has been made obsolete by the fact that memory is dirt-cheap now. Otherwise it offers over a GB/s of bandwidth for what amounts to essentially one-time read/writes. After that, all the magic happens on the video card itself.

      Standard interfaces for custom silicon

      Umm... ok... whatever. The market for this is approximately 2 people. Still, believe it or not, you're actually going to see just such a thing in about six months' time with AMD's Hammer processors. These chips/chipsets will have HyperTransport links, which will offer a high-bandwidth connection directly to the chipset if you so desire. Of course, if you want to make use of it you're going to have to design your own motherboard from the ground up, because the market for what you're looking for is TINY, and no motherboard manufacturer is going to waste its money on such a thing.

  • Stuck with? (Score:2, Insightful)

    by centron ( 61482 )
    "...but are stuck with AGP 4X and ATA/100 support." Stuck with? AGP 8x and ATA/133 are very marginal improvements in most situations. Stuck with would have been having AGP 8x, ATA/133, and DDR266.
  • Intel is playing catch up and releasing some new boards with all the bells and whistles that the other guys have been releasing for some time now.

    However, Intel does release stable products (some have been flawed, e.g. the i820), and in an enterprise a board with an Intel chipset is usually the best way to go.

    But in the end who cares? As long as it works fine. As long as it is pretty quick, stable, and does as promised I am a pretty happy camper.

    Got other stuff to worry about than P4's with 333 DDR. DDR ain't too cheap anyway. I got a gig of it in my Athlon box. But I coulda got 4 gigs of SDR RAM for the same cost and tricked out a mean little server with it.

    Jeez, this ain't news.

  • Intel's new "Blue Mountain" motherboard comes on a black PCB

    How long until someone makes a white PCB? With all the casemods, etc., I'm sure the modders would love a mobo that would colour itself to whatever lighting they have installed/turned on...
  • by jefftp ( 35835 ) on Monday October 07, 2002 @09:00AM (#4402342)
    If it doesn't have Serial ATA on the board, it isn't a new product. I can't be the only one holding off on their next major upgrade until they can get Serial ATA on a motherboard with an Intel chipset.

    So come on Intel, put Serial ATA on the board and you've got a sure sale. No more of this parallel ATA crap. While you're at it, get rid of the serial and parallel ports.
    • While you're at it, get rid of the serial and parallel ports.

      Then what would I plug my printer into? We don't all have the latest and greatest USB printers.

      • Then what would I plug my printer into? We don't all have the latest and greatest USB printers.

        USB parallel port adapters cost $11 on pricewatch.

        • Yes, but USB serial adapters are more like $35. I need at least two, one for my Garmin GPS and one for my PalmV. Adding $70 to a motherboard's price just for the cool "legacy-free" label just doesn't cut it for me. I guess my life just isn't legacy-free yet.
          • I agree that you should buy what is optimal for your situation (which may be different from other people's situations). If you regularly use two serial ports simultaneously, it will probably be cheaper for you to buy those serial ports on the motherboard if you are going to buy today.

            By the way, Aten sells USB serial adapters for $19 [pricewatch.com].

            That page also has a link to Centrix which, at $9 shipping per order, is remaindering USB serial adapters for $4 and USB parallel adapters for $2. I would not argue that that represents an equilibrium price, though.

      • A parallel-to-USB converter that is made and sold specifically for heritage printers. Ditto, they have serial-to-USB devices, presumably for modems (although I believe it virtualizes a COM port, so it will work with any device).

        The whole heritage bus has to go. That means goodbye PS/2, goodbye serial, goodbye parallel. Good riddance. :)
        • Parallel-to-USB converters don't work for every parallel printer, and rarely ever work for other things that interface over the parallel port.

          Same with USB-emulated serial (COM) ports...

          I'm talking about stuff like PSX-N64 DexDrives/Dreamcast VMU/GB/GBA/NeoGeo Pocket/OpenXBox readers/flashers, all of which I have, and all of which have had 0 success trying to interface over anything but a true parallel or serial port.

          AFAIK, you can't get register-level control over a USB parallel port, or some such technical blibber-blabber. I just know it doesn't work. Nor can it drive the old bubblejet in my closet.

          So while I'm all for the idea of moving ahead, I want all my gizmos to work. There should be (and are) boards without the legacy stuff, and those with.

          BTW, I need my FDC too, to move data to my SuperUFO32 SNES backup unit. And I still need Maxell 650 discs burned at 1x, as they're the only media my TurboDuo reads correctly. So don't talk to me about 72x burners and bootable CDs.

          I'm sure there are many other similar, if unrelated, situations where legacy stuff is necessary.
          • I suggest that you start looking for cheap, functional motherboards and other parts at used computer stores and the like then. Because the legacy ports are going bye-bye. Abit is certainly well ahead of the game here, but I'll be surprised if you can buy a system in 3 years that has a parallel, serial, game, or PS/2 port. They're redundant now and removing them can cut costs and simplify connections.

            If this wasn't true then we'd still be damned with 5 1/4" floppies and even 8" floppies. You'd still have MFM and RLL drive connections available.

            The floppy disk controller is likely to stick around for the foreseeable future -- nobody has managed to replace it, and it's still needed even with bootable CDs and the like.

            Legacy hardware eventually becomes desupported, and unless you plan in advance you can get left holding the bag. Ask any of the numerous corporations that have data storage on tape formats for which the tape drives are no longer available.
            • It'll be gone from the workstations and commoner machines - but my point is so long as there are people willing to pay a premium for legacy devices, they'll exist. Supply and demand.

              My examples, in hindsight, were trivial. I could have mentioned the 10,000$ thermal-transfer labeller and 40,000$ diamond-tipped engraving rigup I coded a custom frontend for at a previous job. Both of these *required* legacy ports, no doubt for the same reasons my GBA flashlinker does.

              I seriously doubt that if the PC that drives those things breaks down, the company is going to piss away $50,000 in peripherals because a $400 PC did away with parallel/serial ports on the mobo.

              Legacy ports will exist as long as the hardware that connects to them does, even if it's in the form of a PCI add-on card.

              PS/2 ports are superfluous; they're gone, and no one ever needed them in the first place (they were just smaller and prettier than AT-style connectors).

              But there's a wide variety of hardware/software out there that relies on your box's LPT and COM ports. They won't disappear completely for some time, any more than fiber is going to replace all of this "obsolete" Cat5e cable we're using.

              Much for the same reasons that there are billions of lines of 30-year-old COBOL and FORTRAN in the real world. Much to the PC geek's dismay, everyone doesn't buy new hardware just because it exists. A lot of the world still runs on the "if it ain't broke, don't fix it" axiom.

              • It probably won't exist on the motherboard. If you need COM or LPT ports, you'll buy a PCI card for them. The 1% of the market that needs them will pay the extra; everyone else will get the $2 cost saving of a motherboard without those ports.
    • The article says, "This mobo comes with an array of on-board multimedia and I/O features [...] including [...] Serial ATA RAID (courtesy of a Silicon Image controller chip) [...]".

      I don't see any indication from the article whether the Intel motherboard that uses the i845GE also provides Serial ATA. I'd also be interested in knowing how this Silicon Image chip is attached. For example, if it is only connected via a 32-bit, 33MHz PCI bus, then it can only transfer data across the bus at about 133 megabytes per second. No single disk drive goes that fast, but if the chip has a bunch of Serial ATA ports, it might be an issue. I saw a posting on Slashdot that said most recent chipsets do not physically attach their IDE interfaces through the PCI bus, but rather do something faster, even though the devices logically look to the CPU like they are on the PCI bus.
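      To make the bandwidth concern concrete, here's a quick back-of-envelope sketch; the four-port figure is just an assumption for illustration, not something the article states:

```python
# Back-of-envelope check of the shared PCI bandwidth concern above.
# A 32-bit, 33 MHz PCI bus moves at most 4 bytes per clock cycle,
# and that peak is shared by every device sitting on the bus.
PCI_WIDTH_BYTES = 32 // 8      # 4 bytes per transfer
PCI_CLOCK_HZ = 33_000_000      # 33 MHz

peak_bus_mb_s = PCI_WIDTH_BYTES * PCI_CLOCK_HZ / 1_000_000
print(peak_bus_mb_s)           # 132.0 -- theoretical peak MB/s for the whole bus

# If a hypothetical controller hung four Serial ATA ports (150 MB/s each)
# off that one bus, the bus, not the drives, becomes the ceiling:
sata_ports = 4
per_port_share = peak_bus_mb_s / sata_ports
print(per_port_share)          # 33.0 -- MB/s each if all ports stream at once
```

So a single drive is fine, but several fast drives streaming simultaneously through a PCI-attached controller would hit the bus limit long before they hit the 150MB/s per-cable SATA limit.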

    • The exact motherboard that supports Serial ATA is the D845PEBT2 [intel.com]

      The technical documentation is here [intel.com]

      What I found disappointing (maybe I just haven't read enough about serial ATA) is that it only supports two drives. Why only two? I thought serial ATA was supposed to be more like SCSI, with more drives (like 15)?

      Shango
      • Serial ATA is one drive per cable, which reduces the cost of running at 150MB/second and of safely supporting hot plugging (the analog properties of the cable are better; nobody will add a second drive to your cable, so there's no need to protect the drive from that, no need to build drives with jumpers, and reduced support costs from users calling in with misconfigured jumpers). The cables are thin and the connectors are small, so having a lot of them is not as big a deal as with IDE ribbon cables. I expect that once Serial ATA drives become common, you'll see motherboards with four or eight SATA connectors.

        If you want to connect a bunch of drives on a common fast serial connection, there is already a plethora of options, all of which basically serialize SCSI commands: FireWire, Universal Serial Bus 2.0, Fibre Channel, Serial SCSI Architecture (SSA), InfiniBand, and iSCSI.

    • If you'd bothered to read the article or look at the motherboard pictures you would've seen Serial ATA support and connectors listed.

      Now maybe we'll actually see some SATA drives for sale.
  • ATA/133 support is not important for performance. No one or two disk drives can saturate the bus, but... ATA/133 is currently the _only_ way to connect HDDs greater than 137GB to your system and be able to use the extra space. IBM is at 180, WD is at 200, and Maxtor should be shipping their 320 any day now. For my servers at work and my media storage at home, 4x120 is not enough. Sure, I could use the provided PCI ATA/133 card, but... that's lame.
    • Incorrect. There are ATA/100 controllers on the market now that use 48-bit LBA. In fact, Maxtor is pretty much the only company doing any ATA/133 (and that is because it isn't a finalized standard yet). The IBM and WD drives you cite are ATA/100.
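      For context, the 137GB barrier comes from the older 28-bit LBA addressing scheme, not from the ATA/100 vs. ATA/133 transfer mode; a quick sketch, assuming the standard 512-byte sector size:

```python
# Where the 137 GB barrier comes from: sector addressing width, not
# the ATA transfer mode. Each LBA address names one 512-byte sector.
SECTOR_BYTES = 512

lba28_capacity = (2 ** 28) * SECTOR_BYTES   # classic 28-bit ATA limit
lba48_capacity = (2 ** 48) * SECTOR_BYTES   # 48-bit LBA (ATA/ATAPI-6)

print(lba28_capacity / 10 ** 9)    # 137.438953472 -- the ~137 GB barrier
print(lba48_capacity / 10 ** 12)   # ~144,115 TB -- effectively unlimited
```

Any controller that implements 48-bit LBA clears the barrier regardless of whether it runs at ATA/100 or ATA/133 speeds.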
  • by nenolod ( 546272 ) <{moc.liamg} {ta} {dolonen}> on Monday October 07, 2002 @10:12AM (#4402883) Homepage
    Anandtech has a very good review at http://www.anandtech.com/mb/showdoc.html?i=1723 [anandtech.com]. It compares and contrasts many motherboards built on these chipsets, covering features and including some very good benchmarking information, and it points out problems with some of the new boards. The chipset also supports Hyper-Threading, which looks like a very promising technology. This chipset offers great potential for both the average home user and the typical overclocker, especially on the Albatron PX845PEV Pro, which has an interface similar to SoftMenu 3. The ASUS P4PE also has great overclocking potential, though it doesn't look as tough as the Albatron, and their technical support is not as good. If it's USB you're looking for, the Gigabyte 8PE667 Ultra definitely offers the most functionality (10 ports, wow!). In all, this review is quite long, with 25 pages of content, offering more information than the review mentioned in the story.
  • ASUS has information on their P4PE Motherboard [asus.com]. In addition to using the Intel 845PE it supports:

    Serial ATA

    Gigabit Lan

    IEEE1394 (FireWire)

    RAID

    Multiple Overclocking features.

  • Yeah, so what if Intel's got a new chipset? Look at all the great ones for AMD. AMD's 2800+ is coming out soon and I'd put money on it running faster than Intel's 2.8. Go with AMD: most AMD boards have DDR400 and AGP 8X (e.g. the Gigabyte 7VAX-P, which also has 1394).
  • Perhaps the RBLing (Realtime Black Hole) of msn.com recently, which
    prevented a large amount of mail going out for about 4 days, has had a
    positive influence in Redmond. They did agree to work on their anti-relay
    capabilities at their POPs to get the RBL lifted.
    -- Bill Campbell on Smail3-users

    - this post brought to you by the Automated Last Post Generator...

