Hardware

Two Approaches to the Next-Generation Desktop 421

puppetman writes: "Tom's Hardware has a review up of a pre-production P4/2666 using 533 MHz Rambus memory (and shows it stomping the competition). The Pentium 4 needs memory bandwidth, and DDR doesn't supply it. Or does it? Anandtech, ironically, has a preview of the E7500 chipset from Intel - dual-channel DDR with support for up to 16 GB of RAM. With a new bus architecture, this looks perfect for high-load databases that need wide pipes to hard drives, memory, and Ethernet. Both of these technologies look great for mid-range database servers. Anandtech claims that dual DDR200 will provide 3.2 GB/s of bandwidth, whereas Tom claims that DDR266 (single channel) offers only 2.1 GB/s. Intel is sure hedging their bets. I wonder what AMD has up their sleeves."
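A back-of-envelope check of the quoted figures (an editor's sketch assuming standard channel widths: 64-bit DDR channels, 16-bit RDRAM channels; these are peak theoretical rates, not measured throughput):

```python
# Peak theoretical memory bandwidth = bus width (bytes) x transfer rate x channels.
def bandwidth_gb_s(bus_bits, mtransfers_per_s, channels=1):
    """Peak bandwidth in GB/s (decimal units, as memory vendors quote them)."""
    return bus_bits / 8 * mtransfers_per_s * channels / 1000

ddr266_single = bandwidth_gb_s(64, 266)      # Tom's single-channel DDR266 figure
ddr200_dual   = bandwidth_gb_s(64, 200, 2)   # Anandtech's dual-channel DDR200 figure
rdram_533     = bandwidth_gb_s(16, 1066, 2)  # dual-channel PC1066 RDRAM (533 MHz, DDR)

print(f"{ddr266_single:.1f} {ddr200_dual:.1f} {rdram_533:.1f}")  # 2.1 3.2 4.3
```

Both quoted numbers fall out of the same formula; the gap is just channels times transfer rate, which is why the E7500's two slow channels beat one faster one.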
This discussion has been archived. No new comments can be posted.


  • Overclocking (Score:5, Interesting)

    by JPriest ( 547211 ) on Monday February 25, 2002 @09:37PM (#3068309) Homepage
    I am partly curious what kind of OC'ing results you will be able to get out of the 2666 MHz P4 w/ the 533 MHz RDRAM; I would like to see its benchmarks compared to the OC'd 2200 (to 3760 MHz) w/ slower FSB that was posted not so long ago.
  • Desktop?!? (Score:2, Insightful)

    by Anonymous Coward
    What exactly am I supposed to do with a machine like that? I develop Java software. My IDE, app server, and build scripts each open their own JVM instance. I really haven't seen any performance problems with a 450 MHz machine with 512 MB of RAM.

    I know that's no reason to stop advancing hardware, but it seems a good enough reason to slow down on the hype.
    • Re:Desktop?!? (Score:2, Insightful)

      by Cyno ( 85911 )
      Hardware doesn't advance because geeks want to encode mp3s or write java apps. Hardware advances because of money, and hype is used to generate that. So, yes, my grandma NEEDS a 3Ghz system so one day I'll be able to afford a 10Ghz system. If we don't buy them, forcing our chipmakers out of business, then we won't have new hardware to play with next month.
      • Re:Desktop?!? (Score:2, Insightful)

        by pdp11e ( 555723 )
        I could not agree more!
        Every now and then one can find opinions like: Nobody normal needs such speed or that much memory... I can still remember zealous defenders of the "good ol' 286" and their arguments that the 386 was an "unnecessary complication".
        I agree that every new development in CPU muscle is usually wasted on making the Office Assistant do more fancy tricks. However, new hardware eventually gets employed for more useful purposes.
        Geeks who frequent this board often discuss things like DV cameras and editing video material. Only a few years ago such things were reserved for expensive SGIs. Today you can do it on a platform with a price tag below $1k (hardware only, though).
        Bottom Line: Every new breakthrough in technology is a Good Thing (TM). It means that by the time it hits the consumer market, geeks will have plenty of inexpensive toys to play with.
      • Re:Desktop?!? (Score:3, Interesting)

        by Com2Kid ( 142006 )
        Screw that, I _NEED_ the extra performance.

        Hell, I was DISAPPOINTED with the ABYSMAL results that came in.

        Huh?? You ask?

        Well yah.

        You see how that MPEG4 video took TIME to encode? Time that could be measured in MINUTES per video?

        Tell me when I can do MPEG4 encoding at over 1000x real-time speed with shitloads of VirtualDub filters running and without my CPU even going up to 10% utilization, and THEN I will say that we have (maybe) gone fast enough.

        As it is I still have to hit the render button, wait... wait... wait... wait... wait... wait... wait... wait... wait... wait... Run it through my post production filters and repeat the waiting (seen above) if not for an even longer time, and THEN I get to compress it down to some sort of video stream (choose a codec folks).

        Ooooh great...

        Royal pain in the arse when rendering takes longer than creation.

        Oh yah, and did I mention that I am not even using over $1k of software here? I am not even running some sort of fancy high-end effects house, I am just doing regular quickie animations. But rendering those terrains sure is a pain in the arse, and then those realistic clouds, ooh ouchies, MAJOR performance hit there folks.

        Heck, even Photoshop still takes time to run filters. Not even complex filters either, just single ones. (It has gotten A LOT better since the 'old days' of running Photoshop on those "brand new Pentium 166 MHz" boxes!! Oh man, that was /PAINFUL/. Running any sort of complex filters meant going out for a friggin lunch break, bleh).

        What about even transferring images from a digital camera? You know how bleeping long it takes to load previews of all the images in a folder? You know, all 100-200 images? Or more? Most likely of varying resolutions to boot. How lovely.

        That _IS_ enough to annoy a Grandma and encourage her to upgrade to a new machine.

        You think people want to WAIT to encode their MP3 streams? Why? By the time we hit 10 GHz or so (and if HD speeds hopefully start scaling up a bit faster :) ) we should have MP3 encoding speeds of a few MINUTES of audio per second elapsed.

        Or at least we sure as friggin better, heh.

        Until then my 700 MHz Duron OC'd to 950 MHz/1 GHz (depending on time of year ;) ) that cost me $40 per CPU (ok, so at that price I bought two; wish I'd bought four or five :) ) will have to suffice.

        (well, that and my 80 GB + 20 GB HDs which are quickly filling up. Screw CPU time, I can always play Gameboy, but I _NEED_ more HD space damnit! I filled up 40 GB in two weeks, and I wasn't even trying!!)
    • Re:Desktop?!? (Score:3, Insightful)

      by tcc ( 140386 )
      >What exactly am I supposed to do with a machine like that? I develop Java software. My IDE, app server and build scripts each open their own JVM instance. I really haven't seen any performance problems with a 450mhz with 512MB ram.
      ---

      I'm doing 3D animation; the faster the machine, the fewer hours I spend waiting for my renders to come out. That's ONE application... it's not because you're still playing Trade Wars in ASCII on an XT that some other people won't benefit from advances in technology.

      In your everyday life, other technologies benefit from it: CAD benefits from it, movie studios benefit from more power, science, etc. I can't believe some people are SO self-centered that they pull out comments like this (nor that moderators mod the parent up). I mean, if you have the IQ to come here and read the articles, how can you think like that?

      Granted, these changes are kinda pointless for most people; after a 1 GHz CPU and a GeForce2, you don't need much more to enjoy what most end-user technologies have to offer. But there are still DESKTOP users out there who enjoy powerful machines for things other than showing off :). Just ask any hobbyist 3D animator, for example. And no, buying a lot of cheap machines to build a renderfarm doesn't always cut it, at least not when you want to preview some effects like volumetrics before sending them to a final render.

      $0.02
    • Re:Desktop?!? (Score:2, Informative)

      by Anonymous Coward
      I'm running radiative transfer codes overnight; I wish I could get them to finish in about 20 minutes. If I had a 5 GHz system, that still wouldn't be fast enough. If it were, then I would crank up the resolution and run it overnight. It won't be fast enough until I can run a weather model at a reasonably high resolution on my home PC in near real time. It will never be fast enough. You double the processor speed, I'll halve my grid sizes.
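That last trade-off is steeper than it sounds. For a 3-D model whose time step shrinks with the grid spacing (a CFL-type constraint), halving the spacing multiplies the work by roughly 2^4 = 16. This is a generic rule of thumb for such solvers, not something specific to this poster's code:

```python
# Refining a 3-D grid by a factor r multiplies the cell count by r^3, and a
# CFL-limited solver must also take ~r times as many time steps: r^4 overall.
def work_multiplier(r, spatial_dims=3, cfl_limited=True):
    return r ** (spatial_dims + (1 if cfl_limited else 0))

print(work_multiplier(2))  # 16: halving the grid spacing costs ~16x the compute
```

So "double the processor speed, I'll halve my grid sizes" really means each doubling of CPU speed lets you shrink the spacing by only a factor of 2^(1/4) ≈ 1.19 at fixed wall-clock time.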
    • What exactly am I supposed to do with a machine like that?

      Distributed computing, of course. Lookie them blocks fly!
    • What exactly am I supposed to do with a machine like that?
      >>>>>>>>>>
      Run GNOME at a decent speed?
  • Need and want: (Score:5, Insightful)

    by swordboy ( 472941 ) on Monday February 25, 2002 @09:42PM (#3068331) Journal
    The Pentium 4 needs memory bandwidth, and DDR doesn't supply it.

    Do *users* need this memory bandwidth or does the proverbial Quake benchmark need it?

    Show me a "desktop" (as the headline implies) application that requires this. Even the most cutting-edge 3D games don't use current 3D processors to their potential these days.
    • Re:Need and want: (Score:5, Insightful)

      by JPriest ( 547211 ) on Monday February 25, 2002 @09:51PM (#3068378) Homepage
      Actually, what users *need* is faster HDD read/write times. 3 GHz will not make much of a difference for users pulling data from a 5400 RPM drive and running it on a $20 OEM motherboard.
    • Unfortunately, the 'desktop' application that needs this sort of power...

      iMovie
      iDVD
      Quartz (the displayPDF layer)

      The stuff that you would need a Mac for...
    • Uh...I have PC GAMESS on my box at home. It is a high level quantum chemistry program that kinda likes the power. It's not a resource hog like Gaussian, but I can easily hit the bandwidth and CPU wall. More is better with Quantum Chemistry.

      And, I also have Mathematica. I've done some huge calcs with it that would have been much more enjoyable with more memory bandwidth.
    • Re:Need and want: (Score:3, Insightful)

      by ergo98 ( 9391 )

      Even the most cutting edge 3D games don't use current 3D processors to their potential, these days.

      What games are you playing? Firstly, of course, games are usually limited by the lowest common denominator (meaning that you severely limit the polygon count if 50% of the population is using the Virge 3D), but secondly there are some games that seriously tax current hardware. An excellent example is "Operation Flashpoint", which on a GeForce 3 Ti200 has a visibly stuttering frame rate at 1024x768/32-bit colour with a reasonable set of options (I'd say the frame rate is 10-25 FPS), yet even that game represents a massive set of compromises: visibility is limited to 800m or so, there is a limited number of units in a set area, and land is mostly defined by textures rather than polygons, as polygons are too expensive. Even in the venerable Quake 3 with the Urban Terror mod, some of the maps (which still represent a massive collection of compromises) send the previously mentioned video card begging for mercy in parts (and Q3 is OLD). And 1024x768 is hardly a great resolution; if you want to use FSAA you'd better knock down to 800x600. Saying that "cutting edge" games don't use the hardware to its potential makes me presume that the most demanding game you've played is The Sims (though even it can get stuttery when you have fully decked out a multi-level house, and it is hardly an example of photo-realism).

      The "too much power" argument has always been flawed, going back to when the 486 was introduced and countless pundits exclaimed that a 386/33DX was all anyone needed. This same argument has gone on, foolishly, since the beginning of computers I'm sure. Actually probably back to the abacus.

  • Photoshop (Score:4, Funny)

    by spookysuicide ( 560912 ) on Monday February 25, 2002 @09:42PM (#3068335) Homepage
    But you know there is at least one Photoshop filter that would run faster on a 1 GHz G4, and I'm sure we'll see Steve Jobs demonstrating it at Macworld 2004, proving that Macs are still twice as fast. :)
    • Don't you mean two 1 GHz G4s?
      I suspect Jobs won't be using Photoshop to compare PCs with Macs. He'll be using things like iDVD, iMovie, and their ilk. I mean, those are the biggest reasons to buy a Mac right now. At least on the consumer level.
    • yes i run a goth/punk/emo porn site [suicidegirls.com]


      I just got spam from you yesterday!

  • by Mr. Uptime ( 545980 ) <gregp@NOSPAM.lucent.com> on Monday February 25, 2002 @09:42PM (#3068336) Homepage
    As Tom Pabst, a speed addict himself, was quoted as saying, anything above 1 GHz is nothing but overkill for most users.

    The vast majority of systems being sold today are somewhere around the 1 GHz mark. They represent the "sweet spot" on the price/performance curve, and quite frankly, users just don't need anything better. Open source OS users, such as most of us here, don't need to ratchet up the speed to 1.5 GHz unless they're running a bleeding-edge release of the bloated KDE 2. Windows XP runs just great (well, as well as Windows XP can run, anyway ;) on my Duron 900.

    Desktop users don't need anything faster than 1 GHz. So what's Intel's brilliant strategy? Why, they're going to develop chips that are even faster than the overpriced 2 GHz P4s they're having difficulty unloading right now.

    And that, my friends, is why AMD is well on its way to winning the war. Intel is putting a product on the market without bothering to notice that nobody needs anything faster. They will lose a lot of money doing this (a friend at Intel pegged the development costs for this chip at $3.7 billion). AMD is sitting tight and refining their core business: solid, stable, speedy, and inexpensive chips that consumers can afford and that consumers actually want to buy.

    If I were a stock broker, I would be telling all of my clients to short Intel and go long on AMD right about now. The revolution is underway and the underdog is winning.

    Mr. Uptime

    • by Anonymous Coward on Monday February 25, 2002 @09:51PM (#3068380)
      Your logic is astounding, sir. Or, it would be, if people didn't keep buying faster chips. Logic doesn't matter here.

      Neither does Tom Pabst. No average consumer gives a damn what one computer guru says, or who the hell Tom Pabst is. They care what the stoned, look-you-got-a-Dell kid has to say, and in a couple months, he's gonna say you want that 2 GHz Dell. Advertising is everything. And Intel's ad budget is big.
    • by darkwiz ( 114416 ) on Monday February 25, 2002 @09:53PM (#3068385)
      Nothing above 1GHz is needed right now for most users. However, history has shown us that every time we say this (processors are faster than they need to be, blah blah blah), someone comes out with killer apps that drive the need for faster machines. You know by the time Warcraft 4 (or its equivalent) comes out, 1GHz will be painful to run it on.

      Until we have machines that can perform (near) perfect speech control/dictation and face recognition (in real time, reading expressions), and can make realistic holograms (à la the ST:TNG Holodeck), I will not even begin to believe that CPUs have come far enough.

      In the meantime, AMD rides the gravy train.
      • I think you've nearly touched on an important point:

        We don't really need systems that are any faster, unless they're orders of magnitude faster.

        What we need now (until some bloke figures out something new & spiffy to tax a P10 or an Athlon whatever) are systems that are rather more flexible. Right now cost is a pretty significant limiting factor, as is reliability.

        Come to think of it, what we really need are appliances that cost $99, work more reliably than my toaster & can, with minimal fuss & expense, replace my word processor, PVR, fax, email station, CD-burning station, etc.
      • Comment removed based on user account deletion
      • I think that speech data entry is inefficient and not appropriate in most office environments. Think of how noisy it would be if everyone spoke to their computers!

        What would be really wonderful is a Gregg Shorthand recognition system, for palmtop, laptop, and desktop digitizer pads. It would be a lot faster than the current text recognition systems, and maybe even faster than a keyboard for prose input. I don't think that Gregg is being taught as much as it used to be, but a freely available Gregg input system would bring it back for sure. There are already several gesture recognition programs out there. Gregg is something like that.

    • by quantaman ( 517394 ) on Monday February 25, 2002 @10:05PM (#3068423)
      "And that, my friends, is why AMD is well on its way to winning the war."

      I bought my computer with a 1.2 GHz Athlon in September. At that point about 1/3 of the computers in stock were AMD. Since about a month after that, I've been in that and several other computer stores multiple times, and NOT ONCE have I seen a computer with an AMD chip. I'm sure the companies will only be too happy to oblige when you order (as I ended up doing with mine), but I've stopped seeing them in the stores. Could this have something to do with the fact that I'm in Canada, some bizarre business decision on AMD's part, or perhaps we just like Intel a lot more? Or is this happening generally in computer stores? If I recall, this sudden shift away from AMD happened around the time of the release of the P4s. Don't underestimate the public's willingness to succumb to hype and a feeling of security. Most people will gladly hand over the extra one or two hundred to make sure their two-grand machine can surf the net and doesn't explode.
      • Look up Cemtech. They supply desktop workstations to many gov't ministries and agencies. They take whatever's good on the market, build great, stable, and _upgradeable_ systems, and sell them at a decent price. A few years ago I had a P2-400, which was surprisingly built upon an Asus board with quality RAM and a real video card. Later on I ran off with the corporate credit card and built my own Athlon screamer (which is now obsolete, of course). Cemtech makes PCs the way I'd make 'em (minus the GeForce GTS goodness and four-drive RAID-0, of course).

        Unfortunately my workplace's idiotic people seem to prefer crappy Dells these days, which are just the cheapest components soldered together and thrown into a pretty box. About 1/4 of them have extreme stability problems (shitty power supplies and bad ram), a few of them like to hang during POST!?

        Screw Dell. Yay Cemtech!
    • It is a well-known fact that the fastest P4 (2.2 GHz) is easily outperformed by the fastest Athlon (Athlon XP 2000+).

      As the operations manager for a medium-sized business, I am responsible for approving or denying acquisition requests (ARs, as we call them). And I will strongly encourage my employees to buy Intel machines over AMD machines if they want their AR approved. Why is that? Although I am very impressed with the speed of AMD chips, and very unimpressed with RDRAM and the P4's performance (did you know they reduce the cache memory clock as they increase the core speed to prevent overheating?), P4s are an order of magnitude more stable than Athlons. Having seen several Athlons crash and burn in the past two years, I have been refreshing AnandTech every morning awaiting the release of a comparably fast P4.

      Most businesses hire smart people, and there are probably thousands of people just like me who want the speed of an AMD chip, coupled with the reliability and quality of an Intel chip. Well, the day has finally come, and Intel will sell these chips faster than they can restock the shelves. Good for them.

      freebsd guy

    • Most people don't need $40K+ SUVs and what not, but they're fun. (Well, they look fun, I drive an old Buick and spend my money on technotoys...) Compared to the cost of one of those, what's $1K-$2K every year or two for a state-of-the-art PC? (Monitor extra, natch.)

      Which is AMD's contribution: bringing the price of heavy desktop computing firepower down to "Why not?" prices. And my HDTV PCI card chews serious CPU time, so having several hundred MHz to spare is rather nice.

      On investing in AMD stock: speaking as a 2+ year AMD shareholder, if you buy in, prepare yourself to be in it for the long haul and for the insane price swings. AMD is one of the most manipulated stocks on the market. It's insanely undervalued right now, but there's absolutely no way to tell when its valuation will reflect reality.

      Maybe the Hammers will do the trick. At the least they'll beat the crap out of those souped-up P4s Intel let Tom play with :-).
    • Someone trolled:

      the overpriced 2Ghz P4s they're having difficulties unloading right now

      I wonder why my recent purchase of an IBM Intellistation M Pro was delayed for three weeks because of a shortage of PIV 2GHz processors?

      (Yes, it was worth the wait. No, I didn't pay for it.)
    • Won the desktop market, that is. If I were to, say, do something that involved complex rendering (not for video games, mind you), like producing Shrek, every proverbial inch matters. I bet the SETI people would wish for a farm of 10 machines at 3 GHz that could work on packets of data. Nvidia and Intel could pair up in one way or another to produce a special video chip. Would they want to? Who knows. But imagine a video card so fast that, yet again, the bus is too slow.

      Don't rule Intel out yet, but certainly give AMD its due props for making fast computers for so cheap. I remember my IBM DX4-100 costing $300, mb and chip. Memories.. o/~
    • by NMerriam ( 15122 ) <NMerriam@artboy.org> on Monday February 25, 2002 @10:42PM (#3068532) Homepage
      Desktop users don't need anything faster than 1Ghz. So what's Intel's brilliant strategy? Why, they're going to develop chips that are even faster than the overpriced 2Ghz P4s they're having difficulties unloading right now.


      You're missing the second part of the story here -- while increases in top-end processing speed are nice, they are not the only result of faster/more efficient processors.

      Another major benefit is that, for the same clock speed, a more efficient chip can run on less power and with less heat, meaning that even if they only sold the chips to run at 1 GHz, they would be able to run on half or a third of the power that a current 1 GHz chip requires.

      I recently replaced the 700 MHz celeron in my home entertainment machine with a 1.2 GHz Pentium 3 -- not because I needed more power, quite the contrary. I underclocked the P3 to 600 MHz and took off the processor fan, thereby reducing the total noise on the system. It's been running fine, only a few degrees warmer than the old chip with active cooling. Total power use and waste heat is down.

      In a few years, the 20 GHz chips will mean that we'll be able to run our wristwatches at 600 MHz off a battery for months without any cooling at all. THAT is the point...
      • Just out of curiosity: I'm building two new Athlon systems, basically identical. However, I was planning to underclock one of them to reduce energy use and heat production. Can this only be done by changing the FSB? Or can the processor multiplier also be lowered? I assume a multiplier-locked processor means locked from going both up AND down? (The chips are Athlon XP 1700+s, btw, which I assume are multiplier-locked.)

        Thanks for any info.
      • It'd be nice if AMD would make it easy to do this. I'd love to have a little applet that I could use to toggle my CPU between underclocked with the fan shut off and full speed with the fan going full blast. Bring back the TURBO switch!
    • I don't think it's necessarily an Intel/AMD thing. The way I see it, the demand for faster hardware is always determined by the software running on it. Research and breakthroughs in the hardware industry are speeding up dramatically, while big software titles seem to have steadily taken the same amount of time for development and testing throughout the evolution of computers. I personally would like to see Intel and AMD work like the automotive industry, introducing new models on a set schedule (once a year). Just thoughts...
    • As the processors get faster, people do more. My home workstation (recently upgraded) is a 1.2 GHz PIII, 512 MB of RAM, an 80 GB hard drive, and dual 15" LCD monitors running WinXP (which, BTW, runs far better than any Linux distribution I've ever seen).

      I do live television recording off my WinTV card using Snapstream, and I can assure you the faster processor does matter. If I could have justified the price of a 2.2 GHz P4, I would have done so. I figure I'll wait for this next-generation memory bus to come out and then look at upgrading next year.

      I also use VMWare, and I almost justified the price of going dual processor just for that.

      I guess the point there is that as the computers get faster, bigger, better, we find new applications to take advantage of them. I know I certainly have. I also like the fact that things occur nearly instantaneously on my system.

      Oh, and I bought the Intel over the AMD because I absolutely cannot stand it when my computer locks up. I want stability and reliability... Hence I also went with an Intel motherboard. Thus far no issues, this D815EPEA2U board is by far the best I have ever seen.

      I like that AMD is in the market, but until I start seeing some reviews which acknowledge them for something besides speed, I'm leery. I'm certainly not betting against Intel; companies also want stability, and they buy far more computers than Linux nerds.

  • Good (Score:5, Interesting)

    by Knunov ( 158076 ) <eat@my.ass> on Monday February 25, 2002 @09:48PM (#3068362) Homepage
    Let Intel and AMD keep the GHz war smoking hot. I'm on a PIII 700 and it runs as fast as I need it to. I plan to upgrade to the PIII 1 GHz (Slot 1/133), but right now it's going for about $350, which is a silly price.

    But flood the market with P4's and K-Whatevers at 3+ GHz, and the price, she'll keep on droppin'.

    Thank God for the bleeding-edge hardware buyers. They keep folks like me, who consistently buy CPUs at the $/MHz sweet spot, supplied with enough left over for more memory :)

    Knunov
    • Actually, it's going for about half that price, or $170.

      For that price you could buy an Athlon XP 1600 and decent motherboard.
    • by GoRK ( 10018 )
      Actually, you'll probably see the P3 1 GHz go UP in price after a few months. I predict they'll not be cheaper than $120 new for quite a number of YEARS. Even a Pentium II 450 still goes for about 80 bucks new. Used, you can get them for about 15.

      For what a P3 will cost you, you could get a faster chip AND the motherboard to support it
    • ...that buying at the sweet spot is really expensive and not always the best investment in terms of $/mhz?

      Simple Example:

      Chip 1, 0.9 GHz: $200
      Chip 2, 1.5 GHz: $450
      Chip 3, 2.0 GHz: $1500

      Knunov: buys Chip 1 at $200, waits a year, then buys Chip 2 when it reaches $200, and so on. Average $/MHz = 33 cents.

      Smart guy: buys Chip 2 and skips a cycle. Average $/MHz spent = 30 cents.

      Smart Guy saves some bucks and enjoys the most demanding games and apps for a year while Knunov does not.
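Reading the arithmetic back out of those figures: 33 cents appears to be total dollars spent divided by the average clock of the chips bought, and 30 cents is simply $450/1500 MHz. A quick sketch of that reading (an editor's reconstruction, not a formula the poster gives):

```python
def avg_cents_per_mhz(purchases):
    """Total spent divided by average MHz over a list of (price_usd, mhz) buys."""
    total_usd = sum(price for price, _ in purchases)
    avg_mhz = sum(mhz for _, mhz in purchases) / len(purchases)
    return 100 * total_usd / avg_mhz

knunov = avg_cents_per_mhz([(200, 900), (200, 1500)])  # two cheap buys, a year apart
smart = avg_cents_per_mhz([(450, 1500)])               # one mid-range buy, skip a cycle
print(round(knunov), round(smart))  # 33 30
```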
  • by stevarooski ( 121971 ) on Monday February 25, 2002 @09:55PM (#3068392) Homepage
    Memory has ALWAYS been the bottleneck of performance. If you give a 500 MHz processor unlimited memory bandwidth, then of course it's going to blow away nearly any processor without it. Give a 2.6 GHz memory-bandwidth-craving micro-pipelined P4 more memory bandwidth to work with, and yeah, it's going to kick the crap out of the competition.

    That said, I just wish the process of speeding up memory weren't so painfully slow! It'd also be nice to have some kind of standard memory that'd work anywhere, even though both Rambus and DDR look promising for the immediate future (and they're now comparatively priced).
  • 16GB: 12GB unusable? (Score:3, Interesting)

    by milkmandan9 ( 190569 ) on Monday February 25, 2002 @09:57PM (#3068398)
    I could be wrong here, but it seems to me that you can only address 4GB of memory with the 32 address lines that the P4 has.

    The article didn't address (yuk yuk) this, and I'm certainly not on the cutting edge of chip design nowadays... can someone explain how you can use those upper 12GB? Is it increased address space on the P4 (seems unlikely), or some magic communication to the chipset (also seems unlikely), or something else entirely?
    • From the E7500 chipset specs:

      "Offers a maximum memory bandwidth of 3.2GB/s through a 144-bit wide, 200MHz dual data rate SDRAM memory interface supporting a maximum of 16GB of memory"

      I believe the chip has 36 address lines...
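The two numbers in that spec line up if you assume the usual ECC layout (two 72-bit DDR channels, each 64 data bits + 8 ECC bits; the split is an assumption, not stated in the quote):

```python
ecc_bits = 2 * 8                  # assumed: 8 ECC bits per 72-bit channel
data_bits = 144 - ecc_bits        # 128 data bits across the two channels
transfers_per_s = 200_000_000     # "200MHz dual data rate" = 200 MT/s effective
bandwidth = data_bits // 8 * transfers_per_s
print(bandwidth)  # 3200000000 bytes/s, i.e. the quoted 3.2GB/s
```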
    • by G-funk ( 22712 ) <josh@gfunk007.com> on Monday February 25, 2002 @10:23PM (#3068488) Homepage Journal
      I could be wrong here, but it seems to me that you can only address 4GB of memory with the 32 address lines that the P4 has.

      Nope, all Pentium chips since the PPro and PII have more than 32 address lines (not sure exactly how many; is it 40?). To access this with 32-bit registers requires the use of 4 MB paging instead of 4 KB paging. Check out http://x86.ddj.com/ [ddj.com] for more info.
    • I've posted as much before, but I'll repeat myself here:

      All newer Intel chips have a mode called PAE, Physical Address Extension, that allows them to access over 4GB of RAM. In fact, it allows them to access a total of 64GB of RAM (36 bits). Windows 2000 Advanced Server and Datacenter both support PAE in a mode they call AWE.

      What happens is that Windows presents every application with a 4GB virtual address space (2GB system, 2GB app) regardless of the actual physical RAM installed. It does this even if you have less than 4GB; all virtual addresses are mapped to physical addresses by the kernel. Now, when a PAE-aware app is running on a system with more than 4GB of RAM, it sets up a window in its address space for mapping the higher memory. It then tells Windows to point that window at whatever area it needs to see at the time.

      It is a system very much like EMS/XMS from the days of DOS; however, paging is handled at the app level, not at the Windows level. It is not a permanent solution, but it provides a temporary stopgap for large databases that need over 4GB of memory until Intel has its 64-bit line out in earnest.

      If you wish to buy Intel server boards with support for more than 4GB, look at the SuperMicro P4DL6 motherboard. It supports a maximum of 16GB of DDR SDRAM (with two-way interleaving), 133 MHz PCI-X, onboard SCSI, dual P4 Xeons, and so on.

      http://www.supermicro.com/PRODUCT/MotherBoards/GC_LE/P4DL6.htm
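The windowing scheme described above can be sketched as a toy model. This is purely illustrative Python, not the real Windows interface (the actual AWE API uses calls like `AllocateUserPhysicalPages` and `MapUserPhysicalPages`):

```python
# Toy model of PAE-style address windowing: the app keeps a small, fixed
# virtual window and re-points it at regions of a larger physical space.
class WindowedMemory:
    def __init__(self, physical_pages, window_pages):
        self.physical = [0] * physical_pages  # pages beyond the 32-bit limit
        self.window_pages = window_pages      # size of the app's mapping window
        self.window_base = 0                  # physical page the window points at

    def map_window(self, physical_page):
        # Analogue of repointing the window at a new physical region.
        self.window_base = physical_page

    def read(self, offset):
        assert offset < self.window_pages, "access outside the mapped window"
        return self.physical[self.window_base + offset]

    def write(self, offset, value):
        assert offset < self.window_pages, "access outside the mapped window"
        self.physical[self.window_base + offset] = value

mem = WindowedMemory(physical_pages=16, window_pages=4)
mem.map_window(12)   # point the window at "high" memory
mem.write(1, 42)     # lands in physical page 13
mem.map_window(0)    # repoint elsewhere...
mem.map_window(12)   # ...and back
print(mem.read(1))   # 42: the data stayed put, as with EMS/XMS bank switching
```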
  • Right now I've got a 1.4 GHz Thunderbird with an 80g heat sink and the top-of-the-line fan of the time. I also have a room for my computers that is not optimal for air conditioning. (It's upstairs with a big-ass window.) It's about 50-some degrees outside, and I've got the window open as wide as it goes, and it's warmer up here than any part of the house (all heating up here is shut off).

    Looking at the pictures, I see memory that requires fans. Granted, I've got 7 computers up here, and various other pieces of equipment (monitors, printers, etc.), but I don't think I can handle any more major BTU producers. At what point are the chip manufacturers going to be limited by the fact that the average person cannot provide the cooling required?
    • This is an extremely valid concern as processors get faster in a competitive climate. As AMD and Intel battle, they are effectively overclocking their own processors to produce high clock frequencies. Apple does not seem to even be trying to compete on pure performance terms, so it is not suffering from a severe heating problem. Intel has been caught at this game by Tom's Hardware with the 1.13 GHz PIII, forcing its recall. In a less competitive environment, less factory overclocking will occur. As soon as Intel realizes its monopoly is gone and can never be regained, and AMD realizes it will never completely crush Intel either, they may settle into a pattern of refining their processors for benchmarks other than performance, such as low heat dissipation. The best place to find high-performance processors that have been refined long enough to be efficient is in laptops. In laptops you will find the true state of the art for non-factory-overclocked processors. Some of those processors make it into the channel for purchase separately, usually destined for the 1U server market.
    • Bump "Install liquid nitrogen tank" up on your to-do list! Liquid nitro's cheaper than milk and it'll make you the coolest geek on the block!
  • by Zergwyn ( 514693 ) on Monday February 25, 2002 @10:04PM (#3068416)
    There have already been a bunch of the standard posts that seem to always come with this sort of topic. They all basically amount to "Most users have plenty of speed right now, we don't need more." Two arguments are used for this.

    1. Current applications run fine. This is not true. More and more people are doing things such as video editing (think iMovie, for instance) and DVD encoding, as well as playing the next generation of high-powered games. It would be great if video effects and encoding could be done in real time or faster, and it is only with faster chips that this is happening. Final Cut Pro 3 can now do some effects in real time on a G4. As chips get faster, current applications will speed up even more, which means less time waiting around and more time getting stuff done. And anyone who has seen the new Unreal2 engine benchmarks should know that next-gen games will require way more power, which leads nicely into the second point...


    2. More power allows applications that we haven't even thought of yet, or that are currently not feasible. Things like near-perfect voice recognition, enhanced artificial intelligence/analysis, modeling, etc. are all applications that can't generally be realized right now, but may be some day. How about a navigation system on the web that uses a U2-like engine? Look at what has happened so far: many user interface enhancements have only been made possible by greater speed/memory. Given more power, it is impossible to predict what enterprising people will come up with. I mean, the argument that "well, things are just fine for word processing" holds true for a 1980-era IBM/Apple II machine. Is anyone seriously arguing that nothing that has happened since has been important for how people work with their computers?


    I know many of us on /., often hardcore gamers, web developers, video users, or just plain techies, aren't the typical crowd, and our need for speed may not mirror the general population's. But the applications of improved power can eventually apply to everyone. Of course, for me, I won't worry so much about speed until the entire solar system can be simulated in real time down to the molecular level, along with real-time photorealistic rendering, while I listen to my mp3s/oggs and browse the web from my neural connection... Of course, YMMV.

    • What Zergwyn said.


      Every time there's a story on Slashdot about newer, faster hardware, somebody will say that it's more than anyone ever needs. That's as predictable as "how about a beowulf cluster of these" and as insightful as "640K of memory is all anyone will ever need."


      Not terribly long ago, CPUs had to struggle for many seconds or even minutes just to display a JPEG image on the screen. Imagine the state of the web today if that were still the case. Not so long ago, CPUs didn't have enough power to decode MP3s in real time. Five to ten years from now, there will be something that our 10-100GHz CPUs do so quickly that we'll take it for granted.

    • As far as U2/New Doom is concerned, I think that it is foolhardy to buy a system with a game that is yet to be released in mind. I say wait until they release it and buy your system then. I can definitely see that when U2 and Doom come out, people will buy a new system to play them. Because let's face it, you will need it to play these games.

      On a similar note, the current DirectX 8.0/8.1 split that divides the two major video cards (Nvidia and ATI) may be a reason that these next-gen games are taking so long to create. Of course, it could also be that with the introduction of pixel shaders and other new features, video graphics engines have gotten ten times harder to create.

      If they could only figure out some way to turn CPU power into bandwidth.....
  • by billstewart ( 78916 ) on Monday February 25, 2002 @10:10PM (#3068443) Journal
    This sounds like a really nice high-end compute server machine to support a herd of developers, as long as you give it enough RAM and Disk Drive and the 300 Watt Turbo-Charged Fan. I don't want it anywhere near my desk - put it in some server room somewhere. (In my current office environment, that means "back in the mailroom next to the Really Loud Xerox Machine".)

    Give me a desktop with no fan, lots of pixels and video RAM, and a reasonable-sized disk and a CD-burner. In a small case. And put the disk in one of those removable-drive drawers so it's easy to replace. If it needs more than 500 MHz, it belongs on the server in the back room. Desktops are for running X (or VNC if you don't have a real OS), and doing light development, and running MP3s. If I need to have a dedicated machine to do development on instead of a shared environment, (which I don't), it almost certainly needs to be a slower machine to emulate a random customer.


    Actually, my current desktop is a laptop running Win98. There's never enough RAM, and often not enough disk, but the 450MHz CPU is almost always fast enough.

    • Well, do what I am doing: get a Shuttle SV24 barebone aluminum mini-micro case.

      It has one external 5.25" bay, one PCI slot, built-in AGP video, sound, Firewire, USB, composite video out, a drawer-mounted hard drive bay, and only weighs 6 pounds. It measures about 11" in all dimensions.

      The main drawback is the CPU: you can install either a Celery or a PIII.

      BUT: I understand that on or about April 1, a new version of the case is coming out. Here's hoping for AMD support!
  • by Sivar ( 316343 ) <charlesnburns[@]gmail...com> on Monday February 25, 2002 @10:13PM (#3068459)
    To access data in a Rambus module, the request must pass through all modules in sequence up to the module that has the data and then must pass back through those modules to deliver the data to the northbridge. This is, BTW, why continuity RIMMs are required.
    As one can derive, this greatly increases latency as the number of modules increases. Servers, being systems that generally have lots of RAM, often have at least 8 modules available.
    Due to this increased latency as a function of the number of modules (and other factors), Rambus is therefore poor memory for servers.
    Note that this is per channel, meaning a dual channel Rambus system with eight modules has the memory latency of a four module system because the modules are split between the Rambus channels.
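    To make the trend concrete, here is a toy model of that effect. It is a sketch only: `base_ns` and `per_hop_ns` are made-up illustrative numbers, not measured RDRAM timings, but the shape (latency growing with module count, and roughly halved by splitting modules across two channels) follows from the serial-chain behavior described above.

```python
# Toy model of serial RIMM-chain latency (illustrative numbers, not specs).

def rimm_chain_latency(n_modules, base_ns=40, per_hop_ns=2):
    """Worst-case access latency when the target is the last module in the
    chain: the request and the returning data each cross the
    (n_modules - 1) modules in front of it."""
    return base_ns + 2 * (n_modules - 1) * per_hop_ns

def dual_channel_latency(n_modules, base_ns=40, per_hop_ns=2):
    """Dual-channel case: modules are split evenly across two channels,
    so each channel behaves like a chain half as long."""
    return rimm_chain_latency(n_modules // 2, base_ns, per_hop_ns)

for n in (2, 4, 8):
    print(f"{n} modules: single-channel {rimm_chain_latency(n)} ns, "
          f"dual-channel {dual_channel_latency(n)} ns")
```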
  • by DocSnyder ( 10755 ) on Monday February 25, 2002 @10:17PM (#3068472)
    The next-generation desktop which I'm thinking of doesn't need a single linuxkernel-in-less-than-one-minute-building numbercruncher. I would like to have a seamless multi-host cross-platform desktop, shared among e. g. a Sun running Solaris, a GNU/Linux workstation, a PDA, some recycled underpowered P100-class machines, an Apple Macintosh, maybe even a (ugh) w1nd0ze box. All of them would run different operating systems on many kinds of hardware.

    A modern desktop environment is built on many layers, lots of processes and daemons, and many interfaces and abstractions, most of which could be delegated to and shared among other hosts. Poor performance? No need to throw away the old box, just add a new one. With open and interoperable interfaces like X11, CORBA, XML, HTTP or whatever, a next-generation desktop of this kind should be possible, especially with Free software.

    In my view the most promising solution towards this concept is the GNU Network Object Model Environment (GNOME), largely based on CORBA, using only a few remaining locks which are likely to disappear within the next few years. If finally a common object model between GNOME, KDE, GNUstep and other backends can be established, the seamless multi-host cross-platform desktop could become reality.

    The 2.6 GHz machine could then be used to build SETI packages and Linux kernels to heat up the office ;-)
  • If I remember correctly, the nForce does dual-channel DDR right now for the Athlon platform, and is planned to be released for the Intel platform soon.

    Of course the E7500 is in a different league than the nForce, but the dual-channel idea is pretty much the same.

  • Not fast enough. (Score:4, Informative)

    by tshak ( 173364 ) on Monday February 25, 2002 @10:22PM (#3068485) Homepage
    With all of the posts saying that our 1GHz machines are fast enough, I say that until Quake n looks like Final Fantasy (the movie!), we don't have fast enough CPUs, RAM, video, [insert bottleneck here].
  • Ram bandwidth (Score:2, Informative)

    by Anonymous Coward
    DDR 1600 (DDR200) does indeed provide 1.6GB/s of memory bandwidth, just as DDR 2100 (DDR266) provides 2.1GB/s, and DDR 2700 (DDR333) provides 2.7GB/s. The current P4 line of processors uses a quad-pumped 100MHz pipeline capable of handling 3.2GB/s of memory bandwidth. This can be accomplished by a dual-channel PC800 Rambus memory controller or by a dual DDR 1600 memory controller (which nobody currently has). The future spec for the quad-pumped 133MHz pipeline uses the new PC1066 Rambus in a dual-channel configuration. The same memory bandwidth can be achieved using a dual DDR 2100 bus. However, the new DDR 2700 can feed a 166MHz quad-pumped memory bus which would, in theory, be the fastest solution. If Intel wanted to increase their lead in the market, they would be smart to experiment with a dual DDR 2700 (333) configuration on their P4 platforms. Personally I prefer DDR, as it doesn't have the proprietary intellectual-property licensing scheme that RDRAM has. Just my $0.02.
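    These figures are just the 64-bit bus width times the effective transfer rate, which is easy to sanity-check (a back-of-the-envelope sketch; real sustained bandwidth is lower than these peak numbers):

```python
# Peak bandwidth of a 64-bit (8-byte-wide) memory or FSB interface:
# effective transfer rate in MHz times 8 bytes per transfer.

def peak_bw_gbs(effective_mhz, bytes_per_transfer=8):
    return effective_mhz * bytes_per_transfer / 1000.0  # GB/s

print(peak_bw_gbs(200))  # DDR200 / PC1600:   1.6 GB/s
print(peak_bw_gbs(266))  # DDR266 / PC2100:  ~2.1 GB/s
print(peak_bw_gbs(333))  # DDR333 / PC2700:  ~2.7 GB/s
print(peak_bw_gbs(400))  # quad-pumped 100MHz P4 FSB: 3.2 GB/s
print(peak_bw_gbs(533))  # quad-pumped 133MHz FSB:   ~4.3 GB/s
```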
  • Unfair comparison (Score:5, Interesting)

    by Sivar ( 316343 ) <charlesnburns[@]gmail...com> on Monday February 25, 2002 @10:29PM (#3068496)
    Of course a not-yet-released equivalent of an overclocked P4 is going to beat the competition vs. AMD's AthlonXP which is out and available NOW.

    I would like to note that while the P4 did trounce the AthlonXP, take a look at the numbers (and I'm not talking about price, as I don't even want to know how much that P4 will cost!)

    AthlonXP 2000+ runs at 1,666MHz at a bus which is the equivalent of 266MHz.

    The P4 is running at 2666MHz (a full Gigahertz higher frequency) with a bus at the equivalent of 533MHz.

    The (essentially overclocked) Pentium 4 has a full SIXTY PERCENT CPU clock speed advantage and a ONE HUNDRED PERCENT front side bus (FSB) advantage, yet look at its real-world performance:

    MP3 encoding: 6.2% faster than the Athlon. (woop)

    DivX encoding: 30% (note that the program is highly optimized, by Intel themselves, for the P4. How many programmers have an Intel engineer handy?)

    Cinema 4D: 12.8%

    3DMark 2001: 4.9%

    Note that Lightwave was not included--the only common test that runs faster on the P4 is the raytracing test. Guess which one Tom's Hardware used?

    I just thought I'd point out that the only conclusion you can really draw from these tests is that, as many in the hardware community know, the P4's architecture is designed for high clock speed, with zero regard for actual real-world performance. Which matters more to you?
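    Put another way, you can normalize the quoted speedups by the clock advantage to get a rough per-clock comparison (a quick sketch using only the figures quoted above; real benchmark speedups obviously don't decompose this cleanly):

```python
# Rough per-clock comparison: divide each quoted P4 speedup by the P4's
# clock advantage. A value below 1.0 means less work per MHz than the
# AthlonXP on that test. Figures are the ones quoted in the post.

clock_ratio = 2666 / 1666  # ~1.60x clock advantage for the P4

speedups = {
    "MP3 encoding": 1.062,
    "DivX encoding": 1.30,
    "Cinema 4D": 1.128,
    "3DMark 2001": 1.049,
}

for name, s in speedups.items():
    print(f"{name}: {s / clock_ratio:.2f}x per-clock vs. AthlonXP")
```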
    • Unfair post (Score:5, Insightful)

      by Glonk ( 103787 ) on Monday February 25, 2002 @11:22PM (#3068665) Homepage
      AthlonXP 2000+ runs at 1,666MHz at a bus which is the equivalent of 266MHz.

      The P4 is running at 2666MHz (a full Gigahertz higher frequency) with a bus at the equivalent of 533MHz.

      How come so many people rant and rant about how clockspeed isn't everything, then they go and use the same argument in a different way to establish the "clear superiority" of the Athlon? Who cares how many Hz one is than the other? (Don't argue about consumers here, that's for another discussion...).
      Sorry, but if you're going to paint it as an achievement that the Athlon performs so well 1000MHz slower than the 2.6GHz P4, then why can't the Intel fanboys paint the fact that the P4 runs at 2.6GHz as an achievement?

      The (essentially overclocked) Pentium 4 has a full SIXTY PERCENT CPU clock speed advantage and a ONE HUNDRED PERCENT front side bus (FSB) advantage, yet look at its real-world performance:
      "Essentially overclocked" Pentium 4? It's not a new Pentium 4 chip, it's a new motherboard. Of course it's an "essentially overclocked" Pentium 4. Why add in the negative connotations?

      I just thought I'd point out that the only conclusion that you can really draw from these tests is that, as many in the hardware community know, the P4's architecture is designed for high clockspeed, with zero regard to actual real-world performance. Which matters more to you?
      I dunno, looking at these benchmarks I'd say the Pentium 4's architecture is damn fast. It's scaling up incredibly fast. Remember when it was first released and everybody called it a disaster?

      Intel could easily release those 2.6GHz chips today, but they aren't doing it for marketing reasons. The architecture of the Pentium 4 is incredibly fast, but the management of the company is spreading out the releases over time. You can get a 2GHz today and overclock it to 2.6GHz. People are doing that all over.

      The Athlon is one design: it's very fast. The Pentium 4 is another design: it's very fast. The Athlon is cheaper, by a fair margin, especially at the highest-end chips. But painting the picture that the Pentium 4 is so very much slower than the Athlon, especially with benchmarks like this, is just plain stupid.
      • I don't think he said the P4 is slower; he said that the fully overclocked P4, with a 1 GHz advantage, was not really all that much faster for everyday tasks than a 2000+ Athlon.
      • "Intel could easily release those 2.6GHz chips today, but they aren't doing it for marketing reasons."

        And exactly why is this OK to you? Do you like being marketed at? Do you like being fed shite and being told it's ice cream?

        And before you talk about scaling, you should know that a processor "scales" well if you can run it at higher frequencies without increasing voltage or supercooling. At frequencies that AMD and Intel ship at, the processors benchmark similarly.

        If you were not so busy singing the praises of P4 you might also notice that the Tualatin core is overclocking as well as the Athlon, and surpasses them both in some benchmarks.

        Do some damn research before you post or start on about "fanboy blah blah fanboy" while being a fanboy.

        I cannot believe you were modded up for that flamebait.
  • by tcc ( 140386 ) on Monday February 25, 2002 @10:29PM (#3068497) Homepage Journal
    Don't be too impressed with the numbers in the Lightwave rendering benchmark: the scene used is heavily radiosity-based, which Newtek (makers of Lightwave) has publicly said is SSE2-optimized. If they ran the same application benchmark with any other math-intensive scene (raytracing, etc.), the gap wouldn't be that impressive. I use dual Xeons and dual MPs at work and I've noticed the difference, and Tom being Tom, he still goes on doing flawed benchmarks (flawed because he doesn't mention that little fact, even though a lot of people have told him).

    At least he does other benchmarks to round-up the possibilities of errors.

  • I don't know about anyone else, but I'm a bit worried about the sign of the beast coming up on my future BIOS screen everytime I reboot. I suppose it's a fitting follow-up to a blue screen of death.
  • These are for databases, web servers, etc.

    You don't run Quake 2 on a Sun E4500. True, Tom and Anand don't benchmark with Linux/Apache, Win2k/Oracle, Solaris/Netscape, but they should have.

    Our database is Oracle on dual P3 933s with 2 gigs of RAM. An E7500 with up to 16 gigs of RAM would take the CPU usage on one of our database machines from 40% to about 20%.

    Why do people keep talking about Quake benchmarks, kernel compiles, etc?
  • There are a couple of flags in this review that raise my skepticism. For example,

    An interesting development in the market is in regard to the memory prices: currently, DDR SDRAM costs just as much as RDRAM. The high price of Rambus, which we have mentioned in many articles previously, should no longer be a purchase barrier.

    Mushkin's price for 256 MB of DDR 2700 is $116, and a Mushkin 256 MB RIMM is $149. Who knows how much the unavailable 533MHz RIMM will run, but it's certainly going to be more than $149.

    Secondly, his benchmark charts don't jibe with other reviews where the 2000+ XP is pitted against a 2.2GHz P4. He's got the P4 trouncing the Athlon, whereas Anandtech gives only a slight edge to the P4.

    Maybe Tom's gone to the Steve Jobs School of Benchmarks?


  • To be blunt, and without starting a flame: who cares? I'm as excited as the next guy for newer, faster machines. But who cares? I'm using a 500MHz AMD now and it's just starting to show a bit of grey. With the exception of super-duper digital video apps, Photoshop, and super number crunching, what does anyone need these machines for? Nice to have one, but Word or AbiWord or StarOffice work the same at 500MHz as at 1 or 2 or 10GHz. What's the app that will make a machine this powerful useful for the great majority of PC users? I'm really curious. I want honest answers.

    When do I get to walk up to a screen and say "hey monkeyface, what's my check balance" and have it respond "zilch, po-boy, and who you callin' monkeyface"? When I can get a system to do that, then I'll give a damn.

  • Not to raise a stink, but I think of "next generation" as referring to a major change in system performance and design. For instance, the K7 was next generation from the K6s, since a 700MHz K7 was SIGNIFICANTLY better than a (albeit nonexistent) similarly clocked K6-III. It also involved a new processor core, a new socket, and a lot of hardware that we (at least for a while) couldn't get our hands on.

    Tom Pabst over there is using some new hardware (basically some fatty P4's, and some juiced up RAMBUS), but his mobo, cards, software, etc, are all things that /.'ers either have or can get shipped to them by tomorrow. This is more like "This week's fastest processor" than "Next-Generation". I like hardware upgrades as much as the next geek, but when I read the title, I was suspecting something cooler than 50% increase in "Office Performance".

    "My reports repaginate in .013 seconds, whereas your puny PIII machine takes almost a tenth of a second!!!"
  • Check out IBM's Summit or ServerWorks' Grand Champion HE chipsets; they have four PC1600 channels which adds up to 6.4 GB/s of memory bandwidth.
  • Absurd (Score:5, Interesting)

    by Perdo ( 151843 ) on Tuesday February 26, 2002 @12:42AM (#3068920) Homepage Journal
    His entire conclusion is absurd. Piece by piece:

    "Our detailed tests show that forthcoming P4 CPUs with 133 MHz FSB clock used in conjunction with the 845E chipset (DDR SDRAM support) will effectively be castrated."

    Intel castrated it themselves. Compare its performance to VIA's P4X266 chipset's and you will see that Intel crippled it to prevent it from competing with Intel's Rambus chipset. Notice that Intel is suing VIA over that chipset because it ruins the facade that RDRAM is better than DDR. Also note that Intel has refused nVidia's request for a license for a DDR chipset. Intel knows that a dual-channel DDR chipset would show RDRAM for what it is: a fraudulent attempt to maintain a high-performance monopoly. Whatever company "causes to be sold" the most RDRAM gets to own a controlling interest in Rambus Inc. At this point, Intel is the clear winner, even though Sony made a race out of it by packaging Rambus with the PlayStation 2. Intel suppresses their own DDR performance to make people believe that RDRAM is the fastest stuff out there. AMD would be committing suicide by using RDRAM to capitalize on Intel's marketing hype, because that would place them directly under Intel's thumb.

    "This is because the Pentium 4 has a problem: the increase in clock speed (e.g. P4/2533 or P4/2666) will be rendered useless by the slow DDR SDRAM memory bus of the 845 platform".

    Again, this is Intel's doing for product-placement purposes, as was done with the Celeron when it competed with the Pentium III, and as Apple did with the new iMac's 100MHz-FSB 800MHz G4. A 133MHz FSB does not cost any money; it is just an easily achievable clock frequency with current chipsets.

    "And one shouldn't forget that even a dual DDR platform for P4 should be priced at a level that is similar to a Rambus system, considering that it's from Intel."

    Rephrased: "And one shouldn't forget that even a dual DDR platform for P4 will be priced as high as an RDRAM system, because Intel will not license the platform to nVidia, and Intel KNOWS it will outperform a Rambus system, ruining 2 years of carefully crafted marketing and gamesmanship." The fact is, a dual-channel DDR chipset from Intel may be available for the Pentium 4, but only for the Xeon, a processor not available except from Intel's favored OEM partners, such as Dell.

    Before you defend Intel, remember that Craig Barrett, after AMD went from 10% market share to 40% in a year, said "the market is dropping" to justify Intel's reduced profits. Well, Intel is a bellwether stock, and the market believed everything Craig said. The market did drop. We all lost our jobs. We can now say in hindsight that at least part of the market was due to drop. But because of Craig's statement, it was the tech sector that was hit first, and hardest. Instead of simply saying, "Intel has reduced profits because of competitive pressure," he brought the entire tech sector down with him. The recession that was due could have been placed entirely on Enron's shoulders. The energy sector was in fact dropping. Enron's insiders were cashing out at the same time Craig made his statement. People got scared and pulled their money out of the market. There was less money in the market than there had been, and it came out of the tech sector when it should have come out of energy.

    Go ahead and defend Intel. They have made poor, greedy choices, sold inferior products at exorbitant prices, and done it at the expense of all our livelihoods. Shame on them.

    Intel's 1.7 trillion dollar market cap has been cut by Tom Pabst on more than one occasion. A series of his articles deriding Rambus, causing the 1.13GHz recall, and showing the Pentium 4 for the paper tiger it is has seriously hurt Intel. But Tom, like all hardware websites, is cash-poor. Tom's Hardware has resorted to doing marketing research among their readership for Socratic Technologies. Sometimes they have been overt; sometimes they have sent readers to secure servers just for simple popularity polls. Tom's latest revenue-generation technique is the introduction of "Editorial Content Sponsorships," which I'm going to guess prompted the recent editorial change of heart toward Rambus. Please notice that in the most recent article no AMD processors were overclocked according to their projected roadmaps, and the test is presented as if it were fiction. Unfortunately, it seems we have lost another fair and unbiased journalist. Another, because Sharky Extreme was the first to go into Intel's pocket, prompting Sharky himself to leave the website. Sharky's is owned by INT Media Group. Notable investors in INT Media include Dell Computer Corporation, International Business Machines Corporation, Lucent Technologies Inc., Macromedia Inc., Microsoft Corporation, Nortel Networks Corporation and Oracle Corporation.

    Expect wonderful reviews of Intel hardware on Sharky's and, unfortunately now, Tom's. Look to [H]ard|OCP, The Inquirer, The Register, Anandtech and Ars Technica for relatively unbiased hardware news.

    Post Intelligently, Thanks :)

  • by jriskin ( 132491 ) on Tuesday February 26, 2002 @12:43AM (#3068923) Homepage
    It is important for a variety of reasons not to let up upon the current technological pace occurring today. There are so many factors to consider economically, scientifically, and sociologically. If we allowed a slow down of the current pace of technological advancement it could have a devastating impact on our society at large.

    First off, it is naive to think that current users wouldn't use or enjoy more powerful computers. It is the software industry's fault that end users are unable to fully utilize the more powerful machines being built. Already plenty of comments have suggested a variety of applications, from facial recognition to video editing, that all would benefit from faster, more powerful computers.

    It is actually important to me that regardless of the 'need' the average user has for more powerful computers, that the software industry does its job to drive the users to want more power.

    Only by nurturing and then feeding the public's appetite for technology does the industry continue to push us forward technologically. If millions of people and companies didn't demand the upgrades and new features that are available with more powerful systems, we would risk losing all the potential gains for the future that these desires produce.

  • Sysmark 2002: Applications Integrated

    The new Sysmark 2002 benchmark includes the following applications:
    <snip/>
    Office Productivity:
    <snip/>
    WinZip 8.0

    Neat, my 866 is just *way* too slow at zipping up those files.

  • I'm looking at the 533MHz going, 'hm, econo-box testing? I remember when this company was going to put out 604s at that clock rate' and then I realise that is the BUS...

    I'm right now processing a track from 24-bit to 16 for an album remastering I'm doing, in the background, while reading Slashdot, and my _CPU_ is barely as fast as the _bus_ of whatever they're looking at. My bus is more like 33MHz, I think...

    If I can do this and not think too much of it, no wonder they're not going to sell one to me... I think I'm going to be waiting around for another year or so and then picking up one of the ol' blue and white G4s maybe... gotta love being several years behind the curve, you get the same amount done but for way cheaper. That will be the point when I start running OSX and programming in something more portable to Linux and BSDs... by then I ought to be up to speed with that...

  • My company sells many, many Pentium 4 CPUs and systems; we have tested RDRAM and DDR memory on this platform time and time again (admittedly, we haven't seen this newer tech yet). Anyway, our findings in the past will probably still hold true for these newer techs, and that is that while RDRAM provides higher bandwidth, the latency is so high that if your application is retrieving small amounts of data very often, performance is very much decreased. RDRAM works great for things like games, graphics, and video, because retrieving "large" chunks of memory is far more optimized. Most database accesses are going to return much smaller amounts of data, and considering the high initial latency each time, I think that DDR will really provide a much more responsive database server. (Of course this depends on the data you're storing....)
