
Today's Best CPUs Compared... To a Pentium 4

Dr. Damage writes "How do current $74 CPUs compare to the $133 ones? To exclusive $1K Extreme Editions? Interesting questions, but what if you took a five-year-old Pentium 4 at 3.8GHz and pitted it against today's CPUs in a slew of games and other applications? The results are eye-opening." Note that this voluminous comparison is presented over 18 pages with no single-page view in sight.
  • P4 pride (Score:3, Funny)

    by dushkin ( 965522 ) on Wednesday February 17, 2010 @05:24AM (#31166982) Homepage

    I'm at work, where I have a P4 winXP machine.

    AND I'M PROUD OF IT.

    • Re: (Score:2, Funny)

      by MrNaz ( 730548 ) *

      I bought 5 surplus P4 machines with 512MB RAM and 40GB HDDs for my community center's library. They have *CRT* monitors. Beat that!

      • Re:P4 pride (Score:4, Funny)

        by Shadow of Eternity ( 795165 ) on Wednesday February 17, 2010 @05:42AM (#31167076)

        My gaming setup used to be two computers (a Pentium 4 and a Q9450), both hooked up to dual-input FW9012 and P260 Trinitron CRTs. That was two computers, both running at 3500x1200 and putting a combined weight of about 300lbs on my desk.

        We almost didn't need to heat the apartment in winter.

      • P3 Pride! (Score:5, Insightful)

        by AliasMarlowe ( 1042386 ) on Wednesday February 17, 2010 @06:42AM (#31167380) Journal
        I still have a P3 working at home - it's a Dell Dimension XPS T450 from about 1998. It came originally with Windows 98, and over the years it has received extra RAM, new graphics, and so forth, so it now boasts 384MB RAM and an ATI Rage Pro, as well as a 20GB disk.

        Actually, it's really in semi-retirement, as it's a bit slow for modern applications, but it is still on our LAN and occasionally roused from its grave^Wslumber. At one time it had Win2000, which it could run OK, though it was a little sluggish running Office2000. Nowadays it dual-boots between Ubuntu/Gnome and PCLinuxOS/KDE, which are about as responsive as Win2000 was. It's fine for most web browsing, IRC, file viewing (graphics, PDF, PS, etc.), text editing, and suchlike. It can handle Gimp and Inkscape as long as the files being edited aren't too big, and can even run LaTeX well enough, but it sucks rocks trying to run OpenOffice.
        • by dcam ( 615646 )

          Until very recently I had a number of Dell P3 ~900MHz desktops that ran quite happily. I still have one, currently running Debian and acting as my fileserver. It's been running for ~4 years with no issues. Shortly it will be replaced with a VM running on a dedicated VM host.

      • I've got a 750Mhz Duron machine with 512MB ram and a 20GB hdd at home right now hooked up to a 15" CRT that I bought in 1995. It's due to be gotten rid of at my next upgrade cycle, though.

        I've also still got my dual p3 motherboard with 2x450Mhz CPUs and 256MB ram in it sitting in a box, but it's not in a case or in use in any way. I had a couple of 200Mhz machines around until 2-3 years ago, too, and I managed to keep my family's first computer, a 33Mhz 486DX that we upgraded to a whopping 8MB of RAM and
    • Almost all our boxes at work are P4s. But now I would like to move some of them into some sort of virtual infrastructure rather than upgrade them all.
    • by sznupi ( 719324 )

      An AthlonXP (of the slower kind - 1700+ / 1.46 GHz) is fine too... as long as one chooses properly written software and keeps the machine clean; having a few times more RAM and a faster HDD than was common back then also helps greatly. I rarely see a typical home machine that is snappier, even though they have several times more processing power - they are almost universally held down by bloat, until quite recently by too little RAM and, still, by the slow HDDs in ever more popular laptops.

      Too bad the test didn't in

    • > I'm at work, where I have a P4 winXP machine.
      > AND I'M PROUD OF IT.

      Well, there is no need to be ashamed of the P4 part...

    • My sister still uses the original 386 running Windows 3.1 that I set up for her many years ago. I have thrown out countless computers since then that would blow that system out of the water, but she has no interest in upgrading it.

      She just runs a few games and Word 6. I really have no argument to convince her to upgrade, because it still does what she wants; there isn't anything she wants to do that it can't. (Obviously she doesn't access the Internet)

      • Re: (Score:3, Funny)

        (Obviously she doesn't access the Internet)

        Reading that made my arms itch like a junkie without a fix.

      • Re: (Score:3, Insightful)

        by tomhudson ( 43916 )
        Why "obviously"? Win3.1 + chameleon netsock worked fine for connecting to the net on any 386 with 2 megs of RAM (though it worked better with 4 or 8 megs).
  • by damburger ( 981828 ) on Wednesday February 17, 2010 @05:27AM (#31166996)

    From the article:

    For about the same price as the Core i3-530, the Athlon II X4 635 offers four cores that perform better in applications that rely heavily on multiple threads, such as video encoding, 3D rendering, and Folding@Home. In other uses, such as video games and image processing, these two CPUs perform almost identically. The Athlon II X4 635 leads slightly in overall performance and, as we established on the previous page, in terms of performance value. If that's all you care about when choosing a processor, then your decision has been made.

    How can game engines not take advantage of multiple cores? I had no idea this was the case, and find it very surprising given that the PS3 has 7 cores to work with. Are games so lazily programmed that they don't take advantage of that either?

    • Re: (Score:2, Informative)

      by zaibazu ( 976612 )
      The so-called 7 cores are pretty specialized subunits. With the lack of good middleware and development kits at the PS3's release, the platform is just now, after years, starting to get somewhat used
    • Re: (Score:2, Insightful)

      by Rhaban ( 987410 )

      The advantages of multiple cores are not so evident when dealing with real-time physics/rendering/etc.
      If all your processes must communicate with each other constantly, you lose the benefits of having each process processed by a different core.

      • by hvm2hvm ( 1208954 ) on Wednesday February 17, 2010 @06:59AM (#31167454) Homepage
        Physics is very friendly to multithreading, since most computations are done in parallel anyway. N objects interacting with each other would be simulated in a series of steps, and in each step you calculate the next attributes of every object, taking the previous attributes of all the objects into account. Then you save this instance and start again. Within each step, threads can more or less operate independently of each other.

        A very good example of this would be NVidia PhysX.
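
        A minimal sketch of that stepped scheme (my illustration, not from the post; all names hypothetical): every thread reads only the immutable previous state and writes its own slice of the next state, so no locks are needed within a step.

            #include <algorithm>
            #include <functional>
            #include <thread>
            #include <vector>

            struct Particle { float x, v; };

            // Advance one slice: read-only access to prev, exclusive writes
            // to this thread's slice of next - no locking needed in a step.
            void stepSlice(const std::vector<Particle>& prev,
                           std::vector<Particle>& next,
                           size_t begin, size_t end, float dt) {
                for (size_t i = begin; i < end; ++i) {
                    next[i].v = prev[i].v;                  // forces would go here
                    next[i].x = prev[i].x + prev[i].v * dt;
                }
            }

            // One simulation step, split across however many cores exist.
            void step(const std::vector<Particle>& prev,
                      std::vector<Particle>& next, float dt) {
                unsigned n = std::max(1u, std::thread::hardware_concurrency());
                size_t chunk = (prev.size() + n - 1) / n;
                std::vector<std::thread> workers;
                for (unsigned t = 0; t < n; ++t) {
                    size_t b = t * chunk, e = std::min(prev.size(), b + chunk);
                    if (b >= e) break;
                    workers.emplace_back(stepSlice, std::cref(prev),
                                         std::ref(next), b, e, dt);
                }
                for (auto& w : workers) w.join(); // barrier: step ends when all slices do
            }
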
        • Re: (Score:3, Informative)

          by Mashdar ( 876825 )
          +1. Rhaban, physics/graphics is one of the MOST parallelizable operations we have. The "shared dataset" is the previously solved set, and no communication is needed so long as the previous set is in shared memory of some sort. The new data is computed deterministically from the previous set. Graphics processors exploit this in a non-core-based design where specialized hardware modifies the data set in a predetermined way, massively in parallel.
      • by jittles ( 1613415 ) on Wednesday February 17, 2010 @08:43AM (#31168148)

        I work on flight simulators and we DEPEND on multiple core processors to get everything done at once. What used to take multiple racks of computers can now be done on a single computer with dual quad-core CPUs.

        You think IPC is slow on a single machine? Try using reflective memory across multiple computers. Of course we have to handle a bit more than your typical video game since we have to handle hundreds of buttons and switches from multiple crew member stations, night vision, FLIR and day TV cameras, as well as out the window displays.

      • Re: (Score:3, Insightful)

        by MikeBabcock ( 65886 )

        That's hardly insightful, being completely wrong. Physics is actually one of those tasks that lends itself very well to multi-threading.

        It's just a completely different way of designing software. It's very hard to find good programmers. It's even harder to find good programmers who are skilled in threaded software design. Just guess how hard it is to find the ones who can debug it :-).

      • If all your processes must communicate with each other constantly, you lose the benefits of having each process processed by a different core.

        This statement is just flat wrong, and hardly insightful. The only time this condition holds is if you are dealing with processors *completely isolated* from each other's memory resources. To my knowledge there is no such beast (cluster or multi-core system), and there hasn't been since the days before MPI and OpenMP (or their predecessors) existed. The only bottlenecks in the above quoted situation are latency and bandwidth, so each process CAN communicate simultaneously with any other process, running

    • Re: (Score:3, Funny)

      by h00manist ( 800926 )

      Are games so lazily programmed that they don't take advantage of that either?

      Obviously it's not exactly easy to make programs that run just as well on multiple CPUs or on a single one.

      • Actually, you just write a multi-threaded program and then set specific threads' affinity to the available CPUs, or run them all on one if only one is available. No real difference here. Setting a thread's affinity is one library call. Examining the number of cores plus a rudimentary thread distribution algorithm would be maybe 200 lines of code.

        Obviously, it's very difficult to distribute the load *equally* between cores. You can split the AI thread from physics, data preloaders, networking, input handling, audio, CPU-s
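
        For reference, the "one library call" does exist on common platforms; a sketch assuming Linux/glibc (Windows has SetThreadAffinityMask instead):

            #include <pthread.h>  // pthread_setaffinity_np is a glibc extension
            #include <sched.h>
            #include <thread>

            // Pin the calling thread to a single core - the "one library call".
            bool pinToCore(unsigned core) {
                cpu_set_t set;
                CPU_ZERO(&set);
                CPU_SET(core, &set);
                return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
            }

            int main() {
                unsigned n = std::thread::hardware_concurrency(); // 0 if unknown
                if (n == 0) n = 1;
                // Rudimentary distribution: give worker i affinity for core
                // i % n; with n == 1 everything simply lands on core 0.
                pinToCore(0);
                return 0;
            }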

        • Re: (Score:2, Funny)

          by Anonymous Coward

          Yeah, you're right, it's all really about having multiple threads in your software. All these deadlock, starvation and race blahs are just there to frighten kiddies!

        • Shouldn't load balancing be the operating system's job?

    • Re: (Score:3, Informative)

      by Verunks ( 1000826 )

      How can game engines not take advantage of multiple cores? I had no idea this was the case, and find it very surprising given that the PS3 has 7 cores to work with. Are games so lazily programmed that they don't take advantage of that either?

      This was the case a couple of years ago; nowadays all major games (Dragon Age, Mass Effect 2, Battlefield Bad Company 2, etc.) use my dual core at 100%.

      The Frostbite engine (used in BFBC2 and BF1943) is even designed to use up to 16 threads: http://repi.blogspot.com/2009/11/parallel-futures-of-game-engine.html [blogspot.com]

      • nowadays all major games (Dragon Age, Mass Effect 2, Battlefield Bad Company 2, etc.) use my dual core at 100%

        ORLY? 100%, you say? Presumably the games are running SETI@home or similar to eat up cycles while waiting for the GPU or the other core to finish. That's very kind of them.

    • by TheThiefMaster ( 992038 ) on Wednesday February 17, 2010 @06:43AM (#31167390)

      Speaking as a PS3 dev, the SPUs are very different to program for than a normal multi-core CPU (and you only get to use five and a half of them anyway, not 7).

      On the flip side, everything based on UE3 (which is most big cpu-hungry multi-platform titles these days) is multithreaded to two or three significant threads: Game, rendering, and possibly physics (depending on physics engine used). None of them are SPU threads (though they may use the SPUs for some tasks), so PS3 performance isn't generally as good as the 360's, but in most games it's a non-issue as both platforms go over the 30 fps cap.

      On PC, most UE3 games will run best on two cores, with anything above that being unnecessary.

    • Are games so lazily programmed that they don't take advantage of that either?

      Some games now are multi-threaded.

      The problem, from the perspective of game software, is that it has to run in near real-time. Synchronizing multiple threads within the time available to render a single frame (e.g. at 25fps that's 40ms; at 40fps, 25ms) is a very tricky task. It is more rewarding to invest in optimizing a single-threaded engine, while optimizing a multi-threaded variant is quite risky, with bugs often showing up only after the game reaches the wider masses.

      P.S. The same applies, btw, to video playback software

    • How can game engines not take advantage of multiple cores?

      Several reasons.

      • If your game will run on both single-core and multi-core machines, it might not make sense to optimize for the latter, which would likely make the former slower - and the former is already slower, so you care more about it.
      • Many games care more about responsiveness than throughput. While you can run N threads on N cores, getting full utilization of your resources, games usually have a loop in which input feeds into the logic system, which feeds into the physics system, which feeds into the
      • by AuMatar ( 183847 )

        I'd also question how much is really gained by multithreading here. Gaming is not easily broken down into independent problems - AI, rendering, and physics all touch the same data structures. My guess is that multi-threading provides some speedup, but not a huge amount, due to waiting for data locks. The real advantage of multiple cores is being able to run a browser on a 2nd monitor (or alt-tab to it) so you can do more than just game on the thing.

        • by tepples ( 727027 )
          So what if you double-buffer large parts of the state of the game world? Have AI, rendering, and physics all look at one read-only copy of the state, and have physics produce the next frame's state. Coming up with a consistent way to freeze state like this also helps with the quicksave code.
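
          A bare-bones sketch of that double-buffer idea (my illustration, hypothetical names, not from the post):

              #include <utility>

              struct WorldState { /* positions, AI plans, physics, ... */ };

              class World {
                  WorldState front; // immutable snapshot: AI, rendering, physics read it
                  WorldState back;  // physics writes the next frame's state here
              public:
                  const WorldState& snapshot() const { return front; }
                  WorldState&       scratch()        { return back; }

                  // Called once per frame after every reader and the writer are
                  // done; front is also a consistent thing to serialize for
                  // quicksaves.
                  void flip() { std::swap(front, back); }
              };
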
    • by aaaaaaargh! ( 1150173 ) on Wednesday February 17, 2010 @07:21AM (#31167584)
      The main reasons:
      • Many problems cannot be parallelized at all. If a problem is sequential in nature, multiple cores cannot solve it faster; Amdahl's law caps the speedup (see the sketch after this list).
      • Even when a task can be parallelized, this is at times complicated. Many developers lack the skills to implement or even invent efficient parallel algorithms. It's not just about spawning a few additional threads; there are usually complicated interprocess communication problems involved.
      • Since mainstream machines currently may contain anything from 1 to 8 cores (including the virtual ones created by hyperthreading), developing for n cores always involves tradeoffs. The program should still run well on a single-core machine.
      • Many game engines in use by studios have not yet been updated to take full advantage of multiple cores, and it is decidedly non-trivial or too expensive to change them accordingly.
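
      The cap mentioned in the first point is Amdahl's law; a quick sketch with made-up fractions:

          #include <cstdio>

          // Amdahl's law: best-case speedup on n cores when a fraction p of
          // the work parallelizes and the rest (1 - p) is inherently sequential.
          double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

          int main() {
              std::printf("%.2f\n", amdahl(0.9, 8)); // ~4.71x even with 90% parallel
              std::printf("%.2f\n", amdahl(0.5, 8)); // ~1.78x when only half parallelizes
          }
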
      • And now for reality. (Score:5, Informative)

        by Anonymous Coward on Wednesday February 17, 2010 @08:16AM (#31167854)

        As somebody working in the gaming industry, let me correct you on each of your points.

        1) A great many game-related problems can be parallelized quite well. It differs by genre, but most games today could easily split graphics, audio, input processing, game logic and AI into separate threads. Some gaming engines have started to do this. AI is one area that really benefits from multiple threads of execution, so that we can simulate several different outcomes at a time.

        2) This was true in the 1970s. We've come a long way since then. From compiler-assisted technology like OpenMP to a variety of higher-level approaches and techniques, multithreaded programming doesn't have to be difficult. Even just making your data immutable, like functional programmers have been trying to teach us for decades, removes many of the IPC woes you mention.

        3) This isn't a problem at all. Aside from netbooks, most consumer laptops and virtually all consumer desktops sold since 2006 have had at least two cores. Intel's Core i7 has been out for over a year now, and has seen very good adoption rates. The average number of virtual CPUs (i.e. physical cores or hardware threads) on the average gaming PC today is roughly 2.7. Besides, games shouldn't care how many CPUs are present; they adapt to the available resources. If you have one CPU, we do everything on it. If you have 8, we'll distribute the load appropriately.

        4) Where did you hear this? Again, this was true in 2003, but things have changed a lot since then. Virtually every engine written since then by a half-decent team has included multiprocessor support.
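        To illustrate the compiler-assisted route from point 2 and the adapt-to-resources claim from point 3, a sketch using OpenMP (build with -fopenmp; the entity-update loop is hypothetical). The runtime sizes its thread pool to the machine, so the same binary uses one core or eight without code changes:

            #include <vector>

            // Each entity's update is independent, so the loop needs no locks;
            // the OpenMP runtime splits the iterations across available cores
            // (and the pragma is simply ignored on a non-OpenMP build).
            void regenerate(std::vector<float>& health, float regen) {
                #pragma omp parallel for
                for (long i = 0; i < (long)health.size(); ++i)
                    health[i] += regen;
            }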

    • Re: (Score:3, Insightful)

      by MrNemesis ( 587188 )

      As noted, the PS3 is more of a single-core PPC processor plus 6 SSE-on-steroids units. Whilst it's true that parallelism needs to be incorporated into the engine design, the tasks you'd farm out to the SPEs (or whatever they're called) are very different from what you'd ask core 3 to do on your x86 processor.

      The CPU in the 360, however, is a genuine triple-core PPC processor.

    • by Pojut ( 1027544 )

      Note: This is only my opinion. I have no evidence to back this up.

      For the same reason every channel on TV isn't in HD yet... developers are waiting for the public to catch up with the industry. While most PC games still aren't built from the ground up for multiple cores, they are taking increasing advantage of them. As single-core CPUs are phased out (which, except for mobile and extremely low-budget setups, they have been), newer games are more and more optimized for multiple cores. It's only a matter of time

    • by dcam ( 615646 )

      Writing multi-threaded code is hard. Writing high-performance, non-buggy multi-threaded code is very hard - on the order of a magnitude harder.

      In addition, some things are easily parallelised, e.g. web servers. They have multiple users, and each page hit represents a single isolated request, so even just one user accessing a site is easily parallelised.

      With games you have one user, and things need to happen in a sequence. Your code to handle the physics of a bullet trajectory must synchronise with the code

    • by Eskarel ( 565631 )

      Primarily because parallel programming (taking advantage of multiple cores) is substantially more complex and difficult than the more traditional single-core variety. It's not a matter of just flipping a switch.

      Add to that the fact that multicore systems have only really become common over the last few years, and most game engines in current games were begun well before this was the case. At that time, dedicating serious resources to multicore was probably considered a waste.

    • Re: (Score:3, Insightful)

      I would guess a fairly big factor is that the game logic, which runs on the CPU, generally doesn't degrade well. With graphics you can lower the resolution, change texture sizes and make additional lighting effects optional, so the game just looks a bit worse but plays the same. Trying to do the same with game logic is much harder; maybe some adaptive AI could be made to play better on faster hardware, plus some extra graphical effects probably need some extra processor time, but these changes w

    • Re: (Score:3, Funny)

      by Lord Ender ( 156273 )

      Left 4 Dead and Team Fortress 2 both have options to use multiple cores. I believe that, when enabled, the other cores are dedicated to "physics processing." My understanding is that "physics processing" is geek-speak for "making the bodies of your slain foes collapse into realistic piles of death as they hit the ground."

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Wednesday February 17, 2010 @05:33AM (#31167024)
    Comment removed based on user account deletion
  • Eye-opening? (Score:5, Insightful)

    by spge ( 783687 ) on Wednesday February 17, 2010 @05:33AM (#31167026) Homepage
    I had a job keeping my eyes open at all, reading that over-long, poorly structured article with no useful conclusion.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      The conclusion I drew is that liberal arts majors have no business trying to convey technical information. (Jump to the conclusions page to verify that the author was a liberal arts major.)

  • Especially for gfx cards (the discrepancies between performance and price are enormous) and hard disks (3D, with price, speed and size)

  • P4 and MythTV (Score:5, Interesting)

    by Yeechang Lee ( 3429 ) on Wednesday February 17, 2010 @05:46AM (#31167100)

    I've been using a Pentium 4 3.0GHz-powered box as a MythTV frontend/backend [gossamer-threads.com] for more than four years. It often records four high-definition over-the-air or FireWire MPEG-2 streams while playing back another.

    For the first three years I used an Nvidia video card with Xv output to play the recordings at very good quality with 50-70% CPU usage. A year ago I moved to VDPAU [mythtv.org], which gives me even better playback with under 5% CPU usage, and will do the same with h.264 recordings (generated by the Hauppauge HD-PVR [mythtv.org], for example). Thanks to VDPAU, there's every possibility I'll be able to use the Pentium 4 box for another four years.

    • by vlm ( 69642 )

      I've been using a Pentium 4 3.0GHz-powered box as a MythTV frontend/backend for more than four years.

      Yeechang fails to mention that this is roughly the sweet spot for a MythTV frontend. I have plenty of experience trying to get slower boxes to run Myth, which can be done with some difficulty. A one-GHz Via C7 (or whatever it's called) with a semi-supported openchrome driver - now that was a challenge, but it eventually worked.

      A P4 at roughly 3GHz with about a gig of RAM is enough that it's no effort to set up at all. Just set up a plain old Linux box and it'll work even with the plain-jane VESA driver. Now you can do

      • Yeechang fails to mention that [a P4 3GHz] is roughly the sweet spot for a mythtv frontend.

        Yes, it was indeed the sweet spot when I bought it more than four years ago. I certainly wouldn't buy a new P4 today, even if it were possible. I'd get an ION-based Aspire Revo for $200-300; that's clearly today's sweet spot.

        My larger point stands; most people wouldn't expect that a box that was state of the art five years ago would still be adequate for recording and playing 1080i and even 1080p high-definition video

        • Re:P4 and MythTV (Score:5, Informative)

          by Big_Breaker ( 190457 ) on Wednesday February 17, 2010 @10:27AM (#31169670)

          With 100 watts of power consumption at ~10 cents a kilowatt-hour, you would be spending about $88 a year to run your backend 24x7. That doesn't count the extra draw for air conditioning in summer months (the benefit in winter is pretty minor). Different costs per kWh or power consumption scale accordingly. Hopefully your P4 is a Northwood and not a Prescott! At some point the reduction in power costs will justify a switch to something like the Revo. My total power cost is about $0.30 a kWh (don't get me started!) so I could pay for the switch in a year.

          There is a great product called the "Kill-a-watt" that will measure the power consumption of a device simply by plugging the device in through the Kill-a-watt box. My Q6600 rig draws 120-140 watts for a good fraction of the day, as measured by my Kill-a-watt. It's a non-trivial cost, and a 45nm chip might pay for itself in a year and a half.
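
          Working his numbers through, the $88 figure checks out:

              100 W x 24 h/day x 365 days = 876 kWh/year
              876 kWh x $0.10/kWh        ~= $88/year
              876 kWh x $0.30/kWh        ~= $263/year, so at his rate a
              lower-power replacement really can pay for itself in a year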

      • You can spend more money on an even faster system for Myth, but it's just money down the drain, unless you're doing something totally exotic with high-def, or trying to do more than five things at once like Yeechang, or attempting dual simultaneous displays, or trying to run a backend on the frontend machine, etc.

        Pretty much true. I have a frontend-only host that does fine with MythTV high-def (1080i OTA). Where it falls hard is playing Flash. YouTube HD stuff is passable, but Hulu is an exercise in

    • Re: (Score:3, Interesting)

      by wvmarle ( 1070040 )

      I have the same idea. The slowest CPU on the market is way fast enough for almost anything, unless you have very specific needs. The CPU speed issue is solved and done with. I lost interest some 10 years ago, and started to get more interested in what we are actually doing with it: the software that runs on it, and user interfaces.

  • by distantbody ( 852269 ) on Wednesday February 17, 2010 @05:49AM (#31167120) Journal
    And it's constantly growing. Check it out: http://www.anandtech.com/bench/default.aspx?b=2&c=1 [anandtech.com]
  • by macraig ( 621737 ) <mark.a.craig@gmCOMMAail.com minus punct> on Wednesday February 17, 2010 @06:26AM (#31167306)

    There's an easy way to thwart that advertising blackmail for users of Firefox: the AutoPager extension. Antipagination would probably still work for older versions of Firefox.

  • by MasJ ( 594702 ) on Wednesday February 17, 2010 @07:00AM (#31167464) Homepage

    Isn't this what the article summary gets at? I couldn't find anywhere in the conclusion how the P4 actually compares to present-day processors.
    I'm not about to read through 17 pages of all of that just to open my eyes.

    Oh, and for CPU comparisons, I usually use:
    http://www.cpubenchmark.net/cpu_list.php [cpubenchmark.net]

    It's quite reliable for my choices. I just need everything to boil down to a number these days. Too much choice out there. It was simpler when you could just look at GHz and know which was better. Now a P7700 and a T8600 (examples I just made up..) could be at the same clock speed, both be called Core 2 Duo, and have totally different performance numbers. Confusing!

  • by indre1 ( 1422435 )
    A P4 3.2GHz Northwood @ 3.6GHz and a decent graphics card can easily run Modern Warfare 2 @ 1280x1024 - what else do you need from a processor on a desktop computer?
    All these multicores barely give any real advantage to a regular gamer/desktop user at the moment.
    • by Pojut ( 1027544 )

      All these multicores barely give any real advantage to a regular gamer/desktop user at the moment.

      One very significant advantage they provide to people like me is driving a game on my main display while playing back a video on my secondary monitor.

      Dragon Age + Aqua Teen Hunger Force = Made of win

    • Re: (Score:2, Informative)

      by thejynxed ( 831517 )

      You do realize that overclocking Northwood-core CPUs is a bad idea, right?

      They have been known to suffer random heat death, even with water cooling. They also tend to produce computational errors and can actually suffer worse performance when overclocked. That last bit is very batch-dependent, though - it really depends on where the chip was manufactured. The heat issue is valid for every Northwood. There's a good reason most OEMs blocked overclocking in the BIOS of their Northwood-equipped systems.

  • by pjrc ( 134994 ) <paul@pjrc.com> on Wednesday February 17, 2010 @07:10AM (#31167518) Homepage Journal

    Did anyone else notice how the Q9550 and Q9650 are absent from this article?

    Probably the last thing Intel wants is these previous generation (and attractively priced) chips appearing in the "overall performance per dollar" chart on "Page 17 - The value proposition". Instead, we get a graph where only the i5 and i7 chips appear to perform well beyond any of the older options, but it's a carefully crafted illusion because the faster (and attractively priced) versions of those older chips weren't tested.

  • Haven't read TFA, but they're probably better - this dual-core crap is slow as hell.

  • Other factors (Score:5, Insightful)

    by HalfFlat ( 121672 ) on Wednesday February 17, 2010 @07:23AM (#31167598)

    The article makes a strong case for the i3-530 and the i5-750, but unlike the comparable AMD processors, they have no support for ECC.

    If you're using a computer just for game playing and email, that's fine. On the other hand, if you are doing anything which requires reliability — both in terms of machine stability and the consistency of results and data — ECC is a must. The premium that Intel charge for what should be a standard feature prices them out of the value computing market.

    • If you're using a computer just for game playing and email, that's fine. On the other hand, if you are doing anything which requires reliability -- both in terms of machine stability and the consistency of results and data -- ECC is a must. The premium that Intel charge for what should be a standard feature prices them out of the value computing market.

      Yep. Basically if you want to do anything reliable and minimize cost, you need AMD. You also have to take care to get the right motherboard - I've only disc

      • Re: (Score:3, Informative)

        You wouldn't happen to know the sort of reduction in errors that running registered memory brings (compared to just ECC)? If you must run registered as well, it's a comparison between Opterons and Xeons.

        My understanding is that registered memory is less about error correction and more about being able to plug in way too many DIMMs per memory channel, so you don't want it unless you need a ridiculous amount of memory.

        If you are concerned about data integrity you might also want to look at an operating system that has ZFS - which means OpenSolaris or FreeBSD, and running mirrored or RAIDZ.

        Or use Btrfs; ZFS isn't the only option with integrity checks.

    • by moeinvt ( 851793 )

      "AMD processors, they have no support for ECC."

      Interesting. I'm an EE, but not a microprocessor architecture guy. Is this ECC for on-board cache memory, where the processor implements an encoding mechanism on cache reads and writes? Does it do something similar between the onboard cache and DRAM or external cache, or would that be something implemented at the OS level? I'm guessing that the SER for built-in cache has to be ridiculously low (a few per year?). If you can say, wha

      • Re:Other factors (Score:4, Interesting)

        by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Wednesday February 17, 2010 @08:39AM (#31168094) Homepage Journal
        AMD processors and the newer (i3, i5, i7) Intel processors have the memory (DRAM) controller built in. The ECC here is for the DRAM; I have no idea about the internal cache. Google released a study a few months ago with various information about actual observed memory error rates... after a bit of crunching on their numbers [reddit.com], I came up with an expected 1/5 chance of a single random bit-flip over a 6-year lifespan, and a 1/3 chance of part of your memory going bad (and causing crashes, corruption, etc., if not caught with ECC) after a couple of years.
      • Re: (Score:3, Informative)

        by SIGBUS ( 8236 )

        The ECC support involves the motherboard RAM itself - each DIMM has extra chips to carry the error-correcting information. It's mainly used in servers that run 24x7. Single-bit errors are automatically corrected, and, if they occur, multiple-bit errors are at least detected. The point is of course to keep the server from crashing, or worse, silently corrupting data.

        Up until about the mid-1990s, most PCs had parity memory, which provides error detection but not correction. But, in the rush to make things che

        • Re: (Score:3, Informative)

          by Entrope ( 68843 )

          To be precise, multi-bit errors are *usually* detected. Any ECC scheme will accept faults that happen to convert stored data from one valid pattern (called a codeword in the literature) to another. They just trade off the likelihood of correctable, detectable-but-not-correctable, and undetected faults (according to some model of what causes faults) against the space and time overhead. The fault-origin models are pretty good at matching what most servers see, and the standard ECC schemes are enormously val

  • I have an X31 (see http://www.thinkwiki.org/wiki/Category:X31 [thinkwiki.org] ) and I am thinking about upgrading to an X100e, X200, or X201/X210 - but I am not sure how my trusty X31 compares to current low-end hardware.

    Hard requirements:

    * At _least_ 3-4 hours of run time with normal workload (KDE4, konsole, half a dozen ssh sessions, no flash)
    * TrackPoint - I hate touchpads
    * sturdy - those things are there to be used, not pampered. I don't abuse them needlessly, but I will not go out of my way to make sure the purty purty th

  • My old work computer (Score:2, Interesting)

    by British ( 51765 )

    I'm still using an HP zd7000, a P4 laptop from several years ago, as my main PC. The battery has long since died, but it's still perfect for general use with the docking station.

    I've considered plunking down $300 for a modern laptop, but it never seemed to be an issue. This laptop is still "good enough".

  • I remember all the PC World/PC Magazine/Computer Shopper articles on the Pentium, P-II, P-III and the numbers they threw out. The numbers made sense, given a baseline of a 100 MHz Pentium or even a 66 MHz DX/2.

    I would like to see the exact same tests run with these chips. The software may be old - Word 2.0/Photoshop 4.0 - but it should still work.
  • Site appears to be down; Coral mirror: http://techreport.com.nyud.net/articles.x/18448 [nyud.net]
  • by yourlord ( 473099 ) on Wednesday February 17, 2010 @04:18PM (#31176210) Homepage

    There is this little jewel on page 14:

    Still, although PC hardware gets faster over time, software often gets slower. If you go look at our review from back in the day, the Pentium 4 670 rendered this same scene in 309 seconds using a single thread. Now it's taken over 600 seconds to do it with POV-Ray 3.7. Just to make sure we didn't have a configuration problem, I installed an old version of POV-Ray 3.6.1 64-bit from March, 2005 on our LGA775 test system. Lo and behold, the P4 670 completed the render in about the same time we'd measured way back when.

    This, to me, is the most telling thing in the review. The bloat that has crept into the software made the same CPU take twice as long to render the same scene. This is why we now have machines that make the machines we used 10 years ago look stupid by the numbers, yet don't really offer that much of an improvement in experience, because incredible amounts of software bloat eat all the extra resources available. This one little paragraph should make the people involved with POV-Ray bow their heads in shame.
