
Intel in the GHz Game Again - Skulltrail Hits 5 GHz

An anonymous reader writes "Intel's Skulltrail dual-socket enthusiast platform has been making the rounds on the web for half a year or so, but we haven't seen many details yet. TG Daily got a close look at an almost complete prototype which, judging from the article, sounds very close to a production-ready version. Everything that TG Daily describes sounds like Skulltrail PCs will be very limited in availability and insanely expensive. Intel has also said it has developed 'special' Xeon processors with desktop processor attributes just for Skulltrail. These chips are currently running at a stable 5 GHz."
  • I guess... (Score:5, Funny)

    by cesman ( 74566 ) on Wednesday October 31, 2007 @03:05PM (#21188005) Homepage
    I guess the skulls in its trail are the heads of AMD execs.
    • by arivanov ( 12034 ) on Wednesday October 31, 2007 @03:15PM (#21188155) Homepage
      No... They are the skulls of White Bears who have fallen through the ice because the water in the arctic got warmed up too much by the water cooling kit this beast requires to operate.
      • Re: (Score:3, Funny)

        by Hal_Porter ( 817932 )

        No... They are the skulls of White Bears who have fallen through the ice because the water in the arctic got warmed up too much by the water cooling kit this beast requires to operate.
        You could probably render photorealistic White Bears in realtime on this beasty. And do the AI and physics too. Certainly good enough to use them as enemies in a arctic themed FPS.

        So it all works out in the long run.
  • by User 956 ( 568564 ) on Wednesday October 31, 2007 @03:07PM (#21188025) Homepage
    Everything that TG Daily describes sounds like Skulltrail PCs will be very limited in availability and insanely expensive.

    Obviously, it's the only architecture hand-designed by Dethklok.
  • Excessive? (Score:2, Interesting)

    by Vexor ( 947598 )
    So this is most likely targeted at gamers, because we all know that games, generally speaking, are the most intensive software ever run on a PC. As far as I know, though, there is no game on the market that requires anywhere near that kind of horsepower. Not that more is a bad thing, but I'm running an Intel E6750 at 3 GHz and even that rates a 5.9 on the Vista-Meter (not that Vista is a "reliable" benchmark).

    On the other hand...will this be out in time for Crysis?

    • Re:Excessive? (Score:5, Insightful)

      by drix ( 4602 ) on Wednesday October 31, 2007 @03:15PM (#21188157) Homepage

      we all know that games generally speaking are the most intensive software ever run on a PC
      Not even close. Games, after all, run in realtime. There are many, many applications out there that have no problem pegging top-of-the-line hardware for hours on end: DV editing, raytracing, scientific computing. In fact, the whole reason I'm posting this is because I'm waiting for my PC to solve a big math problem :-)
      • by Tim C ( 15259 )
        Not only that, but generally speaking the single most demanding part of a game is the graphics, and we have dedicated GPUs for that.

        I agree with you; games are power-hungry, but by no means the most power-hungry things you can do with a PC. Mind you, I'm weird - I've actually done proper scientific numerical simulation work (and had to leave it running overnight to finish). I've also done video transcoding, and while that doesn't take as long, it wasn't quite real-time last time I did it, so there's definitel
        • Re: (Score:3, Funny)

          by PitaBred ( 632671 )
          Get a faster machine. My laptop will transcode from a DVD to H264 at right near real-time (sans ratio changes), and it's nothing terribly special.
      • by LWATCDR ( 28044 )
        Most people will say games because they are just about the only programs they use that they have to wait on.
        But yes, you're correct. You did leave out data mining and a few other applications.
        What is amazing is the size of problems that we are now willing to tackle with desktop hardware.

      • Gamers are probably the only users that are more or less immune to price/performance considerations.
      • Re: (Score:3, Funny)

        by p3d0 ( 42270 )

        we all know that games generally speaking are the most intensive software ever run on a PC
        Not even close. Games, after all, run in realtime.
        So? That's because they are tuned that way. I haven't played this sort of game in a while, but back in the day, I remember you could tune the game for your system, and it would take 100% of your CPU, GPU, ALU, FPU, and any other U you wanted to throw at it. How long it runs is entirely irrelevant.
      • we all know that games generally speaking are the most intensive software ever run on a PC

        Not even close. Games, after all, run in realtime. There are many, many applications out there that have no problem pegging top-of-the-line hardware for hours on end: DV editing, raytracing, scientific computing.

        Maybe by "PC" the GP meant "desktop PC" and not "workstation." Of course, the Skulltrail platform is just workstation hardware (dual Xeons, ECC FB-DIMMS) with some modifications for uber-gamers that have more money than common sense.

        Isn't a standard workstation (dual workstation CPUs, ECC RAM, workstation graphics card) more appropriate for DV editing, raytracing, and scientific computing? Isn't "desktop" hardware (fast single desktop CPU, faster non-ECC DDR2, "gamer" graphics card/cards) more appropriate for gaming?

      • by drgonzo59 ( 747139 ) on Wednesday October 31, 2007 @07:29PM (#21190977)
        I'm posting this is because I'm waiting for my PC to solve a big math problem :-)

        You are wasting your time, the answer will always be 42....

      • Games, after all, run in realtime.

        Which is exactly why more processing power is important. You can't wait hours for it to render the next scene. Then again it really is just a semantics issue on what he meant by intensive. You could always just throw your computer into an infinite loop and peg the processor as well.
      • by dbIII ( 701233 )
        In some cases make that weeks per processing job on quite a few nodes at once. What is nice is that gear spun off from the mass market for gamers gets shoved into racks at a relatively cheap price to do this sort of stuff. Sun's new multicore machines are nice, but you can get what is effectively a few gaming Godboxes for less to get more done with CPU-bound stuff.

        One bizarre thing is I ran effectively the same job on an Intel 8 CPU machine and an AMD 8 CPU machine of very similar specs - ther

    • Re: (Score:3, Informative)

      by everphilski ( 877346 )
      I bought a top of the line processor and quadrupled my RAM at the beginning of last year - not for video gaming (although it sure didn't hurt; I play occasionally, not very hardcore anymore) but to do scientific computing for my thesis. I did a 6DOF model of a guided bullet, with this spiffy guidance model. 500 Monte Carlo runs took about 2 hours. I needed to do a ton of sets. All in all, my entire master's dissertation worth of sets took about a month worth of running 16 hours a day on a dual-core machine. And
    • by Svet-Am ( 413146 )
      Flight Simulator X would certainly use all of that hardware. How well is a different matter, but FSX is upward scalable for memory, CPU, and GPU.
  • Translation (Score:5, Insightful)

    by king-manic ( 409855 ) on Wednesday October 31, 2007 @03:08PM (#21188037)

    These chips are currently running at a stable 5 GHz.
    A practical translation:

    It will be 20% faster, 200% hotter, needs a 300% nosier fan, consumes 500% as much power.
    • Yes, but (Score:5, Funny)

      by paranode ( 671698 ) on Wednesday October 31, 2007 @03:11PM (#21188085)
      The silicon pathways are provided by Monster Cable.
    • These chips are currently running at a stable 5 GHz.
      A practical translation:
      It will be 20% faster, 200% hotter, needs a 300% nosier fan, consumes 500% as much power.
      And yet it will only deliver 33% of the performance.
    • It will be 20% faster, 200% hotter, needs a 300% nosier fan, consumes 500% as much power.


      Don't you mean 500% hotter? For any practical purpose, heat produced in an IC is equivalent to the amount of power it draws.
    • Re: (Score:3, Funny)

      by Sebastopol ( 189276 )
      power = heat

      if you could make something require 500% more power but convert 200% more energy to heat (ignore photonic emissions), you'd have yourself a nobel prize.

      i'm just sayin.

    • by dbIII ( 701233 )

      needs a 300% nosier fan

      That's pretty rough - by that stage the fans become stalkers.

    • Tom's Hardware just did a series of power-consumption tests on various overclocks of a Q9650, the first available 45nm processor.

      At 3GHz, it uses 8.79W when doing nothing, and 73W when running all four cores flat-out.

      At 4GHz, it uses 16.83W to do nothing, and 135W with all four cores flat-out; on the other hand, this required a voltage increase to 1.44V from the 1.25V that sufficed up to 3.33GHz.

      Fitting curves suggests that you would be using something like 350W for four cores at 5GHz, which is quite impressive.
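
      A rough sketch of that curve fit in Python, assuming the standard dynamic CMOS power model P ≈ k·V²·f; the 5 GHz voltages are guesses, not figures from the tests:

          # Fit P ~ k * V^2 * f to the quoted 4 GHz load figure, then extrapolate.
          def power(k, volts, ghz):
              return k * volts ** 2 * ghz

          k = 135.0 / (1.44 ** 2 * 4)      # solve for k from the 4 GHz data point
          print(power(k, 1.25, 3))         # ~76 W vs. the measured 73 W: sanity check
          for volts in (1.5, 1.6, 1.7):    # guessed voltages for 5 GHz
              print(volts, round(power(k, volts, 5)), "W at 5 GHz")

      On this model, 350W at 5 GHz corresponds to a bit over 2V, so the estimate hinges almost entirely on how much voltage the last GHz demands.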
  • by Anonymous Coward on Wednesday October 31, 2007 @03:09PM (#21188061)
    And will be obsolete in a year. Honestly, who spends thousands of dollars every year for the most advanced stuff? Even if you did have a Skulltrail, the rest of your system would bottleneck it. Three 8800 GTXs would be the bottleneck, 8GB of the fastest DDR3 RAM would bottleneck, and your hard drive would bottleneck too. The only thing Skulltrail gives you is bragging rights.
    • by DreadSpoon ( 653424 ) on Wednesday October 31, 2007 @03:22PM (#21188249) Journal
      To many people that's all they're looking for. It's like buying an F-350 when the most you use a car for is getting groceries, or getting the biggest house you can possibly afford even though you're a small family of three, and so on.

      Remember, it's not just the spammers that profit off of people with small penises. Auto manufacturers, TV manufacturers, home builders, and now Intel all profit off of them too. :)
      • Re: (Score:2, Funny)

        by Anonymous Coward

        Remember, it's not just the spammers that profit off of people with small penises. Auto manufacturers, TV manufacturers, home builders, and now Intel all profit off of them too. :)

        Ha! I don't buy things from any of those people, and my penis is tiny!

    • This is only talking about a 20% clockspeed jump over a current overclocked workstation platform. Sure, that won't translate into a 20% performance jump overall, but it'll still be a performance jump. I'm not really up for spending an extra $1000 for an 8% framerate increase, but for those people who like that plan, this is a perfectly reasonable product. Further, it sets the bar higher for future products - and as a technology enthusiast I can always get behind raising the bar.

    • by Kaenneth ( 82978 )
      Game developers would buy it.

      What's insanely expensive now will be standard consumer performance over a major game platform's development lifecycle.
    • I am annoyed by references to general "bottlenecks", because it's not just Anonymous Coward who comes up with this.

      It's just not that simple. A 5GHz CPU will be faster than a 3GHz CPU, and 3 video cards will be faster than 1 video card, almost regardless of other components. The only real bottlenecks you can talk about are the system buses, and at the moment that's not a problem either. HyperTransport 3.0 and Intel's quad-pumped buses are still plenty wide enough for 5GHz processors, no sweat.

      I completely u
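
      The bus part of that claim survives rough arithmetic; a sketch, assuming the Core 2 era 64-bit, quad-pumped front-side bus and a 400MHz base clock:

          # Peak front-side bus bandwidth (illustrative figures).
          fsb_mhz = 400                    # base clock
          transfers = fsb_mhz * 4          # quad-pumped -> 1600 MT/s
          bytes_per_transfer = 8           # 64-bit bus
          print(transfers * bytes_per_transfer / 1000, "GB/s peak")  # -> 12.8 GB/s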
  • by porkThreeWays ( 895269 ) on Wednesday October 31, 2007 @03:12PM (#21188105)
    Measuring computer performance in Hz is like buying a car based on redline RPM. It tells you about only one component, and it's meaningless by itself. Just like a car needs torque to give RPM context, processors need instructions completed per cycle to give the frequency context. I've lost faith in the MHz race and generally look at benchmarks closest to the intended purpose of the processor.
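
    A toy illustration in Python of why the clock number misleads on its own; the IPC figures are made up for the example, not measurements:

        # Effective throughput = clock * instructions-per-cycle.
        cpus = {
            "high-clock, low-IPC": (3.8, 1.0),   # hypothetical NetBurst-ish chip
            "low-clock, high-IPC": (3.0, 1.7),   # hypothetical Core-ish chip
        }
        for name, (ghz, ipc) in cpus.items():
            print(name, ghz * ipc, "billion instructions/sec")

    Multiply them out and the lower-clocked chip wins.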
    • by blueZ3 ( 744446 ) on Wednesday October 31, 2007 @03:20PM (#21188223) Homepage
      You just made an almost-sensible car analogy. I didn't think that was allowed here.
      • You just made an almost-sensible car analogy. I didn't think that was allowed here
        Ghz is like measuring a car's RPM, it's meaningless unless you also describe how loud the radio can go.
    • Is there anyone who doesn't know this by now? I think we all figured this out back in the P4 days. The point is the C2D has a high IPC as well as a high MHz.
    • I have no need for a machine more powerful than mine. I would rather buy a silent one.
      Measuring computer performance in Hz is like buying a car based on redline RPM. Just like a car needs torque to give RPM context, processors need instructions completed per cycle to give the frequency context. I've lost faith in the MHz race and generally look at benchmarks closest to the intended purpose of the processor.

      That's exactly how I explained it to my father, back in the K7 vs Pentium 4 days, and as far as he knew MHz was how fast it went and Intel was selling chips with big
  • MHz wars are over (Score:3, Insightful)

    by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday October 31, 2007 @03:15PM (#21188153) Homepage Journal
    Please, it's all about cores.

    Look at the history of processor speeds. We've been pretty flat, and will stay that way in all practical terms for a while.
    Before someone throws the quote around like they are smart, Moore's law refers to transistor count, not speed.

    1) Faster chips require better fabs. Fabs are having difficulty producing wafers with few enough flaws to produce mass quantities. Strides are being made, but no massive breakthroughs.

    2) Multiple cores and real parallel processing development is just starting to become expected knowledge for the average application developer. Let's be honest, a lot of developers don't bother to understand multi-threading and avoid it like the plague. Fortunately there are some IDEs that make it easier for developers.
    • by AuMatar ( 183847 ) on Wednesday October 31, 2007 @03:27PM (#21188323)
      Cores only help so much - if your problem is not parallelizable, or if it is only minimally so, a billion cores won't help. A word processor is not going to work any faster on a 1000-core machine than on a 1-core machine. Video games might see a small speed-up from multicore, but not that much of one - the work doesn't break down into equally weighted threads. For the vast majority of users, 2 cores aren't even really utilized (email and web browsing don't use 2 cores). I doubt any home user will see much improvement beyond 2 cores, and absolutely none after 4, even for hardcore multitaskers. Business and scientific apps will see some beyond that, but memory tends to be the bottleneck there - we'd be better off increasing memory bandwidth and reducing latency than raising clock speed.
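
      The usual way to put numbers on this is Amdahl's law; a minimal sketch, with illustrative parallel fractions:

          # Amdahl's law: speedup on n cores when a fraction p of the work
          # parallelizes perfectly. The p values are illustrative.
          def speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          for p in (0.5, 0.9, 0.99):
              print(p, [round(speedup(p, n), 2) for n in (2, 4, 1000)])
          # At p = 0.5, even 1000 cores stay under 2x -- which is the point above.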
      • Re:MHz wars are over (Score:5, Interesting)

        by Firethorn ( 177587 ) on Wednesday October 31, 2007 @03:53PM (#21188647) Homepage Journal
        This sounds a lot like a '640k' quote to me.

        A properly functioning word processor can already do 99.99% of what a user asks of it as fast as the user can tell it to do something, even on the bottom-of-the-line processor.

        Today's video games, sure, aren't going to benefit much from multicore. But I disagree that the benefits for future games will top out at 2. I mean - you could have 1 core handling user input and processing, 1 core handling the physics environment, 1 core for unit AI, 1 core for graphics information. There's a quad core right there.

        Business and scientific apps will see some beyond that, but memory tends to be the bottleneck there - we'd be better off increasing memory bandwidth and reducing latency than raising clock speed

        Then they can start worrying about beefing up memory bandwidth - I've read about some technologies in the pipe that will help with this. And the scientific community can always use more bandwidth - they are one of the larger users of supercomputers, and this might take a project from 'Need to rent 24hrs on the supercomputer for $$$' to 'I can run this on my work computer for a month/week to get the same results for $'.
        • You're certainly right in your video games point. A few patches ago, the Mac client for World of Warcraft went multi-threaded. If you have two cores, the game engine runs on one of them while the other assists with the graphics processing. If you have a low end GPU (as I do in my MacBook), this made a TREMENDOUS difference in performance. In early October, they also added a built in video recorder to the Mac client, and it too is multi-threaded. I use my quad-core Mac Pro to record boss fights on occas
      • Video games might see a small speed up from a multicore, but not that much of one- it doesn't break down into equally weighted threads.
        Not at all true. Most of the heavy physics and graphics processing is extremely parallelizable. I've even seen AI computations parallelized. Taking advantage of multi-core is a very hot topic in the game industry right now and I assure you we're far from lacking ideas.
      • You want an AI assistant? Thousands of small processors.
         
      • by Surt ( 22457 )
        Video games are very parallelizable at the render and physics layers. They'll have no problem usefully scaling to hundreds of cores.
        • That's true, but the algorithms / program designs that work great with a hundred cores work like crap on one or two cores. Personally, I expect to see video games designed to be truly concurrent just as soon as low-end gamers have quad core machines (and high-end gamers have 32-thread systems).

          • by Surt ( 22457 )
            Absolutely. My point was only that video games are not in a category of 'can't use multiple cores'. At all. They would love to have tons of cores. Whether or not their user base has multiple cores, is an entirely different question.
      • Cores only help so much - if your problem is not parallelizable, or if it is only minimally so, a billion cores won't help.

        This is true.

        Here's the thing: Every one of the applications that people commonly run on a desktop PC can be parallelized.

        The real problem is that programmers who are used to single-thread designs cringe when they see the parallel version. Not only is it moderately more complex, but to generalize to many cores a design frequently entails a 10% to 50% performance penalty compared to the

        • Right now I only have a dual-core system; what I see happening is that a CPU-hungry program like a game ends up on one core while the other stuff, like the OS and background tasks, ends up on the other. So a dual core does help me out a fair bit, but doesn't double performance.

          It'll get better if they improve game threading, but to truly double performance I'll probably need 4 cores.

          I figure many- (i.e., more than four) core systems are about four years away. An eternity in computer terms, of course.
      • by dbIII ( 701233 )
        A lot of things are. Digital video, where you apply the same or similar transforms to every frame, is one. Image manipulation in general is another. Sound processing is yet another once you get beyond mono.

        Word processing is a bad example because the software is so badly written and ridiculously resource-hungry while providing fewer features than a Desktop Publishing program that ran on a 286, or even one that ran on an Atari ST.

    • by TheRaven64 ( 641858 ) on Wednesday October 31, 2007 @04:22PM (#21189029) Journal
      We have had about 40 years of practice getting one processing unit to pretend to be n, and we're pretty good at it now. We have no good ways (even in theory) of getting n processing units to pretend to be one in the general case. If you have a 5GHz core then you can run two processes on it happily with only a small amount of overhead. If you have two 2.5GHz cores and only one process, you will end up running that process at half of the theoretical speed of your CPU.

      Fewer faster cores will always be more flexible than more slower ones. The reason we go with more slower ones is that slower cores use less power (power scales much worse than linearly with speed, so two 1GHz cores will use a lot less power than one 2GHz one). Some workloads are intrinsically parallel (e.g. web serving) and so having lots of cores using less power is a big win. Others are not and so extra cores are just a waste (although you can often consolidate multiple serial tasks onto one machine with lots of cores).
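
      A back-of-the-envelope version of that power argument, assuming voltage scales roughly with clock so power goes as f³ (a common rule of thumb, not a measurement):

          # Two slow cores vs. one fast core at equal total throughput.
          def rel_power(ghz):
              return ghz ** 3            # P ~ f * V^2, with V ~ f

          one_fast = rel_power(2.0)      # one 2 GHz core
          two_slow = 2 * rel_power(1.0)  # two 1 GHz cores
          print(one_fast / two_slow)     # -> 4.0: the fast core burns ~4x the power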

      • We have had about 40 years of practice getting one processing unit to pretend to be n, and we're pretty good at it now.

        We've also had decades of practice solving problems using multiple processing units. In fact, for the sort of problems that today's processors can just barely handle (i.e. those problems that processing power is still an issue on) we've had *more* practice solving them on compute arrays than we have solving them on single processors.

        • Re: (Score:3, Insightful)

          by TheRaven64 ( 641858 )
          You are missing the point. Any problem that can be solved on a parallel machine can be solved on a serial machine of the same computational power in the same time. The converse, that any problem that can be solved on a serial machine can be solved on a parallel machine of the same processing power, is not true. At the abstract level, any nondeterministic finite automaton can be reduced trivially to a deterministic equivalent, but an arbitrary degree of nondeterminism can not be trivially added to a DFA.
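
          That reduction is the textbook subset construction; a minimal sketch over a made-up three-state NFA with no epsilon moves:

              # Subset construction: each DFA state is a set of NFA states.
              from itertools import chain

              nfa = {0: {"a": {0, 1}, "b": {0}},   # toy NFA: state -> symbol -> next states
                     1: {"b": {2}},
                     2: {}}
              start = frozenset({0})

              dfa, todo = {}, [start]
              while todo:
                  state = todo.pop()
                  if state in dfa:
                      continue
                  dfa[state] = {}
                  for sym in "ab":
                      nxt = frozenset(chain.from_iterable(
                          nfa[q].get(sym, ()) for q in state))
                      dfa[state][sym] = nxt
                      todo.append(nxt)

              print(len(dfa), "DFA states")   # worst case is exponential in NFA states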
          • The example you've used, 16 2.4GHz cores vs 64 600MHz cores, is a pretty typical example.

            If you go on MHz alone, the 16-core machine should process work units at the same speed as the 64-core machine (1/4 the number of processors, but 4x faster CPUs, vs 4x more processors and 1/4-speed CPUs). But that's not taking into account faster bus speeds, better architecture, improved floating point performance etc.

            However, a 96-core machine at 600MHz would process far more units of work than the Opteron machine,
    • Please, it's all about cores.

      I think to you it's all about throwing around bullshit unfounded opinions.

      There are many tasks, some of them on the desktop, which will never be parallelizable. Single-core performance has been and will remain absolutely crucial, even when everyone and their mom can write code in a parallelizing toolkit.

      Faster chips require better fabs. Fabs are having difficulty producing wafers with few enough flaws to produce mass quantities. Strides are being made, but no massive breakthroughs.

      My phoniness meter just exploded.

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Wednesday October 31, 2007 @03:16PM (#21188169) Journal
    I've seen this before, I've never understood it. What does it mean?

    Thad
    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )
      Servers are about making a lot of people happy at a reasonable speed. Desktops are about making one user happy at an extreme speed. A lot of crap is single-threaded or not suitable for parallelization, and the best solution is to push that single thread at maximum speed. That's the only desktop quality of significance I know of. With that said, I have a quad-core and my biggest annoyance right now is disk thrashing. My CPU is usually almost idle, but having a lot of tasks using the disks at the same time slows everything down. I really look forward to SSDs and near penalty-free random access.
      • by geekoid ( 135745 )
        "...and the best solution is to push that single thread at maximum speed"

        Many developers believe this, and in any practical application they are wrong.

        I have never seen an application where real-world performance, and more importantly "perceived performance", isn't improved with threading. Too many times I have seen scenarios where a user is sitting waiting for the computer to finish, when they could be doing some other work.

        I have seen developers wither under the request to add some multitasking.
        • Hell, try opening a large email over a non-local connection in Outlook. Even Microsoft withers in the multitasking arena. The POS locks up like a deer in headlights until it downloads the entire email.
      • "My CPU is usually almost idle, but having a lot of tasks using the disks at the same time slows everything down. I really look forward to SSDs and near penalty-free random access."

        You could always store your mp3s on one disk, another application on another disk, etc etc. It's a kluge though, because it requires more power to do so.
    • I've seen this before, I've never understood it. What does it mean?

      Thad


      I'm not intimately familiar with the specifics in this case, but starting with a server chip and "adding desktop processor attributes" would typically entail:

      adding the inability to use ECC.
      adding a reduction in cache.
      adding a lack of fault tolerance or error checking capabilities.
      adding the feature of being impossible to use with > 2 sockets.
      adding a whizzy new marketing name.

      And, the enthusiast desktop parts are often easy to overclock, while server parts assume you'll just buy a faster CPU instead of wasting time fiddling with something that may catch fire.

      BTW, hey, I remember you from alt.movies.visual-effects "back in the day" before the death of Usenet. Good to see you haven't fallen off the face of the planet. I'm now in the process of working on a compositing demo reel so I can try to jump from straight IT to visual effects in the near future. I blame this career change in part on all your interesting and informative posts getting stuck in my head. :)
      • by Thagg ( 9904 )
        Forkazoo said > I'm now in the process of working on a compositing demo reel so I can try to jump from straight IT to visual effects in the near future. I blame this career change in part on all your interesting and informative posts getting stuck in my head.

        Sorry about that whole visual effects problem... Hope it works out.

        I thought your description of the difference between the server chips and desktop chips was right on.

        I'm going to be building an 8-core AMD machine in a few days, and I'll use the "ser
  • Stable (Score:5, Funny)

    by raddan ( 519638 ) on Wednesday October 31, 2007 @03:21PM (#21188241)
    As long as you have an ample supply of liquid nitrogen.
  • by Joe The Dragon ( 967727 ) on Wednesday October 31, 2007 @03:24PM (#21188279)
    Single-slot graphics cards; FB-DIMMs, and you will need 4 of them to get the most out of the memory system; SLI using Nvidia nForce 100 chips over a PCI-E x16 1.1 bus split into two x16 slots; dual EPS power inputs; and three chipset chips that drive up cost and power use.

    The dual AMD system that this will be like will use desktop RAM and have 2 or more chipset choices. The AMD setup also lets you have 2 full northbridge chipsets for even more I/O; the nForce 680a uses this, and Nvidia will likely have a new chipset with PCI-E 2.0. The old one has x16/x8/x8/x16 PCI-E with a total of 56 PCI-E lanes.

    The new AMD chipset is also coming, and you may even see a board with 2 northbridges = 82 PCI-E lanes.

    790FX

            * Codenamed RD790; final name revealed to be "AMD 790FX chipset"
            * Dual-socket (Quad FX, Dual Socket Direct Connect Architecture) or single AMD processor configuration
            * Maximum four physical PCI-E x16 slots and a discrete PCI-E x4 slot; the chipset provides a total of 52 PCI-E lanes, with 41 lanes in the northbridge
            * HyperTransport 3.0 with support for HTX slots and PCI Express 2.0
            * ATI CrossFire X, see below
            * AutoXpress, see below
            * Extreme overclocking; reported to have achieved about a 420 MHz bus when overclocking an Athlon 64 FX-62 processor, up from the original 200 MHz
            * Discrete chipset cache memory of at least 16 KB to reduce latencies and increase bandwidth
            * Supports dual Gigabit Ethernet, with a teaming option
            * Reference boards codenamed "Wahoo" (dual-processor reference design with three physical PCI-E x16 slots) and "HammerHead" (single-socket reference design with four physical PCI-E x16 slots); notably, the reference boards include two ATA ports and only four SATA 3.0 Gbit/s ports (being paired with the SB600 southbridge), but the final product with the SB700 or SB750 southbridge (see below) should support up to six SATA ports
            * Northbridge made on a 65 nm process, manufactured by TSMC; runs at 3 W when idle and a maximum of 10 W under load, with nominal 8 W power consumption. The northbridge was seen on reference designs with a single passive heatsink instead of the heat pipes frequently used on current mainstream motherboards; the combination of the 790FX northbridge with the SB600 southbridge normally consumes less than 15 W
            * Enthusiast discrete multi-graphics segment

    Even if the Intel system is faster, the AMD system, with a less costly motherboard and much cheaper RAM, will likely be a better buy.
    • Yeah, this sounds like a well-balanced system. It's not the fastest memory around, and the CPU will probably be beat just a bit by the Intel counterpart, but it is at least affordable. It also seems to use a lot less power and generate less heat. As this is the enthusiast market, Intel must have a clear winner here.
  • Progress (Score:3, Funny)

    by Duncan3 ( 10537 ) on Wednesday October 31, 2007 @04:07PM (#21188849) Homepage
    Another huge technology gain for virgins living in their parents' basements worried about their small penis.
    • "Another huge technology gain for virgins living in their parents basements worried about their small penis."

      I don't care what motivates their purchases. They are paying for the hardware the rest of us will buy cheaply later in its product life cycle. The basement lifestyle must make for lots of disposable income...
  • by Junior J. Junior III ( 192702 ) on Wednesday October 31, 2007 @05:07PM (#21189589) Homepage
    The AMD Skullfucker-64 5300+ will 0wn this.
  • 'Cause that's the only thing I can think of as a use for a 5 GHz computer in an average household.
  • Phase change and LN2-cooled quads are already running [xtremesystems.org] at over 5GHz. This is just Intel themselves overclocking [xtremesystems.org], as is mentioned in the article. Is this even news? It's not like Intel is actually going to be selling a phase-change or LN2 cooler to go along with their new platform. And even if they were, this doesn't sound like any sort of advance in silicon, as is implied by the article summary.
  • 8GB RAM + SLI? (Score:3, Interesting)

    by TheLink ( 130905 ) on Thursday November 01, 2007 @02:00AM (#21193631) Journal
    Just curious: would SLI video cards and popular games actually work well with 64-bit Windows?

    Correct me if I'm wrong, but if you're stuck with 32-bit Windows there's no point having much more than 2GB of RAM if you're doing SLI, given that you have 4GB of address space and the video cards would take a large chunk of it.
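
    Rough 32-bit address-space arithmetic; the aperture sizes below are illustrative guesses, not specs:

        # Why 4GB of RAM doesn't all show up under 32-bit Windows with SLI:
        total_mb = 4 * 1024   # 4 GB of addressable space
        gpu_mb   = 2 * 768    # two hypothetical 768 MB card apertures
        mmio_mb  = 512        # other MMIO/PCI reservations (a guess)
        print(total_mb - gpu_mb - mmio_mb, "MB of RAM remains addressable")

    Which lands right around the 2GB figure above.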
